Commit 4eea033

docstring for resources.chat.completions.create
1 parent 47b4768 commit 4eea033

File tree: 5 files changed (+135 −10 lines changed)


docs/reference/include/openai.resources.py

Lines changed: 14 additions & 1 deletion
@@ -1,5 +1,18 @@
 """Snippets in this docstring are ingested by other documentation (including library docstrings) during the MkDocs build process.

+# --8<-- [start:resources]
+The `resources` module aggregates classes and functions for interacting with the OpenAI API into several submodules,
+each representing a specific resource or feature of the API.
+
+The submodules' classes mirror the structure of the API's endpoints and offer synchronous and asynchronous
+communication with the API.
+
+Each resource is accessible as an attribute on the [`OpenAI`][src.openai.OpenAI] and [`AsyncOpenAI`][src.openai.AsyncOpenAI]
+clients. To work with a resource, initialize an instance of one of the clients and access the resource as an attribute
+on the client instance. For example, to work with the `chat` resource, create an instance of the `OpenAI` client and
+access the attributes and methods on `your_client_instance.chat`.
+# --8<-- [end:resources]
+
 # --8<-- [start:audio]
 The `audio` module provides classes for handling various audio processing operations, including transcription of audio to text, translation of spoken content, and speech synthesis.

@@ -15,7 +28,7 @@
 # --8<-- [start:chat]
 The `chat` module provides classes for creating and managing chat sessions that leverage OpenAI's language models to generate conversational responses.

-The module supports both synchronous and asynchronous operations, offering interfaces for direct interaction with the completion endpoints tailored for chat applications. Designed for developers looking to integrate AI-powered chat functionalities into their applicationsand features like raw and streaming response handling for more flexible integration.
+The module supports both synchronous and asynchronous operations, offering interfaces for direct interaction with the completion endpoints tailored for chat applications. Designed for developers looking to integrate AI-powered chat functionalities into their applications and features like raw and streaming response handling for more flexible integration.
 # --8<-- [end:chat]

 # --8<-- [start:chat_completions]
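The attribute layout this `resources` snippet describes — each resource hanging off the client, with methods mirroring the endpoint paths — can be sketched with illustrative stand-in classes (these are not the library's real implementation):

```python
class Completions:
    def create(self, **params):
        # A real client would POST these params to the /chat/completions endpoint.
        return f"POST /chat/completions with {sorted(params)}"

class Chat:
    def __init__(self):
        self.completions = Completions()

class Client:  # stand-in for openai.OpenAI / openai.AsyncOpenAI
    def __init__(self):
        self.chat = Chat()

client = Client()
print(client.chat.completions.create(model="gpt-4", messages=[]))
# -> POST /chat/completions with ['messages', 'model']
```

The nesting (`client.chat.completions.create`) is exactly the `your_client_instance.chat` access pattern the docstring recommends.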

docs/reference/include/openai_init.py

Lines changed: 26 additions & 7 deletions
@@ -8,15 +8,34 @@
 provides both synchronous and asynchronous API clients, options to configure their behavior, and modules that provide
 Python code with an API surface to interact with the OpenAI platform.

-To get started, check out the documentation for the module representing the [resource][src.openai.resources] you're interested in using for your
-project. For example, the [`resources.chat.completions`][src.openai.resources.chat.completions] module is what you'd use
+To get started, read the submodule descriptions in [`resources`][src.openai.resources] to determine which best fits your
+project. For example, the [`resources.chat`][src.openai.resources.chat] submodule description indicates it's a good fit
 for conversational chat-style interactions with an LLM like [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo).
-Or, maybe you need the [`resources.audio`][src.openai.resources.audio] module for performing audio transcription, translation, and
-speech synthesis in your app.
+Or, maybe you need the [`resources.audio`][src.openai.resources.audio] module to perform audio transcription,
+translation, and speech synthesis in your app.

-Documentation for the library's main API client classes, [`OpenAI`][src.openai.OpenAI] and
-[`AsyncOpenAI`][src.openai.AsyncOpenAI], is another good place to start. The clients are the primary contact point for
-your code that needs to work with any of the resources available on OpenAI API endpoints.
+Once you've determined the resource to use, create an [`OpenAI`][src.openai.OpenAI] or [`AsyncOpenAI`][src.openai.AsyncOpenAI]
+client instance and access the instance attribute for that resource on the client object. For example, if you instantiate
+an `OpenAI` client object named `client`, you'd access the [`OpenAI.chat`][src.openai.OpenAI.chat] instance attribute:
+
+```python
+from openai import OpenAI
+
+# Reads API key from OPENAI_API_KEY environment variable
+client = OpenAI()
+
+# Use the `chat` resource to interact with the OpenAI chat completions endpoint
+completion = client.chat.completions.create(
+    model="gpt-4",
+    messages=[
+        {
+            "role": "user",
+            "content": "Say this is a test",
+        },
+    ],
+)
+print(completion.choices[0].message.content)
+```

 For more information about the REST API this package talks to or to find client libraries for other programming
 languages, see:
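The example added in this hunk uses the synchronous client. The `AsyncOpenAI` variant the docstring also mentions works the same way with `await`; a minimal sketch, modeled here with stub objects so it runs without credentials or a network call (a real program would replace the stub with `AsyncOpenAI()`):

```python
import asyncio
from types import SimpleNamespace

class StubCompletions:
    """Stands in for AsyncOpenAI().chat.completions; no HTTP request is made."""
    async def create(self, model, messages):
        # A real AsyncOpenAI client would await a POST to /chat/completions here;
        # this stub just echoes the last user message back.
        reply = SimpleNamespace(content=messages[-1]["content"])
        return SimpleNamespace(choices=[SimpleNamespace(message=reply)])

async def main():
    # With the real library: client = AsyncOpenAI()
    client = SimpleNamespace(chat=SimpleNamespace(completions=StubCompletions()))
    completion = await client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say this is a test"}],
    )
    return completion.choices[0].message.content

print(asyncio.run(main()))  # -> Say this is a test
```

The resource-attribute access (`client.chat.completions.create`) is identical between the two clients; only the `await` differs.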

mkdocs-requirements.txt

Lines changed: 17 additions & 1 deletion
@@ -1,13 +1,20 @@
+annotated-types==0.6.0
+anyio==4.3.0
 Babel==2.14.0
 black==24.1.1
 certifi==2024.2.2
 charset-normalizer==3.3.2
 click==8.1.7
 colorama==0.4.6
+distro==1.9.0
 ghp-import==2.1.0
 griffe==0.40.1
 griffe-inherited-docstrings==1.0.0
+h11==0.14.0
+httpcore==1.0.3
+httpx==0.26.0
 idna==3.6
+iniconfig==2.0.0
 Jinja2==3.1.3
 Markdown==3.5.2
 MarkupSafe==2.1.5
@@ -16,22 +23,31 @@ mkdocs==1.5.3
 mkdocs-autorefs==0.5.0
 mkdocs-gen-files==0.5.0
 mkdocs-literate-nav==0.6.1
-mkdocs-material==9.5.8
+mkdocs-material==9.5.9
 mkdocs-material-extensions==1.3.1
 mkdocstrings==0.24.0
 mkdocstrings-python @ git+ssh://git@github.com/pawamoy-insiders/mkdocstrings-python.git@157224dddefd2f2b979f9e92f0506e44c1548f64
 mypy-extensions==1.0.0
+openai==1.12.0
 packaging==23.2
 paginate==0.5.6
 pathspec==0.12.1
 platformdirs==4.2.0
+pluggy==1.4.0
+pydantic==2.6.1
+pydantic_core==2.16.2
 Pygments==2.17.2
 pymdown-extensions==10.7
+pytest==8.0.1
 python-dateutil==2.8.2
 PyYAML==6.0.1
 pyyaml_env_tag==0.1
 regex==2023.12.25
 requests==2.31.0
+respx==0.20.2
 six==1.16.0
+sniffio==1.3.0
+tqdm==4.66.2
+typing_extensions==4.9.0
 urllib3==2.2.0
 watchdog==4.0.0

src/openai/resources/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 # File generated from our OpenAPI spec by Stainless.
-
+""" --8<-- 'docs/reference/include/openai.resources.py:resources' """
 from .beta import (
     Beta,
     AsyncBeta,
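The `--8<--` line added here is a section marker for the MkDocs snippets extension (pymdownx.snippets), which splices the text between matching `[start:name]`/`[end:name]` markers into other pages at build time. A minimal sketch of that extraction step (`extract_section` is a hypothetical helper, not the extension's actual code):

```python
def extract_section(text, name):
    """Return the lines between the [start:name] and [end:name] markers."""
    start, end = f"--8<-- [start:{name}]", f"--8<-- [end:{name}]"
    capturing, captured = False, []
    for line in text.splitlines():
        if end in line:
            break
        if capturing:
            captured.append(line)
        if start in line:
            capturing = True
    return "\n".join(captured)

doc = "\n".join([
    "# --8<-- [start:resources]",
    "The `resources` module aggregates classes and functions.",
    "# --8<-- [end:resources]",
])
print(extract_section(doc, "resources"))
# -> The `resources` module aggregates classes and functions.
```

This is why the commit can keep the prose in `docs/reference/include/openai.resources.py` while the generated `__init__.py` docstring stays a one-line include directive.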

src/openai/resources/chat/completions.py

Lines changed: 77 additions & 0 deletions
@@ -660,6 +660,83 @@ def create(
         extra_body: Body | None = None,
         timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
     ) -> ChatCompletion | Stream[ChatCompletionChunk]:
+        """Creates a model response for the given chat conversation, tailored by a variety of customizable parameters.
+
+        This method allows for detailed control over the chat completion process, including model selection,
+        response formatting, and dynamic interaction through streaming.
+
+        Args:
+            messages (Iterable[ChatCompletionMessageParam]):
+                Messages comprising the conversation so far. Example Python code available at
+                [How to format inputs to ChatGPT models](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).
+            model (str | Literal[...]):
+                ID of the model to use. Refer to the [model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility)
+                table for details on which models are compatible with the Chat API.
+            stream (Literal[True]):
+                If True, enables streaming of message deltas. Tokens are sent as server-sent events as they become available,
+                terminating with a `data: [DONE]` message. See [Using server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
+                for more on this format and [How to stream completions](https://cookbook.openai.com/examples/how_to_stream_completions)
+                for example Python code.
+            frequency_penalty (Optional[float], default=NOT_GIVEN):
+                Adjusts token generation frequency to discourage repetition, with a range between -2.0 and 2.0.
+                More details at [frequency and presence penalties](https://platform.openai.com/docs/guides/text-generation/parameter-details).
+            function_call (completion_create_params.FunctionCall, optional):
+                Deprecated in favor of `tool_choice`. Controls the function call behavior within the model.
+            functions (Iterable[completion_create_params.Function], optional):
+                Deprecated in favor of `tools`. Lists functions the model can call.
+            logit_bias (Optional[Dict[str, int]], default=NOT_GIVEN):
+                Modifies the likelihood of specified tokens. Accepts a dict mapping token IDs to bias values (-100 to 100).
+            logprobs (Optional[bool], default=NOT_GIVEN):
+                Includes log probabilities of output tokens when True. Not available for `gpt-4-vision-preview`.
+            max_tokens (Optional[int], default=NOT_GIVEN):
+                Sets the maximum token count for the chat completion. See [How to count tokens with TikToken](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
+                for token counting examples.
+            n (Optional[int], default=NOT_GIVEN):
+                Number of chat completion choices to generate for each message. Affects cost.
+            presence_penalty (Optional[float], default=NOT_GIVEN):
+                Adjusts for token presence to promote topic diversity, with a range between -2.0 and 2.0.
+            response_format (completion_create_params.ResponseFormat, optional):
+                Specifies the model output format, compatible with GPT-4 Turbo and GPT-3.5 Turbo models newer than
+                `gpt-3.5-turbo-1106`. JSON mode ensures valid JSON output.
+            seed (Optional[int], default=NOT_GIVEN):
+                Seeds the RNG for deterministic outputs. Beta feature.
+            stop (Union[Optional[str], List[str]], default=NOT_GIVEN):
+                Sequences indicating when to halt token generation.
+            temperature (Optional[float], default=NOT_GIVEN):
+                Controls output randomness. Recommended to adjust this or `top_p`, but not both.
+            tool_choice (ChatCompletionToolChoiceOptionParam, optional):
+                Selects a tool or function for the model to use.
+            tools (Iterable[ChatCompletionToolParam], optional):
+                Specifies available tools for the model, currently limited to functions.
+            top_logprobs (Optional[int], default=NOT_GIVEN):
+                Returns top log probabilities for each token position. Requires `logprobs` to be True.
+            top_p (Optional[float], default=NOT_GIVEN):
+                Nucleus sampling parameter, considering only the top probability mass for generation.
+            user (str | NotGiven):
+                Unique identifier for the end-user, assisting in abuse monitoring. Learn more at
+                [End-user IDs](https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids).
+            extra_headers (Headers, optional):
+                Additional HTTP headers for the request.
+            extra_query (Query, optional):
+                Additional query parameters for the request.
+            extra_body (Body, optional):
+                Additional body content for the request.
+            timeout (float | httpx.Timeout | None | NotGiven, default=NOT_GIVEN):
+                Custom timeout for this request, overriding the default settings.
+
+        Returns:
+            Stream[ChatCompletionChunk]: A stream of chat completion chunks for real-time interaction.
+
+        Examples:
+            >>> create(
+            ...     messages=[{"role": "user", "content": "Hello, world!"}],
+            ...     model="gpt-3.5-turbo",
+            ...     stream=True,
+            ...     frequency_penalty=0.5,
+            ...     # Additional parameters...
+            ... )
+            <Stream of ChatCompletionChunk>
+        """
         return self._post(
             "/chat/completions",
             body=maybe_transform(
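The `stream=True` behavior this docstring documents delivers the reply as incremental `ChatCompletionChunk` deltas rather than one complete message. A minimal sketch of how such a stream is typically consumed, modeled here with `SimpleNamespace` stand-ins so it runs without a network call (the real chunks would come from `client.chat.completions.create(..., stream=True)`):

```python
from types import SimpleNamespace

def collect_stream_text(stream):
    """Join the incremental delta.content pieces into the full reply text."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        # Role-only and terminal chunks carry no content.
        if delta is not None:
            parts.append(delta)
    return "".join(parts)

# Stand-ins shaped like ChatCompletionChunk objects.
fake_stream = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=c))])
    for c in (None, "This is ", "a test.")
]
print(collect_stream_text(fake_stream))  # -> This is a test.
```

In a real application the `for chunk in stream` loop would usually print or forward each delta as it arrives instead of buffering the whole reply.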
