docs/reference/include/openai.resources.py (+14 −1 lines changed)

@@ -1,5 +1,18 @@
"""Snippets in this docstring are ingested by other documentation (including library docstrings) during the MkDocs build process.

# --8<-- [start:resources]
The `resources` module aggregates classes and functions for interacting with the OpenAI API into several submodules,
each representing a specific resource or feature of the API.

The submodules' classes mirror the structure of the API's endpoints and offer synchronous and asynchronous
communication with the API.

Each resource is accessible as an attribute on the [`OpenAI`][src.openai.OpenAI] and [`AsyncOpenAI`][src.openai.AsyncOpenAI]
clients. To work with a resource, initialize an instance of one of the clients and access the resource as an attribute
on the client instance. For example, to work with the `chat` resource, create an instance of the `OpenAI` client and
access the attributes and methods on `your_client_instance.chat`.
# --8<-- [end:resources]
# --8<-- [start:audio]
The `audio` module provides classes for handling various audio processing operations, including transcription of audio to text, translation of spoken content, and speech synthesis.

@@ -15,7 +28,7 @@
# --8<-- [start:chat]
The `chat` module provides classes for creating and managing chat sessions that leverage OpenAI's language models to generate conversational responses.

The module supports both synchronous and asynchronous operations, offering interfaces for direct interaction with the completion endpoints tailored for chat applications. It is designed for developers looking to integrate AI-powered chat functionalities into their applications and offers features like raw and streaming response handling for more flexible integration.
"""Creates a model response for the given chat conversation, tailored by a variety of customizable parameters.

This method allows for detailed control over the chat completion process, including model selection,
response formatting, and dynamic interaction through streaming.

Args:
    messages (Iterable[ChatCompletionMessageParam]):
        Messages comprising the conversation so far. Example Python code available at
        [How to format inputs to ChatGPT models](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).
    model (str | Literal[...]):
        ID of the model to use. Refer to the [model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility)
        table for details on which models are compatible with the Chat API.
    stream (Literal[True]):
        If True, enables streaming of message deltas. Tokens are sent as server-sent events as they become available,
        terminating with a `data: [DONE]` message. See [Using server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
        for more on this format and [How to stream completions](https://cookbook.openai.com/examples/how_to_stream_completions)
    logit_bias (Optional[Dict[str, int]], default=NOT_GIVEN):
        Modifies the likelihood of specified tokens. Accepts a dict mapping token IDs to bias values (-100 to 100).
    logprobs (Optional[bool], default=NOT_GIVEN):
        Includes log probabilities of output tokens when True. Not available for `gpt-4-vision-preview`.
    max_tokens (Optional[int], default=NOT_GIVEN):
        Sets the maximum token count for the chat completion. See [How to count tokens with tiktoken](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
        for token counting examples.
    n (Optional[int], default=NOT_GIVEN):
        Number of chat completion choices to generate for each message. Affects cost.
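The streaming behavior documented for `stream=True` can be illustrated without a network call. In a real run, each chunk's text would come from `chunk.choices[0].delta.content` on the events yielded by `client.chat.completions.create(..., stream=True)`; in this sketch, plain strings (and a `None` standing in for the final content-free chunk) take the place of those deltas:

```python
from typing import Iterable, Optional

def accumulate_deltas(deltas: Iterable[Optional[str]]) -> str:
    """Concatenate streamed message deltas into the full response text."""
    parts = []
    for delta in deltas:
        if delta is not None:  # the terminal chunk carries no content
            parts.append(delta)
    return "".join(parts)

# Stand-in for a server-sent-event stream of token deltas.
simulated_stream = ["The capital", " of France", " is Paris.", None]
print(accumulate_deltas(simulated_stream))  # → The capital of France is Paris.
```

With a real stream, the same loop shape applies: iterate the response object, skip chunks whose delta has no content, and append the rest.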