<!-- ---8<--- [start:get-started] -->

The OpenAI Python library provides access to the OpenAI REST API in Python applications. It includes type definitions for all request params and response fields and has clients for both synchronous and asynchronous operations powered by [httpx](https://github.com/encode/httpx).

The OpenAI Python library is generated from OpenAI's [OpenAPI specification](https://github.com/openai/openai-openapi) with [Stainless](https://stainlessapi.com/).
To use the library, you need:

- [Python](https://www.python.org/) 3.7+
- An [OpenAI API key](https://platform.openai.com/account/api-keys)

## Install the package
You can install the [openai](https://pypi.org/project/openai/) package from PyPI with `pip`:
```sh
# Install the package
pip install openai
```
## Migrate from earlier versions
Released on November 6th, 2023, the OpenAI Python library was rewritten for version `1.0.0`.

If your project used a pre-v1 version of the library, see the [v1 migration guide](https://github.com/openai/openai-python/discussions/742) for information and scripts that can help you update your code.
<!-- ---8<--- [end:get-started] -->
## Connect
<!-- ---8<--- [start:connect] -->
To connect to the OpenAI API:
1. Populate an `OPENAI_API_KEY` environment variable with your [OpenAI API key](https://platform.openai.com/account/api-keys).
2. Create a synchronous or asynchronous `OpenAI` client object.

!!! Tip

    To reduce the risk of committing your OpenAI API key to source control, you can use [python-dotenv](https://pypi.org/project/python-dotenv/) and add `OPENAI_API_KEY="YOUR_API_KEY_HERE"` to your `.env` file.

## Synchronous client
Create an instance of the [OpenAI][src.openai.OpenAI] client:
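A minimal sketch follows that creates the client and streams a chat completion; it assumes `OPENAI_API_KEY` is set in your environment, and the model name and prompt are placeholders.

```python
import os

from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment by default;
# passing it explicitly here is only for clarity.
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

stream = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": "Say this is a test"}],
    stream=True,  # (1)!
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```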
1. :material-chat: This enables response streaming through Server-Sent Events (SSE).
## Asynchronous client
Create an instance of the [AsyncOpenAI][src.openai.AsyncOpenAI] client and `await` each API call. Functionality between the synchronous and asynchronous clients is otherwise identical.
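Here's a comparable minimal sketch for the asynchronous client, with the same assumptions (environment-provided API key, placeholder model name and prompt).

```python
import asyncio
import os

from openai import AsyncOpenAI

client = AsyncOpenAI(api_key=os.environ.get("OPENAI_API_KEY"))


async def main() -> None:
    stream = await client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": "Say this is a test"}],
        stream=True,  # (1)!
    )
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")


asyncio.run(main())
```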
1. :material-chat: This enables response streaming through Server-Sent Events (SSE).
## Module-level global client
Similar to pre-v1 versions of the library, there is also a module-level client available for use in REPLs, notebooks, and other scenarios requiring quick "local loop" iteration.
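As a rough sketch of what that looks like (the module-level `api_key` assignment is optional and defaults to the `OPENAI_API_KEY` environment variable; the model name and prompt are placeholders):

```python
import openai

# Optional; defaults to the OPENAI_API_KEY environment variable.
openai.api_key = "YOUR_API_KEY_HERE"

completion = openai.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": "Say this is a test"}],
)
print(completion.choices[0].message.content)
```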
We recommend you _avoid_ using this module-level client in your application code because:
- It can be difficult to reason about where client options are configured.
- It's impossible to change certain client options without causing the potential for race conditions.
- It's harder to mock for testing purposes.
- It's impossible to control cleanup of network connections.
<!-- ---8<--- [end:connect] -->
## Request types
<!-- ---8<--- [start:request-response] -->
Nested **request** parameters are Python [TypedDicts][typing.TypedDict].

For example, the user message in the following [`chat.completions.create()`][src.openai.resources.chat.completions.Completions.create] request is a [`ChatCompletionUserMessageParam`][src.openai.types.chat.chat_completion_user_message_param.ChatCompletionUserMessageParam], which has a base type of [`TypedDict`][typing.TypedDict]:
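A minimal sketch of such a request follows; the model name and message content are placeholders.

```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    messages=[
        # This user message is a ChatCompletionUserMessageParam (a TypedDict),
        # so type checkers can validate its keys and value types.
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-4",  # placeholder model name
)
```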
Typed requests and responses enable type checking, autocompletion, and hover-help documentation in editors that support those features. In Visual Studio Code, for example, you can [enable type checking in Pylance](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance) by setting `python.analysis.typeCheckingMode` to `basic` as described in that article's **Settings and Customization** section.
<!-- ---8<--- [end:request-response] -->
## Handling errors
<!-- ---8<--- [start:handle-errors] -->
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of [`openai.APIConnectionError`][src.openai.APIConnectionError] is raised.

When the API returns a non-success status code (that is, a 4xx or 5xx response), a subclass of `openai.APIStatusError` is raised, containing `status_code` and `response` properties.
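A minimal sketch of catching these exception types around a request follows; it also catches `openai.RateLimitError`, the subclass raised for 429 responses, and the model name and prompt are placeholders.

```python
import openai
from openai import OpenAI

client = OpenAI()

try:
    completion = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": "Say this is a test"}],
    )
except openai.APIConnectionError as e:
    print("The server could not be reached.")
    print(e.__cause__)  # the underlying exception, likely raised within httpx
except openai.RateLimitError:
    print("A 429 status code was received; back off and retry later.")
except openai.APIStatusError as e:
    print("A non-success status code was received.")
    print(e.status_code)
    print(e.response)
```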
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
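For example, here's a minimal sketch that sets a client-wide default and overrides it per request with `with_options()`; the model name and prompt are placeholders.

```python
from openai import OpenAI

# Configure the default for all requests (the library default is 2 retries).
client = OpenAI(max_retries=0)

# Or override the retry count for a single request.
client.with_options(max_retries=5).chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": "Say this is a test"}],
)
```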
The pages in this _Get started_ section are adapted from the sections in the [README.md](https://github.com/openai/openai-python/blob/main/README.md) in the upstream repository.
That said, I use these docs myself and thus intend to keep them (mostly) current.
1. That means you might encounter inaccuracies or you might not find what you think should be here. In either case, you should refer to [openai/openai-python](https://github.com/openai/openai-python) as the source of truth.

!!! quote

    If these docs help you, yay! If they don't, don't use 'em. Enjoy! *—[Marsh](https://github.com/mmacy)*