Commit 93916ec

chore(internal): temporarily remove some code for migration (#1429)
1 parent 027e11a commit 93916ec


47 files changed: +578 −5630 lines

README.md
+16 −148
@@ -6,17 +6,12 @@ The OpenAI Python library provides convenient access to the OpenAI REST API from
 application. The library includes type definitions for all request params and response fields,
 and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
 
-It is generated from our [OpenAPI specification](https://github.com/openai/openai-openapi) with [Stainless](https://stainlessapi.com/).
-
 ## Documentation
 
 The REST API documentation can be found [on platform.openai.com](https://platform.openai.com/docs). The full API of this library can be found in [api.md](api.md).
 
 ## Installation
 
-> [!IMPORTANT]
-> The SDK was rewritten in v1, which was released November 6th 2023. See the [v1 migration guide](https://github.com/openai/openai-python/discussions/742), which includes scripts to automatically update your code.
-
 ```sh
 # install from PyPI
 pip install openai
@@ -51,56 +46,6 @@ we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
 to add `OPENAI_API_KEY="My API Key"` to your `.env` file
 so that your API Key is not stored in source control.
 
-### Polling Helpers
-
-When interacting with the API, some actions, such as starting a Run and adding files to vector stores, are asynchronous and take time to complete. The SDK includes
-helper functions which will poll the status until it reaches a terminal state and then return the resulting object.
-If an API method results in an action which could benefit from polling, there will be a corresponding version of the
-method ending in '\_and_poll'.
-
-For instance, to create a Run and poll until it reaches a terminal state you can run:
-
-```python
-run = client.beta.threads.runs.create_and_poll(
-    thread_id=thread.id,
-    assistant_id=assistant.id,
-)
-```
-
-More information on the lifecycle of a Run can be found in the [Run Lifecycle Documentation](https://platform.openai.com/docs/assistants/how-it-works/run-lifecycle).
-
-### Bulk Upload Helpers
-
-When creating and interacting with vector stores, you can use the polling helpers to monitor the status of operations.
-For convenience, we also provide a bulk upload helper to allow you to simultaneously upload several files at once.
-
-```python
-sample_files = [Path("sample-paper.pdf"), ...]
-
-batch = await client.vector_stores.file_batches.upload_and_poll(
-    store.id,
-    files=sample_files,
-)
-```
-
-### Streaming Helpers
-
-The SDK also includes helpers to process streams and handle the incoming events.
-
-```python
-with client.beta.threads.runs.stream(
-    thread_id=thread.id,
-    assistant_id=assistant.id,
-    instructions="Please address the user as Jane Doe. The user has a premium account.",
-) as stream:
-    for event in stream:
-        # Print the text from text delta events
-        if event.type == "thread.message.delta" and event.data.delta.content:
-            print(event.data.delta.content[0].text)
-```
-
-More information on streaming helpers can be found in the dedicated documentation: [helpers.md](helpers.md)
-
 ## Async usage
 
 Simply import `AsyncOpenAI` instead of `OpenAI` and use `await` with each API call:
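The `*_and_poll` helpers removed in the hunk above wrap a plain status loop. As a rough illustration of what such a helper does under the hood (the function name, retry parameters, and exact terminal-state set below are assumptions for the sketch, not the SDK's actual implementation):

```python
import time

# Assumed terminal Run states; check the Run lifecycle docs for the real set.
TERMINAL_STATES = {"completed", "failed", "cancelled", "expired"}

def poll_until_terminal(retrieve, interval=1.0, max_attempts=60):
    """Call `retrieve()` until the returned object's status is terminal."""
    for _ in range(max_attempts):
        obj = retrieve()
        if obj["status"] in TERMINAL_STATES:
            return obj
        time.sleep(interval)
    raise TimeoutError("object did not reach a terminal state")

# Stub standing in for a real retrieve call such as
# client.beta.threads.runs.retrieve(run.id, thread_id=...).
statuses = iter([{"status": "queued"}, {"status": "in_progress"}, {"status": "completed"}])
result = poll_until_terminal(lambda: next(statuses), interval=0.0)
print(result["status"])  # completed
```

In real use, `retrieve` would be a closure over the client and resource IDs, and `interval` would typically be a second or more to avoid hammering the API.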
@@ -143,12 +88,17 @@ from openai import OpenAI
 client = OpenAI()
 
 stream = client.chat.completions.create(
-    model="gpt-4",
-    messages=[{"role": "user", "content": "Say this is a test"}],
+    messages=[
+        {
+            "role": "user",
+            "content": "Say this is a test",
+        }
+    ],
+    model="gpt-3.5-turbo",
     stream=True,
 )
-for chunk in stream:
-    print(chunk.choices[0].delta.content or "", end="")
+for chat_completion in stream:
+    print(chat_completion)
 ```
 
 The async client uses the exact same interface.
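The hunk above swaps per-token delta printing for printing whole chunk objects. The removed pattern, reassembling text from `chunk.choices[0].delta.content`, can be sketched offline with stand-in objects (the classes below are mocks for illustration, not SDK types):

```python
# Mock objects mirroring the shape of streamed chat-completion chunks.
class Delta:
    def __init__(self, content):
        self.content = content

class Choice:
    def __init__(self, content):
        self.delta = Delta(content)

class Chunk:
    def __init__(self, content):
        self.choices = [Choice(content)]

# A None delta models chunks that carry no text (e.g. role-only chunks).
stream = [Chunk("This "), Chunk("is "), Chunk(None), Chunk("a test")]
text = "".join(chunk.choices[0].delta.content or "" for chunk in stream)
print(text)  # This is a test
```

The `or ""` guard is the important detail: some chunks carry no content, so concatenating deltas naively would raise on `None`.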
@@ -158,60 +108,20 @@ from openai import AsyncOpenAI
 
 client = AsyncOpenAI()
 
-
-async def main():
-    stream = await client.chat.completions.create(
-        model="gpt-4",
-        messages=[{"role": "user", "content": "Say this is a test"}],
-        stream=True,
-    )
-    async for chunk in stream:
-        print(chunk.choices[0].delta.content or "", end="")
-
-
-asyncio.run(main())
-```
-
-## Module-level client
-
-> [!IMPORTANT]
-> We highly recommend instantiating client instances instead of relying on the global client.
-
-We also expose a global client instance that is accessible in a similar fashion to versions prior to v1.
-
-```py
-import openai
-
-# optional; defaults to `os.environ['OPENAI_API_KEY']`
-openai.api_key = '...'
-
-# all client options can be configured just like the `OpenAI` instantiation counterpart
-openai.base_url = "https://..."
-openai.default_headers = {"x-foo": "true"}
-
-completion = openai.chat.completions.create(
-    model="gpt-4",
+stream = await client.chat.completions.create(
     messages=[
         {
             "role": "user",
-            "content": "How do I output all files in a directory using Python?",
-        },
+            "content": "Say this is a test",
+        }
     ],
+    model="gpt-3.5-turbo",
+    stream=True,
 )
-print(completion.choices[0].message.content)
+async for chat_completion in stream:
+    print(chat_completion)
 ```
 
-The API is the exact same as the standard client instance based API.
-
-This is intended to be used within REPLs or notebooks for faster iteration, **not** in application code.
-
-We recommend that you always instantiate a client (e.g., with `client = OpenAI()`) in application code because:
-
-- It can be difficult to reason about where client options are configured
-- It's not possible to change certain client options without potentially causing race conditions
-- It's harder to mock for testing purposes
-- It's not possible to control cleanup of network connections
-
 ## Using types
 
 Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
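The async snippet in the hunk above consumes a stream with `async for`. A minimal self-contained sketch of that consumption pattern, with a fake async generator standing in for the real API call (a real stream yields chunk objects, not plain strings), is:

```python
import asyncio

async def fake_stream():
    # Stand-in for `await client.chat.completions.create(..., stream=True)`.
    for piece in ["Say ", "this ", "is ", "a test"]:
        yield piece

async def main():
    parts = []
    async for piece in fake_stream():
        parts.append(piece)
    return "".join(parts)

print(asyncio.run(main()))  # Say this is a test
```

`async for` suspends at each iteration, so other tasks on the event loop can run between chunks, which is the point of using the async client for streaming.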
@@ -579,48 +489,6 @@ client = OpenAI(
 
 By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
 
-## Microsoft Azure OpenAI
-
-To use this library with [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview), use the `AzureOpenAI`
-class instead of the `OpenAI` class.
-
-> [!IMPORTANT]
-> The Azure API shape differs from the core API shape which means that the static types for responses / params
-> won't always be correct.
-
-```py
-from openai import AzureOpenAI
-
-# gets the API Key from environment variable AZURE_OPENAI_API_KEY
-client = AzureOpenAI(
-    # https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#rest-api-versioning
-    api_version="2023-07-01-preview",
-    # https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource
-    azure_endpoint="https://example-endpoint.openai.azure.com",
-)
-
-completion = client.chat.completions.create(
-    model="deployment-name",  # e.g. gpt-35-instant
-    messages=[
-        {
-            "role": "user",
-            "content": "How do I output all files in a directory using Python?",
-        },
-    ],
-)
-print(completion.to_json())
-```
-
-In addition to the options provided in the base `OpenAI` client, the following options are provided:
-
-- `azure_endpoint` (or the `AZURE_OPENAI_ENDPOINT` environment variable)
-- `azure_deployment`
-- `api_version` (or the `OPENAI_API_VERSION` environment variable)
-- `azure_ad_token` (or the `AZURE_OPENAI_AD_TOKEN` environment variable)
-- `azure_ad_token_provider`
-
-An example of using the client with Azure Active Directory can be found [here](https://github.com/openai/openai-python/blob/main/examples/azure_ad.py).
-
 ## Versioning
 
 This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
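The removed Azure section configured `AzureOpenAI` from three pieces: an endpoint, a deployment name, and an `api_version`. As a small sketch of how those pieces compose into a request URL (the helper function is hypothetical; the path shape follows the Azure OpenAI REST reference, but verify it against current Azure documentation):

```python
# Hypothetical helper: assembles an Azure OpenAI chat-completions URL
# from the endpoint, deployment name, and api-version query parameter.
def azure_chat_url(endpoint: str, deployment: str, api_version: str) -> str:
    return (
        f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )

print(azure_chat_url(
    "https://example-endpoint.openai.azure.com",
    "deployment-name",
    "2023-07-01-preview",
))
```

This also shows why Azure calls pass a deployment name where the core API passes a model name: the deployment is part of the URL path, not the request body.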

api.md
−16
@@ -85,7 +85,6 @@ Methods:
 - <code title="delete /files/{file_id}">client.files.<a href="./src/openai/resources/files.py">delete</a>(file_id) -> <a href="./src/openai/types/file_deleted.py">FileDeleted</a></code>
 - <code title="get /files/{file_id}/content">client.files.<a href="./src/openai/resources/files.py">content</a>(file_id) -> HttpxBinaryResponseContent</code>
 - <code title="get /files/{file_id}/content">client.files.<a href="./src/openai/resources/files.py">retrieve_content</a>(file_id) -> str</code>
-- <code>client.files.<a href="./src/openai/resources/files.py">wait_for_processing</a>(\*args) -> FileObject</code>
 
 # Images
 
@@ -227,10 +226,6 @@ Methods:
 - <code title="get /vector_stores/{vector_store_id}/files/{file_id}">client.beta.vector_stores.files.<a href="./src/openai/resources/beta/vector_stores/files.py">retrieve</a>(file_id, \*, vector_store_id) -> <a href="./src/openai/types/beta/vector_stores/vector_store_file.py">VectorStoreFile</a></code>
 - <code title="get /vector_stores/{vector_store_id}/files">client.beta.vector_stores.files.<a href="./src/openai/resources/beta/vector_stores/files.py">list</a>(vector_store_id, \*\*<a href="src/openai/types/beta/vector_stores/file_list_params.py">params</a>) -> <a href="./src/openai/types/beta/vector_stores/vector_store_file.py">SyncCursorPage[VectorStoreFile]</a></code>
 - <code title="delete /vector_stores/{vector_store_id}/files/{file_id}">client.beta.vector_stores.files.<a href="./src/openai/resources/beta/vector_stores/files.py">delete</a>(file_id, \*, vector_store_id) -> <a href="./src/openai/types/beta/vector_stores/vector_store_file_deleted.py">VectorStoreFileDeleted</a></code>
-- <code>client.beta.vector_stores.files.<a href="./src/openai/resources/beta/vector_stores/files.py">create_and_poll</a>(\*args) -> VectorStoreFile</code>
-- <code>client.beta.vector_stores.files.<a href="./src/openai/resources/beta/vector_stores/files.py">poll</a>(\*args) -> VectorStoreFile</code>
-- <code>client.beta.vector_stores.files.<a href="./src/openai/resources/beta/vector_stores/files.py">upload</a>(\*args) -> VectorStoreFile</code>
-- <code>client.beta.vector_stores.files.<a href="./src/openai/resources/beta/vector_stores/files.py">upload_and_poll</a>(\*args) -> VectorStoreFile</code>
 
 ### FileBatches
 
@@ -246,9 +241,6 @@ Methods:
 - <code title="get /vector_stores/{vector_store_id}/file_batches/{batch_id}">client.beta.vector_stores.file_batches.<a href="./src/openai/resources/beta/vector_stores/file_batches.py">retrieve</a>(batch_id, \*, vector_store_id) -> <a href="./src/openai/types/beta/vector_stores/vector_store_file_batch.py">VectorStoreFileBatch</a></code>
 - <code title="post /vector_stores/{vector_store_id}/file_batches/{batch_id}/cancel">client.beta.vector_stores.file_batches.<a href="./src/openai/resources/beta/vector_stores/file_batches.py">cancel</a>(batch_id, \*, vector_store_id) -> <a href="./src/openai/types/beta/vector_stores/vector_store_file_batch.py">VectorStoreFileBatch</a></code>
 - <code title="get /vector_stores/{vector_store_id}/file_batches/{batch_id}/files">client.beta.vector_stores.file_batches.<a href="./src/openai/resources/beta/vector_stores/file_batches.py">list_files</a>(batch_id, \*, vector_store_id, \*\*<a href="src/openai/types/beta/vector_stores/file_batch_list_files_params.py">params</a>) -> <a href="./src/openai/types/beta/vector_stores/vector_store_file.py">SyncCursorPage[VectorStoreFile]</a></code>
-- <code>client.beta.vector_stores.file_batches.<a href="./src/openai/resources/beta/vector_stores/file_batches.py">create_and_poll</a>(\*args) -> VectorStoreFileBatch</code>
-- <code>client.beta.vector_stores.file_batches.<a href="./src/openai/resources/beta/vector_stores/file_batches.py">poll</a>(\*args) -> VectorStoreFileBatch</code>
-- <code>client.beta.vector_stores.file_batches.<a href="./src/openai/resources/beta/vector_stores/file_batches.py">upload_and_poll</a>(\*args) -> VectorStoreFileBatch</code>
 
 ## Assistants
 
@@ -301,8 +293,6 @@ Methods:
 - <code title="post /threads/{thread_id}">client.beta.threads.<a href="./src/openai/resources/beta/threads/threads.py">update</a>(thread_id, \*\*<a href="src/openai/types/beta/thread_update_params.py">params</a>) -> <a href="./src/openai/types/beta/thread.py">Thread</a></code>
 - <code title="delete /threads/{thread_id}">client.beta.threads.<a href="./src/openai/resources/beta/threads/threads.py">delete</a>(thread_id) -> <a href="./src/openai/types/beta/thread_deleted.py">ThreadDeleted</a></code>
 - <code title="post /threads/runs">client.beta.threads.<a href="./src/openai/resources/beta/threads/threads.py">create_and_run</a>(\*\*<a href="src/openai/types/beta/thread_create_and_run_params.py">params</a>) -> <a href="./src/openai/types/beta/threads/run.py">Run</a></code>
-- <code>client.beta.threads.<a href="./src/openai/resources/beta/threads/threads.py">create_and_run_poll</a>(\*args) -> Run</code>
-- <code>client.beta.threads.<a href="./src/openai/resources/beta/threads/threads.py">create_and_run_stream</a>(\*args) -> AssistantStreamManager[AssistantEventHandler] | AssistantStreamManager[AssistantEventHandlerT]</code>
 
 ### Runs
 
@@ -320,12 +310,6 @@ Methods:
 - <code title="get /threads/{thread_id}/runs">client.beta.threads.runs.<a href="./src/openai/resources/beta/threads/runs/runs.py">list</a>(thread_id, \*\*<a href="src/openai/types/beta/threads/run_list_params.py">params</a>) -> <a href="./src/openai/types/beta/threads/run.py">SyncCursorPage[Run]</a></code>
 - <code title="post /threads/{thread_id}/runs/{run_id}/cancel">client.beta.threads.runs.<a href="./src/openai/resources/beta/threads/runs/runs.py">cancel</a>(run_id, \*, thread_id) -> <a href="./src/openai/types/beta/threads/run.py">Run</a></code>
 - <code title="post /threads/{thread_id}/runs/{run_id}/submit_tool_outputs">client.beta.threads.runs.<a href="./src/openai/resources/beta/threads/runs/runs.py">submit_tool_outputs</a>(run_id, \*, thread_id, \*\*<a href="src/openai/types/beta/threads/run_submit_tool_outputs_params.py">params</a>) -> <a href="./src/openai/types/beta/threads/run.py">Run</a></code>
-- <code>client.beta.threads.runs.<a href="./src/openai/resources/beta/threads/runs/runs.py">create_and_poll</a>(\*args) -> Run</code>
-- <code>client.beta.threads.runs.<a href="./src/openai/resources/beta/threads/runs/runs.py">create_and_stream</a>(\*args) -> AssistantStreamManager[AssistantEventHandler] | AssistantStreamManager[AssistantEventHandlerT]</code>
-- <code>client.beta.threads.runs.<a href="./src/openai/resources/beta/threads/runs/runs.py">poll</a>(\*args) -> Run</code>
-- <code>client.beta.threads.runs.<a href="./src/openai/resources/beta/threads/runs/runs.py">stream</a>(\*args) -> AssistantStreamManager[AssistantEventHandler] | AssistantStreamManager[AssistantEventHandlerT]</code>
-- <code>client.beta.threads.runs.<a href="./src/openai/resources/beta/threads/runs/runs.py">submit_tool_outputs_and_poll</a>(\*args) -> Run</code>
-- <code>client.beta.threads.runs.<a href="./src/openai/resources/beta/threads/runs/runs.py">submit_tool_outputs_stream</a>(\*args) -> AssistantStreamManager[AssistantEventHandler] | AssistantStreamManager[AssistantEventHandlerT]</code>
 
 #### Steps
 
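Among the api.md entries removed above are the bulk `upload` and `upload_and_poll` helpers. A rough stand-in for the "upload several files at once" part (the `upload` function below is a stub for illustration, not the SDK's real per-file upload call):

```python
from concurrent.futures import ThreadPoolExecutor

def upload(path):
    # Stub standing in for a real per-file upload; the removed
    # `upload_and_poll` helpers batched this and then polled for status.
    return f"uploaded:{path}"

paths = ["a.pdf", "b.pdf", "c.pdf"]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(upload, paths))  # map preserves input order
print(results)  # ['uploaded:a.pdf', 'uploaded:b.pdf', 'uploaded:c.pdf']
```

A thread pool suits this because file uploads are I/O-bound; the removed SDK helpers additionally polled the resulting batch until processing finished.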
examples/.keep
+4
@@ -0,0 +1,4 @@
+File generated from our OpenAPI spec by Stainless.
+
+This directory can be used to store example files demonstrating usage of this SDK.
+It is ignored by Stainless code generation and its content (other than this keep file) won't be touched.
