feat(client): add support for streaming raw responses (openai#1072)
As an alternative to `with_raw_response`, we now also provide
`with_streaming_response`. When using these methods, you must use a
context manager to ensure that the response is always cleaned up.
README.md (+35 −2)
````diff
@@ -414,7 +414,7 @@ if response.my_field is None:
 
 ### Accessing raw response data (e.g. headers)
 
-The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call.
+The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
 
 ```py
 from openai import OpenAI
@@ -433,7 +433,40 @@ completion = response.parse()  # get the object that `chat.completions.create()`
 print(completion)
 ```
 
-These methods return an [`APIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object.
+These methods return a [`LegacyAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_legacy_response.py) object. This is a legacy class, as we're changing it slightly in the next major version.
+
+For the sync client this will mostly be the same, with the exception that `content` & `text` will be methods instead of properties. In the async client, all methods will be async.
+
+A migration script will be provided & the migration in general should be smooth.
+
+#### `.with_streaming_response`
+
+The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
+
+To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
+
+As such, `.with_streaming_response` methods return a different [`APIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object, and the async client returns an [`AsyncAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object.
+
+```python
+with client.chat.completions.with_streaming_response.create(
+    messages=[
+        {
+            "role": "user",
+            "content": "Say this is a test",
+        }
+    ],
+    model="gpt-3.5-turbo",
+) as response:
+    print(response.headers.get("X-My-Header"))
+
+    for line in response.iter_lines():
+        print(line)
+```
+
+The context manager is required so that the response will reliably be closed.
````
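The eager-versus-lazy distinction the diff describes can be illustrated with a small stand-in class. This is purely a sketch of the pattern, assuming nothing about openai-python's internals: `FakeStreamingResponse` is a hypothetical name, and the real library's response objects behave differently in detail. The point is that the body is only consumed when an iteration method is called, and the context manager guarantees cleanup.

```python
# Toy illustration of the lazy-read, context-managed pattern described above.
# FakeStreamingResponse is a hypothetical stand-in, NOT part of openai-python.

class FakeStreamingResponse:
    def __init__(self, chunks):
        self._chunks = chunks
        self.body_read = False
        self.closed = False

    # The context manager ensures the connection is released even on error.
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

    def close(self):
        self.closed = True

    def iter_lines(self):
        # The body is only consumed once iteration begins (lazy read).
        self.body_read = True
        yield from self._chunks


resp = FakeStreamingResponse(["line 1", "line 2"])
with resp:
    assert not resp.body_read        # nothing has been read yet
    lines = list(resp.iter_lines())  # body consumed on demand
assert resp.closed                   # closed by the context manager
print(lines)  # → ['line 1', 'line 2']
```

This also shows why the context manager is mandatory: without `with`, a caller who never iterates would leave the response (and, in the real client, its connection) open.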
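The diff notes that in the async client these methods are async. A minimal async analogue of the same pattern, again with a hypothetical stand-in class rather than the library's actual implementation, looks like this:

```python
# Async analogue of the lazy-read pattern; FakeAsyncStreamingResponse is
# a hypothetical stand-in, NOT part of openai-python.
import asyncio


class FakeAsyncStreamingResponse:
    def __init__(self, chunks):
        self._chunks = chunks
        self.closed = False

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        self.closed = True

    async def iter_lines(self):
        # An async generator: each line is produced on demand.
        for chunk in self._chunks:
            yield chunk


async def main():
    collected = []
    async with FakeAsyncStreamingResponse(["a", "b"]) as resp:
        async for line in resp.iter_lines():
            collected.append(line)
    return collected, resp.closed


lines, closed = asyncio.run(main())
print(lines, closed)  # → ['a', 'b'] True
```

The structure mirrors the sync sketch exactly, swapping `with` for `async with` and `for` for `async for`.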