@@ -2,11 +2,9 @@
 
 [](https://npmjs.org/package/openai)
 
-This library provides convenient access to the OpenAI REST API from TypeScript or JavaScript.
+This library provides convenient access to the OpenAI REST API from server-side TypeScript or JavaScript.
 
-It is generated from our [OpenAPI specification](https://github.com/openai/openai-openapi) with [Stainless](https://stainlessapi.com/).
-
-To learn how to use the OpenAI API, check out our [API Reference](https://platform.openai.com/docs/api-reference) and [Documentation](https://platform.openai.com/docs).
+The REST API documentation can be found [on platform.openai.com](https://platform.openai.com/docs). The full API of this library can be found in [api.md](api.md).
 
 ## Installation
 
@@ -26,7 +24,7 @@
 
 ## Usage
 
-The full API of this library can be found in [api.md file](api.md) along with many [code examples](https://github.com/openai/openai-node/tree/master/examples). The code below shows how to get started using the chat completions API.
+The full API of this library can be found in [api.md](api.md).
 
 <!-- prettier-ignore -->
 ```js
@@ -55,18 +53,14 @@ import OpenAI from 'openai';
 
 const openai = new OpenAI();
 
-async function main() {
-  const stream = await openai.chat.completions.create({
-    model: 'gpt-4',
-    messages: [{ role: 'user', content: 'Say this is a test' }],
-    stream: true,
-  });
-  for await (const chunk of stream) {
-    process.stdout.write(chunk.choices[0]?.delta?.content || '');
-  }
+const stream = await openai.chat.completions.create({
+  messages: [{ role: 'user', content: 'Say this is a test' }],
+  model: 'gpt-3.5-turbo',
+  stream: true,
+});
+for await (const chatCompletionChunk of stream) {
+  console.log(chatCompletionChunk);
 }
-
-main();
 ```
 
 If you need to cancel a stream, you can `break` from the loop
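A minimal sketch of cancelling by breaking out of the loop (assuming the v4 client set up as in the hunk above; the five-chunk cutoff is arbitrary):

```ts
import OpenAI from 'openai';

const openai = new OpenAI();

const stream = await openai.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'gpt-3.5-turbo',
  stream: true,
});

let chunksSeen = 0;
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
  // Leaving the loop early cancels the stream: the async iterator is
  // closed, which should abort the underlying HTTP connection.
  if (++chunksSeen >= 5) break;
}
```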
@@ -97,196 +91,6 @@ main();
 
 Documentation for each method, request param, and response field is available in docstrings and will appear on hover in most modern editors.
 
-> [!IMPORTANT]
-> Previous versions of this SDK used a `Configuration` class. See the [v3 to v4 migration guide](https://github.com/openai/openai-node/discussions/217).
-
-### Polling Helpers
-
-When interacting with the API, some actions such as starting a Run and adding files to vector stores are asynchronous and take time to complete. The SDK includes
-helper functions which will poll the status until it reaches a terminal state and then return the resulting object.
-If an API method results in an action which could benefit from polling, there will be a corresponding version of the
-method ending in 'AndPoll'.
-
-For instance, to create a Run and poll until it reaches a terminal state, you can run:
-
-```ts
-const run = await openai.beta.threads.runs.createAndPoll(thread.id, {
-  assistant_id: assistantId,
-});
-```
-
-More information on the lifecycle of a Run can be found in the [Run Lifecycle Documentation](https://platform.openai.com/docs/assistants/how-it-works/run-lifecycle).
-
-### Bulk Upload Helpers
-
-When creating and interacting with vector stores, you can use the polling helpers to monitor the status of operations.
-For convenience, we also provide a bulk upload helper to allow you to upload several files at once.
-
-```ts
-const fileList = [
-  createReadStream('/home/data/example.pdf'),
-  ...
-];
-
-const batch = await openai.vectorStores.fileBatches.uploadAndPoll(vectorStore.id, fileList);
-```
-
-### Streaming Helpers
-
-The SDK also includes helpers to process streams and handle the incoming events.
-
-```ts
-const run = openai.beta.threads.runs
-  .stream(thread.id, {
-    assistant_id: assistant.id,
-  })
-  .on('textCreated', (text) => process.stdout.write('\nassistant > '))
-  .on('textDelta', (textDelta, snapshot) => process.stdout.write(textDelta.value))
-  .on('toolCallCreated', (toolCall) => process.stdout.write(`\nassistant > ${toolCall.type}\n\n`))
-  .on('toolCallDelta', (toolCallDelta, snapshot) => {
-    if (toolCallDelta.type === 'code_interpreter') {
-      if (toolCallDelta.code_interpreter.input) {
-        process.stdout.write(toolCallDelta.code_interpreter.input);
-      }
-      if (toolCallDelta.code_interpreter.outputs) {
-        process.stdout.write('\noutput >\n');
-        toolCallDelta.code_interpreter.outputs.forEach((output) => {
-          if (output.type === 'logs') {
-            process.stdout.write(`\n${output.logs}\n`);
-          }
-        });
-      }
-    }
-  });
-```
-
-More information on streaming helpers can be found in the dedicated documentation: [helpers.md](helpers.md)
-
-### Streaming responses
-
-This library provides several conveniences for streaming chat completions, for example:
-
-```ts
-import OpenAI from 'openai';
-
-const openai = new OpenAI();
-
-async function main() {
-  const stream = await openai.beta.chat.completions.stream({
-    model: 'gpt-4',
-    messages: [{ role: 'user', content: 'Say this is a test' }],
-    stream: true,
-  });
-
-  stream.on('content', (delta, snapshot) => {
-    process.stdout.write(delta);
-  });
-
-  // or, equivalently:
-  for await (const chunk of stream) {
-    process.stdout.write(chunk.choices[0]?.delta?.content || '');
-  }
-
-  const chatCompletion = await stream.finalChatCompletion();
-  console.log(chatCompletion); // {id: "…", choices: […], …}
-}
-
-main();
-```
-
-Streaming with `openai.beta.chat.completions.stream({…})` exposes
-[various helpers for your convenience](helpers.md#events), including event handlers and promises.
-
-Alternatively, you can use `openai.chat.completions.create({ stream: true, … })`,
-which only returns an async iterable of the chunks in the stream and thus uses less memory
-(it does not build up a final chat completion object for you).
-
-If you need to cancel a stream, you can `break` from a `for await` loop or call `stream.abort()`.
-
-### Automated function calls
-
-We provide the `openai.beta.chat.completions.runTools({…})`
-convenience helper for using function tool calls with the `/chat/completions` endpoint,
-which automatically calls the JavaScript functions you provide
-and sends their results back to the `/chat/completions` endpoint,
-looping as long as the model requests tool calls.
-
-If you pass a `parse` function, it will automatically parse the `arguments` for you
-and return any parsing errors to the model to attempt auto-recovery.
-Otherwise, the args will be passed to the function you provide as a string.
-
-If you pass `tool_choice: {function: {name: …}}` instead of `auto`,
-it returns immediately after calling that function (and only loops to auto-recover parsing errors).
-
-```ts
-import OpenAI from 'openai';
-
-const client = new OpenAI();
-
-async function main() {
-  const runner = client.beta.chat.completions
-    .runTools({
-      model: 'gpt-3.5-turbo',
-      messages: [{ role: 'user', content: 'How is the weather this week?' }],
-      tools: [
-        {
-          type: 'function',
-          function: {
-            function: getCurrentLocation,
-            parameters: { type: 'object', properties: {} },
-          },
-        },
-        {
-          type: 'function',
-          function: {
-            function: getWeather,
-            parse: JSON.parse, // or use a validation library like zod for typesafe parsing.
-            parameters: {
-              type: 'object',
-              properties: {
-                location: { type: 'string' },
-              },
-            },
-          },
-        },
-      ],
-    })
-    .on('message', (message) => console.log(message));
-
-  const finalContent = await runner.finalContent();
-  console.log();
-  console.log('Final content:', finalContent);
-}
-
-async function getCurrentLocation() {
-  return 'Boston'; // Simulate lookup
-}
-
-async function getWeather(args: { location: string }) {
-  const { location } = args;
-  // … do lookup …
-  return { temperature, precipitation };
-}
-
-main();
-
-// {role: "user", content: "How is the weather this week?"}
-// {role: "assistant", tool_calls: [{type: "function", function: {name: "getCurrentLocation", arguments: "{}"}, id: "123"}]}
-// {role: "tool", name: "getCurrentLocation", content: "Boston", tool_call_id: "123"}
-// {role: "assistant", tool_calls: [{type: "function", function: {name: "getWeather", arguments: '{"location": "Boston"}'}, id: "1234"}]}
-// {role: "tool", name: "getWeather", content: '{"temperature": "50degF", "precipitation": "high"}', tool_call_id: "1234"}
-// {role: "assistant", content: "It's looking cold and rainy - you might want to wear a jacket!"}
-//
-// Final content: "It's looking cold and rainy - you might want to wear a jacket!"
-```
-
-Like with `.stream()`, we provide a variety of [helpers and events](helpers.md#events).
-
-Note that `runFunctions` was previously available as well, but has been deprecated in favor of `runTools`.
-
-Read more about various examples such as integrating with [zod](helpers.md#integrate-with-zod),
-[next.js](helpers.md#integrate-with-next-js), and [proxying a stream to the browser](helpers.md#proxy-streaming-to-a-browser).
-
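For the `parse` hook shown above, a schema validator can stand in for bare `JSON.parse`. A minimal sketch assuming zod; the `WeatherArgs` name and schema are illustrative, not from this repo:

```ts
import { z } from 'zod';

// Illustrative schema matching the getWeather tool above.
const WeatherArgs = z.object({ location: z.string() });

// `parse` receives the model's raw arguments string; a thrown ZodError
// is returned to the model so it can retry with corrected arguments.
const parseWeatherArgs = (args: string) => WeatherArgs.parse(JSON.parse(args));
```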
 ## File uploads
 
 Request parameters that correspond to file uploads can be passed in many different forms:
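The list of forms itself is elided from this hunk; a sketch of two common ones, assuming the v4 `toFile` export and the Files API (file names are placeholders):

```ts
import fs from 'fs';
import OpenAI, { toFile } from 'openai';

const openai = new OpenAI();

// A Node.js fs.ReadStream can be passed directly:
await openai.files.create({
  file: fs.createReadStream('input.jsonl'),
  purpose: 'fine-tune',
});

// Raw bytes can be wrapped with the toFile helper:
await openai.files.create({
  file: await toFile(Buffer.from('my bytes'), 'input.jsonl'),
  purpose: 'fine-tune',
});
```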
@@ -361,26 +165,6 @@ Error codes are as follows:
 | >=500       | `InternalServerError`      |
 | N/A         | `APIConnectionError`       |
 
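A sketch of catching these typed errors (assuming `APIError` is exposed on the client class, as in v4; the request itself is arbitrary):

```ts
import OpenAI from 'openai';

const openai = new OpenAI();

try {
  await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-3.5-turbo',
  });
} catch (err) {
  if (err instanceof OpenAI.APIError) {
    // Subclass and status line up with the table above.
    console.log(err.status);  // e.g. 500
    console.log(err.name);    // e.g. 'InternalServerError'
    console.log(err.headers); // response headers, if any
  } else {
    throw err;
  }
}
```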
-## Microsoft Azure OpenAI
-
-To use this library with [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview), use the `AzureOpenAI`
-class instead of the `OpenAI` class.
-
-> [!IMPORTANT]
-> The Azure API shape differs from the core API shape, which means that the static types for responses / params
-> won't always be correct.
-
-```ts
-const openai = new AzureOpenAI();
-
-const result = await openai.chat.completions.create({
-  model: 'gpt-4-1106-preview',
-  messages: [{ role: 'user', content: 'Say hello!' }],
-});
-
-console.log(result.choices[0]!.message?.content);
-```
-
 ### Retries
 
 Certain errors will be automatically retried 2 times by default, with a short exponential backoff.
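The retry count is configurable both client-wide and per request via the `maxRetries` option, as in the v4 README; a short sketch:

```ts
import OpenAI from 'openai';

// Configure the default for all requests:
const openai = new OpenAI({
  maxRetries: 0, // default is 2
});

// Or, configure per-request:
await openai.chat.completions.create(
  { messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-3.5-turbo' },
  { maxRetries: 5 },
);
```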