 Upload a file that can be used across various endpoints. The size of all the files uploaded by one organization can be up to 100 GB.

-The size of individual files can be a maximum of 512 MB. See the [Assistants Tools guide](/docs/assistants/tools) to learn more about the types of files supported. The Fine-tuning API only supports `.jsonl` files.
+The size of individual files can be a maximum of 512 MB or 2 million tokens for Assistants. See the [Assistants Tools guide](/docs/assistants/tools) to learn more about the types of files supported. The Fine-tuning API only supports `.jsonl` files.

 Please [contact us](https://help.openai.com/) if you need to increase these storage limits.

 requestBody:
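The limits changed above (512 MB per file, `.jsonl` only for fine-tuning) can be mirrored by a client-side pre-check before uploading. A minimal sketch — the function name and error strings are illustrative, not part of the API:

```python
# Hypothetical client-side pre-check mirroring the documented upload limits.
MAX_FILE_BYTES = 512 * 1024 * 1024  # 512 MB per-file limit from the spec

def check_upload(filename: str, size_bytes: int, purpose: str) -> list:
    """Return a list of problems; an empty list means the upload should pass."""
    problems = []
    if size_bytes > MAX_FILE_BYTES:
        problems.append("file exceeds the 512 MB per-file limit")
    if purpose == "fine-tune" and not filename.endswith(".jsonl"):
        problems.append("the Fine-tuning API only supports .jsonl files")
    return problems
```

Note this does not cover the 100 GB per-organization total or the Assistants token limit, both of which only the server can enforce.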
@@ -5453,7 +5456,7 @@ components:
           default: null
           nullable: true
           description: &completions_logprobs_description |
-            Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response.
+            Include the log probabilities on the `logprobs` most likely output tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response.

-            The maximum number of [tokens](/tokenizer) to generate in the completion.
+            The maximum number of [tokens](/tokenizer) that can be generated in the completion.

            The token count of your prompt plus `max_tokens` cannot exceed the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
        n:
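The constraint above — prompt tokens plus `max_tokens` must fit in the model's context length — can be captured in a small helper. A sketch under the assumption that the prompt's token count has already been measured (e.g. with `tiktoken`, as the linked cookbook example shows):

```python
def max_completion_budget(prompt_tokens: int, context_length: int,
                          requested_max_tokens: int) -> int:
    """Largest usable max_tokens given the documented constraint:
    prompt token count + max_tokens cannot exceed the context length."""
    remaining = max(context_length - prompt_tokens, 0)
    return min(requested_max_tokens, remaining)
```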
@@ -5823,6 +5826,7 @@ components:
          enum: ["function"]
          description: The role of the messages author, in this case `function`.
        content:
+          nullable: true
          type: string
          description: The contents of the function message.
        name:
@@ -5835,7 +5839,7 @@ components:

    FunctionParameters:
      type: object
-      description: "The parameters the functions accepts, described as a JSON Schema object. See the [guide](/docs/guides/text-generation/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.\n\nOmitting `parameters` defines a function with an empty parameter list."
+      description: "The parameters the functions accepts, described as a JSON Schema object. See the [guide](/docs/guides/text-generation/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.\n\nOmitting `parameters` defines a function with an empty parameter list."
      additionalProperties: true

    ChatCompletionFunctions:
@@ -6109,9 +6113,20 @@ components:
            Modify the likelihood of specified tokens appearing in the completion.

            Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
+        logprobs:
+          description: Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. This option is currently not available on the `gpt-4-vision-preview` model.
+          type: boolean
+          default: false
+          nullable: true
+        top_logprobs:
+          description: An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used.
+          type: integer
+          minimum: 0
+          maximum: 5
+          nullable: true
        max_tokens:
          description: |
-            The maximum number of [tokens](/tokenizer) to generate in the chat completion.
+            The maximum number of [tokens](/tokenizer) that can be generated in the chat completion.

            The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
          type: integer
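The `logprobs`/`top_logprobs` fields added above carry two documented constraints: `top_logprobs` requires `logprobs: true`, and must lie in 0–5. A sketch of a request-body builder enforcing them client-side — the function name and the chosen model string are illustrative:

```python
def chat_payload_with_logprobs(messages, logprobs=False, top_logprobs=None):
    """Build a Chat Completions request body using the new logprob fields,
    enforcing the constraints documented in the schema."""
    if top_logprobs is not None:
        if not logprobs:
            raise ValueError("`logprobs` must be set to true when `top_logprobs` is used")
        if not 0 <= top_logprobs <= 5:
            raise ValueError("`top_logprobs` must be an integer between 0 and 5")
    body = {"model": "gpt-3.5-turbo", "messages": messages, "logprobs": logprobs}
    if top_logprobs is not None:
        body["top_logprobs"] = top_logprobs
    return body
```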
@@ -6134,7 +6149,7 @@ components:
        response_format:
          type: object
          description: |
-            An object specifying the format that the model must output.
+            An object specifying the format that the model must output. Compatible with `gpt-4-1106-preview` and `gpt-3.5-turbo-1106`.

            Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
@@ -6212,7 +6227,7 @@ components:
            `auto` means the model can pick between generating a message or calling a function.
            Specifying a particular function via `{"name": "my_function"}` forces the model to call that function.

-            `none` is the default when no functions are present. `auto`` is the default if functions are present.
+            `none` is the default when no functions are present. `auto` is the default if functions are present.
          oneOf:
            - type: string
              description: >
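The default rule fixed in the hunk above — `none` without functions, `auto` with them — is easy to express as a tiny resolver. A sketch; the helper name is illustrative:

```python
def effective_function_call(functions, function_call=None):
    """Resolve the documented default for `function_call`:
    `none` when no functions are present, `auto` when they are.
    An explicit value always wins."""
    if function_call is not None:
        return function_call
    return "auto" if functions else "none"
```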
@@ -6253,6 +6268,7 @@ components:
            - finish_reason
            - index
            - message
+            - logprobs
          properties:
            finish_reason:
              type: string
@@ -6274,6 +6290,50 @@ components:
          description: The index of the choice in the list of choices.
+              bytes:
+                description: A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be `null` if there is no bytes representation for the token.
+                type: array
+                items:
+                  type: integer
+                nullable: true
+              top_logprobs:
+                description: List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested `top_logprobs` returned.
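The `bytes` field above exists precisely because one character can span several tokens: the per-token byte lists must be concatenated before decoding, or multi-byte characters come out garbled. A sketch of that combination step — the function name is illustrative:

```python
def tokens_to_text(byte_lists):
    """Concatenate per-token UTF-8 byte lists (as returned under `logprobs`)
    and decode them together, so characters split across tokens decode correctly.
    Entries that are None (no bytes representation) are skipped."""
    raw = bytes(b for bl in byte_lists if bl is not None for b in bl)
    return raw.decode("utf-8")
```

For example, the euro sign `€` is UTF-8 bytes `[226, 130, 172]`; if a model splits it across two tokens, decoding each token's bytes separately would fail, while decoding the concatenation recovers the character.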