import { type ChatCompletionCreateParamsBase } from "https://deno.land/x/openai@v4.61.1/resources/chat/completions.ts";
Properties
messages: A list of messages comprising the conversation so far.
model: ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.
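As a rough sketch, a minimal request using the module imported above might look like the following; the model ID, prompt, and reliance on the OPENAI_API_KEY environment variable are illustrative assumptions:

import OpenAI from "https://deno.land/x/openai@v4.61.1/mod.ts";

// Reads the API key from the OPENAI_API_KEY environment variable.
const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini", // illustrative model ID
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Say hello." },
  ],
});

console.log(completion.choices[0].message.content);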
frequency_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. See more information about frequency and presence penalties.
function_call: Deprecated in favor of tool_choice.
Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function.
none is the default when no functions are present. auto is the default if functions are present.
functions: Deprecated in favor of tools.
A list of functions the model may generate JSON inputs for.
logit_bias: Modify the likelihood of specified tokens appearing in the completion.
Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase the likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
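For illustration only, a request that bans a single token by ID; the token ID 1734 is a made-up placeholder, since real IDs depend on the model's tokenizer:

import OpenAI from "https://deno.land/x/openai@v4.61.1/mod.ts";
const openai = new OpenAI();

const biased = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Pick a word." }],
  // -100 effectively bans the token with ID 1734 (placeholder); +100 would force it.
  logit_bias: { "1734": -100 },
});

console.log(biased.choices[0].message.content);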
logprobs: Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message.
max_completion_tokens: An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
max_tokens: The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API.
This value is now deprecated in favor of max_completion_tokens, and is not compatible with o1 series models.
n: How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.
parallel_tool_calls: Whether to enable parallel function calling during tool use.
presence_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. See more information about frequency and presence penalties.
response_format: An object specifying the format that the model must output. Compatible with GPT-4o, GPT-4o mini, GPT-4 Turbo, and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106.
Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.
Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON.
Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
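A hedged sketch of Structured Outputs via json_schema; the schema, its name, the prompt, and the model ID are assumptions for illustration:

import OpenAI from "https://deno.land/x/openai@v4.61.1/mod.ts";
const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini", // illustrative; must be a model that supports Structured Outputs
  messages: [{ role: "user", content: "Extract the person: 'Ada is 36 years old.'" }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "person", // hypothetical schema name
      strict: true,
      schema: {
        type: "object",
        properties: {
          name: { type: "string" },
          age: { type: "integer" },
        },
        required: ["name", "age"],
        additionalProperties: false,
      },
    },
  },
});

// The message content is a JSON string matching the schema (may be null if the model refuses).
const person = JSON.parse(completion.choices[0].message.content ?? "{}");
console.log(person);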
seed: This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.
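A small sketch pairing seed with the system_fingerprint response field, as described above; the seed value and prompt are arbitrary:

import OpenAI from "https://deno.land/x/openai@v4.61.1/mod.ts";
const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Name three fruits." }],
  seed: 12345, // arbitrary; repeat the same seed and parameters to aim for repeatable sampling
});

// Track system_fingerprint across requests; a change suggests a backend change that may alter output.
console.log(completion.system_fingerprint, completion.choices[0].message.content);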
service_tier: Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service:
- If set to 'auto', and the Project is Scale tier enabled, the system will utilize scale tier credits until they are exhausted.
- If set to 'auto', and the Project is not Scale tier enabled, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
- If set to 'default', the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
- When not set, the default behavior is 'auto'.
When this parameter is set, the response body will include the service_tier utilized.
stop: Up to 4 sequences where the API will stop generating further tokens.
stream: If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
stream_options: Options for streaming response. Only set this when you set stream: true.
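A sketch of a streaming request; printing deltas to stdout and enabling include_usage are illustrative choices, not requirements:

import OpenAI from "https://deno.land/x/openai@v4.61.1/mod.ts";
const openai = new OpenAI();

const stream = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Write a haiku about Deno." }],
  stream: true,
  stream_options: { include_usage: true }, // final chunk then carries token usage
});

const encoder = new TextEncoder();
for await (const chunk of stream) {
  // With include_usage, the last chunk has an empty choices array, hence the optional chaining.
  const delta = chunk.choices[0]?.delta?.content ?? "";
  await Deno.stdout.write(encoder.encode(delta));
  if (chunk.usage) console.log("\n", chunk.usage);
}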
temperature: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
tool_choice: Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
none is the default when no tools are present. auto is the default if tools are present.
tools: A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.
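A sketch of declaring one function tool and reading any tool calls back; the get_weather tool, its parameters, and the prompt are hypothetical:

import OpenAI from "https://deno.land/x/openai@v4.61.1/mod.ts";
const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather", // hypothetical function name
        description: "Get the current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
  ],
  tool_choice: "auto", // let the model decide; "none", "required", or a named tool also work
});

// If the model chose to call a tool, the arguments arrive as a JSON string.
for (const call of completion.choices[0].message.tool_calls ?? []) {
  console.log(call.function.name, call.function.arguments);
}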
top_logprobs: An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
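A sketch of requesting token-level log probabilities; the prompt, the cap on output tokens, and the choice of 3 alternatives per position are arbitrary:

import OpenAI from "https://deno.land/x/openai@v4.61.1/mod.ts";
const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "The capital of France is" }],
  logprobs: true, // required for top_logprobs
  top_logprobs: 3, // up to 20
  max_completion_tokens: 5,
});

// Each output token carries its log probability plus the 3 most likely alternatives.
for (const token of completion.choices[0].logprobs?.content ?? []) {
  console.log(token.token, token.logprob, token.top_logprobs.map((t) => t.token));
}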
top_p: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
user: A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.