Module: x/openai/resources/beta/threads/mod.ts > AssistantResponseFormatOption

Deno build of the official TypeScript library for the OpenAI API.
type alias AssistantResponseFormatOption
import { type AssistantResponseFormatOption } from "https://deno.land/x/openai@v4.61.1/resources/beta/threads/mod.ts";

Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.

Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.

Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON.

Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
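
For illustration only (the thread and assistant IDs below are placeholders), JSON mode can be enabled on a run, together with the explicit "produce JSON" instruction the note above calls for:

import OpenAI from "https://deno.land/x/openai@v4.61.1/mod.ts";

const openai = new OpenAI();

// Placeholder IDs for an existing thread and assistant.
const run = await openai.beta.threads.runs.create("thread_abc123", {
  assistant_id: "asst_abc123",
  // JSON mode constrains the model to emit valid JSON...
  response_format: { type: "json_object" },
  // ...but the model still has to be told to produce JSON, per the note above.
  instructions: "Reply with a single JSON object describing the requested data.",
});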

definition:
| "auto"
| Shared.ResponseFormatText
| Shared.ResponseFormatJSONObject
| Shared.ResponseFormatJSONSchema
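
For reference, a small sketch of values that satisfy this union (using the import shown above):

import { type AssistantResponseFormatOption } from "https://deno.land/x/openai@v4.61.1/resources/beta/threads/mod.ts";

// Each literal below is one member of the union.
const defaultFormat: AssistantResponseFormatOption = "auto";
const plainText: AssistantResponseFormatOption = { type: "text" };
const jsonMode: AssistantResponseFormatOption = { type: "json_object" };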