Module

x/openai/resources/beta/threads/threads.ts>AssistantResponseFormatOption

Deno build of the official TypeScript library for the OpenAI API.
type alias AssistantResponseFormatOption
import { type AssistantResponseFormatOption } from "https://deno.land/x/openai@v4.38.5/resources/beta/threads/threads.ts";

Specifies the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.

Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model generates is valid JSON.

Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates that the generation exceeded max_tokens or that the conversation exceeded the maximum context length.

definition: "none" | "auto" | AssistantResponseFormat
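The definition above can be sketched with a local mirror of the union type. This is a minimal, self-contained illustration, not the library's source: the `AssistantResponseFormat` shape and the `chooseResponseFormat` helper below are assumptions for demonstration, standing in for the real exports at the import path shown earlier.

```typescript
// Local mirror of the library's union (assumption: the object variant
// carries a "type" discriminator of "text" or "json_object").
type AssistantResponseFormat = { type: "text" | "json_object" };

// The option itself is either a literal string or a format object.
type AssistantResponseFormatOption = "none" | "auto" | AssistantResponseFormat;

// Hypothetical helper: pick JSON mode when structured output is needed,
// otherwise let the model decide ("auto").
function chooseResponseFormat(wantJson: boolean): AssistantResponseFormatOption {
  return wantJson ? { type: "json_object" } : "auto";
}

console.log(JSON.stringify(chooseResponseFormat(true)));
```

Remember that selecting `{ "type": "json_object" }` alone is not enough: per the note above, a system or user message must also explicitly ask the model to produce JSON.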