Module

x/openai/resources/beta/assistants.ts > AssistantCreateParams

Deno build of the official TypeScript library for the OpenAI API.
namespace AssistantCreateParams
import { AssistantCreateParams } from "https://deno.land/x/openai@v4.38.5/resources/beta/assistants.ts";

Interfaces

ToolResources
A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.

interface AssistantCreateParams
import { type AssistantCreateParams } from "https://deno.land/x/openai@v4.38.5/resources/beta/assistants.ts";
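
A minimal usage sketch, assuming the Deno build's root module (mod.ts) exposes the OpenAI client as its default export and that an OPENAI_API_KEY environment variable is set; the name, description, and instructions values are placeholders.

import OpenAI from "https://deno.land/x/openai@v4.38.5/mod.ts";

const client = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY") });

// Create an assistant by passing an AssistantCreateParams object.
const assistant = await client.beta.assistants.create({
  model: "gpt-4-turbo",
  name: "Data Helper",                                   // optional, max 256 characters
  description: "Answers questions about tabular data.",  // optional, max 512 characters
  instructions: "You are a concise data analyst.",       // optional, max 256,000 characters
  temperature: 0.2,
});

console.log(assistant.id);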

Properties

model:
| (string & { })
| "gpt-4-turbo"
| "gpt-4-turbo-2024-04-09"
| "gpt-4-0125-preview"
| "gpt-4-turbo-preview"
| "gpt-4-1106-preview"
| "gpt-4-vision-preview"
| "gpt-4"
| "gpt-4-0314"
| "gpt-4-0613"
| "gpt-4-32k"
| "gpt-4-32k-0314"
| "gpt-4-32k-0613"
| "gpt-3.5-turbo"
| "gpt-3.5-turbo-16k"
| "gpt-3.5-turbo-0613"
| "gpt-3.5-turbo-1106"
| "gpt-3.5-turbo-0125"
| "gpt-3.5-turbo-16k-0613"

ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.

optional
description: string | null

The description of the assistant. The maximum length is 512 characters.

optional
instructions: string | null

The system instructions that the assistant uses. The maximum length is 256,000 characters.

optional
metadata: unknown | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.

optional
name: string | null

The name of the assistant. The maximum length is 256 characters.

optional
response_format: ThreadsAPI.AssistantResponseFormatOption | null

Specifies the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.

Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model generates is valid JSON.

Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
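
A hedged sketch of enabling JSON mode, reusing the client from the example above; note that the instructions explicitly tell the model to produce JSON, as the note above requires.

const jsonAssistant = await client.beta.assistants.create({
  model: "gpt-4-turbo",
  // JSON mode: the model must also be told to produce JSON via a system or user message.
  instructions: "Extract the requested fields and respond only with a single JSON object.",
  response_format: { type: "json_object" },
});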

optional
temperature: number | null

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

optional
tool_resources: AssistantCreateParams.ToolResources | null

A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.

optional
tools: Array<AssistantTool>

A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, file_search, or function.
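
A sketch combining tools and tool_resources, again reusing the client from the first example; the file ID and vector store ID shown are placeholders.

const toolAssistant = await client.beta.assistants.create({
  model: "gpt-4-turbo",
  tools: [{ type: "code_interpreter" }, { type: "file_search" }],
  tool_resources: {
    // code_interpreter takes file IDs; file_search takes vector store IDs.
    code_interpreter: { file_ids: ["file-abc123"] },
    file_search: { vector_store_ids: ["vs_abc123"] },
  },
});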

optional
top_p: number | null

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.