Upstash QStash SDK
Note
> This project is in the GA stage. It is fully covered by Upstash Professional Support and receives regular updates and bug fixes. The Upstash team is committed to maintaining and improving its functionality.
QStash is an HTTP-based messaging and scheduling solution for serverless and edge runtimes.
It is 100% built on stateless HTTP requests and designed for:
- Serverless functions (AWS Lambda …)
- Cloudflare Workers (see the example)
- Fastly Compute@Edge
- Next.js, including edge
- Deno
- Client side web/mobile applications
- WebAssembly
- and other environments where HTTP is preferred over TCP.
How does QStash work?
QStash is the message broker between your serverless apps. You send an HTTP request to QStash that includes a destination, a payload, and optional settings. We durably store your message and deliver it to the destination API via HTTP. If the destination is not ready to receive the message, we retry later to guarantee at-least-once delivery.
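Under the hood this is a plain HTTP call. Here is a rough sketch of what publishing looks like without the SDK; the endpoint path and the `Upstash-Retries` header are assumptions based on the QStash v2 API, and the SDK shown below handles all of this for you:

```ts
// A hedged sketch of the raw HTTP publish call (assumed QStash v2 endpoint).
// The destination URL here is hypothetical.
const destination = "https://my-api.example.com/endpoint";

const res = await fetch(`https://qstash.upstash.io/v2/publish/${destination}`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.QSTASH_TOKEN}`,
    "Content-Type": "application/json",
    // Optional delivery settings are passed as Upstash-* headers (assumption).
    "Upstash-Retries": "3",
  },
  body: JSON.stringify({ hello: "world" }),
});

console.log(await res.json()); // e.g. { messageId: "..." }
```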
Quick Start
Install
npm
npm install @upstash/qstash
Get your authorization token
Go to Upstash Console and copy the QSTASH_TOKEN.
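In most deployments you will want to read the token from an environment variable rather than hard-coding it. A minimal sketch, assuming you have exported `QSTASH_TOKEN` in your environment (for example via a `.env` file or your hosting provider's dashboard):

```ts
import { Client } from "@upstash/qstash";

// Assumes QSTASH_TOKEN is set in the environment.
const client = new Client({ token: process.env.QSTASH_TOKEN! });
```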
Basic Usage
Publishing a message
import { Client } from "@upstash/qstash";
/**
 * Import a fetch polyfill only if you are using node prior to v18.
 * This is not necessary for nextjs, deno or cloudflare workers.
 */
import "isomorphic-fetch";

const c = new Client({
  token: "<QSTASH_TOKEN>",
});

const res = await c.publishJSON({
  url: "https://my-api...",
  // or urlGroup: "the name or id of a url group"
  body: {
    hello: "world",
  },
});
console.log(res);
// { messageId: "msg_xxxxxxxxxxxxxxxx" }
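`publishJSON` also accepts optional delivery settings. A small sketch, assuming the `delay` and `retries` option names (check the QStash docs for the authoritative list):

```ts
// A hedged sketch: `delay` (in seconds) and `retries` are assumed option names.
const delayed = await c.publishJSON({
  url: "https://my-api...",
  body: { hello: "world" },
  delay: 60,   // deliver roughly one minute from now
  retries: 3,  // retry up to three times on failure
});
```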
Receiving a message
How you receive a message depends on your HTTP server. Call the `Receiver.verify`
method as the first step in your handler function.
import { Receiver } from "@upstash/qstash";

const r = new Receiver({
  currentSigningKey: "..",
  nextSigningKey: "..",
});

const isValid = await r.verify({
  /**
   * The signature from the `Upstash-Signature` header.
   *
   * Please note that on some platforms (e.g. Vercel or Netlify) you might
   * receive the header in lower case: `upstash-signature`
   */
  signature: "<signature from the request header>",

  /**
   * The raw request body.
   */
  body: "<raw request body>",
});
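For example, in a Next.js App Router route handler you might wire this up as follows. The file path and environment variable names are illustrative; the key point is to verify against the raw body string before parsing it:

```ts
// app/api/qstash/route.ts -- a hedged sketch, not the only way to wire this up.
import { Receiver } from "@upstash/qstash";

const receiver = new Receiver({
  currentSigningKey: process.env.QSTASH_CURRENT_SIGNING_KEY!,
  nextSigningKey: process.env.QSTASH_NEXT_SIGNING_KEY!,
});

export async function POST(req: Request) {
  // Verify against the raw body string, not a parsed object.
  const body = await req.text();
  const signature = req.headers.get("upstash-signature") ?? "";

  const isValid = await receiver.verify({ signature, body });
  if (!isValid) {
    return new Response("invalid signature", { status: 401 });
  }

  // Safe to process the message now.
  const payload = JSON.parse(body);
  console.log(payload);
  return new Response("ok", { status: 200 });
}
```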
Publishing a message to OpenAI or any OpenAI-compatible LLM
No complicated setup is needed for your LLM request: we call the LLM for you and schedule the delivery, which fits serverless workloads well.
import { Client, openai } from "@upstash/qstash";

const client = new Client({
  token: "<QSTASH_TOKEN>",
});

const result = await client.publishJSON({
  api: {
    name: "llm",
    provider: openai({ token: process.env.OPENAI_API_KEY! }),
  },
  body: {
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "user",
        content: "Where is the capital of Turkey?",
      },
    ],
  },
  callback: "https://oz.requestcatcher.com/",
});
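The LLM's response is not returned by this call; QStash delivers it to the `callback` URL once the request completes. A hedged sketch of a callback handler, assuming a Next.js-style route on a Node runtime and assuming the callback payload is a JSON envelope with a base64-encoded `body` field (verify the exact shape in the QStash docs, and verify the request signature as shown above before trusting it):

```ts
// A hedged sketch of a callback endpoint. The envelope shape (a JSON object
// with a base64-encoded `body` field) is an assumption -- check the QStash docs.
export async function POST(req: Request) {
  const callbackPayload = await req.json();
  const decoded = Buffer.from(callbackPayload.body, "base64").toString("utf-8");
  const completion = JSON.parse(decoded); // OpenAI-style chat completion
  console.log(completion.choices?.[0]?.message?.content);
  return new Response("ok");
}
```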
Chatting with your favorite LLM
You can easily start streaming Upstash-hosted or OpenAI model responses from your favorite framework (e.g. Next.js) or library:
import { Client, upstash } from "@upstash/qstash";

const client = new Client({
  token: "<QSTASH_TOKEN>",
});

const response = await client.chat().create({
  // Optionally, provider: custom({ token: "XXX", baseUrl: "https://api.openai.com" })
  // lets you call any OpenAI-compatible API out there.
  provider: upstash(),
  model: "meta-llama/Meta-Llama-3-8B-Instruct", // Optionally, model: "gpt-3.5-turbo"
  messages: [
    {
      role: "system",
      content: "from now on, foo is whale",
    },
    {
      role: "user",
      content: "what exactly is foo?",
    },
  ],
  stream: true,
  temperature: 0.5,
});
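With `stream: true` the answer arrives incrementally. A hedged sketch of consuming it, assuming the returned value is an async iterable of OpenAI-style chat-completion chunks (check the docs for the exact streaming interface):

```ts
// Assumption: the streamed response is async-iterable and each chunk follows
// the OpenAI chat-completion chunk shape.
for await (const chunk of response) {
  process.stdout.write(chunk.choices?.[0]?.delta?.content ?? "");
}
```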
Docs
See the documentation for details.