Module

x/wmill/gen/index.ts

Open-source developer platform to power your entire infra and turn scripts into webhooks, workflows and UIs. Fastest workflow engine (13x vs Airflow). Open-source alternative to Retool and Temporal.
import * as wmill from "https://deno.land/x/wmill@v1.423.0/gen/index.ts";
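The import above pulls in the generated API client; each entry under Variables below wraps one Windmill HTTP operation. The sketch below shows the rough calling pattern. It is a minimal sketch only: the function names (globalWhoami, listScripts) and the { workspace } argument shape are assumptions inferred from the operation summaries on this page, and how the base URL and bearer token are configured depends on the generator version, so check the module's exports before relying on it.

// Sketch only: names and argument shapes below are assumptions, not confirmed signatures.
import * as wmill from "https://deno.land/x/wmill@v1.423.0/gen/index.ts";

// The generated client is normally pointed at a Windmill instance and given a
// bearer token through an exported configuration object or helper; the exact
// export varies by generator version, so it is omitted here.

// "get current global whoami (if logged in)", assumed to map to globalWhoami().
const me = await wmill.globalWhoami();
console.log("logged in as", me);

// "list all scripts", assumed to map to listScripts({ workspace }).
const scripts = await wmill.listScripts({ workspace: "demo" });
console.log(`found ${scripts.length} scripts in workspace "demo"`);

A fuller sketch that runs a script and polls for its result, built from the job operations in the list, follows the function list below.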

Variables

accept invite to workspace

add granular acls

add owner to folder

add user to workspace

add user to group

add user to instance group

archive flow by path

archive script by hash

archive script by path

archive workspace

is backend up to date

get backend version

cancel all queued jobs for persistent script

cancel queued or running job

cancel jobs based on the given uuids

cancel a job for a suspended flow

cancel a job for a suspended flow

change workspace id

change workspace name

connect callback

connect slack callback

connect slack callback instance

Count jobs by tag

create OAuth account

create app

create flow preview capture

create customer portal session

create draft

create flow

create folder

create group

create http trigger

Create an Input for future use in a script or flow

create instance group

create an HMAC signature given a job id and a resume id

create raw app

create resource

create resource_type

create schedule

create script

create token

create token to impersonate a user (require superadmin)

create user

create variable

create workspace

Test connection to the workspace object storage

decline invite to workspace

delete app

delete completed job (erase content but keep run id)

Delete concurrency group

Delete Config

delete draft

delete flow by path

delete folder

delete group

delete http trigger

Delete a Saved Input

delete instance group

delete user invite

delete raw app

delete resource

delete resource_type

Permanently delete file from S3

delete schedule

delete script by hash (erase content but keep hash, require admin)

delete all scripts at a given path (require admin)

delete token

delete user (require admin privilege)

delete variable

delete workspace (require super admin)

disconnect account

disconnect slack

Converts an S3 resource to the set of instructions necessary to connect DuckDB to an S3 bucket

Converts an S3 resource to the set of instructions necessary to connect DuckDB to an S3 bucket

edit auto invite

edit copilot config

edit default scripts for workspace

edit deploy to

edit error handler

edit large file storage settings

edit slack command

edit webhook

edit default app for workspace

edit workspace deploy ui settings

edit workspace git sync settings

encrypt value

executeComponent

does an app exist at path

exists email

exists flow by path

does http trigger exist

does an app exist at path

does resource exist

does resource_type exist

does route exist

does schedule exist

exists script by path

exists username

does variable exist at path

exists worker with tag

exists workspace

export instance groups

Download file from S3 bucket

Download file from S3 bucket

get map from resource type to format extension

Upload file to S3 bucket

force cancel queued job

get all instance default tags

get app by path

get app by path with draft

get app by version

get app history by path

Get args from history or saved input

get audit log (requires admin privilege)

get flow preview capture

get completed count

get completed job

get completed job result

get completed job result if job is completed

Get the concurrency key for a job that has concurrency limits enabled

get config

get copilot info

get current user email (if logged in)

get all instance custom tags (tags are used to dispatch jobs to different worker groups)

get db clock

get default scripts for workspace

get deploy to

get flow by path

get flow by path with draft

get flow debug info

get flow history by path

get flow user state at a given key

get flow version

get folder

get folder usage

get global settings

get granular acls

get group

get http trigger

get hub app by id

get hub flow by id

get full hub script by path

get hub script content by path

List Inputs used in previously completed jobs

get instance group

get if workspace is premium

get job

get job args

get job logs

get job metrics

get job progress

get job updates

get large file storage config

get latest key renewal attempt

get license id

get local settings

get log file by path

get log file from object store

get oauth connect

get OIDC token (ee only)

get openapi yaml spec

get premium info

get public app by secret

get public resource

get public secret of app

get queue count

get queue metrics

get app by path

get resource

get resource_type

get resource value

get resource interpolated (variables and resources are fully unrolled)

get resume urls given a job_id, resume_id and a nonce to resume a flow

get root job id

get all runnables in every workspace

get schedule

get script by hash

get script by path

get script by path with draft

get script deployment status

get history of a script by path

get settings

get parent flow job of suspended job

get top hub scripts

get tutorial progress

get current usage outside of premium workspaces

get user (require admin privilege)

get variable

get variable value

get default app for workspace

retrieves the encryption key for this workspace

get workspace name

get usage

global delete user (require super admin)

global username info (require super admin)

global rename user (require super admin)

global export users (require super admin and EE)

global overwrite users (require super admin and EE)

global update user (require super admin)

get current global whoami (if logged in)

invite user to workspace

is default tags per workspace

is domain allowed for auto invite

is owner of path

leave instance

leave workspace

list all apps

list audit logs (requires admin privilege)

list all completed jobs

List all concurrency groups

list configs

list contextual variables

Get intervals of job runtime concurrency

get the ids of all jobs matching the given filters

list all flow paths

list all flows

list folder names

list folders

list global settings

list group names

list groups

list http triggers

list all hub apps

list all hub flows

list hub integrations

List saved Inputs for a Runnable

list instance groups

list all jobs

list log files ordered by timestamp

list oauth connects

list oauth logins

list pending invites for a workspace

list all queued jobs

list all raw apps

list resources

list resource names

list resource_types

list resource_types names

list schedules

list schedules with last 20 jobs

list all scripts paths

list all scripts

list apps for search

list flows for search

list resources for search

list scripts for search

List the file keys available in a workspace object storage

list token

list usernames

list users

list all users as super admin (require to be super admin)

list users usage

list all workspaces visible to me with user info

list variables

list worker groups

list workers

list all workspace invites

list all workspaces visible to me

list all workspaces as super admin (require to be super admin)

Load a preview of a csv file

Load metadata of the file

Load a preview of the file

Load a preview of a parquet file

Load the table row count

login with password

login with oauth authorization flow

logout

Move a S3 file from one path to the other within the same bucket

run flow by path and wait until completion in OpenAI format

run script by path in OpenAI format

overwrite instance groups

Converts an S3 resource to the set of arguments necessary to connect Polars to an S3 bucket

Converts an S3 resource to the set of arguments necessary to connect Polars to an S3 bucket

preview schedule

query hub scripts by similarity

query resource types by similarity

raw script by hash

raw script by path

raw script by path with a token (mostly used by the LSP together with import maps to resolve scripts)

refresh token

refresh the current token

remove granular acls

remove owner from folder

remove user from instance group

remove user from group

renew license key

restart a completed flow at a given step

get job result by id

resume a job for a suspended flow as an owner

resume a job for a suspended flow

resume a job for a suspended flow

run code-workflow task

run flow by path

run flow preview

run a one-off dependencies job

run script by hash

run script by path

run script preview

run a job that sends a message to Slack

run flow by path and wait until completion

run script by path

run script by path with GET

Returns the s3 resource associated to the provided path, or the workspace default S3 resource

Search through jobs with a string query

send stats

set automatic billing

Set default error or recovery handler

set environment variable

set flow user state at a given key

post global settings

set job metrics

set password

set enabled schedule

update the encryption key for this workspace

star item

test critical channels

test license key

test metadata

test object storage config

test smtp

Toggle ON and OFF the workspace error handler for a given flow

Toggle ON and OFF the workspace error handler for a given script

unarchive workspace

unstar item

update app

update app history

update flow preview capture

Update config

update flow

update flow history

update folder

update group

update http trigger

Update an Input

update instance group

update app

update resource

update resource_type

update resource value

update schedule

update history of a script

update tutorial progress

update user (require admin privilege)

update variable

whether http triggers are used

get email from username

whoami

whois
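The operations above can be composed into a small run-and-poll loop. The sketch below runs a script by path, then polls until the job completes and prints its result. It is a sketch under assumptions: the names runScriptByPath and getCompletedJobResultMaybe, the requestBody/id argument shapes, and the { completed, result } response shape are inferred from the summaries "run script by path" and "get completed job result if job is completed", not confirmed signatures.

// Sketch only: runScriptByPath and getCompletedJobResultMaybe are assumed names.
import * as wmill from "https://deno.land/x/wmill@v1.423.0/gen/index.ts";

const workspace = "demo";

// "run script by path": returns the uuid of the queued job.
const jobId = await wmill.runScriptByPath({
  workspace,
  path: "u/alice/hello",          // hypothetical script path
  requestBody: { name: "world" }, // script arguments
});

// "get completed job result if job is completed": poll until the job finishes.
while (true) {
  const res = await wmill.getCompletedJobResultMaybe({ workspace, id: jobId });
  if (res.completed) {
    console.log("result:", res.result);
    break;
  }
  await new Promise((r) => setTimeout(r, 1000)); // wait one second between polls
}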

Type Aliases

filter on type of operation

filter on created after (exclusive) timestamp

filter on jobs containing those args as a json subset (@> in postgres)

filter on started before (inclusive) timestamp

Override the cache time to live (in seconds). Cannot be used to disable caching, only to override with a new cache TTL

filter on created after (exclusive) timestamp

filter on created before (inclusive) timestamp

mask to filter exact matching user creator

filter on created_at for non-started jobs and started_at otherwise, after (exclusive) timestamp

filter on created_at for non-started jobs and started_at otherwise, after (exclusive) timestamp, but only for completed jobs

filter on created_at for non-started jobs and started_at otherwise, before (inclusive) timestamp

List of header keys (separated with ',') whose values are added to the args. The header key is lowercased and '-' replaced with '_', so that 'Content-Type' becomes the 'content_type' arg key (see the example after this list)

filter on job kind (values 'preview', 'script', 'dependencies', 'flow'), separated by ','

mask to filter on an exactly matching job label (job labels come from completed jobs whose result is an object containing a string array at key 'wm_labels')

The job id to assign to the created job. If missing, the job id is chosen randomly using the ULID scheme. If a job id already exists in the queue or as a completed job, the request to create one will fail (Bad Request)

filter on exact or prefix name of operation

order in descending order (default true)

which page to return (start at 1, default 1)

The parent job that is at the origin of and responsible for the execution of this script, if any

The base64-encoded payload, itself encoded as JSON. For example, such a payload can be produced with encodeURIComponent(btoa(JSON.stringify({a: 2}))) (see the example after this list)

number of items to return for a given page (default 30, max 100)

The maximum queue size; the request is rejected if adding this job would push the queue above that limit

filter on exact or prefix name of resource

filter on jobs containing those results as a json subset (@> in postgres)

filter on running jobs

filter on jobs whose scheduled_for is before now (hence waiting for a worker)

mask to filter by schedule path

mask to filter exact matching path

mask to filter exact matching path

mask to filter matching starting path

filter on started after (exclusive) timestamp

filter on started before (inclusive) timestamp

filter on successful jobs

filter on suspended jobs

filter on jobs with a given tag/worker group

filter on exact username of user

Override the tag to use
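Two of the parameter descriptions above are easier to see in code. The snippet below shows how a raw payload is encoded for the base64 JSON payload parameter, following the encodeURIComponent(btoa(JSON.stringify(...))) recipe quoted above, and how a forwarded header key such as 'Content-Type' becomes the 'content_type' arg key. It uses only standard runtime functions; the helper names are illustrative, not part of this module.

// Encode an arbitrary JSON value for the base64-encoded payload parameter,
// following the recipe quoted in the parameter description above.
function encodePayload(payload: unknown): string {
  return encodeURIComponent(btoa(JSON.stringify(payload)));
}

// Turn a forwarded HTTP header key into the arg key described above:
// lowercased, with '-' replaced by '_'.
function headerToArgKey(header: string): string {
  return header.toLowerCase().replaceAll("-", "_");
}

console.log(encodePayload({ a: 2 }));        // "eyJhIjoyfQ%3D%3D"
console.log(headerToArgKey("Content-Type")); // "content_type"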