proc

An easy way to run processes like a shell script - in Deno.

`proc` lets you write process-handling code in readable, idiomatic TypeScript using `async`/`await` and `AsyncIterator` promisy goodness. It provides a variety of powerful and flexible input and output handlers, making working with processes comfortable and intuitive. And `proc` handles closing and shutting down process-related resources in a sane manner - because you have enough to worry about, right?
Documentation
```sh
deno doc --reload https://deno.land/x/proc/mod.ts 2> /dev/null
```
Examples
Related Projects
Input and Output Types
Processes really just deal with one type of data - bytes, in streams. Many programs will take this one step further and internally translate to and from text data, processing this data one line at a time.
`proc` treats process data as either `Uint8Array` or `AsyncIterable<Uint8Array>` for byte data, or `string` or `AsyncIterable<string>` (as lines of text) for text. It defines a set of standard input and output handlers that provide both type information and data-handling behavior to the runner.
An Example
To get you started, here is a simple example where we pass a `string` to a process and get back a `Uint8Array`.
```ts
import {
  bytesOutput,
  Runner,
  runner,
  stringInput,
} from "https://deno.land/x/proc@0.0.0/mod.ts";

/**
 * Use `gzip` to compress some text.
 * @param text The text to compress.
 * @return The text compressed into bytes.
 */
async function gzip(text: string): Promise<Uint8Array> {
  /* I am using a string for input and a Uint8Array (bytes) for output. */
  const pr: Runner<string, Uint8Array> = runner(
    stringInput(),
    bytesOutput(),
  )();

  return await pr.run({ cmd: ["gzip", "-c"] }, text);
}

console.dir(await gzip("Hello, world.")); /* Prints an array of bytes to the console. */
```
Input Types
Name | Description |
---|---|
`emptyInput()` | There is no process input. |
`stringInput()` | Process input is a `string`. |
`stringArrayInput()` | Process input is a `string[]`. |
`bytesInput()` | Process input is a `Uint8Array`. |
`readerInput()` * | Process input is a `Deno.Reader & Deno.Closer`. |
`readerUnbufferedInput()` * | Process input is a `Deno.Reader & Deno.Closer`, unbuffered. |
`stringIterableInput()` | Process input is an `AsyncIterable<string>`. |
`stringIterableUnbufferedInput()` | Process input is an `AsyncIterable<string>`, unbuffered. |
`bytesIterableInput()` | Process input is an `AsyncIterable<Uint8Array>`. |
`bytesIterableUnbufferedInput()` | Process input is an `AsyncIterable<Uint8Array>`, unbuffered. |
* - `readerInput()` and `readerUnbufferedInput()` are special input types that do not have corresponding output types.
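For illustration, here is a minimal sketch - assuming the same `runner(...)().run(...)` API shown in the gzip example above - that feeds a `string[]` to `grep` and collects the matching lines:

```ts
import * as proc from "https://deno.land/x/proc@0.0.0/mod.ts";

/* A sketch only: feed an array of lines to `grep` and collect the matches. */
const matches: string[] = await proc.runner(
  proc.stringArrayInput(),
  proc.stringArrayOutput(),
)().run(
  { cmd: ["grep", "-i", "deno"] },
  ["Deno is fun.", "Node is fine too.", "A line about something else."],
);

console.log(matches); /* The lines that contain "deno", case-insensitively. */
```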
Output Types
Name | Description |
---|---|
`stringOutput()` | Process output is a `string`. |
`stringArrayOutput()` | Process output is a `string[]`. |
`bytesOutput()` | Process output is a `Uint8Array`. |
`stringIterableOutput()` | Process output is an `AsyncIterable<string>`. |
`stringIterableUnbufferedOutput()` | Process output is an `AsyncIterable<string>`, unbuffered. |
`bytesIterableOutput()` | Process output is an `AsyncIterable<Uint8Array>`. |
`bytesIterableUnbufferedOutput()` | Process output is an `AsyncIterable<Uint8Array>`, unbuffered. |
`stderrToStdoutStringIterableOutput()` * | `stdout` and `stderr` are converted to text lines (`string`) and multiplexed together. |
* - Special output handler that mixes `stdout` and `stderr` together. `stdout` must be text data. `stdout` is unbuffered to allow the text lines to be multiplexed as accurately as possible.
⚠️ You must fully consume `Iterable` outputs. If you only partially consume `Iterable`s, process errors will not propagate properly. For correct behavior, we have to return all the data from the process streams before we can propagate an error.
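For example, here is a sketch - assuming the API from the earlier examples - that streams `ls -la` as lines of text and fully consumes the iterable with `for await`:

```ts
import * as proc from "https://deno.land/x/proc@0.0.0/mod.ts";

const pg = proc.group();
try {
  /* Stream the output of `ls -la` as lines of text. */
  const lines = await proc.runner(
    proc.emptyInput(),
    proc.stringIterableOutput(),
  )(pg).run({ cmd: ["ls", "-la"] });

  /* Fully consume the iterable so that any process error can propagate. */
  for await (const line of lines) {
    console.log(line);
  }
} finally {
  pg.close();
}
```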
Running a Command
`proc` is easiest to use with a wildcard import.
```ts
import * as proc from "https://deno.land/x/proc@0.0.0/mod.ts";
```
First, create a template. The template is a static definition and may be reused. The input and output handlers determine the data types used by your runner.
```ts
const template = proc.runner(proc.emptyInput(), proc.stringOutput());
```
Next, create a runner by binding the template to a group.
```ts
const pg = proc.group();
const runner: proc.Runner<void, string> = template(pg);
```
Finally, use the runner to execute a command.
```ts
try {
  console.log(
    await runner.run({ cmd: ["ls", "-la"] }),
  );
} finally {
  pg.close();
}
```
Key Concepts
Process Basics
Processes accept input through `stdin` and output data to `stdout`. These two streams may be interpreted either as byte data or as text data, depending on the use case.
There is another output stream called `stderr`. This is typically used for logging and/or details about any errors that occur. `stderr` is always interpreted as text. In most cases it just gets dumped to the `stderr` stream of the parent process, but you have some control over how it is handled.
In some cases (Java processes come to mind), `stdout` and `stderr` are roughly interchangeable, with logging and error messages written to either output stream in a sloppy manner. The `stderrToStdoutStringIterableOutput()` output handler gives you an option for handling both streams together.
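As a sketch of that handler (again assuming the API shown earlier), the `bash` command below is just an arbitrary example that writes to both streams:

```ts
import * as proc from "https://deno.land/x/proc@0.0.0/mod.ts";

const pg = proc.group();
try {
  /* stdout and stderr are multiplexed into a single stream of text lines. */
  const lines = await proc.runner(
    proc.emptyInput(),
    proc.stderrToStdoutStringIterableOutput(),
  )(pg).run({
    cmd: ["bash", "-c", "echo 'to stdout'; echo 'to stderr' >&2"],
  });

  for await (const line of lines) {
    console.log(line);
  }
} finally {
  pg.close();
}
```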
Processes return a numeric exit code when they exit. `0` means success, and any other number means something went wrong. `proc` deals with error conditions on process exit by throwing a `ProcessExitError`. You should never have to poll for process status.
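For instance, a sketch of catching that error - assuming `ProcessExitError` is exported from `mod.ts` - where the `false` command simply exits with a non-zero code:

```ts
import * as proc from "https://deno.land/x/proc@0.0.0/mod.ts";

try {
  /* `false` always exits with a non-zero code, so run() throws. */
  await proc.runner(proc.emptyInput(), proc.stringOutput())()
    .run({ cmd: ["false"] });
} catch (e) {
  if (e instanceof proc.ProcessExitError) {
    console.error("the process failed:", e.message);
  } else {
    throw e;
  }
}
```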
Asynchronous Iterables
JavaScript introduced the `AsyncIterable` as part of the ES2018 spec. This is an asynchronous protocol, so it works well with the streamed data to and from a process. `proc` relies heavily on `AsyncIterable`.
See JavaScript Iteration Protocols (MDN).
Streaming code executes differently than you may be used to. Errors work differently too, being passed from iterable to iterable rather than failing directly. Bugs in this kind of code can be difficult to figure out. To help with this, `proc` can chain its errors. You can turn this feature on by calling a function:

```ts
proc.enableChaining(true);
```
This can produce some really long error chains that you may not want to work with in production, so this feature is turned off by default.
Preventing Resource Leakage
Processes are system resources, like file handles. This means they need special handling. We have to take special care to close each process, and we also have to close all the resources associated with each process - `stdin`, `stdout`, and `stderr`. Also, depending on how a Deno process shuts down, it may leave behind orphan child processes in certain cases (this behavior is well documented but annoying nonetheless) if measures aren't taken specifically to prevent this.

In other words, working with Deno's process API is more complicated than it looks.
To address the problem of leakage, `proc` uses `group()` to group related process lifetimes. When you are done using a group of processes, you just close the group. This cleans up everything all at once. It's easy. It's foolproof.
If you forget to close a group, or if your Deno process exits while you have some processes open, the group takes care of cleaning things up in that case too. Note that a group cannot be garbage-collected until it is explicitly closed.
```ts
const pr = runner(emptyInput(), stringOutput());
const pg = group();

try {
  console.log(
    await pr(pg).run({
      cmd: ["ls", "-la"],
    }),
  );
} finally {
  pg.close();
}
```
If you don't specify a group when running a command, the global group will be used. This is fine if the processes you run are all "well behaved" and/or if you are doing a short run of just a few processes.
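For example, a short sketch that relies on the global group by calling the template with no group argument (as in the gzip example above):

```ts
import * as proc from "https://deno.land/x/proc@0.0.0/mod.ts";

/* No group is passed to the template, so the global group is used. */
const listing: string = await proc.runner(
  proc.emptyInput(),
  proc.stringOutput(),
)().run({ cmd: ["ls", "-la"] });

console.log(listing);
```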
Performance Considerations
In general, `Uint8Array`s are faster than `string`s. This is because processes really just deal with bytes, so text in JavaScript has to be converted to and from UTF-8 both coming and going. Also, lines of text tend to be smaller than the ideal byte buffer size (there is a bit of overhead for every line or buffer passed).
Iterable (or streaming) data allows commands to run in parallel, streaming data from one to the next as soon as it becomes available. Non-streaming data (bytes, string, or arrays of these) has to be fully resolved before it can be passed to the next process, so commands run this way run one at a time - serially.
Buffered data is sometimes a lot faster than unbuffered data, but it really depends. As a general rule, use the buffered handlers if you want the best performance. If you need output from the process as soon as it is available, that is when you would normally use unbuffered data.
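For instance, a sketch (same assumed API) that watches a slow command line by line, using the unbuffered handler so each line is printed as soon as it arrives:

```ts
import * as proc from "https://deno.land/x/proc@0.0.0/mod.ts";

const pg = proc.group();
try {
  /* Unbuffered output: each line is available as soon as it is written. */
  const lines = await proc.runner(
    proc.emptyInput(),
    proc.stringIterableUnbufferedOutput(),
  )(pg).run({
    cmd: ["bash", "-c", "for i in 1 2 3; do echo \"tick $i\"; sleep 1; done"],
  });

  for await (const line of lines) {
    console.log(new Date().toISOString(), line);
  }
} finally {
  pg.close();
}
```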
To sum it all up, when you have a lot of data, the fastest way to run processes is to connect them together with buffered `AsyncIterable<Uint8Array>`s or to pipe them together using a `bash` script - though you give up some ability to capture error conditions with the latter. `AsyncIterable<Uint8Array>` (default buffered) is iterable/streaming buffered byte data, so commands can run in parallel, chunk size is optimal, and there is no overhead for text/line conversion.
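As a sketch of connecting processes this way (assuming the same API as the earlier examples), here is `gzip` feeding `gunzip` through an `AsyncIterable<Uint8Array>`, so both commands run in parallel:

```ts
import * as proc from "https://deno.land/x/proc@0.0.0/mod.ts";

const pg = proc.group();
try {
  /* Compress some text into a stream of bytes... */
  const compressed = await proc.runner(
    proc.stringInput(),
    proc.bytesIterableOutput(),
  )(pg).run({ cmd: ["gzip", "-c"] }, "Hello, world.");

  /* ...and stream those bytes straight into gunzip. */
  const original = await proc.runner(
    proc.bytesIterableInput(),
    proc.stringOutput(),
  )(pg).run({ cmd: ["gunzip"] }, compressed);

  console.log(original); /* "Hello, world." */
} finally {
  pg.close();
}
```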
`AsyncIterable<string>` is reasonably fast, and you'll use it if you want to process string data in the Deno process. This data has to be converted from lines of text to bytes into and out of the process, so there is a significant amount of overhead. Iterating over lots of very small strings does not perform well.

If you don't have a lot of data to process, it doesn't really matter which form you use.