```typescript
import { type TokenizeContext } from "https://deno.land/x/dendron_exports@v0.2.2/deps/micromark.ts";
```
A context object that helps with tokenizing markdown constructs.
## Properties
- `previous`: the previous character code.
- `code`: the current character code.
- `interrupt`: whether we're currently interrupting.
  Take for example:

  ```markdown
  a
  # b
  ```

  At `2:1` (the start of the heading), we're "interrupting" the paragraph.
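A construct's tokenizer can branch on `interrupt` to decide whether it may start. A minimal sketch, assuming a hypothetical interrupt-only construct; the context type here is a simplified stand-in, not micromark's real `TokenizeContext`:

```typescript
// Simplified stand-in for the parts of the context used here; the real type
// lives in micromark's type definitions.
type Code = number | null;

interface ContextSketch {
  previous: Code;
  code: Code;
  interrupt?: boolean | undefined;
}

// Hypothetical construct that is only valid while interrupting existing
// content, e.g. a marker that may only continue a paragraph.
function startsInterruptOnlyConstruct(context: ContextSketch, code: number): boolean {
  // `interrupt` is optional, so coerce it to a boolean before branching.
  return Boolean(context.interrupt) && code === 35; // 35 is `#`
}
```

With `{previous: 10, code: 35, interrupt: true}` the sketch accepts a `#`; without `interrupt` set, it refuses.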
- `currentConstruct`: the current construct.
  Constructs that are not partial are set here.
- `containerState`: shared state set when parsing containers.
  Containers are parsed in separate phases: their first line (`tokenize`),
  continued lines (`continuation.tokenize`), and finally `exit`.
  This record can be used to store some information between these hooks.
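The three phases above can be sketched as hooks sharing one record. This is a toy, not micromark's actual hook signatures: the state is passed explicitly here for clarity, whereas in micromark it lives on the context as `containerState`:

```typescript
// Toy container with the three phases described above, sharing one record.
type ContainerState = Record<string, unknown>;

interface ContainerHooksSketch {
  tokenize: (state: ContainerState) => void; // first line
  continuation: { tokenize: (state: ContainerState) => void }; // later lines
  exit: (state: ContainerState) => void; // closing the container
}

const blockQuoteSketch: ContainerHooksSketch = {
  tokenize(state) {
    state.open = true; // remember we entered on the first line
  },
  continuation: {
    tokenize(state) {
      // count continued lines; earlier state is still visible here
      state.lines = ((state.lines as number) ?? 0) + 1;
    },
  },
  exit(state) {
    state.open = false; // information stored in earlier phases survives to exit
  },
};
```

Running the hooks in order shows the record carrying information across phases.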
- `events`: the current list of events.
- `parser`: the relevant parsing context.
- `sliceStream`: get the chunks that span a token (or location).
- `sliceSerialize`: get the source text that spans a token (or location).
- `now`: get the current place.
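A toy model of the serializing helper, assuming the source is a single plain string and that points carry absolute offsets (micromark points do include an `offset`); real chunks can be split across multiple buffers:

```typescript
// Simplified point and token shapes for illustration.
interface Point {
  line: number;
  column: number;
  offset: number;
}

interface TokenSketch {
  type: string;
  start: Point;
  end: Point;
}

// The source text a token spans is the substring between its offsets.
function sliceSerializeSketch(source: string, token: TokenSketch): string {
  return source.slice(token.start.offset, token.end.offset);
}
```

For `source = "a\n# b"` and a token spanning offsets 2 through 5, this yields `"# b"`.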
- `defineSkip`: define a skip.
  Containers (block quotes, lists) "nibble" a prefix from the margins; where a
  line starts after that prefix is defined here. When the tokenizer, after
  consuming a line ending, arrives at the line number of the given point, it
  shifts past the prefix to that point's column.
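The skip mechanism can be modeled as a map from line number to the column where content resumes. A sketch under that assumption (toy names, not micromark internals):

```typescript
// Map from line number to the column where content resumes after a container
// prefix (e.g. after `> ` in a block quote).
const skips = new Map<number, number>();

function defineSkipSketch(point: { line: number; column: number }): void {
  skips.set(point.line, point.column);
}

// After consuming the line ending that moves the tokenizer onto `line`, it
// jumps to the recorded column instead of starting at column 1.
function columnAfterLineEnding(line: number): number {
  return skips.get(line) ?? 1;
}
```

Defining a skip at `2:3` makes line 2 start at column 3, while lines without a skip start at column 1.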
- `write`: write a slice of chunks.
  The eof code (`null`) can be used to signal the end of the stream.
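A toy model of feeding chunks to a tokenizer, illustrating only the eof convention described above (the function name and shape are made up for this sketch):

```typescript
// Character codes stream in; the eof code (`null`) marks the end of input.
type StreamCode = number | null;

function collectUntilEof(chunks: StreamCode[]): string {
  let seen = "";
  for (const code of chunks) {
    if (code === null) break; // eof: nothing more will arrive
    seen += String.fromCharCode(code);
  }
  return seen;
}
```

Feeding `[97, 98, null, 99]` stops at the eof code, so only `"ab"` is seen.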
- `_gfmTasklistFirstContentOfListItem`: internal boolean shared with
  `micromark-extension-gfm-task-list-item` to signal whether the tokenizer is
  tokenizing the first content of a list item construct.
- `_gfmTableDynamicInterruptHack`: internal boolean shared with
  `micromark-extension-gfm-table`, whose body rows are not affected by normal
  interruption rules.
  "Normal" rules are, for example, that an empty list item can't interrupt a
  paragraph:

  ```markdown
  a
  *
  ```

  The above is one paragraph. These rules don't apply to table body rows:

  ```markdown
  | a |
  | - |
  *
  ```

  The above list interrupts the table.
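The two cases above can be condensed into a tiny decision table. This is a toy capturing only the behavior described in this section, not micromark's actual interruption logic, and the flow-kind names are made up:

```typescript
// Only the two flow kinds discussed above are modeled.
type FlowKind = "paragraph" | "tableBodyRow";

// An empty list item cannot interrupt a paragraph, but it can interrupt a
// GFM table body row (the "dynamic interrupt hack" described above).
function emptyListItemInterrupts(current: FlowKind): boolean {
  return current === "tableBodyRow";
}
```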