
htmltok - HTML and XML tokenizer and normalizer

Documentation Index

This library splits HTML code into semantic units like "beginning of open tag", "attribute name", "attribute value", "comment", etc. It respects preprocessing instructions (like <?...?>), so it can be used to implement HTML-based templating languages.

This library can also tokenize XML markup. However, it's HTML5-centric: when decoding named entities, HTML5 ones are recognized and decoded (decoding is beyond tokenization, and happens only when you call Token.getValue()).

During tokenization, this library finds errors in the markup, like unclosed tags, duplicate attribute names, etc., and suggests fixes. It can be used to convert HTML to canonical form.

Example

// To download and run this example:
// curl 'https://raw.githubusercontent.com/jeremiah-shaulov/htmltok/v2.0.1/README.md' | perl -ne '$y=$1 if /^```(.)?/;  print $_ if $y&&$m;  $m=$y&&($m||m~<example-p9mn>~)' > /tmp/example-p9mn.ts
// deno run /tmp/example-p9mn.ts

import {htmltok, TokenType} from 'https://deno.land/x/htmltok@v2.0.1/mod.ts';
import {assertEquals} from 'jsr:@std/assert@1.0.7/equals';

const source =
`	<meta name=viewport content="width=device-width, initial-scale=1.0">
	<div title="&quot;Title&quot;">
		Text.
	</div>
`;

assertEquals
(	[...htmltok(source)].map(v => Object.assign<Record<never, never>, unknown>({}, v)),
    [	{nLine: 1,  nColumn: 1,  level: 0, tagName: "",        isSelfClosing: false, isForeign: false, type: TokenType.TEXT,                         text: "\t"},
        {nLine: 1,  nColumn: 5,  level: 0, tagName: "meta",    isSelfClosing: false, isForeign: false, type: TokenType.TAG_OPEN_BEGIN,               text: "<meta"},
        {nLine: 1,  nColumn: 10, level: 0, tagName: "",        isSelfClosing: false, isForeign: false, type: TokenType.TAG_OPEN_SPACE,               text: " "},
        {nLine: 1,  nColumn: 11, level: 0, tagName: "meta",    isSelfClosing: false, isForeign: false, type: TokenType.ATTR_NAME,                    text: "name"},
        {nLine: 1,  nColumn: 15, level: 0, tagName: "",        isSelfClosing: false, isForeign: false, type: TokenType.ATTR_EQ,                      text: "="},
        {nLine: 1,  nColumn: 16, level: 0, tagName: "meta",    isSelfClosing: false, isForeign: false, type: TokenType.ATTR_VALUE,                   text: "viewport"},
        {nLine: 1,  nColumn: 24, level: 0, tagName: "",        isSelfClosing: false, isForeign: false, type: TokenType.TAG_OPEN_SPACE,               text: " "},
        {nLine: 1,  nColumn: 25, level: 0, tagName: "meta",    isSelfClosing: false, isForeign: false, type: TokenType.ATTR_NAME,                    text: "content"},
        {nLine: 1,  nColumn: 32, level: 0, tagName: "",        isSelfClosing: false, isForeign: false, type: TokenType.ATTR_EQ,                      text: "="},
        {nLine: 1,  nColumn: 33, level: 0, tagName: "meta",    isSelfClosing: false, isForeign: false, type: TokenType.ATTR_VALUE,                   text: "\"width=device-width, initial-scale=1.0\""},
        {nLine: 1,  nColumn: 72, level: 0, tagName: "",        isSelfClosing: true,  isForeign: false, type: TokenType.TAG_OPEN_END,                 text: ">"},
        {nLine: 1,  nColumn: 73, level: 0, tagName: "",        isSelfClosing: false, isForeign: false, type: TokenType.TEXT,                         text: "\n\t"},
        {nLine: 2,  nColumn: 5,  level: 0, tagName: "div",     isSelfClosing: false, isForeign: false, type: TokenType.TAG_OPEN_BEGIN,               text: "<div"},
        {nLine: 2,  nColumn: 9,  level: 0, tagName: "",        isSelfClosing: false, isForeign: false, type: TokenType.TAG_OPEN_SPACE,               text: " "},
        {nLine: 2,  nColumn: 10, level: 0, tagName: "div",     isSelfClosing: false, isForeign: false, type: TokenType.ATTR_NAME,                    text: "title"},
        {nLine: 2,  nColumn: 15, level: 0, tagName: "",        isSelfClosing: false, isForeign: false, type: TokenType.ATTR_EQ,                      text: "="},
        {nLine: 2,  nColumn: 16, level: 0, tagName: "div",     isSelfClosing: false, isForeign: false, type: TokenType.ATTR_VALUE,                   text: "\"&quot;Title&quot;\""},
        {nLine: 2,  nColumn: 35, level: 0, tagName: "",        isSelfClosing: false, isForeign: false, type: TokenType.TAG_OPEN_END,                 text: ">"},
        {nLine: 2,  nColumn: 36, level: 1, tagName: "",        isSelfClosing: false, isForeign: false, type: TokenType.TEXT,                         text: "\n\t\tText.\n\t"},
        {nLine: 4,  nColumn: 5,  level: 0, tagName: "div",     isSelfClosing: false, isForeign: false, type: TokenType.TAG_CLOSE,                    text: "</div>"},
        {nLine: 4,  nColumn: 11, level: 0, tagName: "",        isSelfClosing: false, isForeign: false, type: TokenType.MORE_REQUEST,                 text: "\n"},
        {nLine: 4,  nColumn: 11, level: 0, tagName: "",        isSelfClosing: false, isForeign: false, type: TokenType.TEXT,                         text: "\n"},
    ]
);

for (const token of htmltok(source))
{	//console.log(token.debug());
    if (token.type == TokenType.ATTR_VALUE)
    {	console.log(`Attribute value: ${token.getValue()}`);
    }
}

Prints:

Attribute value: viewport
Attribute value: width=device-width, initial-scale=1.0
Attribute value: "Title"

htmltok() - Tokenize string

function htmltok(source: string, settings: Settings={}, hierarchy: string[]=[], tabWidth: number=4, nLine: number=1, nColumn: number=1): Generator<Token, void, string>

This function returns an iterator over tokens found in the given HTML source string.

htmltok() arguments:

  • source - HTML or XML string.
  • settings - Affects how the code will be parsed.
  • hierarchy - If you pass an array object, it will be modified during the tokenization process, after each token is yielded. In this array you can observe the current element nesting hierarchy. For normal operation pass an empty array, but if you resume parsing from some point, you can provide the initial hierarchy. All tag names here are lowercased (see the sketch below).
  • tabWidth - Width of TAB stops. Affects nColumn of returned tokens.
  • nLine - Will start counting lines from this line number.
  • nColumn - Will start counting columns from this column number.

This function returns a Token iterator.
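
As an illustration of the hierarchy argument, here is a minimal sketch (not part of the original docs; the element names pushed to the array follow from the rules above, but the exact output is an assumption):

import {htmltok, TokenType} from 'https://deno.land/x/htmltok@v2.0.1/mod.ts';

const hierarchy: string[] = [];
for (const token of htmltok('<ul><li>One<li>Two</ul>', {}, hierarchy))
{	if (token.type == TokenType.TEXT)
	{	// `hierarchy` holds the lowercased names of the currently open elements
		console.log(`${JSON.stringify(token.text)} inside: ${hierarchy.join(' > ')}`);
	}
}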

Before yielding the last token in the source, this function generates TokenType.MORE_REQUEST. You can ignore it, or you can react by calling the iterator's it.next(more) with a string argument that contains the continuation of the source. In this case that string will be appended to the last token, and tokenization will continue.

// To download and run this example:
// curl 'https://raw.githubusercontent.com/jeremiah-shaulov/htmltok/v2.0.1/README.md' | perl -ne '$y=$1 if /^```(.)?/;  print $_ if $y&&$m;  $m=$y&&($m||m~<example-65ya>~)' > /tmp/example-65ya.ts
// deno run /tmp/example-65ya.ts

import {htmltok, TokenType} from 'https://deno.land/x/htmltok@v2.0.1/mod.ts';

let source =
`	<meta name=viewport content="width=device-width, initial-scale=1.0">
	<div title="&quot;Title&quot;">
		Text.
	</div>
`;

function read()
{	const part = source.slice(0, 10);
    source = source.slice(10);
    return part;
}

const it = htmltok(read());
let token;
L:while ((token = it.next().value))
{	while (token.type == TokenType.MORE_REQUEST)
    {	token = it.next(read()).value;
        if (!token)
        {	break L;
        }
    }

    console.log(token.debug());
}

Token

class Token
{
	šŸ”§ constructor(text: string, type: TokenType, nLine: number=1, nColumn: number=1, level: number=0, tagName: string="", isSelfClosing: boolean=false, isForeign: boolean=false)
	šŸ“„ text: string
	šŸ“„ type: TokenType
	šŸ“„ nLine: number
	šŸ“„ nColumn: number
	šŸ“„ level: number
	šŸ“„ tagName: string
	šŸ“„ isSelfClosing: boolean
	šŸ“„ isForeign: boolean
	āš™ toString(): string
	āš™ normalized(): string
	āš™ debug(): string
	āš™ getValue(): string
}

The Token.toString() method returns the original token text (Token.text), except for TokenType.MORE_REQUEST and the FIX_STRUCTURE_* token types, for which it returns an empty string.

Token.normalized() - returns the token text as suggested by HTML normalization rules.

Token.debug() - returns the Token object stringified for console.log().

Token.getValue() - returns the decoded value of the token.
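
For example, a minimal sketch contrasting the raw token text with its decoded value (the commented output is inferred from the descriptions above, not quoted from the library):

import {htmltok, TokenType} from 'https://deno.land/x/htmltok@v2.0.1/mod.ts';

for (const token of htmltok('<div title="&quot;Title&quot;"></div>'))
{	if (token.type == TokenType.ATTR_VALUE)
	{	console.log(token.toString());  // original text: "&quot;Title&quot;" (surrounding quotes included)
		console.log(token.getValue());  // decoded value: "Title" (surrounding quotes dropped, &quot; decoded)
	}
}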

TokenType

const enum TokenType
{
	TEXT = 0,
	CDATA = 1,
	ENTITY = 2,
	COMMENT = 3,
	DTD = 4,
	PI = 5,
	TAG_OPEN_BEGIN = 6,
	TAG_OPEN_SPACE = 7,
	ATTR_NAME = 8,
	ATTR_EQ = 9,
	ATTR_VALUE = 10,
	TAG_OPEN_END = 11,
	TAG_CLOSE = 12,
	RAW_LT = 13,
	RAW_AMP = 14,
	JUNK = 15,
	JUNK_DUP_ATTR_NAME = 16,
	FIX_STRUCTURE_TAG_OPEN = 17,
	FIX_STRUCTURE_TAG_OPEN_SPACE = 18,
	FIX_STRUCTURE_TAG_CLOSE = 19,
	FIX_STRUCTURE_ATTR_QUOT = 20,
	MORE_REQUEST = 21,
}

  • TokenType.TEXT - Text (character data). It doesn't contain entities or preprocessing instructions, as those are returned as separate tokens.
  • TokenType.CDATA - A CDATA block, like <![CDATA[...]]>. It can occur in XML mode (Settings.mode === 'xml'), and in svg and math elements in HTML mode. In other places <![CDATA[...]]> is returned as TokenType.JUNK. This token can contain preprocessing instructions in its Token.text.
  • TokenType.ENTITY - One character reference, like &apos;, &#39; or &#x27;. This token can also contain preprocessing instructions in its Token.text, like &a<?...?>o<?...?>;.
  • TokenType.COMMENT - HTML comment, like <!--...-->. It can contain preprocessing instructions.
  • TokenType.DTD - Document type declaration, like <!...>. It can contain preprocessing instructions.
  • TokenType.PI - Preprocessing instruction, like <?...?>.
  • TokenType.TAG_OPEN_BEGIN - < char followed by the tag name, like <script. The tag name can contain preprocessing instructions, like <sc<?...?>ip<?...?>. Token.tagName contains the lowercased (if not XML and there are no preprocessing instructions) tag name.
  • TokenType.TAG_OPEN_SPACE - Any number of whitespace characters (can include newline chars) inside opening tag markup. It separates the tag name and attributes, can occur between attributes, and at the end of the opening tag.
  • TokenType.ATTR_NAME - Attribute name. It can contain preprocessing instructions, like a<?...?>b<?...?>. Token.getValue() returns the lowercased (if not XML and there are no preprocessing instructions) attribute name.
  • TokenType.ATTR_EQ - = char after an attribute name. It's always followed by TokenType.ATTR_VALUE (optionally preceded by TokenType.TAG_OPEN_SPACE). If = is not followed by an attribute value, it's returned as TokenType.JUNK.
  • TokenType.ATTR_VALUE - Attribute value. It can be quoted in " or ', or it can be unquoted. This token type can contain entities and preprocessing instructions, like "a<?...?>&lt;<?...?>". Token.getValue() returns unquoted text with decoded entities, but preprocessing instructions are left intact.
  • TokenType.TAG_OPEN_END - > or /> chars that terminate the opening tag. Token.isSelfClosing indicates whether this tag has no corresponding closing tag.
  • TokenType.TAG_CLOSE - Closing tag token, like </script >. It can contain preprocessing instructions, like </sc<?...?>ip<?...?>>.
  • TokenType.RAW_LT - < char that is not part of markup (it just appears in the text). Typically you want to convert it to &lt;.
  • TokenType.RAW_AMP - & char that is not part of markup (it just appears in the text). Typically you want to convert it to &amp;.
  • TokenType.JUNK - Characters that are out of place. Typically you want to remove them. This token type can appear in the following situations:
    • Characters in an opening tag that can't be interpreted as attributes. For example, a repeated = char, or / at the end of an opening tag that must have a corresponding closing tag.
    • Unnecessary quotes around attribute value, if requested to unquote attributes.
    • Attribute values of duplicate attributes.
    • Closing tag, that was not opened.
    • CDATA not in XML or foreign tags.
  • TokenType.JUNK_DUP_ATTR_NAME - Name of duplicate attribute.
  • TokenType.FIX_STRUCTURE_TAG_OPEN - The FIX_STRUCTURE_* token types don't represent text in the source code, but are generated by the tokenizer to suggest markup fixes. FIX_STRUCTURE_TAG_OPEN is an automatically inserted opening tag, like <b>. Its text cannot contain preprocessing instructions. Consider the following markup: <b>BOLD<u>BOLD-UND</b>UND</u>. Many browsers will interpret it as <b>BOLD<u>BOLD-UND</u></b><u>UND</u>, and this tokenizer will likewise suggest the </u> as TokenType.FIX_STRUCTURE_TAG_CLOSE and the <u> as TokenType.FIX_STRUCTURE_TAG_OPEN (see the sketch after this list).
  • TokenType.FIX_STRUCTURE_TAG_OPEN_SPACE - One space character that is suggested between attributes in situations like <meta name="name"content="content">.
  • TokenType.FIX_STRUCTURE_TAG_CLOSE - Autogenerated closing tag, like </td>. It's generated when a closing tag is missing in the source markup.
  • TokenType.FIX_STRUCTURE_ATTR_QUOT - One autogenerated quote character to surround attribute value, if Settings.quoteAttributes was requested, or when Settings.mode === 'xml'.
  • TokenType.MORE_REQUEST - Before returning the last token found in the source string, htmltok() generates this meta-token. If you then call it.next(more) with a nonempty string argument, this string will be appended to the last token, and tokenization will continue.
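
To illustrate the FIX_STRUCTURE_* tokens, here is a minimal sketch (assumed behavior, following the <b>/<u> example above) that prints only the fixes the tokenizer suggests:

import {htmltok, TokenType} from 'https://deno.land/x/htmltok@v2.0.1/mod.ts';

for (const token of htmltok('<b>BOLD<u>BOLD-UND</b>UND</u>'))
{	if (token.type == TokenType.FIX_STRUCTURE_TAG_OPEN || token.type == TokenType.FIX_STRUCTURE_TAG_CLOSE)
	{	// Expected suggestions: a </u> before the </b>, and a <u> after it
		console.log(`Suggested: ${token.text}`);
	}
}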

Settings

interface Settings
{
	šŸ“„ mode?: "html" | "xml"
	šŸ“„ noCheckAttributes?: boolean
	šŸ“„ quoteAttributes?: boolean
	šŸ“„ unquoteAttributes?: boolean
}

  • mode - Tokenize in either HTML or XML mode. In XML mode, tag and attribute names are case-sensitive, and there's no special treatment for tags like <script>, <style>, <textarea> and <title>. Also there are no tags that are self-closing by definition, and /> can be used in any tag to make it self-closing. XML mode also implies Settings.quoteAttributes.
  • noCheckAttributes - If true, will not try to detect duplicate attribute names. This can save some computing resources.
  • quoteAttributes - If true, will generate TokenType.FIX_STRUCTURE_ATTR_QUOT tokens to suggest quotes around unquoted attribute values.
  • unquoteAttributes - If true, will return quotes around attribute values as TokenType.JUNK if such quotes are not necessary. The HTML5 standard allows unquoted attributes (unlike XML), and removing quotes can make markup lighter and more readable by humans and robots (see the sketch below).
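
For instance, a minimal sketch (the exact output is an assumption) combining unquoteAttributes with Token.normalized() to drop quotes that HTML5 doesn't require:

import {htmltok} from 'https://deno.land/x/htmltok@v2.0.1/mod.ts';

const html = '<input type="text" value="a b">';
const out = [...htmltok(html, {unquoteAttributes: true})].map(t => t.normalized()).join('');
// Expected something like: <input type=text value="a b">
// The quotes around "a b" should be kept, because that value contains a space.
console.log(out);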

HTML normalization

htmltok() can be used to normalize HTML, that is, to fix markup errors. This includes closing unclosed tags, quoting attributes (in XML or if Settings.quoteAttributes is set), etc.

import {htmltok} from 'https://deno.land/x/htmltok@v2.0.1/mod.ts';

const html = `<a target=_blank>Click here`;
const normalHtml = [...htmltok(html, {quoteAttributes: true})].map(t => t.normalized()).join('');
console.log(normalHtml);

Prints:

<a target="_blank">Click here</a>

Preprocessing instructions

This tokenizer allows you to build template parsers that utilize the "preprocessing instructions" feature of XML-like markup languages. However, there's one limitation: the PIs must not cross markup boundaries.

If you want to execute preprocessing instructions before parsing the markup, it's very simple to do, and you don't need htmltok for this (just str.replace(/<\?[\S\s]*?\?>/g, exec)). Creating parsers that first recognize the markup structure, maybe split it, and execute PIs in later steps requires dealing with PIs as part of the markup, and htmltok can help here.
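
As a minimal sketch of that pre-parse approach (evalPi below is a hypothetical stand-in for a real template engine's evaluator; htmltok is not involved):

// Hypothetical toy evaluator: handles only <?='...'?> string-literal PIs.
function evalPi(pi: string): string
{	const m = pi.match(/^<\?='(.*)'\?>$/s);
	return m ? m[1] : '';
}

const source = `<?='<div'?> id="main"></div>`;
console.log(source.replace(/<\?[\S\s]*?\?>/g, evalPi)); // <div id="main"></div>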

The following code has inter-markup PIs, and is not suitable for htmltok:

<!-- Crosses markup boundaries -->
<?='<div'?> id="main"></div>

The following is alright:

<!-- Doesn't cross markup boundaries -->
<<?='div'?> id="main"></<?='div'?>>
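
Here is a minimal sketch (not from the original docs) that tokenizes the second snippet; per the token descriptions above, the PIs are expected to stay embedded inside the TAG_OPEN_BEGIN and TAG_CLOSE token texts rather than being yielded as standalone PI tokens:

import {htmltok} from 'https://deno.land/x/htmltok@v2.0.1/mod.ts';

for (const token of htmltok(`<<?='div'?> id="main"></<?='div'?>>`))
{	console.log(token.debug());
}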

htmltokStream() - Tokenize ReadableStream

function htmltokStream(source: ReadableStream<Uint8Array>, settings: Settings={}, hierarchy: string[]=[], tabWidth: number=4, nLine: number=1, nColumn: number=1, decoder: TextDecoder=defaultDecoder): AsyncGenerator<Token, void, any>

This function lets you tokenize a ReadableStream<Uint8Array> of HTML or XML source code. It never generates TokenType.MORE_REQUEST.

If decoder is provided, it will be used to convert bytes to text.

import {htmltokStream} from 'https://deno.land/x/htmltok@v2.0.1/mod.ts';

const res = await fetch("https://example.com/");
for await (const token of htmltokStream(res.body!))
{	console.log(token.debug());
}
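
A minimal sketch (assumed usage; the URL and encoding are placeholders) of passing a custom TextDecoder for non-UTF-8 input, with the other positional arguments left at their defaults:

import {htmltokStream} from 'https://deno.land/x/htmltok@v2.0.1/mod.ts';

const res = await fetch('https://example.com/legacy-page');
const decoder = new TextDecoder('windows-1252'); // decode bytes as Windows-1252 instead of UTF-8
for await (const token of htmltokStream(res.body!, {}, [], 4, 1, 1, decoder))
{	console.log(token.debug());
}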

htmlDecode() - Decode HTML5 entities

function htmlDecode(str: string, skipPi: boolean=false): string

This function decodes entities (character references), like &apos;, &#39; or &#x27;. If skipPi is true, it will operate only on parts between preprocessing instructions.

import {htmlDecode} from 'https://deno.land/x/htmltok@v2.0.1/mod.ts';

console.log(htmlDecode(`Text&amp;text<?&amp;?>text`)); // prints: Text&text<?&?>text
console.log(htmlDecode(`Text&amp;text<?&amp;?>text`, true)); // prints: Text&text<?&amp;?>text