Converts a Wikidot markup source string into a flat array of Tokens.
The lexer is single-pass and greedy: it tries the longest-matching
multi-character pattern first (e.g. [[[ before [[, ** before *).
Context-sensitive constructs (line-start headings, blockquote markers)
are disambiguated via the lineStart state flag.
For convenience, use the standalone tokenize function instead
of constructing a Lexer directly.
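The greedy longest-match strategy and the lineStart flag described above can be sketched as follows. This is an illustrative sketch, not the real implementation: the Token shape, the pattern table, and the heading handling here are assumptions made for the example.

```typescript
// Hypothetical sketch of the greedy, single-pass lexing described above.
// Token, PATTERNS, and the heading rule are illustrative assumptions,
// not the actual Wikidot lexer's API.

type Token = { type: string; text: string };

// Multi-character patterns ordered longest-first, so "[[[" is tried
// before "[[" and "**" before "*".
const PATTERNS: [string, string][] = [
  ["[[[", "TripleOpen"],
  ["[[", "DoubleOpen"],
  ["**", "BoldMarker"],
  ["*", "Star"],
];

function tokenize(src: string): Token[] {
  const tokens: Token[] = [];
  let pos = 0;
  let lineStart = true; // true only at the start of a line

  while (pos < src.length) {
    // Context-sensitive construct: "+" is a heading marker only
    // when it appears at the start of a line.
    if (lineStart && src[pos] === "+") {
      tokens.push({ type: "Heading", text: "+" });
      pos += 1;
      lineStart = false;
      continue;
    }
    // Greedy match: first (longest) pattern that matches wins.
    const match = PATTERNS.find(([pat]) => src.startsWith(pat, pos));
    if (match) {
      tokens.push({ type: match[1], text: match[0] });
      pos += match[0].length;
      lineStart = false;
    } else {
      // Fallback: emit a single character as Text (or Newline),
      // updating lineStart so line-anchored rules fire next.
      const ch = src[pos];
      tokens.push({ type: ch === "\n" ? "Newline" : "Text", text: ch });
      lineStart = ch === "\n";
      pos += 1;
    }
  }
  return tokens;
}
```

With this ordering, `tokenize("[[[page")` yields a `TripleOpen` token rather than `DoubleOpen` followed by a stray bracket, and `tokenize("+title")` starts with a `Heading` token while a mid-line `+` falls through to plain `Text`.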