Iterator-based tokenizer for writing parsers.
# createTokenizer
(regexps) – Create a {@link TokenizerFactory} for the given RegExps.
src/index.ts#L19
To capture, RegExps must use a named group.
```ts
const tokenize = createTokenizer(
  /(?<ident>[a-z]+)/, // named groups determine token `group`
  /(?<number>[0-9]+)/
)
```
# regexps
– RegExps to match.
`RegExp[]`
createTokenizer(regexps) =>
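The named-group mechanics can be illustrated with a short standalone sketch (not the library's actual implementation): the RegExp sources are merged into one global pattern, and whichever named group captured determines the token's `group`.

```ts
// Hypothetical sketch of the union-of-named-groups technique:
// merge the RegExp sources with `|`; for each match, the named
// group that actually captured gives the token its `group`.
function sketchTokenize(input: string, ...regexps: RegExp[]) {
  const union = new RegExp(regexps.map(r => r.source).join('|'), 'g')
  const tokens: { group: string; value: string; index: number }[] = []
  for (const match of input.matchAll(union)) {
    // exactly one named group is defined per match
    const [group, value] = Object.entries(match.groups!).find(
      ([, v]) => v != null
    )! as [string, string]
    tokens.push({ group, value, index: match.index! })
  }
  return tokens
}

sketchTokenize('hello 123', /(?<ident>[a-z]+)/, /(?<number>[0-9]+)/)
// => [
//   {group: 'ident', value: 'hello', index: 0},
//   {group: 'number', value: '123', index: 6}
// ]
```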
# Token
– Token interface
src/match-to-token/dist/types/token.d.ts#L5
# as
(value, group)
src/match-to-token/dist/types/token.d.ts#L25
# is
(group, value)
src/match-to-token/dist/types/token.d.ts#L24
# create
(value, group, source)
src/match-to-token/dist/types/token.d.ts#L6
# TokenizerCallableIterable
– Can be called to return the next <a href="https://github.com/stagas/match-to-token#token">Token</a>, or used as an Iterable in for-of and spread operations.
src/index.ts#L74
& Iterable<Token>
# TokenizerFactory
src/index.ts#L67
# (input)
– Create a {@link TokenizerCallableIterable} for the given input string.
```ts
// using next()
const next = tokenize('hello 123')
console.log(next()) // => {group: 'ident', value: 'hello', index: 0}
console.log(next()) // => {group: 'number', value: '123', index: 6}
console.log(next()) // => undefined

// using for-of
for (const token of tokenize('hello 123')) {
  console.log(token)
  // => {group: 'ident', value: 'hello', index: 0}
  // => {group: 'number', value: '123', index: 6}
}

// using spread
const tokens = [...tokenize('hello 123')]
console.log(tokens)
// => [
//   {group: 'ident', value: 'hello', index: 0},
//   {group: 'number', value: '123', index: 6}
// ]
```
# input
– The string to tokenize.
string
(input) =>
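The callable-iterable shape returned by the factory can be sketched in isolation (names here are illustrative, not the library's internals): a function that yields the next value is also given a `[Symbol.iterator]`, so one object supports both call-style and for-of/spread consumption.

```ts
// Hypothetical helper: wrap a `next`-style function so the result is
// both callable and iterable (iteration stops at `undefined`).
function callableIterable<T>(next: () => T | undefined) {
  return Object.assign(() => next(), {
    *[Symbol.iterator]() {
      let value: T | undefined
      while ((value = next()) !== undefined) yield value
    },
  })
}

// usage: pull one value by calling, spread the rest
let i = 0
const nums = callableIterable(() => (i < 3 ? i++ : undefined))
nums()                 // => 0
console.log([...nums]) // => [1, 2]
```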
- match-to-token by stagas – transform a RegExp named group match to a more useful object
All contributions are welcome!