Commit

Updated documentation.
orbitalquark committed Apr 11, 2021
1 parent 753c417 commit 87a3687
Showing 1 changed file with 26 additions and 36 deletions.
62 changes: 26 additions & 36 deletions docs/api.md
@@ -410,30 +410,21 @@

Instead of matching _n_ keywords with _n_ `P('keyword_`_`n`_`')` ordered choices, use another
convenience function: [`lexer.word_match()`](#lexer.word_match). It is much easier and more efficient to
write word matches like:

-local keyword = token(lexer.KEYWORD, lexer.word_match[[
-  keyword_1 keyword_2 ... keyword_n
-]])
-
-local case_insensitive_keyword = token(lexer.KEYWORD, lexer.word_match([[
-  KEYWORD_1 keyword_2 ... KEYword_n
-]], true))
-
-local hyphened_keyword = token(lexer.KEYWORD, lexer.word_match[[
-  keyword-1 keyword-2 ... keyword-n
-]])
-
-In order to more easily separate or categorize keyword sets, you can use Lua line comments
-within keyword strings. Such comments will be ignored. For example:
-
-local keyword = token(lexer.KEYWORD, lexer.word_match[[
-  -- Version 1 keywords.
-  keyword_11, keyword_12 ... keyword_1n
-  -- Version 2 keywords.
-  keyword_21, keyword_22 ... keyword_2n
-  ...
-  -- Version N keywords.
-  keyword_m1, keyword_m2 ... keyword_mn
-]])
+local keyword = token(lexer.KEYWORD, lexer.word_match{
+  'keyword_1', 'keyword_2', ..., 'keyword_n'
+})
+
+local case_insensitive_keyword = token(lexer.KEYWORD, lexer.word_match({
+  'KEYWORD_1', 'keyword_2', ..., 'KEYword_n'
+}, true))
+
+local hyphened_keyword = token(lexer.KEYWORD, lexer.word_match{
+  'keyword-1', 'keyword-2', ..., 'keyword-n'
+})
+
+For short keyword lists, you can use a single string of words. For example:
+
+local keyword = token(lexer.KEYWORD, lexer.word_match('key_1 key_2 ... key_n'))
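For readers outside Lua, the inputs `lexer.word_match()` accepts after this change can be sketched as follows. This is a hypothetical Python illustration of the documented behavior only, not Scintillua's LPeg-based implementation; `make_word_matcher` is an invented name:

```python
def make_word_matcher(word_list, case_insensitive=False):
    """Sketch of lexer.word_match()'s documented inputs (hypothetical helper).

    *word_list* may be a list of words or a single string of
    whitespace-separated words; Lua-style '--' line comments inside a
    multi-line string are ignored, as the documentation describes.
    """
    if isinstance(word_list, str):
        words = []
        for line in word_list.splitlines():
            words.extend(line.split('--', 1)[0].split())  # drop '--' comments
    else:
        words = list(word_list)
    if case_insensitive:
        vocab = {w.lower() for w in words}
        return lambda token: token.lower() in vocab
    vocab = set(words)
    return lambda token: token in vocab

# The table form and the string form build the same word set.
is_keyword = make_word_matcher(['keyword-1', 'keyword-2'])
is_keyword_ci = make_word_matcher('KEYWORD_1 keyword_2', case_insensitive=True)
```

Either input form yields the same matcher, which is why the migration step below treats the conversion as optional.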

**Comments**

@@ -827,8 +818,8 @@ recommended that you migrate yours. The migration process is fairly straightforward:
your tokens and patterns using [`lex:add_rule()`](#lexer.add_rule).
4. Similarly, any custom token names should have their styles immediately defined using
[`lex:add_style()`](#lexer.add_style).
-5. Convert any table arguments passed to [`lexer.word_match()`](#lexer.word_match) to a space-separated string
-   of words.
+5. Optionally convert any table arguments passed to [`lexer.word_match()`](#lexer.word_match) to a
+   space-separated string of words.
6. Replace any calls to `lexer.embed(M, child, ...)` and `lexer.embed(parent, M, ...)` with
[`lex:embed`](#lexer.embed)`(child, ...)` and `parent:embed(lex, ...)`, respectively.
7. Define fold points with simple calls to [`lex:add_fold_point()`](#lexer.add_fold_point). No
Expand Down Expand Up @@ -886,8 +877,8 @@ Following the migration steps would yield:
local lex = lexer.new('legacy')

lex:add_rule('whitespace', token(lexer.WHITESPACE, lexer.space^1))
-lex:add_rule('keyword', token(lexer.KEYWORD, word_match[[foo bar baz]]))
-lex:add_rule('custom', token('custom', P('quux')))
+lex:add_rule('keyword', token(lexer.KEYWORD, word_match('foo bar baz')))
+lex:add_rule('custom', token('custom', 'quux'))
lex:add_style('custom', lexer.styles.keyword .. {bold = true})
lex:add_rule('identifier', token(lexer.IDENTIFIER, lexer.word))
lex:add_rule('string', token(lexer.STRING, lexer.range('"')))
@@ -1605,26 +1596,25 @@ Return:
* pattern

<a id="lexer.word_match"></a>
-#### `lexer.word_match`(words, case\_insensitive, word\_chars)
+#### `lexer.word_match`(word\_list, case\_insensitive, word\_chars)

-Creates and returns a pattern that matches any single word in string *words*.
+Creates and returns a pattern that matches any single word in list or string *word_list*.
*case_insensitive* indicates whether or not to ignore case when matching words.
This is a convenience function for simplifying a set of ordered choice word patterns.
If *words* is a multi-line string, it may contain Lua line comments (`--`) that will ultimately
be ignored.

Fields:

-* `words`: A string list of words separated by spaces.
+* `word_list`: A list of words or a string list of words separated by spaces.
* `case_insensitive`: Optional boolean flag indicating whether or not the word match is
case-insensitive. The default value is `false`.
* `word_chars`: Unused legacy parameter.

Usage:

-* `local keyword = token(lexer.KEYWORD, word_match[[foo bar baz]])`
-* `local keyword = token(lexer.KEYWORD, word_match([[foo-bar foo-baz bar-foo bar-baz
-  baz-foo baz-bar]], true))`
+* `local keyword = token(lexer.KEYWORD, word_match{'foo', 'bar', 'baz'})`
+* `local keyword = token(lexer.KEYWORD, word_match({'foo-bar', 'foo-baz', 'bar-foo',
+  'bar-baz', 'baz-foo', 'baz-bar'}, true))`
+* `local keyword = token(lexer.KEYWORD, word_match('foo bar baz'))`

Return:

