Commit 284965a

Add keyword marker token filter docs #8065 (#8134) (#8756)

1 parent 09a37a5 commit 284965a

3 files changed: +129 −2 lines changed

_analyzers/token-filters/elision.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -24,7 +24,7 @@ Parameter | Required/Optional | Data type | Description
 
 ## Example
 
-The default set of French elisions is `l'`, `m'`, `t'`, `qu'`, `n'`, `s'`, `j'`, `d'`, `c'`, `jusqu'`, `quoiqu'`, `lorsqu'`, and `puisqu'`. You can update this by configuring the `french_elision` token filter. The following example request creates a new index named `french_texts` and configures an analyzer with the `french_elision` filter:
+The default set of French elisions is `l'`, `m'`, `t'`, `qu'`, `n'`, `s'`, `j'`, `d'`, `c'`, `jusqu'`, `quoiqu'`, `lorsqu'`, and `puisqu'`. You can update this by configuring the `french_elision` token filter. The following example request creates a new index named `french_texts` and configures an analyzer with a `french_elision` filter:
 
 ```json
 PUT /french_texts
````

_analyzers/token-filters/index.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -34,7 +34,7 @@ Token filter | Underlying Lucene token filter| Description
 [`hyphenation_decompounder`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/hyphenation-decompounder) | [HyphenationCompoundWordTokenFilter](https://lucene.apache.org/core/9_8_0/analysis/common/org/apache/lucene/analysis/compound/HyphenationCompoundWordTokenFilter.html) | Uses XML-based hyphenation patterns to find potential subwords in compound words and checks the subwords against the specified word list. The token output contains only the subwords found in the word list.
 [`keep_types`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/keep-types/) | [TypeTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/TypeTokenFilter.html) | Keeps or removes tokens of a specific type.
 [`keep_words`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/keep-words/) | [KeepWordFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/KeepWordFilter.html) | Checks the tokens against the specified word list and keeps only those that are in the list.
-`keyword_marker` | [KeywordMarkerFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/KeywordMarkerFilter.html) | Marks specified tokens as keywords, preventing them from being stemmed.
+[`keyword_marker`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/keyword-marker/) | [KeywordMarkerFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/KeywordMarkerFilter.html) | Marks specified tokens as keywords, preventing them from being stemmed.
 `keyword_repeat` | [KeywordRepeatFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/KeywordRepeatFilter.html) | Emits each incoming token twice: once as a keyword and once as a non-keyword.
 `kstem` | [KStemFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/en/KStemFilter.html) | Provides kstem-based stemming for the English language. Combines algorithmic stemming with a built-in dictionary.
 `kuromoji_completion` | [JapaneseCompletionFilter](https://lucene.apache.org/core/9_10_0/analysis/kuromoji/org/apache/lucene/analysis/ja/JapaneseCompletionFilter.html) | Adds Japanese romanized terms to the token stream (in addition to the original tokens). Usually used to support autocomplete on Japanese search terms. Note that the filter has a `mode` parameter, which should be set to `index` when used in an index analyzer and `query` when used in a search analyzer. Requires the `analysis-kuromoji` plugin. For information about installing the plugin, see [Additional plugins]({{site.url}}{{site.baseurl}}/install-and-configure/plugins/#additional-plugins).
````
_analyzers/token-filters/keyword-marker.md

Lines changed: 127 additions & 0 deletions (new file)

---
layout: default
title: Keyword marker
parent: Token filters
nav_order: 200
---

# Keyword marker token filter

The `keyword_marker` token filter prevents certain tokens from being altered by stemmers or other token filters. It does this by marking the specified tokens as keywords, which exempts them from stemming or other processing and ensures that those words remain in their original form.

## Parameters

The `keyword_marker` token filter can be configured with the following parameters.

Parameter | Required/Optional | Data type | Description
:--- | :--- | :--- | :---
`ignore_case` | Optional | Boolean | Whether to ignore letter case when matching keywords. Default is `false`.
`keywords` | Required if neither `keywords_path` nor `keywords_pattern` is set | List of strings | The list of tokens to mark as keywords.
`keywords_path` | Required if neither `keywords` nor `keywords_pattern` is set | String | The path (relative to the `config` directory or absolute) to the list of keywords.
`keywords_pattern` | Required if neither `keywords` nor `keywords_path` is set | String | A [regular expression](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html) used for matching tokens to be marked as keywords.

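The three keyword sources are alternative ways to supply the same information. As an illustrative sketch only (the index name `my_pattern_index` and filter name `keyword_marker_by_pattern` are hypothetical, not part of this commit), a filter could match keywords with `keywords_pattern` instead of listing them explicitly:

```json
PUT /my_pattern_index
{
  "settings": {
    "analysis": {
      "filter": {
        "keyword_marker_by_pattern": {
          "type": "keyword_marker",
          "keywords_pattern": "example|sample"
        }
      }
    }
  }
}
```

With this configuration, any token that matches the regular expression would be marked as a keyword, which can be more convenient than maintaining a word list when the protected terms follow a predictable pattern.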

## Example

The following example request creates a new index named `my_index` and configures an analyzer with a `keyword_marker` filter. The filter marks the word `example` as a keyword:

```json
PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "custom_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "keyword_marker_filter", "stemmer"]
        }
      },
      "filter": {
        "keyword_marker_filter": {
          "type": "keyword_marker",
          "keywords": ["example"]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
GET /my_index/_analyze
{
  "analyzer": "custom_analyzer",
  "text": "Favorite example"
}
```
{% include copy-curl.html %}

The response contains the generated tokens. Note that while the word `favorite` was stemmed, the word `example` was not stemmed because it was marked as a keyword:

```json
{
  "tokens": [
    {
      "token": "favorit",
      "start_offset": 0,
      "end_offset": 8,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "example",
      "start_offset": 9,
      "end_offset": 16,
      "type": "<ALPHANUM>",
      "position": 1
    }
  ]
}
```
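Because the `lowercase` filter runs before `keyword_marker_filter` in the analyzer above, keyword matching operates on lowercased tokens. As a hedged sketch based on the parameter table (the filter name `keyword_marker_ci` is hypothetical), `ignore_case` could be used to protect the word regardless of how it is capitalized, for example if the filter ran before lowercasing:

```json
"filter": {
  "keyword_marker_ci": {
    "type": "keyword_marker",
    "keywords": ["example"],
    "ignore_case": true
  }
}
```

This fragment would replace the filter definition in the index settings shown earlier; with `ignore_case` set to `true`, tokens such as `Example` or `EXAMPLE` would also be marked as keywords.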

You can further examine the impact of the `keyword_marker` token filter by adding the following parameters to the `_analyze` query:

```json
GET /my_index/_analyze
{
  "analyzer": "custom_analyzer",
  "text": "This is an OpenSearch example demonstrating keyword marker.",
  "explain": true,
  "attributes": "keyword"
}
```
{% include copy-curl.html %}

This produces additional details in the response similar to the following:

```json
{
  "name": "porter_stem",
  "tokens": [
    ...
    {
      "token": "example",
      "start_offset": 22,
      "end_offset": 29,
      "type": "<ALPHANUM>",
      "position": 4,
      "keyword": true
    },
    {
      "token": "demonstr",
      "start_offset": 30,
      "end_offset": 43,
      "type": "<ALPHANUM>",
      "position": 5,
      "keyword": false
    },
    ...
  ]
}
```
