Custom analyzers

While Elasticsearch comes with a number of analyzers available out of the box, the real power comes from the ability to create your own custom analyzers by combining character filters, tokenizers and token filters in a configuration which suits your particular data.

In [analysis-intro] we said that an analyzer is a wrapper that combines three functions into a single package, which are executed in sequence:

Character filters

Character filters are used to "tidy up" a string before it is tokenized. For instance, if our text is in HTML format, it will contain HTML tags like <p> or <div> that we don't want to be indexed. We can use the {ref}/analysis-htmlstrip-charfilter.html[html_strip character filter] to remove all HTML tags and to convert HTML entities like &Aacute; into the corresponding Unicode character: Á.

An analyzer may have zero or more character filters.
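
To see the effect of a character filter in isolation, you can pass one directly to the analyze API. The request below is a minimal sketch using the JSON-body form of the analyze API accepted by more recent Elasticsearch versions; older versions use query-string parameters instead:

GET /_analyze
{
    "tokenizer":   "standard",
    "char_filter": [ "html_strip" ],
    "text":        "<p>Some <b>déjà vu</b> text</p>"
}

The returned tokens contain no HTML markup: Some, déjà, vu, text.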

Tokenizers

An analyzer must have a single tokenizer. The tokenizer breaks the string up into individual terms or tokens. The {ref}/analysis-standard-tokenizer.html[standard tokenizer], which is used in the standard analyzer, breaks up a string into individual terms on word boundaries and removes most punctuation, but other tokenizers exist which have different behaviour.

For instance, the {ref}/analysis-keyword-tokenizer.html[keyword tokenizer] outputs exactly the same string as it received, without any tokenization. The {ref}/analysis-whitespace-tokenizer.html[whitespace tokenizer] splits text on whitespace only. The {ref}/analysis-pattern-tokenizer.html[pattern tokenizer] can be used to split text on a matching regular expression.
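
The difference between tokenizers is easiest to see with the analyze API. As a sketch, again using the JSON-body form available in more recent versions, compare how the whitespace tokenizer treats punctuation:

GET /_analyze
{
    "tokenizer": "whitespace",
    "text":      "The QUICK brown-fox!"
}

The whitespace tokenizer emits The, QUICK, brown-fox! exactly as written, while the standard tokenizer would strip the trailing punctuation and split brown-fox into two terms.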

Token filters

After tokenization, the resulting token stream is passed through any specified token filters, in the order in which they are specified.

Token filters may change, add, or remove tokens. We have already mentioned the {ref}/analysis-lowercase-tokenfilter.html[lowercase] and {ref}/analysis-stop-tokenfilter.html[stop token filters], but there are many more available in Elasticsearch. {ref}/analysis-stemmer-tokenfilter.html[Stemming token filters] "stem" words to their root form. The {ref}/analysis-asciifolding-tokenfilter.html[asciifolding filter] removes diacritics, converting a term like "très" into "tres". The {ref}/analysis-ngram-tokenfilter.html[ngram] and {ref}/analysis-edgengram-tokenfilter.html[edge_ngram token filters] can produce tokens suitable for partial matching or autocomplete.
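
Token filters can also be tried out on their own through the analyze API. A minimal sketch, using the JSON-body form of the API found in more recent Elasticsearch versions, which chains the lowercase and asciifolding filters after the standard tokenizer:

GET /_analyze
{
    "tokenizer": "standard",
    "filter":    [ "lowercase", "asciifolding" ],
    "text":      "Très TASTY"
}

The resulting tokens are tres and tasty: lowercased, with the diacritic removed.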

In [search-in-depth], we will discuss examples of where and how to use these tokenizers and filters. But first, we need to explain how to create a custom analyzer.

Creating a custom analyzer

In the same way as we configured the es_std analyzer above, we can configure character filters, tokenizers and token filters in their respective sections under analysis:

PUT /my_index
{
    "settings": {
        "analysis": {
            "char_filter": { ... custom character filters ... },
            "tokenizer":   { ...    custom tokenizers     ... },
            "filter":      { ...   custom token filters   ... },
            "analyzer":    { ...    custom analyzers      ... }
        }
    }
}

As an example, let’s set up a custom analyzer which will:

  1. Strip out HTML using the html_strip character filter.

  2. Replace & characters with " and ", using a custom mapping character filter:

    "char_filter": {
        "&_to_and": {
            "type":       "mapping",
            "mappings": [ "&=> and "]
        }
    }
  3. Tokenize words, using the standard tokenizer.

  4. Lowercase terms, using the lowercase token filter.

  5. Remove a custom list of stopwords, using a custom stop token filter:

    "filter": {
        "my_stopwords": {
            "type":        "stop",
            "stopwords": [ "the", "a" ]
        }
    }

Our analyzer definition combines the predefined tokenizer and filters with the custom filters that we have configured above:

"analyzer": {
    "my_analyzer": {
        "type":           "custom",
        "char_filter":  [ "html_strip", "&_to_and" ],
        "tokenizer":      "standard",
        "filter":       [ "lowercase", "my_stopwords" ]
    }
}

To put it all together, the whole create-index request looks like this:

PUT /my_index
{
    "settings": {
        "analysis": {
            "char_filter": {
                "&_to_and": {
                    "type":       "mapping",
                    "mappings": [ "&=> and "]
            }},
            "filter": {
                "my_stopwords": {
                    "type":       "stop",
                    "stopwords": [ "the", "a" ]
            }},
            "analyzer": {
                "my_analyzer": {
                    "type":         "custom",
                    "char_filter":  [ "html_strip", "&_to_and" ],
                    "tokenizer":    "standard",
                    "filter":       [ "lowercase", "my_stopwords" ]
            }}
}}}

After creating the index, use the analyze API to test the new analyzer:

GET /my_index/_analyze?analyzer=my_analyzer
The quick & brown fox
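
The query-string form above matches the older analyze API used throughout this chapter. In more recent Elasticsearch versions the same test is written with a JSON body; a sketch:

GET /my_index/_analyze
{
    "analyzer": "my_analyzer",
    "text":     "The quick & brown fox"
}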

The abbreviated results below show that our analyzer is working correctly: the stopword The has been removed and & has been replaced by and:

{
  "tokens" : [
      { "token" :   "quick",    "position" : 2 },
      { "token" :   "and",      "position" : 3 },
      { "token" :   "brown",    "position" : 4 },
      { "token" :   "fox",      "position" : 5 }
    ]
}

The analyzer is not much use unless we tell Elasticsearch where to use it. We can apply it to a string field with a mapping such as:

PUT /my_index/_mapping/my_type
{
    "properties": {
        "title": {
            "type":      "string",
            "analyzer":  "my_analyzer"
        }
    }
}
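
With the mapping in place, you can also verify that the field picks up the right analyzer by passing the field parameter to the analyze API instead of naming the analyzer explicitly. A sketch, assuming the title field defined above:

GET /my_index/_analyze?field=title
The quick & brown fox

Because the title field is mapped to my_analyzer, this request should return the same tokens as before.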