Clarify filters can be used while creating a normalizer. (elastic#103826)
a03nikki authored Mar 12, 2024
1 parent e2c9767 commit ca8e9af
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions docs/reference/analysis/normalizers.asciidoc
@@ -6,15 +6,15 @@ token. As a consequence, they do not have a tokenizer and only accept a subset
 of the available char filters and token filters. Only the filters that work on
 a per-character basis are allowed. For instance a lowercasing filter would be
 allowed, but not a stemming filter, which needs to look at the keyword as a
-whole. The current list of filters that can be used in a normalizer is
-following: `arabic_normalization`, `asciifolding`, `bengali_normalization`,
+whole. The current list of filters that can be used in a normalizer definition
+are: `arabic_normalization`, `asciifolding`, `bengali_normalization`,
 `cjk_width`, `decimal_digit`, `elision`, `german_normalization`,
 `hindi_normalization`, `indic_normalization`, `lowercase`, `pattern_replace`,
 `persian_normalization`, `scandinavian_folding`, `serbian_normalization`,
 `sorani_normalization`, `trim`, `uppercase`.
 
 Elasticsearch ships with a `lowercase` built-in normalizer. For other forms of
-normalization a custom configuration is required.
+normalization, a custom configuration is required.
 
 [discrete]
 === Custom normalizers
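
For context beyond this diff: a custom normalizer is declared under the index's `analysis` settings and then referenced from a `keyword` field in the mappings. The request below is a minimal sketch, not part of this commit; the index name `my-index`, normalizer name `my_normalizer`, and field name `foo` are placeholders, and it combines two filters from the list above (`lowercase`, `asciifolding`):

PUT my-index
{
  "settings": {
    "analysis": {
      "normalizer": {
        "my_normalizer": {
          "type": "custom",
          "filter": ["lowercase", "asciifolding"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "foo": {
        "type": "keyword",
        "normalizer": "my_normalizer"
      }
    }
  }
}

With this configuration, values indexed into `foo` are lowercased and ASCII-folded before being stored as keyword terms, and the same normalization is applied to search terms in term-level and match queries against that field.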
