Simple analyzer
The simple analyzer is a very basic analyzer that breaks text into terms at any non-letter character and lowercases each term. Unlike the standard analyzer, the simple analyzer treats everything other than letters as a delimiter, which means it does not recognize numbers, punctuation, or special characters as part of a token.
Example
Use the following command to create an index named my_simple_index that uses the simple analyzer:
PUT /my_simple_index
{
  "mappings": {
    "properties": {
      "my_field": {
        "type": "text",
        "analyzer": "simple"
      }
    }
  }
}
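To see how the simple analyzer tokenizes text, you can send a sample string to the _analyze API (the sample text below is illustrative):

POST /my_simple_index/_analyze
{
  "analyzer": "simple",
  "text": "Hello World 123!"
}

Because the analyzer splits on non-letter characters and lowercases the result, the number 123 and the exclamation mark are discarded, leaving the tokens hello and world.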
Configuring a custom analyzer
Use the following command to create an index with a custom analyzer that is equivalent to the simple analyzer with an added html_strip character filter:
PUT /my_custom_simple_index
{
  "settings": {
    "analysis": {
      "char_filter": {
        "html_strip": {
          "type": "html_strip"
        }
      },
      "tokenizer": {
        "my_lowercase_tokenizer": {
          "type": "lowercase"
        }
      },
      "analyzer": {
        "my_custom_simple_analyzer": {
          "type": "custom",
          "char_filter": ["html_strip"],
          "tokenizer": "my_lowercase_tokenizer",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "my_field": {
        "type": "text",
        "analyzer": "my_custom_simple_analyzer"
      }
    }
  }
}
Generated tokens
Use the following request to examine the tokens generated by the analyzer:
POST /my_custom_simple_index/_analyze
{
  "analyzer": "my_custom_simple_analyzer",
  "text": "<p>The slow turtle swims over to dogs © 2024!</p>"
}
The response contains the generated tokens:
{
  "tokens": [
    {"token": "the","start_offset": 3,"end_offset": 6,"type": "word","position": 0},
    {"token": "slow","start_offset": 7,"end_offset": 11,"type": "word","position": 1},
    {"token": "turtle","start_offset": 12,"end_offset": 18,"type": "word","position": 2},
    {"token": "swims","start_offset": 19,"end_offset": 24,"type": "word","position": 3},
    {"token": "over","start_offset": 25,"end_offset": 29,"type": "word","position": 4},
    {"token": "to","start_offset": 30,"end_offset": 32,"type": "word","position": 5},
    {"token": "dogs","start_offset": 33,"end_offset": 37,"type": "word","position": 6}
  ]
}