Namespace Lucene.Net.Analysis.Cn
Analyzer for Chinese, which indexes unigrams (individual Chinese characters).
Three analyzers are provided for Chinese, each of which treats Chinese text in a different way.
- StandardAnalyzer: Index unigrams (individual Chinese characters) as tokens.
- CJKAnalyzer (in the Lucene.Net.Analysis.Cjk namespace): Index bigrams (overlapping groups of two adjacent Chinese characters) as tokens.
- SmartChineseAnalyzer (in the Lucene.Net.Analysis.SmartCn package): Index words (attempt to segment Chinese text into words) as tokens.
Example phrase: "我是中国人"
- StandardAnalyzer: 我-是-中-国-人
- CJKAnalyzer: 我是-是中-中国-国人
- SmartChineseAnalyzer: 我-是-中国-人
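The unigram and bigram behaviors above can be sketched in a few lines of illustrative Python (a toy model, not the actual analyzer implementations; SmartChineseAnalyzer's statistical word segmentation is not reproducible this simply and is omitted):

```python
def unigrams(text):
    # StandardAnalyzer-style: each Chinese character becomes its own token
    return list(text)

def bigrams(text):
    # CJKAnalyzer-style: overlapping pairs of adjacent characters
    return [text[i:i + 2] for i in range(len(text) - 1)]

print(unigrams("我是中国人"))  # ['我', '是', '中', '国', '人']
print(bigrams("我是中国人"))   # ['我是', '是中', '中国', '国人']
```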
Classes
ChineseAnalyzer
A Lucene.Net.Analysis.Analyzer that tokenizes text with ChineseTokenizer and filters it with ChineseFilter.
ChineseFilter
A Lucene.Net.Analysis.TokenFilter with a stop word table.
- Numeric tokens are removed.
- English tokens must be longer than one character.
- Each Chinese character is treated as a single Chinese word.
TO DO:
- Add Chinese stop words, such as \ue400
- Dictionary-based Chinese word extraction
- Intelligent Chinese word extraction
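The implemented filtering rules (numeric removal, the English length requirement, and English stop words) can be sketched in illustrative Python. This is not the Lucene.Net implementation, and the stop-word table here is a placeholder assumption:

```python
ENGLISH_STOP_WORDS = {"a", "an", "and", "the", "of"}  # placeholder; the real table is larger

def chinese_filter(tokens):
    out = []
    for tok in tokens:
        if tok.isdigit():
            continue  # numeric tokens are removed
        if tok.isascii() and tok.isalpha():
            # English tokens: drop single letters and stop words
            if len(tok) == 1 or tok.lower() in ENGLISH_STOP_WORDS:
                continue
        out.append(tok)  # single Chinese characters pass through as words
    return out

print(chinese_filter(["我", "是", "2024", "a", "lucene", "the"]))
# ['我', '是', 'lucene']
```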
ChineseFilterFactory
Factory for ChineseFilter
ChineseTokenizer
Tokenizes Chinese text as individual Chinese characters.
ChineseTokenizer and CJKTokenizer differ in how they split text into tokens.
For example, if the Chinese text "C1C2C3C4" is to be indexed:
- The tokens returned from ChineseTokenizer are C1, C2, C3, C4.
- The tokens returned from CJKTokenizer are C1C2, C2C3, C3C4.
Therefore the index created by CJKTokenizer is much larger.
The trade-off is search coverage: queries such as C1, C1C2, C1C3, C4C2, C1C2C3 ... can all be matched against a ChineseTokenizer index, but single characters and non-adjacent combinations (for example C1 or C4C2) cannot be matched against a CJKTokenizer index, because no bigram token contains them.
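The single-character case can be illustrated with a toy exact-token index (a sketch of the token sets only, not how Lucene evaluates queries):

```python
def bigrams(text):
    # CJKTokenizer-style: overlapping pairs of adjacent characters
    return [text[i:i + 2] for i in range(len(text) - 1)]

doc = "我是中国人"  # stand-in for C1C2C3C4C5-style text
unigram_index = set(doc)          # ChineseTokenizer-style tokens
bigram_index = set(bigrams(doc))  # CJKTokenizer-style tokens

# A single-character query matches a unigram token, but no bigram token equals it.
print("中" in unigram_index)  # True
print("中" in bigram_index)   # False
```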
ChineseTokenizerFactory
Factory for ChineseTokenizer