Class SynonymFilter
Matches single- or multi-word synonyms in a token stream. This token stream cannot properly handle position increments != 1, i.e., you should place this filter before filtering out stop words.
Note that with the current implementation, parsing is greedy, so whenever multiple parses would apply, the rule starting the earliest and parsing the most tokens wins. For example, if you have these rules:
a -> x
a b -> y
b c d -> z
Then the input "a b c d e" parses to "y b c d", i.e., the 2nd rule "wins" because it started earliest and matched the most input tokens among the rules starting at that point.
A future improvement to this filter could allow non-greedy parsing, such that the 3rd rule would win, and also separately allow multiple parses, such that all 3 rules would match, perhaps even on a rule by rule basis.
NOTE: when a match occurs, the output tokens associated with the matching rule are "stacked" on top of the input stream (if the rule had keepOrig=true) and also on top of another matched rule's output tokens. This is not a correct solution, as really the output should be an arbitrary graph/lattice. For example, with the above match, you would expect an exact Lucene.Net.Search.PhraseQuery "y b c" to match the parsed tokens, but it will fail to do so. This limitation is necessary because Lucene's Lucene.Net.Analysis.TokenStream (and index) cannot yet represent an arbitrary graph.
NOTE: If multiple incoming tokens arrive on the same position, only the first token at that position is used for parsing. Subsequent tokens simply pass through and are not parsed. A future improvement would be to allow these tokens to also be matched.
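To make the greedy behavior concrete, here is a minimal sketch, assuming Lucene.Net 4.8's SynonymMap.Builder (Add/Join/Build), WhitespaceTokenizer, and attribute APIs; it registers the three rules above and runs the input a b c d e through a SynonymFilter so the greedy parse can be inspected:

```csharp
using System;
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Synonym;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

// Register the rules: a -> x, a b -> y, b c d -> z (keepOrig = false for all).
var builder = new SynonymMap.Builder(true); // true = dedup identical rules
builder.Add(new CharsRef("a"), new CharsRef("x"), false);
builder.Add(SynonymMap.Builder.Join(new[] { "a", "b" }, new CharsRef()), new CharsRef("y"), false);
builder.Add(SynonymMap.Builder.Join(new[] { "b", "c", "d" }, new CharsRef()), new CharsRef("z"), false);
SynonymMap map = builder.Build();

// Tokenize "a b c d e" (no stop-word filtering before the synonym filter).
TokenStream ts = new WhitespaceTokenizer(LuceneVersion.LUCENE_48, new StringReader("a b c d e"));
ts = new SynonymFilter(ts, map, true);

var termAtt = ts.AddAttribute<ICharTermAttribute>();
ts.Reset();
while (ts.IncrementToken())
{
    // The rule "a b -> y" wins the greedy match at the first position,
    // as described above; unmatched tokens pass through unchanged.
    Console.Write(termAtt.ToString() + " ");
}
ts.End();
ts.Dispose();
```

Because keepOrig is false for every rule here, the original tokens covered by a match are replaced; with keepOrig=true they would be emitted at the same positions, stacked alongside the synonym output as described in the note above.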
Namespace: Lucene.Net.Analysis.Synonym
Assembly: Lucene.Net.Analysis.Common.dll
Syntax
public sealed class SynonymFilter : TokenFilter, IDisposable
Constructors
SynonymFilter(TokenStream, SynonymMap, bool)
Declaration
public SynonymFilter(TokenStream input, SynonymMap synonyms, bool ignoreCase)
Parameters
Type | Name | Description |
---|---|---|
TokenStream | input | input token stream |
SynonymMap | synonyms | synonym map |
bool | ignoreCase | case-folds input for matching with ToLower(int, CultureInfo) using InvariantCulture. Note: if you set this to true, it is your responsibility to lowercase the input entries when you create the SynonymMap. |
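As an illustration of the ignoreCase parameter (a sketch; the rule text and tokenizer choice are purely for demonstration), note that only the incoming tokens are case-folded, so the entries added to the SynonymMap should themselves already be lowercase:

```csharp
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.Synonym;
using Lucene.Net.Util;

// Sketch: with ignoreCase = true, incoming tokens are lowercased for matching,
// so the map entries must already be lowercase themselves.
var builder = new SynonymMap.Builder(true);
builder.Add(new CharsRef("usa"),                                    // lowercase entry
            SynonymMap.Builder.Join(
                new[] { "united", "states", "of", "america" }, new CharsRef()),
            true);                                                  // keepOrig = true
SynonymMap map = builder.Build();

// "USA" in the input is case-folded before lookup and therefore hits the "usa" rule.
TokenStream ts = new WhitespaceTokenizer(LuceneVersion.LUCENE_48, new StringReader("USA trip"));
ts = new SynonymFilter(ts, map, true);
```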
Fields
TYPE_SYNONYM
Declaration
public const string TYPE_SYNONYM = "SYNONYM"
Field Value
Type | Description |
---|---|
string |
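A sketch of how this constant is typically used downstream (it assumes, as in the Java Lucene implementation, that the filter sets the type attribute of the tokens it injects to TYPE_SYNONYM): inspect ITypeAttribute to tell injected synonyms apart from original tokens.

```csharp
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Synonym;
using Lucene.Net.Analysis.TokenAttributes;

public static class SynonymInspector
{
    // Prints only the tokens that were injected by a SynonymFilter somewhere
    // in the given TokenStream, identified by their type attribute.
    public static void PrintInjectedSynonyms(TokenStream ts)
    {
        var termAtt = ts.AddAttribute<ICharTermAttribute>();
        var typeAtt = ts.AddAttribute<ITypeAttribute>();

        ts.Reset();
        while (ts.IncrementToken())
        {
            if (typeAtt.Type == SynonymFilter.TYPE_SYNONYM)
            {
                System.Console.WriteLine($"injected synonym: {termAtt}");
            }
        }
        ts.End();
    }
}
```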
Methods
IncrementToken()
Consumers (i.e., Lucene.Net.Index.IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate Lucene.Net.Util.IAttributes with the attributes of the next token.
The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change it. If the producer needs to preserve the state for subsequent calls, it can use Lucene.Net.Util.AttributeSource.CaptureState() to create a copy of the current attribute state.
This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to Lucene.Net.Util.AttributeSource.AddAttribute<T>() and Lucene.Net.Util.AttributeSource.GetAttribute<T>(), references to all Lucene.Net.Util.IAttributes that this stream uses should be retrieved during instantiation.
To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in Lucene.Net.Analysis.TokenStream.IncrementToken().
Declaration
public override bool IncrementToken()
Returns
Type | Description |
---|---|
bool | false for end of stream; true otherwise |
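To illustrate the guidance above, here is a hypothetical filter (not part of the library; the class name is illustrative) that retrieves its attribute reference once at instantiation and only updates it inside IncrementToken():

```csharp
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.TokenAttributes;

// Hypothetical example filter that uppercases each term. The attribute
// reference is retrieved once in the constructor, not on every call.
public sealed class UpperCaseExampleFilter : TokenFilter
{
    private readonly ICharTermAttribute termAtt;

    public UpperCaseExampleFilter(TokenStream input)
        : base(input)
    {
        termAtt = AddAttribute<ICharTermAttribute>(); // registered during instantiation
    }

    public override bool IncrementToken()
    {
        if (!m_input.IncrementToken())
        {
            return false; // end of stream
        }
        // Update the shared attribute in place; no per-token allocations.
        char[] buffer = termAtt.Buffer;
        for (int i = 0; i < termAtt.Length; i++)
        {
            buffer[i] = char.ToUpperInvariant(buffer[i]);
        }
        return true;
    }
}
```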
Reset()
This method is called by a consumer before it begins consumption using Lucene.Net.Analysis.TokenStream.IncrementToken().
Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh. If you override this method, always call base.Reset(); otherwise some internal state will not be correctly reset (e.g., Lucene.Net.Analysis.Tokenizer will throw InvalidOperationException on further usage).
Declaration
public override void Reset()
Remarks
NOTE: The default implementation chains the call to the input Lucene.Net.Analysis.TokenStream, so be sure to call base.Reset() when overriding this method.
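A sketch of the override pattern this describes, using a hypothetical stateful filter (the class and field names are illustrative):

```csharp
using Lucene.Net.Analysis;

// Hypothetical stateful filter: clears its own state in Reset() and always
// chains to base.Reset() so the wrapped stream is reset as well.
public sealed class CountingExampleFilter : TokenFilter
{
    private int seenTokens; // per-stream state that must be cleared on reuse

    public CountingExampleFilter(TokenStream input)
        : base(input)
    {
    }

    public override bool IncrementToken()
    {
        if (!m_input.IncrementToken())
        {
            return false;
        }
        seenTokens++; // track how many tokens have been consumed
        return true;
    }

    public override void Reset()
    {
        base.Reset();   // required: resets the wrapped TokenStream too
        seenTokens = 0; // then clear this filter's own state
    }
}
```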