
    Class CompoundWordTokenFilterBase

    Base class for decomposition token filters.

    You must specify the required Lucene.Net.Util.LuceneVersion compatibility when creating CompoundWordTokenFilterBase:
    • As of 3.1, CompoundWordTokenFilterBase correctly handles Unicode 4.0 supplementary characters in strings and char arrays provided as compound word dictionaries.
    • As of 4.4, CompoundWordTokenFilterBase doesn't update offsets.
    Inheritance
    object
    AttributeSource
    TokenStream
    TokenFilter
    CompoundWordTokenFilterBase
    DictionaryCompoundWordTokenFilter
    HyphenationCompoundWordTokenFilter
    Implements
    IDisposable
    Inherited Members
    TokenFilter.m_input
    TokenFilter.End()
    TokenFilter.Dispose(bool)
    TokenStream.Dispose()
    AttributeSource.GetAttributeFactory()
    AttributeSource.GetAttributeClassesEnumerator()
    AttributeSource.GetAttributeImplsEnumerator()
    AttributeSource.AddAttributeImpl(Attribute)
    AttributeSource.AddAttribute<T>()
    AttributeSource.HasAttributes
    AttributeSource.HasAttribute<T>()
    AttributeSource.GetAttribute<T>()
    AttributeSource.ClearAttributes()
    AttributeSource.CaptureState()
    AttributeSource.RestoreState(AttributeSource.State)
    AttributeSource.GetHashCode()
    AttributeSource.Equals(object)
    AttributeSource.ReflectAsString(bool)
    AttributeSource.ReflectWith(IAttributeReflector)
    AttributeSource.CloneAttributes()
    AttributeSource.CopyTo(AttributeSource)
    AttributeSource.ToString()
    object.Equals(object, object)
    object.GetType()
    object.MemberwiseClone()
    object.ReferenceEquals(object, object)
    Namespace: Lucene.Net.Analysis.Compound
    Assembly: Lucene.Net.Analysis.Common.dll
    Syntax
    public abstract class CompoundWordTokenFilterBase : TokenFilter, IDisposable
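
    As a usage sketch (not taken from this page), a concrete subclass such as DictionaryCompoundWordTokenFilter can be wired into an analysis chain as follows; the version constant, sample text, and dictionary entries are illustrative assumptions:

```csharp
using System;
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Compound;
using Lucene.Net.Analysis.Core;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Analysis.Util;
using Lucene.Net.Util;

// Dictionary of subwords the filter may recognize inside compound tokens.
var dict = new CharArraySet(LuceneVersion.LUCENE_48,
    new[] { "fuss", "ball", "verein" }, true); // true = ignore case

// Tokenize the input, then decompose each token against the dictionary.
using TokenStream ts = new DictionaryCompoundWordTokenFilter(
    LuceneVersion.LUCENE_48,
    new WhitespaceTokenizer(LuceneVersion.LUCENE_48, new StringReader("fussballverein")),
    dict);

var termAtt = ts.AddAttribute<ICharTermAttribute>();
ts.Reset();
while (ts.IncrementToken())
{
    // The original token is passed through first, followed by its subwords.
    Console.WriteLine(termAtt.ToString());
}
ts.End();
```

    Note the Reset() / IncrementToken() / End() sequence: it is the standard TokenStream consumption workflow, and Dispose() is handled here by the using declaration.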

    Constructors

    CompoundWordTokenFilterBase(LuceneVersion, TokenStream, CharArraySet)

    Creates a CompoundWordTokenFilterBase using the default minimum word size and minimum/maximum subword sizes, emitting all matching subwords rather than only the longest.
    Declaration
    protected CompoundWordTokenFilterBase(LuceneVersion matchVersion, TokenStream input, CharArraySet dictionary)
    Parameters
    Type Name Description
    LuceneVersion matchVersion The Lucene.Net.Util.LuceneVersion compatibility version
    TokenStream input The input TokenStream to decompose
    CharArraySet dictionary The dictionary of subwords to match against

    CompoundWordTokenFilterBase(LuceneVersion, TokenStream, CharArraySet, bool)

    Creates a CompoundWordTokenFilterBase using the default minimum word size and subword size limits, optionally emitting only the longest matching subword.
    Declaration
    protected CompoundWordTokenFilterBase(LuceneVersion matchVersion, TokenStream input, CharArraySet dictionary, bool onlyLongestMatch)
    Parameters
    Type Name Description
    LuceneVersion matchVersion The Lucene.Net.Util.LuceneVersion compatibility version
    TokenStream input The input TokenStream to decompose
    CharArraySet dictionary The dictionary of subwords to match against
    bool onlyLongestMatch If true, only the longest matching subword is added to the stream

    CompoundWordTokenFilterBase(LuceneVersion, TokenStream, CharArraySet, int, int, int, bool)

    Creates a CompoundWordTokenFilterBase with explicit minimum word size and subword size limits.
    Declaration
    protected CompoundWordTokenFilterBase(LuceneVersion matchVersion, TokenStream input, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, bool onlyLongestMatch)
    Parameters
    Type Name Description
    LuceneVersion matchVersion The Lucene.Net.Util.LuceneVersion compatibility version
    TokenStream input The input TokenStream to decompose
    CharArraySet dictionary The dictionary of subwords to match against
    int minWordSize Only tokens at least this long are decomposed
    int minSubwordSize Only subwords at least this long are added to the stream
    int maxSubwordSize Only subwords at most this long are added to the stream
    bool onlyLongestMatch If true, only the longest matching subword is added to the stream
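
    As a hedged sketch, the fully-parameterized constructor can be exercised through the concrete DictionaryCompoundWordTokenFilter subclass, which mirrors these parameters; `input` and `dict` stand in for an upstream TokenStream and dictionary you already have:

```csharp
// Positional arguments follow the base signature above; the values shown
// are the documented defaults, with onlyLongestMatch switched on.
var filter = new DictionaryCompoundWordTokenFilter(
    LuceneVersion.LUCENE_48,
    input,   // upstream TokenStream (assumed to exist)
    dict,    // CharArraySet of subwords (assumed to exist)
    5,       // minWordSize    (DEFAULT_MIN_WORD_SIZE)
    2,       // minSubwordSize (DEFAULT_MIN_SUBWORD_SIZE)
    15,      // maxSubwordSize (DEFAULT_MAX_SUBWORD_SIZE)
    true);   // onlyLongestMatch
```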

    Fields

    DEFAULT_MAX_SUBWORD_SIZE

    The default for maximal length of subwords that get propagated to the output of this filter

    Declaration
    public const int DEFAULT_MAX_SUBWORD_SIZE = 15
    Field Value
    Type Description
    int

    DEFAULT_MIN_SUBWORD_SIZE

    The default for minimal length of subwords that get propagated to the output of this filter

    Declaration
    public const int DEFAULT_MIN_SUBWORD_SIZE = 2
    Field Value
    Type Description
    int

    DEFAULT_MIN_WORD_SIZE

    The default for minimal word length that gets decomposed

    Declaration
    public const int DEFAULT_MIN_WORD_SIZE = 5
    Field Value
    Type Description
    int

    m_dictionary

    The dictionary of subwords used to decompose compound tokens.
    Declaration
    protected readonly CharArraySet m_dictionary
    Field Value
    Type Description
    CharArraySet

    m_matchVersion

    The Lucene.Net.Util.LuceneVersion compatibility version this filter was created with.
    Declaration
    protected readonly LuceneVersion m_matchVersion
    Field Value
    Type Description
    LuceneVersion

    m_maxSubwordSize

    The maximum length a subword may have to be added to the output stream.
    Declaration
    protected readonly int m_maxSubwordSize
    Field Value
    Type Description
    int

    m_minSubwordSize

    The minimum length a subword must have to be added to the output stream.
    Declaration
    protected readonly int m_minSubwordSize
    Field Value
    Type Description
    int

    m_minWordSize

    The minimum length a token must have to be decomposed.
    Declaration
    protected readonly int m_minWordSize
    Field Value
    Type Description
    int

    m_offsetAtt

    The IOffsetAttribute of this stream.
    Declaration
    protected readonly IOffsetAttribute m_offsetAtt
    Field Value
    Type Description
    IOffsetAttribute

    m_onlyLongestMatch

    If true, only the longest matching subword is added to the output stream.
    Declaration
    protected readonly bool m_onlyLongestMatch
    Field Value
    Type Description
    bool

    m_termAtt

    The ICharTermAttribute of this stream, holding the text of the current token.
    Declaration
    protected readonly ICharTermAttribute m_termAtt
    Field Value
    Type Description
    ICharTermAttribute

    m_tokens

    The queue of decomposed subword tokens waiting to be emitted.
    Declaration
    protected readonly Queue<CompoundWordTokenFilterBase.CompoundToken> m_tokens
    Field Value
    Type Description
    Queue<CompoundWordTokenFilterBase.CompoundToken>

    Methods

    Decompose()

    Decomposes the current m_termAtt and places CompoundWordTokenFilterBase.CompoundToken instances in the m_tokens list. The original token must not be placed in the list, as it is automatically passed through this filter.

    Declaration
    protected abstract void Decompose()
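
    A rough sketch of what a subclass's Decompose() might do: scan the current term for dictionary subwords and queue a CompoundWordTokenFilterBase.CompoundToken for each. This is not the actual Lucene.Net implementation, and the (outer instance, offset, length) CompoundToken constructor shape is an assumption about how the port surfaces the nested class:

```csharp
// Hypothetical subclass body, for illustration only.
protected override void Decompose()
{
    int len = m_termAtt.Length;
    for (int i = 0; i <= len - m_minSubwordSize; i++)
    {
        for (int j = m_minSubwordSize; j <= m_maxSubwordSize && i + j <= len; j++)
        {
            // CharArraySet can test a slice of the term's buffer directly.
            if (m_dictionary.Contains(m_termAtt.Buffer, i, j))
            {
                // Queue the subword; the base class drains m_tokens before
                // asking the input stream for the next token.
                m_tokens.Enqueue(new CompoundToken(this, i, j));
            }
        }
    }
}
```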

    IncrementToken()

    Consumers (e.g., Lucene.Net.Index.IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate Lucene.Net.Util.IAttributes with the attributes of the next token.

    The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change them. If the producer needs to preserve the state for subsequent calls, it can use Lucene.Net.Util.AttributeSource.CaptureState() to create a copy of the current attribute state.

    This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to Lucene.Net.Util.AttributeSource.AddAttribute<T>() and Lucene.Net.Util.AttributeSource.GetAttribute<T>(), references to all Lucene.Net.Util.IAttributes that this stream uses should be retrieved during instantiation.

    To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in Lucene.Net.Analysis.TokenStream.IncrementToken().
    Declaration
    public override sealed bool IncrementToken()
    Returns
    Type Description
    bool

    false for end of stream; true otherwise

    Overrides
    Lucene.Net.Analysis.TokenStream.IncrementToken()
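
    Because subwords (as of 4.4) keep the original token's offsets, a consumer loop can make that visible; this sketch assumes `ts` is a compound filter chain built as in typical Lucene.Net usage:

```csharp
var term = ts.AddAttribute<ICharTermAttribute>();
var offset = ts.AddAttribute<IOffsetAttribute>();
ts.Reset();
while (ts.IncrementToken())
{
    // Subwords report the same [start, end] range as the compound they came from.
    Console.WriteLine($"{term} [{offset.StartOffset},{offset.EndOffset}]");
}
ts.End();
```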

    Reset()

    This method is called by a consumer before it begins consumption using Lucene.Net.Analysis.TokenStream.IncrementToken().

    Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh.

    If you override this method, always call base.Reset(), otherwise some internal state will not be correctly reset (e.g., Lucene.Net.Analysis.Tokenizer will throw InvalidOperationException on further usage).
    Declaration
    public override void Reset()
    Overrides
    Lucene.Net.Analysis.TokenFilter.Reset()
    Remarks

    NOTE: The default implementation chains the call to the input Lucene.Net.Analysis.TokenStream, so be sure to call base.Reset() when overriding this method.
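
    A minimal sketch of a conforming override; `m_extraState` is a hypothetical subclass field:

```csharp
public override void Reset()
{
    base.Reset();       // chains to the input stream; pending subword tokens are cleared
    m_extraState = 0;   // hypothetical per-stream state in the subclass
}
```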

    Implements

    IDisposable
    Copyright © 2024 The Apache Software Foundation, Licensed under the Apache License, Version 2.0
    Apache Lucene.Net, Lucene.Net, Apache, the Apache feather logo, and the Apache Lucene.Net project logo are trademarks of The Apache Software Foundation.
    All other marks mentioned may be trademarks or registered trademarks of their respective owners.