
    Class CharTokenizer

    An abstract base class for simple, character-oriented tokenizers.

    You must specify the required Lucene.Net.Util.LuceneVersion compatibility when creating CharTokenizer:

    • As of 3.1, CharTokenizer uses an int-based API to normalize and detect token codepoints. See IsTokenChar(int) and Normalize(int) for details.

    A new CharTokenizer API was introduced with Lucene 3.1. This API moved from UTF-16 code units to UTF-32 codepoints in order to eventually support supplementary characters. The old char-based API has been deprecated and should be replaced with the int-based methods IsTokenChar(int) and Normalize(int).

    As of Lucene 3.1, each CharTokenizer constructor expects a Lucene.Net.Util.LuceneVersion argument. Based on the given Lucene.Net.Util.LuceneVersion, either the new API or a backwards-compatibility layer is used at runtime. For Lucene.Net.Util.LuceneVersion < 3.1, the backwards-compatibility layer ensures correct behavior even for indexes built with previous versions of Lucene. If a Lucene.Net.Util.LuceneVersion >= 3.1 is used, CharTokenizer requires the instantiated class to implement the new API, and the old char-based API is no longer required even when backwards compatibility must be preserved. CharTokenizer subclasses implementing the new API are fully backwards compatible if instantiated with Lucene.Net.Util.LuceneVersion < 3.1.

    Note: If you use a subclass of CharTokenizer with Lucene.Net.Util.LuceneVersion >= 3.1 on an index built with a version < 3.1, the tokens it creates might not be compatible with the terms in your index.
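    As a concrete illustration, here is a minimal sketch of a CharTokenizer subclass built against the int-based API described above. The class name is hypothetical, and for brevity the sketch only handles BMP codepoints (values <= 0xFFFF); a production tokenizer would use codepoint-aware character classification.

    ```csharp
    using System.IO;
    using Lucene.Net.Analysis.Util;
    using Lucene.Net.Util;

    // Hypothetical tokenizer: emits runs of letters, lowercased.
    public sealed class SimpleLetterTokenizer : CharTokenizer
    {
        public SimpleLetterTokenizer(LuceneVersion matchVersion, TextReader input)
            : base(matchVersion, input)
        {
        }

        // Letters belong to tokens; anything else is a token boundary.
        // For brevity this sketch only handles BMP codepoints.
        protected override bool IsTokenChar(int c)
            => c <= 0xFFFF && char.IsLetter((char)c);

        // Lowercase each token codepoint before it is appended to the token.
        protected override int Normalize(int c)
            => c <= 0xFFFF ? char.ToLowerInvariant((char)c) : c;
    }
    ```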

    Inheritance
    object
    AttributeSource
    TokenStream
    Tokenizer
    CharTokenizer
    LetterTokenizer
    WhitespaceTokenizer
    IndicTokenizer
    RussianLetterTokenizer
    Implements
    IDisposable
    Inherited Members
    Tokenizer.m_input
    Tokenizer.Dispose(bool)
    Tokenizer.CorrectOffset(int)
    Tokenizer.SetReader(TextReader)
    TokenStream.Dispose()
    AttributeSource.GetAttributeFactory()
    AttributeSource.GetAttributeClassesEnumerator()
    AttributeSource.GetAttributeImplsEnumerator()
    AttributeSource.AddAttributeImpl(Attribute)
    AttributeSource.AddAttribute<T>()
    AttributeSource.HasAttributes
    AttributeSource.HasAttribute<T>()
    AttributeSource.GetAttribute<T>()
    AttributeSource.ClearAttributes()
    AttributeSource.CaptureState()
    AttributeSource.RestoreState(AttributeSource.State)
    AttributeSource.GetHashCode()
    AttributeSource.Equals(object)
    AttributeSource.ReflectAsString(bool)
    AttributeSource.ReflectWith(IAttributeReflector)
    AttributeSource.CloneAttributes()
    AttributeSource.CopyTo(AttributeSource)
    AttributeSource.ToString()
    object.Equals(object, object)
    object.GetType()
    object.MemberwiseClone()
    object.ReferenceEquals(object, object)
    Namespace: Lucene.Net.Analysis.Util
    Assembly: Lucene.Net.Analysis.Common.dll
    Syntax
    public abstract class CharTokenizer : Tokenizer, IDisposable

    Constructors

    CharTokenizer(LuceneVersion, AttributeFactory, TextReader)

    Creates a new CharTokenizer instance

    Declaration
    protected CharTokenizer(LuceneVersion matchVersion, AttributeSource.AttributeFactory factory, TextReader input)
    Parameters
    Type Name Description
    LuceneVersion matchVersion

    Lucene version to match

    AttributeSource.AttributeFactory factory

    the attribute factory to use for this Lucene.Net.Analysis.Tokenizer

    TextReader input

    the input to split up into tokens

    CharTokenizer(LuceneVersion, TextReader)

    Creates a new CharTokenizer instance

    Declaration
    protected CharTokenizer(LuceneVersion matchVersion, TextReader input)
    Parameters
    Type Name Description
    LuceneVersion matchVersion

    Lucene version to match

    TextReader input

    the input to split up into tokens

    Methods

    End()

    This method is called by the consumer after the last token has been consumed, after Lucene.Net.Analysis.TokenStream.IncrementToken() returned false (using the new Lucene.Net.Analysis.TokenStream API). Streams implementing the old API should upgrade to use this feature.

    This method can be used to perform any end-of-stream operations, such as setting the final offset of a stream. The final offset of a stream might differ from the offset of the last token, e.g., when one or more whitespace characters follow the last token and a WhitespaceTokenizer was used.

    Additionally, any skipped positions (such as those removed by a stop filter) can be applied to the position increment, as can any adjustment of other attributes where the end-of-stream value may be important.

    If you override this method, always call base.End();.
    Declaration
    public override sealed void End()
    Overrides
    Lucene.Net.Analysis.TokenStream.End()
    Exceptions
    Type Condition
    IOException

    If an I/O error occurs

    IncrementToken()

    Consumers (i.e., Lucene.Net.Index.IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate Lucene.Net.Util.IAttributes with the attributes of the next token.

    The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change them. If the producer needs to preserve the state for subsequent calls, it can use Lucene.Net.Util.AttributeSource.CaptureState() to create a copy of the current attribute state.

    This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to Lucene.Net.Util.AttributeSource.AddAttribute<T>() and Lucene.Net.Util.AttributeSource.GetAttribute<T>(), references to all Lucene.Net.Util.IAttributes that this stream uses should be retrieved during instantiation.

    To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in Lucene.Net.Analysis.TokenStream.IncrementToken().
    Declaration
    public override sealed bool IncrementToken()
    Returns
    Type Description
    bool

    false for end of stream; true otherwise

    Overrides
    Lucene.Net.Analysis.TokenStream.IncrementToken()
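    The contract above can be seen in a typical consumer loop. The sketch below assumes Lucene.Net 4.8 (LuceneVersion.LUCENE_48) and uses WhitespaceTokenizer, a concrete CharTokenizer subclass; note that the attribute references are retrieved once, before consumption, as recommended above.

    ```csharp
    using System;
    using System.IO;
    using Lucene.Net.Analysis;
    using Lucene.Net.Analysis.Core;
    using Lucene.Net.Analysis.TokenAttributes;
    using Lucene.Net.Util;

    TokenStream stream = new WhitespaceTokenizer(
        LuceneVersion.LUCENE_48, new StringReader("hello wide world"));

    // Retrieve attribute references once, not per token.
    ICharTermAttribute term = stream.AddAttribute<ICharTermAttribute>();
    IOffsetAttribute offset = stream.AddAttribute<IOffsetAttribute>();

    try
    {
        stream.Reset();                    // required before consumption
        while (stream.IncrementToken())    // false signals end of stream
        {
            Console.WriteLine($"{term} [{offset.StartOffset}-{offset.EndOffset}]");
        }
        stream.End();                      // perform end-of-stream operations
    }
    finally
    {
        stream.Dispose();
    }
    ```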

    IsTokenChar(int)

    Returns true if, and only if, a codepoint should be included in a token. This tokenizer generates tokens from adjacent runs of codepoints that satisfy this predicate. Codepoints for which it returns false define token boundaries and are not included in tokens.

    Declaration
    protected abstract bool IsTokenChar(int c)
    Parameters
    Type Name Description
    int c
    Returns
    Type Description
    bool

    Normalize(int)

    Called on each token character to normalize it before it is added to the token. The default implementation does nothing. Subclasses may use this to, e.g., lowercase tokens.

    Declaration
    protected virtual int Normalize(int c)
    Parameters
    Type Name Description
    int c
    Returns
    Type Description
    int
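    For example, a Normalize(int) override in a subclass might lowercase each codepoint. This sketch assumes a runtime where System.Text.Rune is available (.NET Core 3.0 and later), which lowercases supplementary codepoints correctly as well:

    ```csharp
    using System.Text;

    // Inside a hypothetical CharTokenizer subclass: Rune operates on full
    // UTF-32 codepoints, including supplementary characters.
    protected override int Normalize(int c)
    {
        return Rune.ToLowerInvariant(new Rune(c)).Value;
    }
    ```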

    Reset()

    This method is called by a consumer before it begins consumption using Lucene.Net.Analysis.TokenStream.IncrementToken().

    Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh.

    If you override this method, always call base.Reset(), otherwise some internal state will not be correctly reset (e.g., Lucene.Net.Analysis.Tokenizer will throw InvalidOperationException on further usage).
    Declaration
    public override void Reset()
    Overrides
    Lucene.Net.Analysis.Tokenizer.Reset()
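    Because Reset() restores a clean state, a Tokenizer instance can be reused across inputs by pairing the inherited SetReader(TextReader) with Reset(). A hedged sketch of this lifecycle, again assuming Lucene.Net 4.8 and the concrete WhitespaceTokenizer subclass:

    ```csharp
    using System.IO;
    using Lucene.Net.Analysis.Core;
    using Lucene.Net.Analysis.TokenAttributes;
    using Lucene.Net.Util;

    var tok = new WhitespaceTokenizer(
        LuceneVersion.LUCENE_48, new StringReader("first document"));
    var term = tok.AddAttribute<ICharTermAttribute>();

    for (int doc = 0; doc < 2; doc++)
    {
        tok.Reset();                        // required before each consumption
        while (tok.IncrementToken())
        {
            // use term ...
        }
        tok.End();
        if (doc == 0)
        {
            // Swap in the next input, then Reset() on the next iteration.
            tok.SetReader(new StringReader("second document"));
        }
    }
    tok.Dispose();
    ```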

    Implements

    IDisposable
    Copyright © 2024 The Apache Software Foundation, Licensed under the Apache License, Version 2.0
    Apache Lucene.Net, Lucene.Net, Apache, the Apache feather logo, and the Apache Lucene.Net project logo are trademarks of The Apache Software Foundation.
    All other marks mentioned may be trademarks or registered trademarks of their respective owners.