
    Class JapaneseTokenizer

    Tokenizer for Japanese that uses morphological analysis.

    Inheritance
    object
    AttributeSource
    TokenStream
    Tokenizer
    JapaneseTokenizer
    Implements
    IDisposable
    Inherited Members
    Tokenizer.SetReader(TextReader)
    TokenStream.Dispose()
    AttributeSource.GetAttributeFactory()
    AttributeSource.GetAttributeClassesEnumerator()
    AttributeSource.GetAttributeImplsEnumerator()
    AttributeSource.AddAttributeImpl(Attribute)
    AttributeSource.AddAttribute<T>()
    AttributeSource.HasAttributes
    AttributeSource.HasAttribute<T>()
    AttributeSource.GetAttribute<T>()
    AttributeSource.ClearAttributes()
    AttributeSource.CaptureState()
    AttributeSource.RestoreState(AttributeSource.State)
    AttributeSource.GetHashCode()
    AttributeSource.Equals(object)
    AttributeSource.ReflectAsString(bool)
    AttributeSource.ReflectWith(IAttributeReflector)
    AttributeSource.CloneAttributes()
    AttributeSource.CopyTo(AttributeSource)
    AttributeSource.ToString()
    object.Equals(object, object)
    object.GetType()
    object.ReferenceEquals(object, object)
    Namespace: Lucene.Net.Analysis.Ja
    Assembly: Lucene.Net.Analysis.Kuromoji.dll
    Syntax
    public sealed class JapaneseTokenizer : Tokenizer, IDisposable
    Remarks

    This tokenizer sets a number of additional attributes:

    • IBaseFormAttribute containing base form for inflected adjectives and verbs.
    • IPartOfSpeechAttribute containing part-of-speech.
    • IReadingAttribute containing reading and pronunciation.
    • IInflectionAttribute containing additional part-of-speech information for inflected forms.

This tokenizer uses a rolling Viterbi search to find the least-cost segmentation (path) of the incoming characters. For tokens that appear to be compound (length > 2 for all-Kanji tokens, or length > 7 for non-Kanji tokens), it checks whether there is a second-best segmentation of that token after applying penalties to the long tokens. If there is, and the mode is SEARCH, the alternate segmentation is output as well.
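A minimal usage sketch of the tokenize-and-read-attributes cycle described above. The sample text is illustrative; the attribute interfaces come from Lucene.Net.Analysis.TokenAttributes and Lucene.Net.Analysis.Ja.TokenAttributes.

```csharp
using System;
using System.IO;
using Lucene.Net.Analysis.Ja;
using Lucene.Net.Analysis.Ja.TokenAttributes;
using Lucene.Net.Analysis.TokenAttributes;

TextReader reader = new StringReader("関西国際空港");

using (var tokenizer = new JapaneseTokenizer(
    reader,
    null,                           // no user dictionary
    true,                           // discardPunctuation
    JapaneseTokenizerMode.SEARCH))
{
    // Retrieve attribute references once, during setup, not per token.
    var termAtt = tokenizer.AddAttribute<ICharTermAttribute>();
    var posAtt = tokenizer.AddAttribute<IPartOfSpeechAttribute>();

    tokenizer.Reset();
    while (tokenizer.IncrementToken())
    {
        Console.WriteLine($"{termAtt} ({posAtt.GetPartOfSpeech()})");
    }
    tokenizer.End();
}
```

In SEARCH mode, a compound such as 関西国際空港 is emitted both as the whole token and as its decompounded parts, following the second-best-segmentation behavior described above.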

    Constructors

    JapaneseTokenizer(AttributeFactory, TextReader, UserDictionary, bool, JapaneseTokenizerMode)

    Create a new JapaneseTokenizer.

    Declaration
    public JapaneseTokenizer(AttributeSource.AttributeFactory factory, TextReader input, UserDictionary userDictionary, bool discardPunctuation, JapaneseTokenizerMode mode)
    Parameters
    Type Name Description
    AttributeSource.AttributeFactory factory

    The AttributeFactory to use.

    TextReader input

    TextReader containing text.

    UserDictionary userDictionary

    Optional: if non-null, user dictionary.

    bool discardPunctuation

    true if punctuation tokens should be dropped from the output.

    JapaneseTokenizerMode mode

    Tokenization mode.


    JapaneseTokenizer(TextReader, UserDictionary, bool, JapaneseTokenizerMode)

    Create a new JapaneseTokenizer.

    Uses the default AttributeFactory.
    Declaration
    public JapaneseTokenizer(TextReader input, UserDictionary userDictionary, bool discardPunctuation, JapaneseTokenizerMode mode)
    Parameters
    Type Name Description
    TextReader input

    TextReader containing text.

    UserDictionary userDictionary

    Optional: if non-null, user dictionary.

    bool discardPunctuation

    true if punctuation tokens should be dropped from the output.

    JapaneseTokenizerMode mode

    Tokenization mode.

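A hedged sketch of passing a user dictionary to this constructor. It assumes UserDictionary can be built from a TextReader over CSV entries in the Kuromoji userdict.txt convention (surface, space-separated segmentation, readings, part-of-speech); the sample entry is illustrative.

```csharp
using System.IO;
using Lucene.Net.Analysis.Ja;
using Lucene.Net.Analysis.Ja.Dict;

// One userdict-style CSV entry: surface, segmentation, readings, part-of-speech.
string entries = "関西国際空港,関西 国際 空港,カンサイ コクサイ クウコウ,カスタム名詞\n";
UserDictionary userDict = new UserDictionary(new StringReader(entries));

var tokenizer = new JapaneseTokenizer(
    new StringReader("関西国際空港に行った"),
    userDict,                      // overrides segmentation for listed surfaces
    true,                          // discardPunctuation
    JapaneseTokenizerMode.SEARCH);
```

Entries in the user dictionary take precedence over the system dictionary when the Viterbi search scores candidate paths.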

    Fields

    DEFAULT_MODE

    Default tokenization mode. Currently this is SEARCH.

    Declaration
    public static readonly JapaneseTokenizerMode DEFAULT_MODE
    Field Value
    Type Description
    JapaneseTokenizerMode

    Properties

    GraphvizFormatter

Expert: set this to produce Graphviz (dot) output of the Viterbi lattice.

    Declaration
    public GraphvizFormatter GraphvizFormatter { get; set; }
    Property Value
    Type Description
    GraphvizFormatter
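A hedged sketch of capturing the lattice as dot text. It assumes, following the Java original, that GraphvizFormatter is constructed with the ConnectionCosts singleton and that Finish() returns the accumulated dot output after the stream has been consumed.

```csharp
using System;
using System.IO;
using Lucene.Net.Analysis.Ja;
using Lucene.Net.Analysis.Ja.Dict;

var tokenizer = new JapaneseTokenizer(
    new StringReader("多くの学生が試験に落ちた"),
    null, true, JapaneseTokenizerMode.SEARCH);

// Attach the formatter before consuming; the lattice is recorded as tokens flow.
var formatter = new GraphvizFormatter(ConnectionCosts.Instance);
tokenizer.GraphvizFormatter = formatter;

tokenizer.Reset();
while (tokenizer.IncrementToken()) { /* consume tokens */ }
tokenizer.End();

// Dot text; render with the Graphviz "dot" tool to visualize the lattice.
Console.WriteLine(formatter.Finish());
```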

    Methods

    Dispose(bool)

    Releases resources associated with this stream.

    If you override this method, always call base.Dispose(disposing), otherwise some internal state will not be correctly reset (e.g., Lucene.Net.Analysis.Tokenizer will throw InvalidOperationException on reuse).
    Declaration
    protected override void Dispose(bool disposing)
    Parameters
    Type Name Description
    bool disposing
    Overrides
    Tokenizer.Dispose(bool)
    Remarks

    NOTE: The default implementation closes the input TextReader, so be sure to call base.Dispose(disposing) when overriding this method.

    End()

    This method is called by the consumer after the last token has been consumed, after Lucene.Net.Analysis.TokenStream.IncrementToken() returned false (using the new Lucene.Net.Analysis.TokenStream API). Streams implementing the old API should upgrade to use this feature.

This method can be used to perform any end-of-stream operations, such as setting the final offset of a stream. The final offset of a stream might differ from the offset of the last token, e.g. when one or more whitespace characters follow the last token and a WhitespaceTokenizer was used.

Additionally, any skipped positions (such as those removed by a stop filter) can be applied to the position increment, as can any other attribute adjustment where the end-of-stream value is important.

If you override this method, always call base.End().
    Declaration
    public override void End()
    Overrides
    Lucene.Net.Analysis.TokenStream.End()
    Exceptions
    Type Condition
    IOException

    If an I/O error occurs

    IncrementToken()

Consumers (e.g., Lucene.Net.Index.IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate Lucene.Net.Util.IAttributes with the attributes of the next token.

The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change them. If the producer needs to preserve the state for subsequent calls, it can use Lucene.Net.Util.AttributeSource.CaptureState() to create a copy of the current attribute state.

This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to Lucene.Net.Util.AttributeSource.AddAttribute<T>() and Lucene.Net.Util.AttributeSource.GetAttribute<T>(), references to all Lucene.Net.Util.IAttributes that this stream uses should be retrieved during instantiation.

To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in Lucene.Net.Analysis.TokenStream.IncrementToken().
    Declaration
    public override bool IncrementToken()
    Returns
    Type Description
    bool

    false for end of stream; true otherwise

    Overrides
    Lucene.Net.Analysis.TokenStream.IncrementToken()

    Reset()

    This method is called by a consumer before it begins consumption using Lucene.Net.Analysis.TokenStream.IncrementToken().

    Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh.

    If you override this method, always call base.Reset(), otherwise some internal state will not be correctly reset (e.g., Lucene.Net.Analysis.Tokenizer will throw InvalidOperationException on further usage).
    Declaration
    public override void Reset()
    Overrides
    Lucene.Net.Analysis.Tokenizer.Reset()

    Implements

    IDisposable
Copyright © 2024 The Apache Software Foundation, Licensed under the Apache License, Version 2.0
    Apache Lucene.Net, Lucene.Net, Apache, the Apache feather logo, and the Apache Lucene.Net project logo are trademarks of The Apache Software Foundation.
    All other marks mentioned may be trademarks or registered trademarks of their respective owners.