Class ClassicTokenizer
A grammar-based tokenizer constructed with JFlex (and then ported to .NET).
This should be a good tokenizer for most European-language documents:
- Splits words at punctuation characters, removing punctuation. However, a dot that's not followed by whitespace is considered part of a token.
- Splits words at hyphens, unless there's a number in the token, in which case the whole token is interpreted as a product number and is not split.
- Recognizes email addresses and internet hostnames as one token.
Many applications have specific tokenizer needs. If this tokenizer does not suit your application, please consider copying this source code directory to your project and maintaining your own grammar-based tokenizer.
ClassicTokenizer was named StandardTokenizer in Lucene versions prior to 3.1. As of 3.1, StandardTokenizer implements Unicode text segmentation, as specified by UAX#29.
Implements
IDisposable
Namespace: Lucene.Net.Analysis.Standard
Assembly: Lucene.Net.Analysis.Common.dll
Syntax
public sealed class ClassicTokenizer : Tokenizer, IDisposable
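For illustration, here is a minimal sketch of driving the tokenizer directly. The version, sample text, and expected type labels are assumptions for the example; the Reset/IncrementToken/End sequence follows the TokenStream contract described under IncrementToken() below.

```csharp
using System;
using System.IO;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

using var tokenizer = new ClassicTokenizer(
    LuceneVersion.LUCENE_48,
    new StringReader("Send mail to xyz@example.com about the QT-8100."));

// Retrieve attribute references once, up front (see IncrementToken() below).
var termAtt = tokenizer.AddAttribute<ICharTermAttribute>();
var typeAtt = tokenizer.AddAttribute<ITypeAttribute>();

tokenizer.Reset();                  // required before the first IncrementToken()
while (tokenizer.IncrementToken())
{
    // Expected: "xyz@example.com" stays one <EMAIL> token, and "QT-8100"
    // stays a single token via the product-number rule described above.
    Console.WriteLine($"{termAtt} {typeAtt.Type}");
}
tokenizer.End();                    // finalize end-of-stream state
```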
Constructors
ClassicTokenizer(LuceneVersion, AttributeFactory, TextReader)
Creates a new ClassicTokenizer with a given Lucene.Net.Util.AttributeSource.AttributeFactory.
Declaration
public ClassicTokenizer(LuceneVersion matchVersion, AttributeSource.AttributeFactory factory, TextReader input)
Parameters
| Type | Name | Description |
|---|---|---|
| LuceneVersion | matchVersion | Lucene compatibility version |
| AttributeSource.AttributeFactory | factory | The AttributeSource.AttributeFactory to use |
| TextReader | input | The input reader |
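A sketch of calling this overload with the library's default factory; the version and input text are arbitrary choices for the example.

```csharp
using System.IO;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Util;

var tokenizer = new ClassicTokenizer(
    LuceneVersion.LUCENE_48,
    AttributeSource.AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY, // stock attribute implementations
    new StringReader("sample text"));
```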
ClassicTokenizer(LuceneVersion, TextReader)
Creates a new instance of the ClassicTokenizer. Attaches
the input to the newly created JFlex scanner.
Declaration
public ClassicTokenizer(LuceneVersion matchVersion, TextReader input)
Parameters
| Type | Name | Description |
|---|---|---|
| LuceneVersion | matchVersion | Lucene compatibility version |
| TextReader | input | The input reader |
Fields
ACRONYM
Token type for acronyms (e.g., "U.S.A."); an index into TOKEN_TYPES.
Declaration
public const int ACRONYM = 2
Field Value
| Type | Description |
|---|---|
| int |
ACRONYM_DEP
Deprecated token type retained for backward compatibility with the pre-3.1 acronym handling; an index into TOKEN_TYPES.
Declaration
public const int ACRONYM_DEP = 8
Field Value
| Type | Description |
|---|---|
| int |
ALPHANUM
Token type for alphanumeric tokens; an index into TOKEN_TYPES.
Declaration
public const int ALPHANUM = 0
Field Value
| Type | Description |
|---|---|
| int |
APOSTROPHE
Token type for words containing an apostrophe (e.g., "O'Neil"); an index into TOKEN_TYPES.
Declaration
public const int APOSTROPHE = 1
Field Value
| Type | Description |
|---|---|
| int |
CJ
Token type for Chinese and Japanese characters; an index into TOKEN_TYPES.
Declaration
public const int CJ = 7
Field Value
| Type | Description |
|---|---|
| int |
COMPANY
Token type for company names (e.g., "AT&T"); an index into TOKEN_TYPES.
Declaration
public const int COMPANY = 3
Field Value
| Type | Description |
|---|---|
| int |
EMAIL
Token type for email addresses; an index into TOKEN_TYPES.
Declaration
public const int EMAIL = 4
Field Value
| Type | Description |
|---|---|
| int |
HOST
Token type for internet hostnames; an index into TOKEN_TYPES.
Declaration
public const int HOST = 5
Field Value
| Type | Description |
|---|---|
| int |
NUM
Token type for numbers, including product-number-like tokens that contain digits; an index into TOKEN_TYPES.
Declaration
public const int NUM = 6
Field Value
| Type | Description |
|---|---|
| int |
TOKEN_TYPES
String token type labels that correspond to the token type int constants above.
Declaration
public static readonly string[] TOKEN_TYPES
Field Value
| Type | Description |
|---|---|
| string[] |
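Since the int constants above index into this array, a token's type label can be looked up and compared directly. A small sketch; the "&lt;HOST&gt;" literal in the comment is an assumption based on Lucene's shipped grammar.

```csharp
using Lucene.Net.Analysis.Standard;

// The label that ITypeAttribute.Type reports for HOST tokens.
string hostLabel = ClassicTokenizer.TOKEN_TYPES[ClassicTokenizer.HOST]; // "<HOST>" in the shipped grammar

// e.g., while consuming the stream:
// if (typeAtt.Type == hostLabel) { /* token is an internet hostname */ }
```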
Properties
MaxTokenLength
Gets or sets the maximum allowed token length. Any token longer than this limit is skipped.
Declaration
public int MaxTokenLength { get; set; }
Property Value
| Type | Description |
|---|---|
| int |
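A sketch of capping the token length; the limit of 100 is arbitrary, and the default is believed to be inherited from StandardAnalyzer.DEFAULT_MAX_TOKEN_LENGTH (255).

```csharp
using System.IO;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Util;

var tokenizer = new ClassicTokenizer(LuceneVersion.LUCENE_48, new StringReader("sample text"));
tokenizer.MaxTokenLength = 100; // any token longer than 100 chars is silently skipped
```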
Methods
Dispose(bool)
Releases resources associated with this stream.
If you override this method, always call base.Dispose(disposing); otherwise some internal state will not be correctly reset (e.g., Lucene.Net.Analysis.Tokenizer will throw InvalidOperationException on reuse).
Declaration
protected override void Dispose(bool disposing)
Parameters
| Type | Name | Description |
|---|---|---|
| bool | disposing | true to release both managed and unmanaged resources; false to release only unmanaged resources |
Overrides
Tokenizer.Dispose(bool)
Remarks
NOTE: The default implementation closes the input TextReader, so be sure to call base.Dispose(disposing) when overriding this method.
End()
This method is called by the consumer after the last token has been consumed, i.e., after Lucene.Net.Analysis.TokenStream.IncrementToken() returns false (using the new Lucene.Net.Analysis.TokenStream API). Streams implementing the old API should upgrade to use this feature.
If you override this method, always call base.End().
Declaration
public override sealed void End()
Overrides
TokenStream.End()
Exceptions
| Type | Condition |
|---|---|
| IOException | If an I/O error occurs |
IncrementToken()
Consumers (i.e., Lucene.Net.Index.IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate Lucene.Net.Util.IAttributes with the attributes of the next token.

The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change them. If the producer needs to preserve the state for subsequent calls, it can use Lucene.Net.Util.AttributeSource.CaptureState() to create a copy of the current attribute state.

This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to Lucene.Net.Util.AttributeSource.AddAttribute<T>() and Lucene.Net.Util.AttributeSource.GetAttribute<T>(), references to all Lucene.Net.Util.IAttributes that this stream uses should be retrieved during instantiation. To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in Lucene.Net.Analysis.TokenStream.IncrementToken().

Declaration
public override sealed bool IncrementToken()
Returns
| Type | Description |
|---|---|
| bool | false for end of stream; true otherwise |
Overrides
TokenStream.IncrementToken()
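The CaptureState() pattern mentioned above is easiest to see in a filter. Below is a hypothetical TokenFilter (class name and behavior invented for illustration) that re-emits each token in lowercase, snapshotting all attribute values with CaptureState() and replaying them with RestoreState().

```csharp
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

// Hypothetical filter: after every token, emits a second, lowercased copy.
public sealed class LowercaseEchoFilter : TokenFilter
{
    private readonly ICharTermAttribute termAtt;
    private AttributeSource.State savedState;

    public LowercaseEchoFilter(TokenStream input) : base(input)
    {
        // Attribute references are retrieved once, at instantiation,
        // per the guidance above.
        termAtt = AddAttribute<ICharTermAttribute>();
    }

    public override bool IncrementToken()
    {
        if (savedState != null)
        {
            RestoreState(savedState);   // replay the snapshot taken below
            savedState = null;
            for (int i = 0; i < termAtt.Length; i++)
                termAtt.Buffer[i] = char.ToLowerInvariant(termAtt.Buffer[i]);
            // (A production filter would also set PositionIncrementAttribute to 0 here.)
            return true;
        }
        if (!m_input.IncrementToken())
            return false;               // end of stream
        savedState = CaptureState();    // snapshot the current attribute values
        return true;
    }
}
```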
Reset()
This method is called by a consumer before it begins consumption using Lucene.Net.Analysis.TokenStream.IncrementToken().
Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh. If you override this method, always call base.Reset(); otherwise some internal state will not be correctly reset (e.g., Lucene.Net.Analysis.Tokenizer will throw InvalidOperationException on further usage).
Declaration
public override void Reset()