|
| Token () |
| Constructs a Token with null text.
|
|
| Token (int start, int end) |
| Constructs a Token with null text and start & end offsets.
|
|
| Token (int start, int end, String typ) |
| Constructs a Token with null text and start & end offsets plus the Token type.
|
|
| Token (int start, int end, int flags) |
| Constructs a Token with null text and start & end offsets plus flags. NOTE: flags is EXPERIMENTAL.
|
|
| Token (String text, int start, int end) |
| Constructs a Token with the given term text and start & end offsets. The type defaults to "word". NOTE: for better indexing speed you should instead use the char[] termBuffer methods to set the term text.
|
|
| Token (System.String text, int start, int end, System.String typ) |
| Constructs a Token with the given text, start and end offsets, & type. NOTE: for better indexing speed you should instead use the char[] termBuffer methods to set the term text.
|
|
| Token (System.String text, int start, int end, int flags) |
| Constructs a Token with the given text, start and end offsets, & flags. NOTE: for better indexing speed you should instead use the char[] termBuffer methods to set the term text.
|
|
| Token (char[] startTermBuffer, int termBufferOffset, int termBufferLength, int start, int end) |
| Constructs a Token with the given term buffer (offset & length), start and end offsets.
|
|
void | SetTermBuffer (char[] buffer, int offset, int length) |
| Copies the contents of buffer, starting at offset for length characters, into the termBuffer array.
|
|
void | SetTermBuffer (System.String buffer) |
| Copies the contents of buffer into the termBuffer array.
|
|
void | SetTermBuffer (System.String buffer, int offset, int length) |
| Copies the contents of buffer, starting at offset and continuing for length characters, into the termBuffer array.
|
|
char[] | TermBuffer () |
| Returns the internal termBuffer character array which you can then directly alter. If the array is too small for your token, use ResizeTermBuffer(int) to increase it. After altering the buffer be sure to call SetTermLength to record the number of valid characters that were placed into the termBuffer.
|
|
virtual char[] | ResizeTermBuffer (int newSize) |
| Grows the termBuffer to at least size newSize, preserving the existing content. Note: If the next operation is to change the contents of the term buffer use SetTermBuffer(char[], int, int), SetTermBuffer(String), or SetTermBuffer(String, int, int) to optimally combine the resize with the setting of the termBuffer.
|
|
int | TermLength () |
| Return number of valid characters (length of the term) in the termBuffer array.
|
|
void | SetTermLength (int length) |
| Set number of valid characters (length of the term) in the termBuffer array. Use this to truncate the termBuffer or to synchronize with external manipulation of the termBuffer. Note: to grow the size of the array, use ResizeTermBuffer(int) first.
|
|
virtual void | SetOffset (int startOffset, int endOffset) |
| Sets the starting and ending offset. See StartOffset and EndOffset.
|
|
override String | ToString () |
|
override void | Clear () |
| Resets the term text, payload, flags, positionIncrement, startOffset, endOffset, and token type to their defaults.
|
|
override System.Object | Clone () |
| Shallow clone. Subclasses must override this if they need to clone any members deeply.
|
|
virtual Token | Clone (char[] newTermBuffer, int newTermOffset, int newTermLength, int newStartOffset, int newEndOffset) |
| Makes a clone, but replaces the term buffer & start/end offset in the process. This is more efficient than doing a full clone (and then calling setTermBuffer) because it saves a wasted copy of the old termBuffer.
|
|
override bool | Equals (Object obj) |
|
override int | GetHashCode () |
| Subclasses must implement this method and should compute a hashCode consistent with Equals(Object).
|
|
virtual Token | Reinit (char[] newTermBuffer, int newTermOffset, int newTermLength, int newStartOffset, int newEndOffset, System.String newType) |
| Shorthand for calling Clear, SetTermBuffer(char[], int, int), StartOffset, EndOffset, and Type.
|
|
virtual Token | Reinit (char[] newTermBuffer, int newTermOffset, int newTermLength, int newStartOffset, int newEndOffset) |
| Shorthand for calling Clear, SetTermBuffer(char[], int, int), StartOffset, EndOffset, and Type (set to Token.DEFAULT_TYPE).
|
|
virtual Token | Reinit (System.String newTerm, int newStartOffset, int newEndOffset, System.String newType) |
| Shorthand for calling Clear, SetTermBuffer(String), StartOffset, EndOffset, and Type.
|
|
virtual Token | Reinit (System.String newTerm, int newTermOffset, int newTermLength, int newStartOffset, int newEndOffset, System.String newType) |
| Shorthand for calling Clear, SetTermBuffer(String, int, int), StartOffset, EndOffset, and Type.
|
|
virtual Token | Reinit (System.String newTerm, int newStartOffset, int newEndOffset) |
| Shorthand for calling Clear, SetTermBuffer(String), StartOffset, EndOffset, and Type (set to Token.DEFAULT_TYPE).
|
|
virtual Token | Reinit (System.String newTerm, int newTermOffset, int newTermLength, int newStartOffset, int newEndOffset) |
| Shorthand for calling Clear, SetTermBuffer(String, int, int), StartOffset, EndOffset, and Type (set to Token.DEFAULT_TYPE).
|
|
virtual void | Reinit (Token prototype) |
| Copy the prototype token's fields into this one. Note: Payloads are shared.
|
|
virtual void | Reinit (Token prototype, System.String newTerm) |
| Copy the prototype token's fields into this one, with a different term. Note: Payloads are shared.
|
|
virtual void | Reinit (Token prototype, char[] newTermBuffer, int offset, int length) |
| Copy the prototype token's fields into this one, with a different term. Note: Payloads are shared.
|
|
override void | CopyTo (Attribute target) |
|
override System.String | ToString () |
| The default implementation of this method accesses all declared fields of this object and prints their values.
|
|
abstract override bool | Equals (System.Object other) |
| All values used for computation of GetHashCode() should be checked here for equality.
|
|
|
virtual int | PositionIncrement [get, set] |
| Gets or sets the position increment, which determines the position of this token relative to the previous Token in a TokenStream; used in phrase searching.
|
|
string | Term [get] |
| Returns the Token's term text.
|
|
virtual int | StartOffset [get, set] |
| Gets or sets this Token's starting offset, the position of the first character corresponding to this token in the source text. Note that the difference between EndOffset and StartOffset may not be equal to TermLength, as the term text may have been altered by a stemmer or some other filter.
|
|
virtual int | EndOffset [get, set] |
| Gets or sets this Token's ending offset, one greater than the position of the last character corresponding to this token in the source text. The length of the token in the source text is (endOffset - startOffset).
|
|
string | Type [get, set] |
| Gets or sets this Token's lexical type. Defaults to "word".
|
|
virtual int | Flags [get, set] |
| EXPERIMENTAL: While we think this is here to stay, we may want to change it to be a long.
|
|
virtual Payload | Payload [get, set] |
| Gets or sets this Token's payload.
|
|
string | Term [get] |
| Returns the Token's term text.
|
|
string | Type [get, set] |
| Gets or sets this Token's lexical type. Defaults to "word".
|
|
int | PositionIncrement [get, set] |
| Gets or sets the position increment. The default value is one.
|
|
int | Flags [get, set] |
| EXPERIMENTAL: While we think this is here to stay, we may want to change it to be a long.
|
|
int | StartOffset [get] |
| Returns this Token's starting offset, the position of the first character corresponding to this token in the source text. Note that the difference between EndOffset and StartOffset may not be equal to the length of the term text, as the term text may have been altered by a stemmer or some other filter.
|
|
int | EndOffset [get] |
| Returns this Token's ending offset, one greater than the position of the last character corresponding to this token in the source text. The length of the token in the source text is (endOffset - startOffset).
|
|
Payload | Payload [get, set] |
| Gets or sets this Token's payload.
|
|
A Token is an occurrence of a term from the text of a field. It consists of a term's text, the start and end offset of the term in the text of the field, and a type string.
The start and end offsets permit applications to re-associate a token with its source text, e.g., to display highlighted query terms in a document browser, or to show matching text fragments in a <abbr title="KeyWord In Context">KWIC</abbr> display, etc.
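The offset-based re-association described above can be sketched without any Lucene.Net code at all; the token text and offsets below are hypothetical values such as a tokenizer might produce:

```java
// Sketch of using start/end offsets to re-associate a token with its
// source text, e.g. to highlight a matched term. The token values here
// are hypothetical, not produced by a real tokenizer.
public class HighlightDemo {
    public static void main(String[] args) {
        String source = "The quick brown fox";
        // Suppose a tokenizer produced a token for "quick" with these offsets:
        int startOffset = 4; // position of the first character of the token
        int endOffset = 9;   // one greater than the position of the last character

        // The offsets let us recover the original text of the term...
        String original = source.substring(startOffset, endOffset);

        // ...and, for example, wrap it for display in a document browser.
        String highlighted = source.substring(0, startOffset)
                + "<b>" + original + "</b>"
                + source.substring(endOffset);

        System.out.println(original);    // quick
        System.out.println(highlighted); // The <b>quick</b> brown fox
    }
}
```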
The type is a string, assigned by a lexical analyzer (a.k.a. tokenizer), naming the lexical or syntactic class that the token belongs to. For example an end of sentence marker token might be implemented with type "eos". The default token type is "word".
A Token can optionally have metadata (a.k.a. Payload) in the form of a variable length byte array. Use TermPositions.PayloadLength and TermPositions.GetPayload(byte[], int) to retrieve the payloads from the index.
NOTE: As of 2.9, Token implements all IAttribute interfaces that are part of core Lucene and can be found in the Lucene.Net.Analysis.Tokenattributes namespace. Even though it is not necessary to use Token anymore, with the new TokenStream API it can be used as convenience class that implements all IAttributes, which is especially useful to easily switch from the old to the new TokenStream API.
Tokenizers and TokenFilters should try to re-use a Token instance when possible for best performance, by implementing the TokenStream.IncrementToken() API. Failing that, to create a new Token you should first use one of the constructors that starts with null text. To load the token from a char[] use SetTermBuffer(char[], int, int). To load from a String use SetTermBuffer(String) or SetTermBuffer(String, int, int). Alternatively you can get the Token's termBuffer by calling either TermBuffer(), if you know that your text is shorter than the capacity of the termBuffer or ResizeTermBuffer(int), if there is any possibility that you may need to grow the buffer. Fill in the characters of your term into this buffer, with string.ToCharArray(int, int) if loading from a string, or with Array.Copy(Array, long, Array, long, long), and finally call SetTermLength(int) to set the length of the term text. See LUCENE-969 for details.
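The buffer-loading sequence described above (resize, fill, set length) can be sketched with a minimal stand-in class. MockToken mirrors the method names in the summary above but is a simplified illustration, not the real Lucene.Net Token:

```java
import java.util.Arrays;

// Simplified stand-in for the Token term-buffer API described above
// (an illustrative mock, not the real Lucene.Net class).
class MockToken {
    private char[] termBuffer = new char[16];
    private int termLength = 0;

    // Grows the buffer to at least newSize, preserving existing content.
    char[] resizeTermBuffer(int newSize) {
        if (termBuffer.length < newSize) {
            termBuffer = Arrays.copyOf(termBuffer, Math.max(newSize, termBuffer.length * 2));
        }
        return termBuffer;
    }

    char[] termBuffer() { return termBuffer; }
    void setTermLength(int length) { termLength = length; }
    int termLength() { return termLength; }
    String term() { return new String(termBuffer, 0, termLength); }
}

public class TermBufferDemo {
    public static void main(String[] args) {
        MockToken token = new MockToken();
        String text = "reusable-token-term";

        // 1. Ensure the buffer can hold the term (use resizeTermBuffer when
        //    the text may exceed the current capacity).
        char[] buf = token.resizeTermBuffer(text.length());

        // 2. Fill in the characters of the term.
        text.getChars(0, text.length(), buf, 0);

        // 3. Record the number of valid characters placed into the buffer.
        token.setTermLength(text.length());

        System.out.println(token.term());       // reusable-token-term
        System.out.println(token.termLength()); // 19
    }
}
```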
Typical Token reuse patterns:
- Copying text from a string (type is reset to DEFAULT_TYPE if not specified):
  return reusableToken.reinit(string, startOffset, endOffset[, type]);
- Copying some text from a string (type is reset to DEFAULT_TYPE if not specified):
  return reusableToken.reinit(string, 0, string.length(), startOffset, endOffset[, type]);
- Copying text from a char[] buffer (type is reset to DEFAULT_TYPE if not specified):
  return reusableToken.reinit(buffer, 0, buffer.length, startOffset, endOffset[, type]);
- Copying some text from a char[] buffer (type is reset to DEFAULT_TYPE if not specified):
  return reusableToken.reinit(buffer, start, end - start, startOffset, endOffset[, type]);
- Copying from one Token to another (type is reset to DEFAULT_TYPE if not specified):
  return reusableToken.reinit(source.termBuffer(), 0, source.termLength(), source.startOffset(), source.endOffset()[, source.type()]);
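The reinit patterns above can be illustrated with a minimal stand-in class (ReusableToken is a hypothetical sketch, not the real Token), showing in particular how the type falls back to the default when not specified:

```java
// Hypothetical stand-in for the Token reinit(...) reuse pattern above;
// a sketch for illustration, not the real Lucene.Net class.
class ReusableToken {
    String term = null;
    int startOffset, endOffset;
    String type = "word"; // stand-in for Token.DEFAULT_TYPE

    // clear() semantics: reset all fields to defaults.
    void clear() { term = null; startOffset = 0; endOffset = 0; type = "word"; }

    // reinit without a type: the type is reset to the default.
    ReusableToken reinit(String newTerm, int newStartOffset, int newEndOffset) {
        clear();
        term = newTerm;
        startOffset = newStartOffset;
        endOffset = newEndOffset;
        return this; // returned so callers can write: return reusableToken.reinit(...)
    }

    // reinit with an explicit type.
    ReusableToken reinit(String newTerm, int newStartOffset, int newEndOffset, String newType) {
        reinit(newTerm, newStartOffset, newEndOffset);
        type = newType;
        return this;
    }
}

public class ReinitDemo {
    public static void main(String[] args) {
        ReusableToken t = new ReusableToken();
        t.reinit("cat", 0, 3, "noun"); // explicit type
        System.out.println(t.type);    // noun
        t.reinit("dog", 4, 7);         // type falls back to the default
        System.out.println(t.type);    // word
    }
}
```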
A few things to note:
- clear() initializes all of the fields to default values. This differs from Lucene 2.4, but should affect no one.
- Because TokenStreams can be chained, one cannot assume that the Token's current type is correct.
- The startOffset and endOffset represent the start and end offset in the source text, so be careful in adjusting them.
- When caching a reusable token, clone it. When injecting a cached token into a stream that can be reset, clone it again.
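The clone-when-caching rule can be illustrated with a minimal sketch (Tok is a hypothetical stand-in, not the real Token): caching the reusable instance itself lets a later reinit overwrite the cached entry, while caching a clone preserves an independent snapshot:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a mutable, reusable token; illustrates why a
// cached token must be cloned rather than stored directly.
class Tok implements Cloneable {
    String term;

    Tok reinit(String newTerm) { term = newTerm; return this; }

    @Override public Tok clone() {
        try { return (Tok) super.clone(); }
        catch (CloneNotSupportedException e) { throw new AssertionError(e); }
    }
}

public class CacheDemo {
    public static void main(String[] args) {
        Tok reusable = new Tok();
        List<Tok> badCache = new ArrayList<>();
        List<Tok> goodCache = new ArrayList<>();

        for (String term : new String[] {"alpha", "beta"}) {
            reusable.reinit(term);
            badCache.add(reusable);          // caches the same mutable instance
            goodCache.add(reusable.clone()); // caches an independent snapshot
        }

        System.out.println(badCache.get(0).term);  // beta  (overwritten by the later reinit)
        System.out.println(goodCache.get(0).term); // alpha (preserved)
    }
}
```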
See Also
- Lucene.Net.Index.Payload

Definition at line 118 of file Token.cs.