Lucene.Net 3.0.3
Lucene.Net is a .NET port of the Java Lucene Indexing Library
Lucene.Net.Analysis.Standard.StandardAnalyzer Class Reference

Filters StandardTokenizer with StandardFilter, LowerCaseFilter and StopFilter, using a list of English stop words.

Inherits Lucene.Net.Analysis.Analyzer.

Public Member Functions

 StandardAnalyzer (Version matchVersion)
 Builds an analyzer with the default stop words (STOP_WORDS_SET).
 
 StandardAnalyzer (Version matchVersion, ISet< string > stopWords)
 Builds an analyzer with the given stop words.
 
 StandardAnalyzer (Version matchVersion, System.IO.FileInfo stopwords)
 Builds an analyzer with the stop words from the given file.
 
 StandardAnalyzer (Version matchVersion, System.IO.TextReader stopwords)
 Builds an analyzer with the stop words from the given reader.
 
override TokenStream TokenStream (System.String fieldName, System.IO.TextReader reader)
 Constructs a StandardTokenizer filtered by a StandardFilter, a LowerCaseFilter and a StopFilter.
 
override TokenStream ReusableTokenStream (System.String fieldName, System.IO.TextReader reader)
 
- Public Member Functions inherited from Lucene.Net.Analysis.Analyzer
abstract TokenStream TokenStream (String fieldName, System.IO.TextReader reader)
 Creates a TokenStream which tokenizes all the text in the provided Reader. Must be able to handle null field name for backward compatibility.
 
virtual TokenStream ReusableTokenStream (String fieldName, System.IO.TextReader reader)
 Creates a TokenStream that is allowed to be re-used from the previous time that the same thread called this method. Callers that do not need to use more than one TokenStream at the same time from this analyzer should use this method for better performance.
 
virtual int GetPositionIncrementGap (String fieldName)
 Invoked before indexing a Fieldable instance if terms have already been added to that field. This allows custom analyzers to place an automatic position increment gap between Fieldable instances using the same field name. The default position increment gap is 0. With a 0 gap and the typical default token position increment of 1, all terms in a field, including those from different Fieldable instances, occupy successive positions, allowing exact PhraseQuery matches across Fieldable instance boundaries (a delegating sketch follows this member list).
 
virtual int GetOffsetGap (IFieldable field)
 Just like GetPositionIncrementGap, except for Token offsets. By default this returns 1 for tokenized fields, as if the field values were joined with an extra space character, and 0 for un-tokenized fields. This method is only called if the field produced at least one token for indexing.
 
void Close ()
 Frees persistent resources used by this Analyzer
 
virtual void Dispose ()
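
A hedged sketch of the position increment gap mechanism mentioned above: GapAnalyzer is a hypothetical class (not part of Lucene.Net) that delegates tokenization to a StandardAnalyzer and overrides GetPositionIncrementGap so that phrase queries cannot match across values of a multi-valued field. The gap of 100 is an arbitrary illustrative value.

    using Lucene.Net.Analysis;
    using Lucene.Net.Analysis.Standard;
    using Version = Lucene.Net.Util.Version;

    // Hypothetical analyzer: StandardAnalyzer tokenization plus a large
    // position gap between Fieldable instances of the same field name.
    public class GapAnalyzer : Analyzer
    {
        private readonly StandardAnalyzer inner = new StandardAnalyzer(Version.LUCENE_30);

        public override TokenStream TokenStream(string fieldName, System.IO.TextReader reader)
        {
            return inner.TokenStream(fieldName, reader);
        }

        public override int GetPositionIncrementGap(string fieldName)
        {
            return 100; // 100 empty positions between values of the same field
        }
    }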
 

Public Attributes

const int DEFAULT_MAX_TOKEN_LENGTH = 255
 Default maximum allowed token length
 

Static Public Attributes

static readonly ISet< string > STOP_WORDS_SET
 An unmodifiable set containing some common English words that are usually not useful for searching.
 

Properties

virtual int MaxTokenLength [get, set]
 Gets or sets the maximum allowed token length. If a token exceeds this length it is discarded. This setting only takes effect the next time TokenStream or ReusableTokenStream is called.
 

Additional Inherited Members

- Protected Member Functions inherited from Lucene.Net.Analysis.Analyzer
virtual void Dispose (bool disposing)
 

Detailed Description

Filters StandardTokenizer with StandardFilter, LowerCaseFilter and StopFilter, using a list of English stop words.

You must specify the required Version compatibility when creating StandardAnalyzer.
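
A minimal usage sketch, not taken from the Lucene.Net documentation itself: the field name "content" and the sample text are arbitrary, and token consumption uses the 3.x attribute API (ITermAttribute) as the author understands it.

    using System;
    using System.IO;
    using Lucene.Net.Analysis;
    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Analysis.Tokenattributes;
    using Version = Lucene.Net.Util.Version;

    class StandardAnalyzerExample
    {
        static void Main()
        {
            // Build the analyzer with the default English stop words (STOP_WORDS_SET).
            Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_30);

            // StandardTokenizer splits the text, LowerCaseFilter lower-cases it,
            // and StopFilter drops stop words such as "the".
            TokenStream stream = analyzer.TokenStream(
                "content", new StringReader("The Quick Brown Fox"));

            ITermAttribute term = stream.AddAttribute<ITermAttribute>();
            while (stream.IncrementToken())
            {
                Console.WriteLine(term.Term);   // quick, brown, fox
            }

            analyzer.Close();
        }
    }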

Definition at line 42 of file StandardAnalyzer.cs.

Constructor & Destructor Documentation

Lucene.Net.Analysis.Standard.StandardAnalyzer.StandardAnalyzer (Version matchVersion)

Builds an analyzer with the default stop words (STOP_WORDS_SET).

Parameters
    matchVersion   Lucene version to match (see above)

Definition at line 60 of file StandardAnalyzer.cs.

Lucene.Net.Analysis.Standard.StandardAnalyzer.StandardAnalyzer (Version matchVersion, ISet<string> stopWords)

Builds an analyzer with the given stop words.

Parameters
    matchVersion   Lucene version to match (see above)
    stopWords      Stop words

Definition at line 70 of file StandardAnalyzer.cs.
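
A hedged sketch of supplying a custom stop-word set, assuming the BCL HashSet<string>/ISet<string> types (as in the .NET 4 build); the German words are purely illustrative.

    using System.Collections.Generic;
    using Lucene.Net.Analysis.Standard;
    using Version = Lucene.Net.Util.Version;

    class CustomStopWords
    {
        static StandardAnalyzer Create()
        {
            // Only these words are removed; the default STOP_WORDS_SET is not used.
            ISet<string> stopWords = new HashSet<string> { "der", "die", "das" };
            return new StandardAnalyzer(Version.LUCENE_30, stopWords);
        }
    }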

Lucene.Net.Analysis.Standard.StandardAnalyzer.StandardAnalyzer (Version matchVersion, System.IO.FileInfo stopwords)

Builds an analyzer with the stop words from the given file.

See Also
WordlistLoader.GetWordSet(System.IO.FileInfo)
Parameters
    matchVersion   Lucene version to match (see above)
    stopwords      File to read stop words from

Definition at line 87 of file StandardAnalyzer.cs.

Lucene.Net.Analysis.Standard.StandardAnalyzer.StandardAnalyzer (Version matchVersion, System.IO.TextReader stopwords)

Builds an analyzer with the stop words from the given reader.

See Also
WordlistLoader.GetWordSet(System.IO.TextReader)
Parameters
    matchVersion   Lucene version to match (see above)
    stopwords      Reader to read stop words from

Definition at line 100 of file StandardAnalyzer.cs.
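
A sketch of loading stop words from a TextReader; the file path is hypothetical, and the one-word-per-line format assumption follows WordlistLoader.GetWordSet as the author understands it. The reader is assumed to be consumed eagerly by the constructor, so it is disposed right after.

    using System.IO;
    using Lucene.Net.Analysis.Standard;
    using Version = Lucene.Net.Util.Version;

    class StopWordsFromReader
    {
        static StandardAnalyzer Create(string path)
        {
            // WordlistLoader.GetWordSet reads the words (typically one per line).
            using (TextReader reader = new StreamReader(path))
            {
                return new StandardAnalyzer(Version.LUCENE_30, reader);
            }
        }
    }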

Member Function Documentation

override TokenStream Lucene.Net.Analysis.Standard.StandardAnalyzer.ReusableTokenStream (System.String fieldName, System.IO.TextReader reader)

Definition at line 139 of file StandardAnalyzer.cs.
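
A hedged sketch of the reuse pattern described in the inherited ReusableTokenStream documentation above; the field name and sample texts are arbitrary.

    using System;
    using System.IO;
    using Lucene.Net.Analysis;
    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Analysis.Tokenattributes;
    using Version = Lucene.Net.Util.Version;

    class ReuseExample
    {
        static void Main()
        {
            var analyzer = new StandardAnalyzer(Version.LUCENE_30);
            string[] docs = { "first sample document", "second sample document" };

            foreach (string text in docs)
            {
                // The same thread gets back a reset, reusable stream rather than
                // a freshly allocated one on each call.
                TokenStream stream = analyzer.ReusableTokenStream(
                    "content", new StringReader(text));
                ITermAttribute term = stream.AddAttribute<ITermAttribute>();
                while (stream.IncrementToken())
                {
                    Console.WriteLine(term.Term);
                }
            }

            analyzer.Close();
        }
    }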

override TokenStream Lucene.Net.Analysis.Standard.StandardAnalyzer.TokenStream (System.String fieldName, System.IO.TextReader reader)

Constructs a StandardTokenizer filtered by a StandardFilter, a LowerCaseFilter and a StopFilter.

Definition at line 107 of file StandardAnalyzer.cs.

Member Data Documentation

const int Lucene.Net.Analysis.Standard.StandardAnalyzer.DEFAULT_MAX_TOKEN_LENGTH = 255

Default maximum allowed token length

Definition at line 124 of file StandardAnalyzer.cs.

static readonly ISet<string> Lucene.Net.Analysis.Standard.StandardAnalyzer.STOP_WORDS_SET

An unmodifiable set containing some common English words that are usually not useful for searching.

Definition at line 54 of file StandardAnalyzer.cs.

Property Documentation

virtual int Lucene.Net.Analysis.Standard.StandardAnalyzer.MaxTokenLength [get, set]

Gets or sets the maximum allowed token length. If a token exceeds this length it is discarded. This setting only takes effect the next time TokenStream or ReusableTokenStream is called.

Definition at line 134 of file StandardAnalyzer.cs.
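
A short sketch of adjusting the limit via this property; the value 50 is arbitrary. As noted above, the change applies only to streams created after the assignment.

    using Lucene.Net.Analysis.Standard;
    using Version = Lucene.Net.Util.Version;

    class MaxTokenLengthExample
    {
        static StandardAnalyzer Create()
        {
            return new StandardAnalyzer(Version.LUCENE_30)
            {
                // Tokens longer than 50 characters are discarded; takes effect
                // the next time TokenStream or ReusableTokenStream is called.
                MaxTokenLength = 50
            };
        }
    }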


The documentation for this class was generated from the following file:
    StandardAnalyzer.cs