Lucene.Net
3.0.3
Lucene.Net is a port of the Lucene search engine library, written in C# and targeted at .NET runtime users.

Expert: Scoring API. Subclasses implement search scoring. More...
Inherited by Lucene.Net.Search.DefaultSimilarity, and Lucene.Net.Search.SimilarityDelegator.
Public Member Functions  
virtual float  ComputeNorm (System.String field, FieldInvertState state) 
Compute the normalization value for a field, given the accumulated state of term processing for this field (see FieldInvertState).  
abstract float  LengthNorm (System.String fieldName, int numTokens) 
Computes the normalization value for a field given the total number of terms contained in a field. These values, together with field boosts, are stored in an index and multiplied into scores for hits on each field by the search code.  
abstract float  QueryNorm (float sumOfSquaredWeights) 
Computes the normalization value for a query given the sum of the squared weights of each of the query terms. This value is then multiplied into the weight of each query term.  
virtual float  Tf (int freq) 
Computes a score factor based on a term or phrase's frequency in a document. This value is multiplied by the Idf(int, int) factor for each term in the query and these products are then summed to form the initial score for a document.  
abstract float  SloppyFreq (int distance) 
Computes the amount of a sloppy phrase match, based on an edit distance. This value is summed for each sloppy phrase match in a document to form the frequency that is passed to Tf(float).  
abstract float  Tf (float freq) 
Computes a score factor based on a term or phrase's frequency in a document. This value is multiplied by the Idf(int, int) factor for each term in the query and these products are then summed to form the initial score for a document.  
virtual IDFExplanation  IdfExplain (Term term, Searcher searcher) 
Computes a score factor for a simple term and returns an explanation for that score factor.  
virtual IDFExplanation  IdfExplain (ICollection< Term > terms, Searcher searcher) 
Computes a score factor for a phrase.  
abstract float  Idf (int docFreq, int numDocs) 
Computes a score factor based on a term's document frequency (the number of documents which contain the term). This value is multiplied by the Tf(int) factor for each term in the query and these products are then summed to form the initial score for a document.  
abstract float  Coord (int overlap, int maxOverlap) 
Computes a score factor based on the fraction of all query terms that a document contains. This value is multiplied into scores.  
virtual float  ScorePayload (int docId, System.String fieldName, int start, int end, byte[] payload, int offset, int length) 
Calculate a scoring factor based on the data in the payload. Overriding implementations are responsible for interpreting what is in the payload. Lucene makes no assumptions about what is in the byte array. The default implementation returns 1.  
Static Public Member Functions  
static float  DecodeNorm (byte b) 
Decodes a normalization factor stored in an index.  
static float[]  GetNormDecoder () 
Returns a table for decoding normalization bytes.  
static byte  EncodeNorm (float f) 
Encodes a normalization factor for storage in an index.  
Public Attributes  
const int  NO_DOC_ID_PROVIDED =  -1 
Protected Member Functions  
Similarity ()  
Properties  
static Similarity  Default [get, set] 
Gets or sets the default Similarity implementation used by indexing and search code. This is initially an instance of DefaultSimilarity.  
Expert: Scoring API.
Subclasses implement search scoring.
The score of query q
for document d
correlates to the cosine distance or dot product between document and query vectors in a Vector Space Model (VSM) of Information Retrieval. A document whose vector is closer to the query vector in that model is scored higher.
The score is computed as follows:

score(q,d)   =   coord(q,d) · queryNorm(q) · ∑_{t in q} ( tf(t in d) · idf(t)² · t.Boost · norm(t,d) )
where
tf(t in d) correlates to the term's frequency, defined as the number of times term t appears in the currently scored document d. Documents that have more occurrences of a given term receive a higher score. The default computation for tf(t in d) in DefaultSimilarity is:
 
tf(t in d)   =   frequency^½ 
 
idf(t) stands for Inverse Document Frequency. This value correlates to the inverse of docFreq (the number of documents in which the term t appears). This means rarer terms give higher contribution to the total score. The default computation for idf(t) in DefaultSimilarity is:
 
idf(t)   =   1 + log( numDocs / (docFreq + 1) ) 
 
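The default tf and idf computations of DefaultSimilarity, as described above, can be sketched in Python (illustrative only; the actual implementations live in the C# DefaultSimilarity class):

```python
import math

def tf(freq):
    # DefaultSimilarity: tf(t in d) = frequency^0.5
    return math.sqrt(freq)

def idf(doc_freq, num_docs):
    # DefaultSimilarity: idf(t) = 1 + log(numDocs / (docFreq + 1))
    return 1.0 + math.log(num_docs / (doc_freq + 1))
```

A term occurring 4 times in a document contributes tf = 2.0, and terms found in fewer documents get larger idf values, matching the intuition described above.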
coord(q,d) is a score factor based on how many of the query terms are found in the specified document. Typically, a document that contains more of the query's terms will receive a higher score than another document with fewer query terms. This is a search time factor computed in coord(q,d) by the Similarity in effect at search time.
 
queryNorm(q) is a normalizing factor used to make scores between queries comparable. This factor does not affect document ranking (since all ranked documents are multiplied by the same factor), but rather just attempts to make scores from different queries (or even different indexes) comparable. This is a search time factor computed by the Similarity in effect at search time.
The default computation in DefaultSimilarity is:
 
queryNorm(q)   =   queryNorm(sumOfSquaredWeights)   =   1 / sumOfSquaredWeights^½ 
 
The sum of squared weights (of the query terms) is computed by the query Lucene.Net.Search.Weight object. For example, a boolean query computes this value as:
 
GetSumOfSquaredWeights   =   q.Boost² · ∑_{t in q} ( idf(t) · t.Boost )² 
 
t.Boost is a search-time boost of term t in the query q, as specified in the query text (see query syntax) or as set by application calls to Lucene.Net.Search.Query.Boost. Notice that there is really no direct API for accessing the boost of one term in a multi-term query; rather, multi-term queries are represented as multiple TermQuery objects, so the boost of a term in the query is accessible by calling Lucene.Net.Search.Query.Boost on that sub-query.
 
norm(t,d) encapsulates a few (indexing-time) boost and length factors: the document boost (doc.Boost, set before the document is added to the index), the field boost (field.Boost, set before the field is added to a document), and LengthNorm(field), computed when the document is added to the index.
When a document is added to the index, all the above factors are multiplied. If the document has multiple fields with the same name, all their boosts are multiplied together:
 
norm(t,d)   =   doc.Boost · LengthNorm(field) · ∏_{field f in d named as t} f.Boost 
 
However, the resulting norm value is encoded as a single byte before being stored. At search time, the norm byte value is read from the index directory and decoded back to a float norm value. This encoding/decoding, while reducing index size, comes with the price of precision loss: it is not guaranteed that decode(encode(x)) = x. For instance, decode(encode(0.89)) = 0.75. Also notice that search time is too late to modify this norm part of scoring, e.g. by using a different Similarity for search.
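Putting the pieces together, the full formula above can be sketched as a small Python function (a hypothetical helper for illustration, not part of the Lucene.Net API; it hard-codes the DefaultSimilarity formulas and a single per-document norm):

```python
import math

def score(query_terms, doc_term_freqs, doc_freqs, num_docs, doc_norm, query_boost=1.0):
    # query_terms: list of (term, boost); doc_term_freqs: term -> freq in d;
    # doc_freqs: term -> number of documents containing the term.
    def idf(t):
        return 1.0 + math.log(num_docs / (doc_freqs[t] + 1))

    # sumOfSquaredWeights = q.Boost^2 * sum_t (idf(t) * t.Boost)^2
    sum_sq = query_boost ** 2 * sum((idf(t) * b) ** 2 for t, b in query_terms)
    query_norm = 1.0 / math.sqrt(sum_sq)             # queryNorm(q)
    overlap = sum(1 for t, _ in query_terms if doc_term_freqs.get(t, 0) > 0)
    coord = overlap / len(query_terms)               # coord(q,d)
    # sum over t in q of tf(t in d) * idf(t)^2 * t.Boost * norm(t,d)
    total = sum(math.sqrt(doc_term_freqs.get(t, 0)) * idf(t) ** 2 * b * doc_norm
                for t, b in query_terms)
    return coord * query_norm * total
```

For a single-term query with all boosts at 1, this reduces to tf · idf · norm, since queryNorm cancels one idf factor.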
 
Definition at line 293 of file Similarity.cs.

protected 
Definition at line 295 of file Similarity.cs.

virtual 
Compute the normalization value for a field, given the accumulated state of term processing for this field (see FieldInvertState).
Implementations should calculate a float value based on the field state and then return that value.
For backward compatibility this method by default calls LengthNorm(String, int) passing FieldInvertState.Length as the second argument, and then multiplies this value by FieldInvertState.Boost.
WARNING: This API is new and experimental and may suddenly change.
field  field name 
state  current processing state for this field 
Reimplemented in Lucene.Net.Search.DefaultSimilarity, and Lucene.Net.Search.SimilarityDelegator.
Definition at line 437 of file Similarity.cs.

pure virtual 
Computes a score factor based on the fraction of all query terms that a document contains. This value is multiplied into scores.
The presence of a large portion of the query terms indicates a better match with the query, so implementations of this method usually return larger values when the ratio between these parameters is large and smaller values when the ratio between them is small.
overlap  the number of query terms matched in the document 
maxOverlap  the total number of terms in the query 
Implemented in Lucene.Net.Search.DefaultSimilarity, and Lucene.Net.Search.SimilarityDelegator.
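For reference, DefaultSimilarity implements this as the plain overlap ratio; a one-line Python sketch of that formula:

```python
def coord(overlap, max_overlap):
    # DefaultSimilarity: the fraction of query terms found in the document
    return overlap / max_overlap
```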

static 
Decodes a normalization factor stored in an index.
Definition at line 403 of file Similarity.cs.

static 
Encodes a normalization factor for storage in an index.
The encoding uses a three-bit mantissa, a five-bit exponent, and the zero-exponent point at 15, thus representing values from around 7x10^9 down to 2x10^-9 with about one significant decimal digit of accuracy. Zero is also represented. Negative numbers are rounded up to zero. Values too large to represent are rounded down to the largest representable value. Positive values too small to represent are rounded up to the smallest positive representable value.
Definition at line 498 of file Similarity.cs.
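The byte encoding described above (Lucene's SmallFloat-style scheme) can be sketched in Python; the helper names here are illustrative, not the Lucene.Net API:

```python
import struct

def _float_bits(f):
    # Raw IEEE-754 single-precision bits of f, as a signed 32-bit int
    return struct.unpack('>i', struct.pack('>f', f))[0]

def _bits_float(i):
    return struct.unpack('>f', struct.pack('>i', i))[0]

def encode_norm(f):
    # Keep the sign, 8-bit exponent, and top 3 mantissa bits, then re-bias
    # the exponent so the zero-exponent point lands at 15.
    bits = _float_bits(f)
    small = bits >> 21
    if small <= (63 - 15) << 3:            # zero, negative, or underflow
        return 0 if bits <= 0 else 1
    if small >= ((63 - 15) << 3) + 0x100:  # overflow: clamp to the max byte
        return 255
    return small - ((63 - 15) << 3)

def decode_norm(b):
    if b == 0:
        return 0.0
    return _bits_float(((b & 0xff) << 21) + ((63 - 15) << 24))
```

Round-tripping is lossy: the three mantissa bits keep only about one significant decimal digit, so decode_norm(encode_norm(x)) generally differs from x.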

static 
Returns a table for decoding normalization bytes.
Definition at line 411 of file Similarity.cs.

pure virtual 
Computes a score factor based on a term's document frequency (the number of documents which contain the term). This value is multiplied by the Tf(int) factor for each term in the query and these products are then summed to form the initial score for a document.
Terms that occur in fewer documents are better indicators of topic, so implementations of this method usually return larger values for rare terms, and smaller values for common terms.
docFreq  the number of documents which contain the term 
numDocs  the total number of documents in the collection 
Implemented in Lucene.Net.Search.DefaultSimilarity, and Lucene.Net.Search.SimilarityDelegator.

virtual 
Computes a score factor for a simple term and returns an explanation for that score factor.
The default implementation uses:
Idf(searcher.DocFreq(term), searcher.MaxDoc);
Note that Searcher.MaxDoc is used instead of Lucene.Net.Index.IndexReader.NumDocs() because it is proportional to Searcher.DocFreq(Term), i.e., when one is inaccurate, so is the other, and in the same direction.
term  the term in question 
searcher  the document collection being searched 
<throws> IOException </throws>
Definition at line 582 of file Similarity.cs.

virtual 
Computes a score factor for a phrase.
The default implementation sums the idf factor for each term in the phrase.
terms  the terms in the phrase 
searcher  the document collection being searched 
<throws> IOException </throws>
Definition at line 606 of file Similarity.cs.

pure virtual 
Computes the normalization value for a field given the total number of terms contained in a field. These values, together with field boosts, are stored in an index and multiplied into scores for hits on each field by the search code.
Matches in longer fields are less precise, so implementations of this method usually return smaller values when numTokens is large, and larger values when numTokens is small.
Note that the return values are computed under Lucene.Net.Index.IndexWriter.AddDocument(Lucene.Net.Documents.Document) and then stored using EncodeNorm(float). Thus they have limited precision, and documents must be reindexed if this method is altered.
fieldName  the name of the field 
numTokens  the total number of tokens contained in fields named fieldName of doc. 
a normalization factor for hits on this field of this document
Implemented in Lucene.Net.Search.DefaultSimilarity, and Lucene.Net.Search.SimilarityDelegator.
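For reference, DefaultSimilarity's length normalization is the inverse square root of the token count; a Python sketch of that formula:

```python
import math

def length_norm(num_tokens):
    # DefaultSimilarity: 1 / sqrt(numTokens) -- longer fields get smaller norms
    return 1.0 / math.sqrt(num_tokens)
```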

pure virtual 
Computes the normalization value for a query given the sum of the squared weights of each of the query terms. This value is then multiplied into the weight of each query term.
This does not affect ranking, but rather just attempts to make scores from different queries comparable.
sumOfSquaredWeights  the sum of the squares of query term weights 
Implemented in Lucene.Net.Search.DefaultSimilarity, and Lucene.Net.Search.SimilarityDelegator.

virtual 
Calculate a scoring factor based on the data in the payload. Overriding implementations are responsible for interpreting what is in the payload. Lucene makes no assumptions about what is in the byte array. The default implementation returns 1.
docId  The docId currently being scored. If this value is NO_DOC_ID_PROVIDED, then it should be assumed that the PayloadQuery implementation does not provide document information 
fieldName  The fieldName of the term this payload belongs to 
start  The start position of the payload 
end  The end position of the payload 
payload  The payload byte array to be scored 
offset  The offset into the payload array 
length  The length in the array 
An implementation dependent float to be used as a scoring factor
Definition at line 684 of file Similarity.cs.

pure virtual 
Computes the amount of a sloppy phrase match, based on an edit distance. This value is summed for each sloppy phrase match in a document to form the frequency that is passed to Tf(float).
A phrase match with a small edit distance to a document passage more closely matches the document, so implementations of this method usually return larger values when the edit distance is small and smaller values when it is large.
distance  the edit distance of this sloppy phrase match 
Implemented in Lucene.Net.Search.DefaultSimilarity, and Lucene.Net.Search.SimilarityDelegator.
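DefaultSimilarity implements this as the reciprocal of the edit distance plus one; sketched in Python:

```python
def sloppy_freq(distance):
    # DefaultSimilarity: 1 / (distance + 1) -- exact matches (distance 0) score 1.0
    return 1.0 / (distance + 1)
```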

virtual 
Computes a score factor based on a term or phrase's frequency in a document. This value is multiplied by the Idf(int, int) factor for each term in the query and these products are then summed to form the initial score for a document.
Terms and phrases repeated in a document indicate the topic of the document, so implementations of this method usually return larger values when freq is large, and smaller values when freq is small.
The default implementation calls Tf(float).
freq  the frequency of a term within a document 
Definition at line 521 of file Similarity.cs.

pure virtual 
Computes a score factor based on a term or phrase's frequency in a document. This value is multiplied by the Idf(int, int) factor for each term in the query and these products are then summed to form the initial score for a document.
Terms and phrases repeated in a document indicate the topic of the document, so implementations of this method usually return larger values when freq is large, and smaller values when freq is small.
freq  the frequency of a term within a document 
Implemented in Lucene.Net.Search.DefaultSimilarity, and Lucene.Net.Search.SimilarityDelegator.
const int Lucene.Net.Search.Similarity.NO_DOC_ID_PROVIDED =  -1 
Definition at line 381 of file Similarity.cs.

static [get, set] 
Gets or sets the default Similarity implementation used by indexing and search code. This is initially an instance of DefaultSimilarity.
Definition at line 392 of file Similarity.cs.