Expert: Scoring API.

Subclasses implement search scoring.

The score of query q for document d correlates to the cosine-distance or dot-product between the document and query vectors in a Vector Space Model (VSM) of Information Retrieval. A document whose vector is closer to the query vector in that model is scored higher. The score is computed as follows (the code sketch after the list of factors below shows how the default factors map onto this class):

score(q,d)   =   coord(q,d)  ·  queryNorm(q)  ·  Σ_{t in q} ( tf(t in d)  ·  idf(t)²  ·  t.getBoost()  ·  norm(t,d) )

where

  1. tf(t in d) correlates to the term's frequency, defined as the number of times term t appears in the currently scored document d. Documents that have more occurrences of a given term receive a higher score. The default computation for tf(t in d) in {@link Lucene.Net.Search.DefaultSimilarity#Tf(float) DefaultSimilarity} is:
     
    {@link Lucene.Net.Search.DefaultSimilarity#Tf(float) tf(t in d)}   =   √frequency

     
  2. idf(t) stands for Inverse Document Frequency. This value correlates to the inverse of docFreq (the number of documents in which the term t appears). This means rarer terms give higher contribution to the total score. The default computation for idf(t) in {@link Lucene.Net.Search.DefaultSimilarity#Idf(int, int) DefaultSimilarity} is:
     
    {@link Lucene.Net.Search.DefaultSimilarity#Idf(int, int) idf(t)}   =   1 + log ( numDocs / (docFreq + 1) )

     
  3. coord(q,d) is a score factor based on how many of the query terms are found in the specified document. Typically, a document that contains more of the query's terms will receive a higher score than another document with fewer query terms. This is a search time factor computed in {@link #Coord(int, int) coord(q,d)} by the Similarity in effect at search time.
     
  4. queryNorm(q) is a normalizing factor used to make scores between queries comparable. This factor does not affect document ranking (since all ranked documents are multiplied by the same factor), but rather just attempts to make scores from different queries (or even different indexes) comparable. This is a search time factor computed by the Similarity in effect at search time. The default computation in {@link Lucene.Net.Search.DefaultSimilarity#QueryNorm(float) DefaultSimilarity} is:
     
    queryNorm(q)   =   {@link Lucene.Net.Search.DefaultSimilarity#QueryNorm(float) queryNorm(sumOfSquaredWeights)}   =   1 / √sumOfSquaredWeights

     
    The sum of squared weights (of the query terms) is computed by the query {@link Lucene.Net.Search.Weight} object. For example, a {@link Lucene.Net.Search.BooleanQuery boolean query} computes this value as:
     
    {@link Lucene.Net.Search.Weight#SumOfSquaredWeights() sumOfSquaredWeights}   =   {@link Lucene.Net.Search.Query#GetBoost() q.getBoost()}²  ·  Σ_{t in q} ( idf(t)  ·  t.getBoost() )²

     
  5. t.getBoost() is a search time boost of term t in the query q, as specified in the query text (see query syntax) or as set by application calls to {@link Lucene.Net.Search.Query#SetBoost(float) setBoost()}. Note that there is really no direct API for accessing the boost of one term in a multi-term query; rather, multi-term queries are represented as multiple {@link Lucene.Net.Search.TermQuery TermQuery} objects, so the boost of a term in the query is accessible by calling {@link Lucene.Net.Search.Query#GetBoost() getBoost()} on the corresponding sub-query.
     
  6. norm(t,d) encapsulates a few (indexing time) boost and length factors:
    • Document boost - set by calling {@link Lucene.Net.Documents.Document#SetBoost(float) doc.setBoost()} before adding the document to the index.
    • Field boost - set by calling {@link Lucene.Net.Documents.Fieldable#SetBoost(float) field.setBoost()} before adding the field to a document.
    • {@link #LengthNorm(String, int) lengthNorm(field)} - computed when the document is added to the index, based on the number of tokens of this field in the document, so that shorter fields contribute more to the score. LengthNorm is computed by the Similarity class in effect at indexing time.

    When a document is added to the index, all the above factors are multiplied. If the document has multiple fields with the same name, all their boosts are multiplied together:
     
    norm(t,d)   =   {@link Lucene.Net.Documents.Document#GetBoost() doc.getBoost()}  ·  {@link #LengthNorm(String, int) lengthNorm(field)}  ·  Π_{field f in d named as t} {@link Lucene.Net.Documents.Fieldable#GetBoost() f.getBoost()}

     
    However, the resulting norm value is {@link #EncodeNorm(float) encoded} as a single byte before being stored. At search time, the norm byte value is read from the index {@link Lucene.Net.Store.Directory directory} and {@link #DecodeNorm(byte) decoded} back to a float norm value. This encoding/decoding, while reducing index size, comes at the price of precision loss - it is not guaranteed that decode(encode(x)) = x. For instance, decode(encode(0.89)) = 0.75. Also note that search time is too late to modify this norm part of scoring, e.g. by using a different {@link Similarity} for search (see the usage sketch at the end of these remarks).
     
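All of the factors above can be customized by subclassing this class or {@link Lucene.Net.Search.DefaultSimilarity DefaultSimilarity}. As a rough illustration only, the following sketch subclasses DefaultSimilarity and restates in code the default computations listed above; the 1/√numTokens body of LengthNorm is an assumption about DefaultSimilarity's default, which the list above does not spell out.

C#
using System;
using Lucene.Net.Search;

// Illustrative sketch: restates the default factor computations described above.
// Signatures follow the DefaultSimilarity members referenced in these remarks.
public class ExplainedSimilarity : DefaultSimilarity
{
    // Factor 1: tf(t in d) = √frequency
    public override float Tf(float freq)
    {
        return (float)Math.Sqrt(freq);
    }

    // Factor 2: idf(t) = 1 + log(numDocs / (docFreq + 1))
    public override float Idf(int docFreq, int numDocs)
    {
        return (float)(Math.Log((double)numDocs / (docFreq + 1)) + 1.0);
    }

    // Factor 3: coord(q,d) = overlap / maxOverlap
    public override float Coord(int overlap, int maxOverlap)
    {
        return overlap / (float)maxOverlap;
    }

    // Factor 4: queryNorm(q) = 1 / √sumOfSquaredWeights
    public override float QueryNorm(float sumOfSquaredWeights)
    {
        return (float)(1.0 / Math.Sqrt(sumOfSquaredWeights));
    }

    // Part of factor 6: lengthNorm(field) = 1 / √numTokens
    // (assumed DefaultSimilarity behavior; the remarks above only say that
    //  shorter fields contribute more to the score)
    public override float LengthNorm(string fieldName, int numTokens)
    {
        return (float)(1.0 / Math.Sqrt(numTokens));
    }
}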

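Which parts of the score a custom Similarity affects depends on when it is installed: norm(t,d) is fixed at indexing time, while the other factors are computed by the Similarity in effect at search time. The sketch below reuses the ExplainedSimilarity class from the previous example; the SetDefault and SetSimilarity calls are assumptions about the Lucene.Net 2.9.4 API (they are not documented in these remarks), while EncodeNorm/DecodeNorm are the members referenced above.

C#
using System;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.Search;
using Lucene.Net.Store;

public static class SimilaritySetupSketch
{
    public static void Main()
    {
        Similarity sim = new ExplainedSimilarity();

        // Global default, used wherever no explicit Similarity has been set
        // (assumed API: Similarity.SetDefault).
        Similarity.SetDefault(sim);

        // Indexing time: this Similarity determines the norm(t,d) values
        // encoded into the index (assumed API: IndexWriter.SetSimilarity).
        Directory dir = new RAMDirectory();
        IndexWriter writer = new IndexWriter(dir,
            new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29),
            IndexWriter.MaxFieldLength.UNLIMITED);
        writer.SetSimilarity(sim);
        writer.Close();

        // Search time: this Similarity governs tf, idf, coord and queryNorm,
        // but cannot change norms already stored in the index
        // (assumed API: IndexSearcher.SetSimilarity).
        IndexSearcher searcher = new IndexSearcher(dir, true);
        searcher.SetSimilarity(sim);

        // Norm encoding is lossy: each float norm is stored as a single byte.
        byte encoded = Similarity.EncodeNorm(0.89f);
        float decoded = Similarity.DecodeNorm(encoded);  // 0.75f, not 0.89f
        Console.WriteLine(decoded);
        searcher.Close();
    }
}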
Namespace: Lucene.Net.Search
Assembly: Lucene.Net (in Lucene.Net.dll) Version: 2.9.4.1

Syntax

C#
[SerializableAttribute]
public abstract class Similarity
Visual Basic
<SerializableAttribute> _
Public MustInherit Class Similarity
Visual C++
[SerializableAttribute]
public ref class Similarity abstract

Inheritance Hierarchy

See Also