
    Class Similarity

    Similarity defines the components of Lucene scoring.

    Expert: Scoring API.

    This is a low-level API; extend it only if you want to implement an information retrieval model. If you are instead looking for a convenient way to alter Lucene's scoring, consider extending a higher-level implementation such as TFIDFSimilarity, which implements the vector space model with this API, or simply tweaking the default implementation: DefaultSimilarity.

    Similarity determines how Lucene weights terms, and Lucene interacts with this class at both index-time and query-time.

    At indexing time, the indexer calls ComputeNorm(FieldInvertState), allowing the Similarity implementation to set a per-document value for the field that will be later accessible via GetNormValues(String). Lucene makes no assumption about what is in this norm, but it is most useful for encoding length normalization information.

    Implementations should carefully consider how the normalization is encoded: while Lucene's classical TFIDFSimilarity encodes a combination of index-time boost and length normalization information with SmallSingle into a single byte, this might not be suitable for all purposes.

    Many formulas require the use of average document length, which can be computed via a combination of SumTotalTermFreq and MaxDoc or DocCount, depending upon whether the average should reflect field sparsity.
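    As a sketch of the two flavors mentioned above, the `collectionStats` variable below is a hypothetical CollectionStatistics instance assumed to be in scope:

```csharp
// Sketch: two ways to derive average field length from CollectionStatistics.
// `collectionStats` is a hypothetical variable, not part of this API's surface.
long totalTokens = collectionStats.SumTotalTermFreq;

// Average over every document in the index (ignores field sparsity).
float avgOverAllDocs = (float)totalTokens / collectionStats.MaxDoc;

// Average over only the documents that actually contain the field
// (appropriate when many documents omit the field).
float avgOverFieldDocs = (float)totalTokens / collectionStats.DocCount;
```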

    Additional scoring factors can be stored in named NumericDocValuesFields and accessed at query-time with GetNumericDocValues(String).

    Finally, using index-time boosts (either by folding them into the normalization byte or via DocValues) is an inefficient way to boost the scores of different fields if the boost is the same for every document. Instead, the Similarity can simply take a constant boost parameter C, and PerFieldSimilarityWrapper can return different instances with different boosts depending upon field name.
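    For example, a sketch of that pattern; note that `BoostedSimilarity(C)` is a hypothetical subclass whose scores are multiplied by the constant boost C, and the field names are illustrative:

```csharp
// Sketch: choosing a differently-boosted Similarity per field, instead of
// baking a constant boost into every document's norm.
public class FieldBoostWrapper : PerFieldSimilarityWrapper
{
    // BoostedSimilarity(C) is hypothetical: a Similarity subclass whose
    // scores are multiplied by the constant boost C.
    private readonly Similarity titleSim = new BoostedSimilarity(2.0f);
    private readonly Similarity bodySim = new BoostedSimilarity(1.0f);

    public override Similarity Get(string field)
    {
        // Same instance for every document; no per-document norm cost.
        return field == "title" ? titleSim : bodySim;
    }
}
```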

    At query-time, Queries interact with the Similarity via these steps:

    1. The ComputeWeight(Single, CollectionStatistics, TermStatistics[]) method is called a single time, allowing the implementation to compute any statistics (such as IDF, average document length, etc) across the entire collection. The TermStatistics and CollectionStatistics passed in already contain all of the raw statistics involved, so a Similarity can freely use any combination of statistics without causing any additional I/O. Lucene makes no assumption about what is stored in the returned Similarity.SimWeight object.
    2. The query normalization process occurs a single time: GetValueForNormalization() is called for each query leaf node, QueryNorm(Single) is called for the top-level query, and finally Normalize(Single, Single) passes down the normalization value and any top-level boosts (e.g. from enclosing BooleanQuerys).
    3. For each segment in the index, the Query calls GetSimScorer(Similarity.SimWeight, AtomicReaderContext) to create a Similarity.SimScorer, and its Score(Int32, Single) method is called for each matching document.
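    The three steps above can be sketched as the following call sequence; variable names such as `similarity`, `queryBoost`, `collectionStats`, `termStats`, `topLevelBoost`, `readerContext`, `docId`, and `freq` are assumptions for illustration:

```csharp
// 1. Collection-level statistics are folded into a SimWeight exactly once.
Similarity.SimWeight weight = similarity.ComputeWeight(
    queryBoost, collectionStats, termStats);

// 2. Query normalization also happens once for the whole query tree.
float valueForNorm = weight.GetValueForNormalization();
float queryNorm = similarity.QueryNorm(valueForNorm);
weight.Normalize(queryNorm, topLevelBoost);

// 3. Per index segment, a SimScorer produces the per-document scores.
Similarity.SimScorer scorer = similarity.GetSimScorer(weight, readerContext);
float score = scorer.Score(docId, freq);
```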

    When Explain(Query, Int32) is called, queries consult the Similarity's DocScorer for an explanation of how it computed its score. The query passes in the document id and an explanation of how the frequency was computed.

    This is a Lucene.NET EXPERIMENTAL API, use at your own risk
    Inheritance
    System.Object
    Similarity
    BM25Similarity
    MultiSimilarity
    PerFieldSimilarityWrapper
    SimilarityBase
    TFIDFSimilarity
    Namespace: Lucene.Net.Search.Similarities
    Assembly: Lucene.Net.dll
    Syntax
    public abstract class Similarity : object

    Constructors


    Similarity()

    Sole constructor. (For invocation by subclass constructors, typically implicit.)

    Declaration
    public Similarity()

    Methods


    ComputeNorm(FieldInvertState)

    Computes the normalization value for a field, given the accumulated state of term processing for this field (see FieldInvertState).

    Matches in longer fields are less precise, so implementations of this method usually set smaller values when state.Length is large, and larger values when state.Length is small.

    This is a Lucene.NET EXPERIMENTAL API, use at your own risk
    Declaration
    public abstract long ComputeNorm(FieldInvertState state)
    Parameters
    Type Name Description
    FieldInvertState state

    current processing state for this field

    Returns
    Type Description
    System.Int64

    computed norm value
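    As a sketch, a subclass following the guidance above (smaller norms for longer fields) might override this as shown below. The enclosing Similarity subclass is hypothetical and only the override is shown; unlike TFIDFSimilarity, no single-byte SmallSingle compression is applied:

```csharp
public override long ComputeNorm(FieldInvertState state)
{
    // state.Length is the number of tokens indexed for this field.
    // Favor shorter fields with a 1/sqrt(length) norm.
    float norm = 1.0f / (float)Math.Sqrt(Math.Max(1, state.Length));

    // Store the float's raw bits in the long norm value so that
    // decoding at query-time is lossless (no byte compression).
    return BitConverter.ToInt32(BitConverter.GetBytes(norm), 0);
}
```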


    ComputeWeight(Single, CollectionStatistics, TermStatistics[])

    Compute any collection-level weight (e.g. IDF, average document length, etc) needed for scoring a query.

    Declaration
    public abstract Similarity.SimWeight ComputeWeight(float queryBoost, CollectionStatistics collectionStats, params TermStatistics[] termStats)
    Parameters
    Type Name Description
    System.Single queryBoost

    the query-time boost.

    CollectionStatistics collectionStats

    collection-level statistics, such as the number of tokens in the collection.

    TermStatistics[] termStats

    term-level statistics, such as the document frequency of a term across the collection.

    Returns
    Type Description
    Similarity.SimWeight

    Similarity.SimWeight object with the information this Similarity needs to score a query.


    Coord(Int32, Int32)

    Hook to integrate coordinate-level matching.

    By default this is disabled (returns 1), as with most modern models this will only skew performance, but some implementations such as TFIDFSimilarity override this.

    Declaration
    public virtual float Coord(int overlap, int maxOverlap)
    Parameters
    Type Name Description
    System.Int32 overlap

    the number of query terms matched in the document

    System.Int32 maxOverlap

    the total number of terms in the query

    Returns
    Type Description
    System.Single

    a score factor based on term overlap with the query
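    A sketch of an override in the spirit of TFIDFSimilarity's coordination factor (the enclosing Similarity subclass is hypothetical):

```csharp
public override float Coord(int overlap, int maxOverlap)
{
    // Reward documents that match more of the query's terms:
    // e.g. 2 of 4 terms matched gives a factor of 0.5.
    return overlap / (float)maxOverlap;
}
```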


    GetSimScorer(Similarity.SimWeight, AtomicReaderContext)

    Creates a new Similarity.SimScorer to score matching documents from a segment of the inverted index.

    Declaration
    public abstract Similarity.SimScorer GetSimScorer(Similarity.SimWeight weight, AtomicReaderContext context)
    Parameters
    Type Name Description
    Similarity.SimWeight weight

    collection information from ComputeWeight(Single, CollectionStatistics, TermStatistics[])

    AtomicReaderContext context

    segment of the inverted index to be scored.

    Returns
    Type Description
    Similarity.SimScorer

    a Similarity.SimScorer for scoring documents across context


    QueryNorm(Single)

    Computes the normalization value for a query given the sum of the normalized weights (GetValueForNormalization()) of each of the query terms. This value is passed back to the weight (Normalize(Single, Single)) of each query term, to provide a hook to attempt to make scores from different queries comparable.

    By default this is disabled (returns 1), but some implementations such as TFIDFSimilarity override this.

    Declaration
    public virtual float QueryNorm(float valueForNormalization)
    Parameters
    Type Name Description
    System.Single valueForNormalization

    the sum of the term normalization values

    Returns
    Type Description
    System.Single

    a normalization factor for query weights
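    A sketch of the classic vector-space override (the enclosing Similarity subclass is hypothetical; TFIDFSimilarity's default uses essentially this 1/sqrt formula):

```csharp
public override float QueryNorm(float valueForNormalization)
{
    // 1/sqrt(sum of squared term weights), guarding against a zero sum.
    return valueForNormalization <= 0f
        ? 1.0f
        : (float)(1.0 / Math.Sqrt(valueForNormalization));
}
```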

    See Also

    Similarity
    Copyright © 2020 Licensed to the Apache Software Foundation (ASF)