
    Class MoreLikeThis

    Generate "more like this" similarity queries. Based on this mail:

    Lucene does let you access the document frequency of terms, with IndexReader.DocFreq().
    Term frequencies can be computed by re-tokenizing the text, which, for a single document,
    is usually fast enough.  But looking up the document frequency of every term in the document is
    probably too slow.
    
    You can use some heuristics to prune the set of terms, to avoid calling DocFreq() too much,
    or at all.  Since you're trying to maximize a tf*idf score, you're probably most interested
    in terms with a high tf. Choosing a tf threshold even as low as two or three will radically
    reduce the number of terms under consideration.  Another heuristic is that terms with a
    high idf (i.e., a low df) tend to be longer.  So you could threshold the terms by the
    number of characters, not selecting anything less than, e.g., six or seven characters.
    With these sorts of heuristics you can usually find a small set of, e.g., ten or fewer terms
    that do a pretty good job of characterizing a document.
    
    It all depends on what you're trying to do.  If you're trying to eke out that last percent
    of precision and recall regardless of computational difficulty so that you can win a TREC
    competition, then the techniques I mention above are useless.  But if you're trying to
    provide a "more like this" button on a search results page that does a decent job and has
    good performance, such techniques might be useful.
    
    An efficient, effective "more-like-this" query generator would be a great contribution, if
    anyone's interested.  I'd imagine that it would take a Reader or a String (the document's
    text), an Analyzer, and return a set of representative terms using heuristics like those
    above.  The frequency and length thresholds could be parameters, etc.
    
    Doug
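
    The pruning heuristics Doug describes can be sketched in a few lines. This is an illustration only, not part of MoreLikeThis itself: given per-term frequencies for one document (however you tokenized it), keep only terms whose tf and length clear the thresholds, then take the top N by tf.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Illustrative sketch of the pruning heuristics from the mail above:
    // keep terms whose tf and character length clear the thresholds,
    // then cap the candidate set at the N highest-tf terms.
    static List<string> SelectRepresentativeTerms(
        IDictionary<string, int> termFreqs, int minTermFreq = 2,
        int minWordLen = 6, int maxTerms = 10)
    {
        return termFreqs
            .Where(kv => kv.Value >= minTermFreq && kv.Key.Length >= minWordLen)
            .OrderByDescending(kv => kv.Value)   // highest tf first
            .Take(maxTerms)                      // cap the candidate set
            .Select(kv => kv.Key)
            .ToList();
    }

    var tf = new Dictionary<string, int>
    {
        ["lucene"] = 5, ["the"] = 40, ["similarity"] = 3, ["query"] = 1
    };
    Console.WriteLine(string.Join(", ", SelectRepresentativeTerms(tf)));
    // prints "lucene, similarity": "the" fails the length check despite its
    // high tf, and "query" fails both checks.
    ```

    Note how the length threshold alone discards the high-tf stop word without ever consulting document frequencies, which is exactly the point of the heuristic.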

    Initial Usage

    This class has lots of options to try to make it efficient and flexible. The simplest possible usage is as follows; the MoreLikeThis lines are the only parts specific to this class.

    IndexReader ir = ...
    IndexSearcher searcher = ...

    MoreLikeThis mlt = new MoreLikeThis(ir);
    TextReader target = ... // original source of the doc you want to find similarities to
    Query query = mlt.Like(target);

    TopDocs hits = searcher.Search(query, 10);
    // now the usual iteration through 'hits' - the only thing to watch for is to make sure
    // you ignore the doc if it matches your 'target' document, as it should be similar to itself
    

    Thus you:

    • do your normal Lucene setup for searching,
    • create a MoreLikeThis,
    • get the text of the doc you want to find similarities to,
    • call one of the Like(TextReader, string) overloads to generate a similarity query,
    • call the searcher to find the similar docs.

    More Advanced Usage

    You may want to use the setter for FieldNames so you can examine multiple fields (e.g. body and title) for similarity.

    Depending on the size of your index and the size and makeup of your documents, you may want to call the other setters to control how the similarity queries are generated:
    • MinTermFreq
    • MinDocFreq
    • MaxDocFreq
    • SetMaxDocFreqPct(int)
    • MinWordLen
    • MaxWordLen
    • MaxQueryTerms
    • MaxNumTokensParsed
    • StopWords
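
    Of these, SetMaxDocFreqPct(int) is the only threshold expressed as a percentage: it presumably converts a fraction of the index's document count into the absolute MaxDocFreq cap. A sketch of that arithmetic (illustrative only; the exact rounding inside Lucene.NET may differ):

    ```csharp
    using System;

    // Hypothetical sketch of the percentage-to-absolute conversion: terms
    // appearing in more than (maxPercentage% of numDocs) documents would be
    // ignored, matching the MaxDocFreq semantics described on this page.
    static int MaxDocFreqFromPct(int maxPercentage, int numDocs)
    {
        if (maxPercentage < 0 || maxPercentage > 100)
            throw new ArgumentOutOfRangeException(nameof(maxPercentage));
        return maxPercentage * numDocs / 100;  // integer arithmetic, rounds down
    }

    Console.WriteLine(MaxDocFreqFromPct(25, 1_000_000));  // prints 250000
    ```

    A percentage cap scales with the index, so the same setting behaves sensibly on both small and large collections, which is why you would prefer it over a hard-coded MaxDocFreq count.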
    Inheritance
    object
    MoreLikeThis
    Inherited Members
    object.Equals(object)
    object.Equals(object, object)
    object.GetHashCode()
    object.GetType()
    object.ReferenceEquals(object, object)
    object.ToString()
    Namespace: Lucene.Net.Queries.Mlt
    Assembly: Lucene.Net.Queries.dll
    Syntax
    public sealed class MoreLikeThis
    Remarks

    Changes: Mark Harwood, 29/02/04 - some bug fixing, some refactoring, some optimisation.

    • bugfix: retrieveTerms(int docNum) was not working for indexes without a term vector - added missing code
    • bugfix: no significant terms were being created for fields with a term vector, because only one occurrence per term/field pair was counted (i.e., frequency info from the TermVector was not included)
    • refactor: moved common code into isNoiseWord()
    • optimise: when no term vector support is available, use maxNumTokensParsed to limit the amount of tokenization

    Constructors

    MoreLikeThis(IndexReader)

    Constructor requiring a Lucene.Net.Index.IndexReader.

    Declaration
    public MoreLikeThis(IndexReader ir)
    Parameters
    Type Name Description
    IndexReader ir

    MoreLikeThis(IndexReader, TFIDFSimilarity)

    Constructor taking a Lucene.Net.Index.IndexReader and the TFIDFSimilarity to use for idf() calculations.
    Declaration
    public MoreLikeThis(IndexReader ir, TFIDFSimilarity sim)
    Parameters
    Type Name Description
    IndexReader ir
    TFIDFSimilarity sim

    Fields

    DEFAULT_BOOST

    Boost terms in query based on score.

    Declaration
    public static readonly bool DEFAULT_BOOST
    Field Value
    Type Description
    bool
    See Also
    ApplyBoost

    DEFAULT_FIELD_NAMES

    Default field names. Null is used to specify that the field names should be looked up at runtime from the provided reader.

    Declaration
    public static readonly string[] DEFAULT_FIELD_NAMES
    Field Value
    Type Description
    string[]

    DEFAULT_MAX_DOC_FREQ

    Ignore words which occur in more than this many docs.

    Declaration
    public static readonly int DEFAULT_MAX_DOC_FREQ
    Field Value
    Type Description
    int
    See Also
    MaxDocFreq
    SetMaxDocFreqPct(int)

    DEFAULT_MAX_NUM_TOKENS_PARSED

    Default maximum number of tokens to parse in each example doc field that is not stored with TermVector support.

    Declaration
    public static readonly int DEFAULT_MAX_NUM_TOKENS_PARSED
    Field Value
    Type Description
    int
    See Also
    MaxNumTokensParsed

    DEFAULT_MAX_QUERY_TERMS

    Return a Query with no more than this many terms.

    Declaration
    public static readonly int DEFAULT_MAX_QUERY_TERMS
    Field Value
    Type Description
    int
    See Also
    MaxClauseCount
    MaxQueryTerms

    DEFAULT_MAX_WORD_LENGTH

    Ignore words longer than this length; if 0, this has no effect.

    Declaration
    public static readonly int DEFAULT_MAX_WORD_LENGTH
    Field Value
    Type Description
    int
    See Also
    MaxWordLen

    DEFAULT_MIN_DOC_FREQ

    Ignore words which do not occur in at least this many docs.

    Declaration
    public static readonly int DEFAULT_MIN_DOC_FREQ
    Field Value
    Type Description
    int
    See Also
    MinDocFreq

    DEFAULT_MIN_TERM_FREQ

    Ignore terms with less than this frequency in the source doc.

    Declaration
    public static readonly int DEFAULT_MIN_TERM_FREQ
    Field Value
    Type Description
    int
    See Also
    MinTermFreq

    DEFAULT_MIN_WORD_LENGTH

    Ignore words shorter than this length; if 0, this has no effect.

    Declaration
    public static readonly int DEFAULT_MIN_WORD_LENGTH
    Field Value
    Type Description
    int
    See Also
    MinWordLen

    DEFAULT_STOP_WORDS

    Default set of stopwords. If null, stop words are allowed.

    Declaration
    public static readonly ISet<string> DEFAULT_STOP_WORDS
    Field Value
    Type Description
    ISet<string>
    See Also
    StopWords

    Properties

    Analyzer

    Gets or Sets the analyzer used to parse the source doc. By default no analyzer is set. An analyzer is not required for generating a query with the Like(int) method; all other 'like' methods require one.

    Declaration
    public Analyzer Analyzer { get; set; }
    Property Value
    Type Description
    Analyzer

    ApplyBoost

    Gets or Sets whether to boost terms in the query based on their "score". The default is DEFAULT_BOOST.

    Declaration
    public bool ApplyBoost { get; set; }
    Property Value
    Type Description
    bool

    BoostFactor

    Gets or Sets the boost factor used when boosting terms.

    Declaration
    public float BoostFactor { get; set; }
    Property Value
    Type Description
    float

    FieldNames

    Gets or Sets the field names that will be used when generating the 'More Like This' query. The default field names that will be used is DEFAULT_FIELD_NAMES. Set this to null for the field names to be determined at runtime from the Lucene.Net.Index.IndexReader provided in the constructor.

    Declaration
    public string[] FieldNames { get; set; }
    Property Value
    Type Description
    string[]

    MaxDocFreq

    Gets or Sets the maximum document frequency at which words may still appear. Words that appear in more than this many docs will be ignored. The default is DEFAULT_MAX_DOC_FREQ.

    Declaration
    public int MaxDocFreq { get; set; }
    Property Value
    Type Description
    int

    MaxNumTokensParsed

    Gets or Sets the maximum number of tokens to parse in each example doc field that is not stored with TermVector support.

    Declaration
    public int MaxNumTokensParsed { get; set; }
    Property Value
    Type Description
    int

    See Also
    DEFAULT_MAX_NUM_TOKENS_PARSED

    MaxQueryTerms

    Gets or Sets the maximum number of query terms that will be included in any generated query. The default is DEFAULT_MAX_QUERY_TERMS.

    Declaration
    public int MaxQueryTerms { get; set; }
    Property Value
    Type Description
    int

    MaxWordLen

    Gets or Sets the maximum word length above which words will be ignored. Set this to 0 for no maximum word length. The default is DEFAULT_MAX_WORD_LENGTH.

    Declaration
    public int MaxWordLen { get; set; }
    Property Value
    Type Description
    int

    MinDocFreq

    Gets or Sets the minimum document frequency; words that do not occur in at least this many docs will be ignored. The default is DEFAULT_MIN_DOC_FREQ.

    Declaration
    public int MinDocFreq { get; set; }
    Property Value
    Type Description
    int

    MinTermFreq

    Gets or Sets the frequency below which terms will be ignored in the source doc. The default is DEFAULT_MIN_TERM_FREQ.

    Declaration
    public int MinTermFreq { get; set; }
    Property Value
    Type Description
    int

    MinWordLen

    Gets or Sets the minimum word length below which words will be ignored. Set this to 0 for no minimum word length. The default is DEFAULT_MIN_WORD_LENGTH.

    Declaration
    public int MinWordLen { get; set; }
    Property Value
    Type Description
    int

    Similarity

    For idf() calculations.

    Declaration
    public TFIDFSimilarity Similarity { get; set; }
    Property Value
    Type Description
    TFIDFSimilarity

    StopWords

    Gets or Sets the set of stopwords. Any word in this set is considered "uninteresting" and ignored. Even if your Lucene.Net.Analysis.Analyzer allows stopwords, you might want to tell the MoreLikeThis code to ignore them, as for the purposes of document similarity it seems reasonable to assume that "a stop word is never interesting".

    Declaration
    public ISet<string> StopWords { get; set; }
    Property Value
    Type Description
    ISet<string>
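
    The stopword filtering described above is essentially a noise-word test (the Remarks on this page mention an isNoiseWord() helper). A hypothetical sketch, not the actual Lucene.NET implementation, of how a stopword set might combine with the word-length bounds:

    ```csharp
    using System;
    using System.Collections.Generic;

    // Hypothetical re-creation of an isNoiseWord-style check: a term is noise
    // if it is too short, too long (0 disables the bound), or a member of the
    // stopword set. Null stopWords means stop words are allowed.
    static bool IsNoiseWord(string term, int minWordLen, int maxWordLen,
                            ISet<string> stopWords)
    {
        int len = term.Length;
        if (minWordLen > 0 && len < minWordLen) return true;
        if (maxWordLen > 0 && len > maxWordLen) return true;
        return stopWords != null && stopWords.Contains(term);
    }

    var stops = new HashSet<string> { "the", "a" };
    Console.WriteLine(IsNoiseWord("the", 3, 20, stops));    // prints True
    Console.WriteLine(IsNoiseWord("lucene", 3, 20, stops)); // prints False
    ```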
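
    A minimal sketch of supplying a stop set (the words chosen are illustrative, not a recommended list):

    ```csharp
    // Sketch: any word in this set is skipped when selecting query terms,
    // even if the analyzer itself would let it through.
    MoreLikeThis mlt = new MoreLikeThis(indexReader); // indexReader: an open IndexReader (assumed)
    mlt.StopWords = new HashSet<string> { "a", "an", "the", "and", "or", "of" };
    ```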

    Methods

    DescribeParams()

    Describe the parameters that control how the "more like this" query is formed.

    Declaration
    public string DescribeParams()
    Returns
    Type Description
    string

    Like(TextReader, string)

    Return a query that will return docs like the passed TextReader.

    Declaration
    public Query Like(TextReader r, string fieldName)
    Parameters
    Type Name Description
    TextReader r
    string fieldName
    Returns
    Type Description
    Query

    a query that will return docs like the passed TextReader.


    Like(int)

    Return a query that will return docs like the passed Lucene document ID.

    Declaration
    public Query Like(int docNum)
    Parameters
    Type Name Description
    int docNum

    the document ID of the Lucene doc to generate the "more like this" query for

    Returns
    Type Description
    Query

    a query that will return docs like the passed Lucene document ID.
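
    A hedged sketch of using an existing document as the similarity source (the doc ID, field name, hit count, and searcher variable are illustrative):

    ```csharp
    MoreLikeThis mlt = new MoreLikeThis(indexReader);
    mlt.FieldNames = new[] { "body" };            // field(s) to draw terms from (illustrative)
    Query query = mlt.Like(42);                   // 42: ID of an already-indexed document
    TopDocs similar = searcher.Search(query, 10); // searcher: an IndexSearcher (assumed)
    // remember to skip doc 42 itself in the results - it will, of course, match strongly
    ```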


    RetrieveInterestingTerms(TextReader, string)

    Convenience routine to make it easy to return the most interesting words in a document. More advanced users will call RetrieveTerms(TextReader, string) directly.

    Declaration
    public string[] RetrieveInterestingTerms(TextReader r, string fieldName)
    Parameters
    Type Name Description
    TextReader r

    the source document

    string fieldName

    field passed to analyzer to use when analyzing the content

    Returns
    Type Description
    string[]

    the most interesting words in the document

    See Also
    RetrieveTerms(TextReader, string)
    MaxQueryTerms

    RetrieveInterestingTerms(int)

    Convenience routine to make it easy to return the most interesting words in a document. More advanced users will call RetrieveTerms(int) directly.

    Initial Usage

    This class has lots of options to try to make it efficient and flexible. The simplest possible usage is as follows:

    IndexReader ir = ...
    IndexSearcher searcher = ...

    MoreLikeThis mlt = new MoreLikeThis(ir);
    TextReader target = ... // orig source of doc you want to find similarities to
    Query query = mlt.Like(target, "body"); // pass the field to analyze against

    TopDocs hits = searcher.Search(query, 10);
    // now the usual iteration through 'hits' - the only thing to watch for is to
    // make sure you ignore the doc if it matches your 'target' document, as it
    // should be similar to itself
    

    Thus you:

    • do your normal Lucene setup for searching,
    • create a MoreLikeThis,
    • get the text of the doc you want to find similarities to
    • then call one of the Like(TextReader, string) calls to generate a similarity query
    • call the searcher to find the similar docs

    More Advanced Usage

    You may want to use the setter for FieldNames so you can examine multiple fields (e.g. body and title) for similarity.

    Depending on the size of your index and the size and makeup of your documents you may want to call the other set methods to control how the similarity queries are generated:
    • MinTermFreq
    • MinDocFreq
    • MaxDocFreq
    • SetMaxDocFreqPct(int)
    • MinWordLen
    • MaxWordLen
    • MaxQueryTerms
    • MaxNumTokensParsed
    • StopWords
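
    Putting a few of these together, a configuration sketch might look like the following (all threshold values are illustrative, not recommendations):

    ```csharp
    MoreLikeThis mlt = new MoreLikeThis(indexReader); // indexReader: an open IndexReader (assumed)
    mlt.FieldNames = new[] { "title", "body" }; // examine several fields for similarity
    mlt.MinTermFreq = 2;     // ignore terms occurring fewer than 2 times in the source doc
    mlt.MinDocFreq = 5;      // ignore terms appearing in fewer than 5 docs in the index
    mlt.MaxQueryTerms = 25;  // cap the number of terms in the generated query
    ```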
    Declaration
    public string[] RetrieveInterestingTerms(int docNum)
    Parameters
    Type Name Description
    int docNum
    Returns
    Type Description
    string[]
    See Also
    RetrieveInterestingTerms(TextReader, string)

    RetrieveTerms(TextReader, string)

    Find words for a more-like-this query former. The result is a priority queue of ScoreTerm objects, with one entry for every word in the document. Each object has six properties:

    • The Word (string)
    • The TopField that this word comes from (string)
    • The Score for this word (float)
    • The Idf value (float)
    • The DocFreq (frequency of this word in the index (int))
    • The Tf (frequency of this word in the source document (int))
    This is a somewhat "advanced" routine, and in general only the Word is of interest. This method is exposed so that you can identify the "interesting words" in a document. For an easier method to call see RetrieveInterestingTerms(TextReader, string).
    Declaration
    public PriorityQueue<ScoreTerm> RetrieveTerms(TextReader r, string fieldName)
    Parameters
    Type Name Description
    TextReader r

    the reader that has the content of the document

    string fieldName

    field passed to the analyzer to use when analyzing the content

    Returns
    Type Description
    PriorityQueue<ScoreTerm>

    the most interesting words in the document ordered by score, with the highest scoring, or best entry, first

    Exceptions
    Type Condition
    IOException
    See Also
    RetrieveInterestingTerms(TextReader, string)
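
    A sketch of draining the returned queue (the field name and reader variable are illustrative; Pop() is assumed to return null once the queue is empty):

    ```csharp
    PriorityQueue<ScoreTerm> terms = mlt.RetrieveTerms(reader, "body"); // reader: a TextReader over the doc
    ScoreTerm st;
    while ((st = terms.Pop()) != null)
    {
        // best-scoring entries come out first
        Console.WriteLine($"{st.Word}\t{st.Score}");
    }
    ```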

    RetrieveTerms(int)

    Find words for a more-like-this query former.

    Declaration
    public PriorityQueue<ScoreTerm> RetrieveTerms(int docNum)
    Parameters
    Type Name Description
    int docNum

    the ID of the Lucene document from which to find terms

    Returns
    Type Description
    PriorityQueue<ScoreTerm>
    Exceptions
    Type Condition
    IOException

    SetMaxDocFreqPct(int)

    Set the maximum percentage of documents in which words may still appear. Words that appear in more than this percentage of all docs will be ignored.

    Declaration
    public void SetMaxDocFreqPct(int maxPercentage)
    Parameters
    Type Name Description
    int maxPercentage

    the maximum percentage of documents (0-100) that a term may appear in to be still considered relevant
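
    For instance, to drop any term that occurs in more than three quarters of the index (the percentage is illustrative; internally this is assumed to translate to an absolute MaxDocFreq of roughly maxPercentage * NumDocs / 100):

    ```csharp
    MoreLikeThis mlt = new MoreLikeThis(indexReader); // indexReader: an open IndexReader (assumed)
    mlt.SetMaxDocFreqPct(75); // ignore terms appearing in more than 75% of all docs
    ```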

    Copyright © 2024 The Apache Software Foundation, Licensed under the Apache License, Version 2.0
    Apache Lucene.Net, Lucene.Net, Apache, the Apache feather logo, and the Apache Lucene.Net project logo are trademarks of The Apache Software Foundation.
    All other marks mentioned may be trademarks or registered trademarks of their respective owners.