Package org.apache.lucene.analysis
Class Analyzer
- java.lang.Object
  - org.apache.lucene.analysis.Analyzer
- All Implemented Interfaces:
Closeable, AutoCloseable
- Direct Known Subclasses:
CollationKeyAnalyzer, LimitTokenCountAnalyzer, PerFieldAnalyzerWrapper, ReusableAnalyzerBase
public abstract class Analyzer extends Object implements Closeable
An Analyzer builds TokenStreams, which analyze text. It thus represents a policy for extracting index terms from text. Typical implementations first build a Tokenizer, which breaks the stream of characters from the Reader into raw Tokens. One or more TokenFilters may then be applied to the output of the Tokenizer.
The Analyzer API in Lucene is based on the decorator pattern. Therefore all non-abstract subclasses must be final, or their tokenStream(java.lang.String, java.io.Reader) and reusableTokenStream(java.lang.String, java.io.Reader) implementations must be final. This is checked when Java assertions are enabled.
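The tokenizer-plus-filters composition described above can be sketched with simplified stand-in classes. Note these are not Lucene's actual TokenStream, Tokenizer, or TokenFilter types; they are a plain-Java illustration of the decorator pattern the Analyzer API follows, with a whitespace "tokenizer" wrapped by a lower-casing "filter":

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins (not Lucene classes) illustrating the decorator
// composition that an Analyzer's tokenStream method encapsulates.
interface SimpleTokenStream {
    String next(); // returns the next token, or null when exhausted
}

// "Tokenizer" role: breaks raw text into whitespace-separated tokens.
class WhitespaceTokenizer implements SimpleTokenStream {
    private final String[] parts;
    private int i = 0;
    WhitespaceTokenizer(String text) { this.parts = text.trim().split("\\s+"); }
    public String next() { return i < parts.length ? parts[i++] : null; }
}

// "TokenFilter" role: decorates another stream, lower-casing each token.
class LowerCaseFilter implements SimpleTokenStream {
    private final SimpleTokenStream input;
    LowerCaseFilter(SimpleTokenStream input) { this.input = input; }
    public String next() {
        String t = input.next();
        return t == null ? null : t.toLowerCase();
    }
}

public class DecoratorDemo {
    // Analogue of Analyzer.tokenStream: the "policy" is which tokenizer
    // to build and which filters to wrap around it.
    static SimpleTokenStream tokenStream(String text) {
        return new LowerCaseFilter(new WhitespaceTokenizer(text));
    }

    public static void main(String[] args) {
        List<String> terms = new ArrayList<>();
        SimpleTokenStream ts = tokenStream("The Quick FOX");
        for (String t = ts.next(); t != null; t = ts.next()) terms.add(t);
        System.out.println(terms); // [the, quick, fox]
    }
}
```

Adding another filter is just another layer of wrapping, which is why the final-method requirement matters: the chain an Analyzer builds must stay consistent for reuse.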
Constructor Summary
protected Analyzer()
-
Method Summary
void close()
    Frees persistent resources used by this Analyzer.
int getOffsetGap(Fieldable field)
    Just like getPositionIncrementGap(java.lang.String), except for Token offsets instead.
int getPositionIncrementGap(String fieldName)
    Invoked before indexing a Fieldable instance if terms have already been added to that field.
protected Object getPreviousTokenStream()
    Used by Analyzers that implement reusableTokenStream to retrieve previously saved TokenStreams for re-use by the same thread.
TokenStream reusableTokenStream(String fieldName, Reader reader)
    Creates a TokenStream that is allowed to be re-used from the previous time that the same thread called this method.
protected void setPreviousTokenStream(Object obj)
    Used by Analyzers that implement reusableTokenStream to save a TokenStream for later re-use by the same thread.
abstract TokenStream tokenStream(String fieldName, Reader reader)
    Creates a TokenStream which tokenizes all the text in the provided Reader.
Method Detail
-
tokenStream
public abstract TokenStream tokenStream(String fieldName, Reader reader)
Creates a TokenStream which tokenizes all the text in the provided Reader. Must be able to handle null field name for backward compatibility.
-
reusableTokenStream
public TokenStream reusableTokenStream(String fieldName, Reader reader) throws IOException
Creates a TokenStream that is allowed to be re-used from the previous time that the same thread called this method. Callers that do not need to use more than one TokenStream at the same time from this analyzer should use this method for better performance.
Throws:
    IOException
-
getPreviousTokenStream
protected Object getPreviousTokenStream()
Used by Analyzers that implement reusableTokenStream to retrieve previously saved TokenStreams for re-use by the same thread.
-
setPreviousTokenStream
protected void setPreviousTokenStream(Object obj)
Used by Analyzers that implement reusableTokenStream to save a TokenStream for later re-use by the same thread.
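A typical reusableTokenStream implementation combines the two hooks above with per-thread storage. The sketch below is not Lucene code: it uses a plain ThreadLocal and a trivial ReusableStream stand-in (both hypothetical) to show the cache-then-reset pattern that getPreviousTokenStream()/setPreviousTokenStream(Object) support:

```java
// Sketch (not Lucene code) of reusableTokenStream-style caching: the
// analyzer keeps one reusable object per thread, mirroring the
// getPreviousTokenStream()/setPreviousTokenStream(Object) hooks.
public class ReusableDemo {
    private final ThreadLocal<Object> previousTokenStream = new ThreadLocal<>();

    protected Object getPreviousTokenStream() { return previousTokenStream.get(); }
    protected void setPreviousTokenStream(Object obj) { previousTokenStream.set(obj); }

    // Stand-in for a TokenStream that can be reset with new input.
    static class ReusableStream {
        String text;
        void reset(String text) { this.text = text; }
    }

    // Analogue of reusableTokenStream: reuse this thread's instance if present.
    public ReusableStream reusableTokenStream(String text) {
        ReusableStream stream = (ReusableStream) getPreviousTokenStream();
        if (stream == null) {
            stream = new ReusableStream();  // first call on this thread
            setPreviousTokenStream(stream);
        }
        stream.reset(text);                 // re-initialize, don't reallocate
        return stream;
    }
}
```

Because the cache is per-thread, two threads never share a stream, but repeated calls on one thread return the same re-initialized instance, which is where the performance benefit comes from.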
-
getPositionIncrementGap
public int getPositionIncrementGap(String fieldName)
Invoked before indexing a Fieldable instance if terms have already been added to that field. This allows custom analyzers to place an automatic position increment gap between Fieldable instances using the same field name. The default position increment gap is 0. With a 0 position increment gap and the typical default token position increment of 1, all terms in a field, including across Fieldable instances, are in successive positions, allowing exact PhraseQuery matches, for instance, across Fieldable instance boundaries.
Parameters:
    fieldName - Fieldable name being indexed.
Returns:
    position increment gap, added to the next token emitted from tokenStream(String,Reader)
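The effect of the gap on token positions can be shown with a small worked example. This is plain Java, not Lucene: the positions method below (a hypothetical helper) assigns positions the way the defaults described above do, with a per-token increment of 1 and an extra gap between field instances:

```java
import java.util.ArrayList;
import java.util.List;

// Worked example (plain Java, not Lucene) of how a position increment gap
// affects token positions when several values are indexed into one field.
public class PositionGapDemo {
    // Assign positions to tokens from successive field values, inserting
    // `gap` extra increments between values, as getPositionIncrementGap does.
    static List<Integer> positions(String[][] fieldValues, int gap) {
        List<Integer> result = new ArrayList<>();
        int pos = -1; // the first token's increment of 1 brings this to 0
        for (int v = 0; v < fieldValues.length; v++) {
            if (v > 0) pos += gap;        // gap applied between field instances
            for (String token : fieldValues[v]) {
                pos += 1;                 // default position increment of 1
                result.add(pos);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        String[][] values = { {"quick", "fox"}, {"lazy", "dog"} };
        // gap 0: positions run consecutively, so a phrase like "fox lazy"
        // would match across the value boundary.
        System.out.println(positions(values, 0));   // [0, 1, 2, 3]
        // gap 100: a large hole prevents cross-boundary phrase matches.
        System.out.println(positions(values, 100)); // [0, 1, 102, 103]
    }
}
```

Returning a large gap from an overridden getPositionIncrementGap is the standard way to keep multi-valued fields from matching phrases across value boundaries.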
-
getOffsetGap
public int getOffsetGap(Fieldable field)
Just like getPositionIncrementGap(java.lang.String), except for Token offsets instead. By default this returns 1 for tokenized fields, as if the fields were joined with an extra space character, and 0 for un-tokenized fields. This method is only called if the field produced at least one token for indexing.
Parameters:
    field - the field just indexed
Returns:
    offset gap, added to the next token emitted from tokenStream(String,Reader)
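The "extra space" behavior amounts to simple arithmetic on character offsets. A minimal sketch (plain Java, not Lucene; nextBaseOffset is a hypothetical helper) of where the next field value's offsets begin:

```java
// Worked example (plain Java, not Lucene) of the offset gap: with the
// default gap of 1 for tokenized fields, offsets behave as if successive
// field values were joined by a single extra space character.
public class OffsetGapDemo {
    // End offset of one field value, plus the gap, gives the base offset
    // for the next value's tokens.
    static int nextBaseOffset(int previousEndOffset, int offsetGap) {
        return previousEndOffset + offsetGap;
    }

    public static void main(String[] args) {
        String first = "quick fox"; // character offsets 0..9 in this value
        int gap = 1;                // default for tokenized fields
        // Tokens of the next value start as if the text were "quick fox ":
        System.out.println(nextBaseOffset(first.length(), gap)); // 10
    }
}
```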
-
close
public void close()
Frees persistent resources used by this Analyzer.
Specified by:
    close in interface AutoCloseable
    close in interface Closeable