Uses of Class
org.apache.lucene.analysis.TokenStream
Packages that use TokenStream:
- org.apache.lucene.analysis: API and code to convert text into indexable/searchable tokens.
- org.apache.lucene.analysis.standard: Standards-based analyzers implemented with JFlex.
- org.apache.lucene.collation: CollationKeyFilter converts each token into its binary CollationKey using the provided Collator, and then encodes the CollationKey as a String using IndexableBinaryStringTools, to allow it to be stored as an index term.
- org.apache.lucene.document: The logical representation of a Document for indexing and searching.
Uses of TokenStream in org.apache.lucene.analysis
Subclasses of TokenStream in org.apache.lucene.analysis:
- class ASCIIFoldingFilter: Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if one exists.
- class CachingTokenFilter: Can be used if the token attributes of a TokenStream are intended to be consumed more than once.
- class CharTokenizer: An abstract base class for simple, character-oriented tokenizers.
- class FilteringTokenFilter: Abstract base class for TokenFilters that may remove tokens.
- class ISOLatin1AccentFilter: Deprecated. If you build a new index, use ASCIIFoldingFilter, which covers a superset of Latin 1.
- class KeywordMarkerFilter: Marks terms as keywords via the KeywordAttribute.
- class KeywordTokenizer: Emits the entire input as a single token.
- class LengthFilter: Removes words that are too long or too short from the stream.
- class LetterTokenizer: A tokenizer that divides text at non-letters.
- class LimitTokenCountFilter: Limits the number of tokens while indexing.
- class LowerCaseFilter: Normalizes token text to lower case.
- class LowerCaseTokenizer: Performs the function of LetterTokenizer and LowerCaseFilter together.
- class NumericTokenStream: Expert: provides a TokenStream for indexing numeric values that can be used by NumericRangeQuery or NumericRangeFilter.
- class PorterStemFilter: Transforms the token stream as per the Porter stemming algorithm.
- class StopFilter: Removes stop words from a token stream.
- class TeeSinkTokenFilter: Provides the ability to set aside attribute states that have already been analyzed.
- static class TeeSinkTokenFilter.SinkTokenStream: TokenStream output from a tee with optional filtering.
- class TokenFilter: A TokenStream whose input is another TokenStream.
- class Tokenizer: A TokenStream whose input is a Reader.
- class TypeTokenFilter: Removes tokens whose types appear in a set of blocked types from a token stream.
- class WhitespaceTokenizer: A tokenizer that divides text at whitespace.

Fields in org.apache.lucene.analysis declared as TokenStream:
- protected TokenStream TokenFilter.input: The source of tokens for this filter.
- protected TokenStream ReusableAnalyzerBase.TokenStreamComponents.sink

Methods in org.apache.lucene.analysis that return TokenStream:
- protected TokenStream ReusableAnalyzerBase.TokenStreamComponents.getTokenStream(): Returns the sink TokenStream.
- TokenStream Analyzer.reusableTokenStream(String fieldName, Reader reader): Creates a TokenStream that is allowed to be re-used from the previous time that the same thread called this method.
- TokenStream LimitTokenCountAnalyzer.reusableTokenStream(String fieldName, Reader reader)
- TokenStream PerFieldAnalyzerWrapper.reusableTokenStream(String fieldName, Reader reader)
- TokenStream ReusableAnalyzerBase.reusableTokenStream(String fieldName, Reader reader): Uses ReusableAnalyzerBase.createComponents(String, Reader) to obtain an instance of ReusableAnalyzerBase.TokenStreamComponents.
- abstract TokenStream Analyzer.tokenStream(String fieldName, Reader reader): Creates a TokenStream which tokenizes all the text in the provided Reader.
- TokenStream LimitTokenCountAnalyzer.tokenStream(String fieldName, Reader reader)
- TokenStream PerFieldAnalyzerWrapper.tokenStream(String fieldName, Reader reader)
- TokenStream ReusableAnalyzerBase.tokenStream(String fieldName, Reader reader): Uses ReusableAnalyzerBase.createComponents(String, Reader) to obtain an instance of ReusableAnalyzerBase.TokenStreamComponents and returns the sink of the components.

Constructors in org.apache.lucene.analysis with parameters of type TokenStream:
- ASCIIFoldingFilter(TokenStream input)
- CachingTokenFilter(TokenStream input)
- FilteringTokenFilter(boolean enablePositionIncrements, TokenStream input)
- ISOLatin1AccentFilter(TokenStream input): Deprecated.
- KeywordMarkerFilter(TokenStream in, Set<?> keywordSet): Creates a new KeywordMarkerFilter that marks the current token as a keyword, via the KeywordAttribute, if the token's term buffer is contained in the given set.
- KeywordMarkerFilter(TokenStream in, CharArraySet keywordSet): Creates a new KeywordMarkerFilter that marks the current token as a keyword, via the KeywordAttribute, if the token's term buffer is contained in the given set.
- LengthFilter(boolean enablePositionIncrements, TokenStream in, int min, int max): Builds a filter that removes words that are too long or too short from the text.
- LengthFilter(TokenStream in, int min, int max): Deprecated. Use LengthFilter(boolean, TokenStream, int, int) instead.
- LimitTokenCountFilter(TokenStream in, int maxTokenCount): Builds a filter that only accepts tokens up to a maximum number.
- LowerCaseFilter(TokenStream in): Deprecated. Use LowerCaseFilter(Version, TokenStream) instead.
- LowerCaseFilter(Version matchVersion, TokenStream in): Creates a new LowerCaseFilter that normalizes token text to lower case.
- PorterStemFilter(TokenStream in)
- StopFilter(boolean enablePositionIncrements, TokenStream in, Set<?> stopWords): Deprecated. Use StopFilter(Version, TokenStream, Set) instead.
- StopFilter(boolean enablePositionIncrements, TokenStream input, Set<?> stopWords, boolean ignoreCase): Deprecated. Use StopFilter(Version, TokenStream, Set) instead.
- StopFilter(Version matchVersion, TokenStream in, Set<?> stopWords): Constructs a filter which removes words from the input TokenStream that are named in the Set.
- StopFilter(Version matchVersion, TokenStream input, Set<?> stopWords, boolean ignoreCase): Deprecated. Use StopFilter(Version, TokenStream, Set) instead.
- TeeSinkTokenFilter(TokenStream input): Instantiates a new TeeSinkTokenFilter.
- TokenFilter(TokenStream input): Constructs a token stream filtering the given input.
- TokenStreamComponents(Tokenizer source, TokenStream result): Creates a new ReusableAnalyzerBase.TokenStreamComponents instance.
- TypeTokenFilter(boolean enablePositionIncrements, TokenStream input, Set<String> stopTypes)
- TypeTokenFilter(boolean enablePositionIncrements, TokenStream input, Set<String> stopTypes, boolean useWhiteList)
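The classes above compose into a pipeline: a Tokenizer produces the initial TokenStream from a Reader, and each TokenFilter wraps the stream before it. A minimal sketch, assuming Lucene 3.x on the classpath (Version.LUCENE_36 and the sample text are illustrative):

```java
import java.io.StringReader;

import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.PorterStemFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class TokenChainSketch {
    public static void main(String[] args) throws Exception {
        // Tokenizer at the bottom of the chain; each filter wraps the stream before it.
        TokenStream stream = new WhitespaceTokenizer(Version.LUCENE_36,
                new StringReader("The Quick BROWN Foxes Jumped"));
        stream = new LowerCaseFilter(Version.LUCENE_36, stream);
        stream = new PorterStemFilter(stream);

        // Consume the stream through the attribute API.
        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        stream.reset();
        while (stream.incrementToken()) {
            System.out.println(term.toString());
        }
        stream.end();
        stream.close();
    }
}
```

The consumer contract (reset, incrementToken in a loop, then end and close) is the same regardless of how many filters are stacked.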
Uses of TokenStream in org.apache.lucene.analysis.standard
Subclasses of TokenStream in org.apache.lucene.analysis.standard:
- class ClassicFilter: Normalizes tokens extracted with ClassicTokenizer.
- class ClassicTokenizer: A grammar-based tokenizer constructed with JFlex.
- class StandardFilter: Normalizes tokens extracted with StandardTokenizer.
- class StandardTokenizer: A grammar-based tokenizer constructed with JFlex.
- class UAX29URLEmailTokenizer: Implements the Word Break rules from the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29; URLs and email addresses are also tokenized according to the relevant RFCs.

Constructors in org.apache.lucene.analysis.standard with parameters of type TokenStream:
- ClassicFilter(TokenStream in): Constructs a filter over the given input.
- StandardFilter(TokenStream in): Deprecated. Use StandardFilter(Version, TokenStream) instead.
- StandardFilter(Version matchVersion, TokenStream in)
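StandardTokenizer and StandardFilter are used the same way as the core tokenizers: the filter wraps the tokenizer's TokenStream. A minimal sketch, assuming Lucene 3.x on the classpath (the version constant and sample text are illustrative):

```java
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class StandardChainSketch {
    public static void main(String[] args) throws Exception {
        // Grammar-based tokenization, then normalization of the extracted tokens.
        TokenStream stream = new StandardTokenizer(Version.LUCENE_36,
                new StringReader("Lucene is a full-text search library."));
        stream = new StandardFilter(Version.LUCENE_36, stream);

        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        stream.reset();
        while (stream.incrementToken()) {
            System.out.println(term.toString());
        }
        stream.end();
        stream.close();
    }
}
```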
Uses of TokenStream in org.apache.lucene.collation
Subclasses of TokenStream in org.apache.lucene.collation:
- class CollationKeyFilter: Converts each token into its CollationKey, and then encodes the CollationKey with IndexableBinaryStringTools, to allow it to be stored as an index term.

Methods in org.apache.lucene.collation that return TokenStream:
- TokenStream CollationKeyAnalyzer.reusableTokenStream(String fieldName, Reader reader)
- TokenStream CollationKeyAnalyzer.tokenStream(String fieldName, Reader reader)

Constructors in org.apache.lucene.collation with parameters of type TokenStream:
- CollationKeyFilter(TokenStream input, Collator collator)
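CollationKeyFilter takes a java.text.Collator, so locale-sensitive sort keys can be indexed in place of raw terms. A minimal sketch, assuming Lucene 3.x on the classpath (the locale and sample text are illustrative):

```java
import java.io.StringReader;
import java.text.Collator;
import java.util.Locale;

import org.apache.lucene.analysis.KeywordTokenizer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.collation.CollationKeyFilter;

public class CollationSketch {
    public static void main(String[] args) throws Exception {
        Collator collator = Collator.getInstance(Locale.FRENCH);
        // Emit the whole input as one token, then replace it with its
        // encoded CollationKey so it sorts correctly as an index term.
        TokenStream stream = new KeywordTokenizer(new StringReader("résumé"));
        stream = new CollationKeyFilter(stream, collator);
        stream.reset();
        while (stream.incrementToken()) {
            // Each token's text is now the encoded collation key.
        }
        stream.end();
        stream.close();
    }
}
```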
Uses of TokenStream in org.apache.lucene.document
Fields in org.apache.lucene.document declared as TokenStream:
- protected TokenStream AbstractField.tokenStream

Methods in org.apache.lucene.document that return TokenStream:
- TokenStream Field.tokenStreamValue(): The TokenStream for this field to be used when indexing, or null.
- TokenStream Fieldable.tokenStreamValue(): The TokenStream for this field to be used when indexing, or null.
- TokenStream NumericField.tokenStreamValue(): Returns a NumericTokenStream for indexing the numeric value.

Methods in org.apache.lucene.document with parameters of type TokenStream:
- void Field.setTokenStream(TokenStream tokenStream): Expert: sets the token stream to be used for indexing and causes isIndexed() and isTokenized() to return true.

Constructors in org.apache.lucene.document with parameters of type TokenStream:
- Field(String name, TokenStream tokenStream): Creates a tokenized and indexed field that is not stored.
- Field(String name, TokenStream tokenStream, Field.TermVector termVector): Creates a tokenized and indexed field that is not stored, optionally with storing term vectors.
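The Field(String, TokenStream) constructor lets a pre-built (e.g. pre-analyzed) TokenStream feed indexing directly, bypassing the document-level Analyzer for that field. A minimal sketch, assuming Lucene 3.x on the classpath (field name and text are illustrative):

```java
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.util.Version;

public class TokenStreamFieldSketch {
    public static void main(String[] args) {
        // Build the token stream ourselves instead of letting the
        // Analyzer do it at addDocument time.
        TokenStream stream = new WhitespaceTokenizer(Version.LUCENE_36,
                new StringReader("pre analyzed field text"));

        Document doc = new Document();
        // Tokenized and indexed, but not stored.
        doc.add(new Field("body", stream));
        // doc would then be passed to IndexWriter.addDocument(doc).
    }
}
```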