All Classes
| Class | Description |
| AbstractAllTermDocs |
Base class for enumerating all but deleted docs.
|
| AbstractField |
Base class for Field implementations
|
| AlreadyClosedException |
This exception is thrown when there is an attempt to
access something that has already been closed.
|
| Analyzer |
An Analyzer builds TokenStreams, which analyze text.
|
| ArrayUtil |
Methods for manipulating arrays.
|
| ASCIIFoldingFilter |
This class converts alphabetic, numeric, and symbolic Unicode characters
which are not in the first 127 ASCII characters (the "Basic Latin" Unicode
block) into their ASCII equivalents, if one exists.
|
| Attribute |
Base interface for attributes.
|
| AttributeImpl |
|
| AttributeReflector |
|
| AttributeSource |
An AttributeSource contains a list of different AttributeImpls,
and methods to add and get them.
|
| AttributeSource.AttributeFactory |
|
| AttributeSource.State |
This class holds the state of an AttributeSource.
|
| AveragePayloadFunction |
Calculate the final score as the average score of all payloads seen.
|
| BaseCharFilter |
|
| Bits |
Interface for Bitset-like structures.
|
| Bits.MatchAllBits |
Bits impl of the specified length with all bits set.
|
| Bits.MatchNoBits |
Bits impl of the specified length with no bits set.
|
| BitUtil |
A variety of high efficiency bit twiddling routines.
|
| BitVector |
Optimized implementation of a vector of bits.
|
| BooleanClause |
A clause in a BooleanQuery.
|
| BooleanClause.Occur |
Specifies how clauses are to occur in matching documents.
|
| BooleanQuery |
A Query that matches documents matching boolean combinations of other
queries, e.g.
|
| BooleanQuery.TooManyClauses |
|
| BufferedIndexInput |
Base implementation class for buffered IndexInput.
|
| BufferedIndexOutput |
|
| Builder<T> |
Builds a minimal FST (maps an IntsRef term to an arbitrary
output) from pre-sorted terms with outputs.
|
| Builder.Arc<T> |
Expert: holds a pending (seen but not yet serialized) arc.
|
| Builder.FreezeTail<T> |
Expert: this is invoked by Builder whenever a suffix
is serialized.
|
| Builder.UnCompiledNode<T> |
Expert: holds a pending (seen but not yet serialized) Node.
|
| ByteArrayDataInput |
DataInput backed by a byte array.
|
| ByteArrayDataOutput |
DataOutput backed by a byte array.
|
| ByteBlockPool |
Class that Posting and PostingVector use to write byte
streams into shared fixed-size byte[] arrays.
|
| ByteBlockPool.Allocator |
Abstract class for allocating and freeing byte
blocks.
|
| ByteBlockPool.DirectAllocator |
|
| ByteBlockPool.DirectTrackingAllocator |
|
| ByteFieldSource |
Expert: obtains single byte field values from the
FieldCache
using getBytes() and makes those values
available as other numeric types, casting as needed.
|
| ByteSequenceOutputs |
An FST Outputs implementation where each output
is a sequence of bytes.
|
| BytesRef |
Represents byte[], as a slice (offset + length) into an
existing byte[].
|
| BytesRefFSTEnum<T> |
Enumerates all input (BytesRef) + output pairs in an
FST.
|
| BytesRefFSTEnum.InputOutput<T> |
Holds a single input (BytesRef) + output pair.
|
| BytesRefHash |
|
| BytesRefHash.BytesStartArray |
Manages allocation of the per-term addresses.
|
| BytesRefHash.DirectBytesStartArray |
|
| BytesRefHash.MaxBytesLengthExceededException |
|
| BytesRefHash.TrackingDirectBytesStartArray |
|
| BytesRefIterator |
A simple iterator interface for BytesRef iteration.
|
| CachingCollector |
Caches all docs, and optionally also scores, coming from
a search, and is then able to replay them to another
collector.
|
| CachingSpanFilter |
Wraps another SpanFilter's result and caches it.
|
| CachingTokenFilter |
This class can be used if the token attributes of a TokenStream
are intended to be consumed more than once.
|
| CachingWrapperFilter |
Wraps another filter's result and caches it.
|
| CachingWrapperFilter.DeletesMode |
Expert: Specifies how new deletions against a reopened
reader should be handled.
|
| CharacterUtils |
CharacterUtils provides a unified interface to Character-related
operations to implement backwards compatible character operations based on a
Version instance.
|
| CharacterUtils.CharacterBuffer |
|
| CharArrayMap<V> |
A simple class that stores key Strings as char[]'s in a
hash table.
|
| CharArraySet |
A simple class that stores Strings as char[]'s in a
hash table.
|
| CharFilter |
Subclasses of CharFilter can be chained to filter CharStream.
|
| CharReader |
CharReader is a Reader wrapper.
|
| CharsRef |
Represents char[], as a slice (offset + length) into an existing char[].
|
| CharStream |
|
| CharStream |
This interface describes a character stream that maintains line and
column number positions of the characters.
|
| CharTermAttribute |
The term text of a Token.
|
| CharTermAttributeImpl |
The term text of a Token.
|
| CharTokenizer |
An abstract base class for simple, character-oriented tokenizers.
|
| CheckIndex |
Basic tool and API to check the health of an index and
write a new segments file that removes reference to
problematic segments.
|
| CheckIndex.Status |
|
| CheckIndex.Status.FieldNormStatus |
Status from testing field norms.
|
| CheckIndex.Status.SegmentInfoStatus |
Holds the status of each segment in the index.
|
| CheckIndex.Status.StoredFieldStatus |
Status from testing stored fields.
|
| CheckIndex.Status.TermIndexStatus |
Status from testing term index.
|
| CheckIndex.Status.TermVectorStatus |
Status from testing term vectors.
|
| ChecksumIndexInput |
Reads bytes through to a primary IndexInput, computing
a checksum as it goes.
|
| ChecksumIndexOutput |
Writes bytes through to a primary IndexOutput, computing
checksum.
|
| ClassicAnalyzer |
|
| ClassicFilter |
|
| ClassicTokenizer |
A grammar-based tokenizer constructed with JFlex
|
| CloseableThreadLocal<T> |
Java's builtin ThreadLocal has a serious flaw:
it can take an arbitrarily long amount of time to
dereference the things you had stored in it, even once the
ThreadLocal instance itself is no longer referenced.
|
| CodecUtil |
Utility class for reading and writing versioned headers.
|
| CollationKeyAnalyzer |
|
| CollationKeyFilter |
|
| CollectionUtil |
Methods for manipulating (sorting) collections.
|
| Collector |
Expert: Collectors are primarily meant to be used to
gather raw results from a search, and implement sorting
or custom result filtering, collation, etc.
|
| CommandLineUtil |
Class containing some useful methods used by command line tools
|
| ComplexExplanation |
Expert: Describes the score computation for document and query, and
can distinguish a match independent of a positive value.
|
| CompoundFileWriter |
Combines multiple files into a single compound file.
|
| CompressionTools |
Simple utility class providing static methods to
compress and decompress binary data for stored fields.
|
| ConcurrentMergeScheduler |
|
| Constants |
Some useful constants.
|
| ConstantScoreQuery |
A query that wraps another query or a filter and simply returns a constant score equal to the
query boost for every document that matches the filter or query.
|
| CorruptIndexException |
This exception is thrown when Lucene detects
an inconsistency in the index.
|
| Counter |
Simple counter class
|
| CustomScoreProvider |
|
| CustomScoreQuery |
Query that sets document score as a programmatic function of several (sub) scores:
the score of its subQuery (any query)
(optional) the score of its ValueSourceQuery (or queries).
|
| DataInput |
Abstract base class for performing read operations of Lucene's low-level
data types.
|
| DataOutput |
Abstract base class for performing write operations of Lucene's low-level
data types.
|
| DateField |
Deprecated.
|
| DateTools |
Provides support for converting dates to strings and vice-versa.
|
| DateTools.Resolution |
Specifies the time granularity.
|
| DefaultSimilarity |
Expert: Default scoring implementation.
|
| Directory |
A Directory is a flat list of files.
|
| DisjunctionMaxQuery |
A query that generates the union of documents produced by its subqueries, and that scores each document with the maximum
score for that document as produced by any subquery, plus a tie breaking increment for any additional matching subqueries.
|
| DocIdBitSet |
Simple DocIdSet and DocIdSetIterator backed by a BitSet
|
| DocIdSet |
A DocIdSet contains a set of doc ids.
|
| DocIdSetIterator |
This abstract class defines methods to iterate over a set of non-decreasing
doc ids.
|
| Document |
Documents are the unit of indexing and search.
|
| DocValues |
Expert: represents field values as different types.
|
| DoubleBarrelLRUCache<K extends DoubleBarrelLRUCache.CloneableKey,V> |
Simple concurrent LRU cache, using a "double barrel"
approach where two ConcurrentHashMaps record entries.
|
| DoubleBarrelLRUCache.CloneableKey |
Object providing clone(); the key class must subclass this.
|
| DummyConcurrentLock |
A dummy lock as a replacement for ReentrantLock to disable locking
|
| Explanation |
Expert: Describes the score computation for document and query.
|
| Explanation.IDFExplanation |
Small Util class used to pass both an idf factor as well as an
explanation for that factor.
|
| FastCharStream |
An efficient implementation of JavaCC's CharStream interface.
|
| Field |
A field is a section of a Document.
|
| Field.Index |
Specifies whether and how a field should be indexed.
|
| Field.Store |
Specifies whether and how a field should be stored.
|
| Field.TermVector |
Specifies whether and how a field should have term vectors.
|
| Fieldable |
|
| FieldCache |
Expert: Maintains caches of term values.
|
| FieldCache.ByteParser |
Interface to parse bytes from document fields.
|
| FieldCache.CacheEntry |
EXPERT: A unique Identifier/Description for each item in the FieldCache.
|
| FieldCache.CreationPlaceholder |
|
| FieldCache.DoubleParser |
Interface to parse doubles from document fields.
|
| FieldCache.FloatParser |
Interface to parse floats from document fields.
|
| FieldCache.IntParser |
Interface to parse ints from document fields.
|
| FieldCache.LongParser |
Interface to parse longs from document fields.
|
| FieldCache.Parser |
Marker interface as super-interface to all parsers.
|
| FieldCache.ShortParser |
Interface to parse shorts from document fields.
|
| FieldCache.StringIndex |
Expert: Stores term text values and document ordering data.
|
| FieldCacheDocIdSet |
Base class for DocIdSet to be used with FieldCache.
|
| FieldCacheRangeFilter<T> |
A range filter built on top of a cached single term field (in FieldCache).
|
| FieldCacheSanityChecker |
Provides methods for sanity checking that entries in the FieldCache
are not wasteful or inconsistent.
|
| FieldCacheSanityChecker.Insanity |
Simple container for a collection of related CacheEntry objects that
in conjunction with each other represent some "insane" usage of the
FieldCache.
|
| FieldCacheSanityChecker.InsanityType |
An Enumeration of the different types of "insane" behavior that
may be detected in a FieldCache.
|
| FieldCacheSource |
Expert: A base class for ValueSource implementations that retrieve values for
a single field from the FieldCache.
|
| FieldCacheTermsFilter |
A Filter that only accepts documents whose single
term value in the specified field is contained in the
provided set of allowed terms.
|
| FieldComparator<T> |
Expert: a FieldComparator compares hits so as to determine their
sort order when collecting the top results with TopFieldCollector.
|
| FieldComparator.ByteComparator |
|
| FieldComparator.DocComparator |
Sorts by ascending docID
|
| FieldComparator.DoubleComparator |
|
| FieldComparator.FloatComparator |
|
| FieldComparator.IntComparator |
|
| FieldComparator.LongComparator |
|
| FieldComparator.NumericComparator<T extends Number> |
|
| FieldComparator.RelevanceComparator |
Sorts by descending relevance.
|
| FieldComparator.ShortComparator |
|
| FieldComparator.StringComparatorLocale |
Sorts by a field's value using the Collator for a
given Locale.
|
| FieldComparator.StringOrdValComparator |
Sorts by field's natural String sort order, using
ordinals.
|
| FieldComparator.StringValComparator |
Sorts by field's natural String sort order.
|
| FieldComparatorSource |
|
| FieldDoc |
Expert: A ScoreDoc which also contains information about
how to sort the referenced document.
|
| FieldInfo |
Access to the Fieldable Info file that describes document fields and whether or
not they are indexed.
|
| FieldInfo.IndexOptions |
Controls how much information is stored in the postings lists.
|
| FieldInfos |
Collection of FieldInfos (accessible by number or by name).
|
| FieldInvertState |
This class tracks the number and position / offset parameters of terms
being added to the index.
|
| FieldMaskingSpanQuery |
Wrapper to allow SpanQuery objects to participate in composite
single-field SpanQueries by 'lying' about their search field.
|
| FieldReaderException |
Exception thrown when stored fields have an unexpected format.
|
| FieldScoreQuery |
A query that scores each document as the value of the numeric input field.
|
| FieldScoreQuery.Type |
Type of score field, indicating how field values are interpreted/parsed.
|
| FieldSelector |
|
| FieldSelectorResult |
Provides information about what should be done with this Field
|
| FieldSortedTermVectorMapper |
|
| FieldValueFilter |
A Filter that accepts all documents that have one or more values in a
given field.
|
| FieldValueHitQueue<T extends FieldValueHitQueue.Entry> |
Expert: A hit queue for sorting hits by terms in more than one field.
|
| FieldValueHitQueue.Entry |
|
| FileSwitchDirectory |
Expert: A Directory instance that switches files between
two other Directory instances.
|
| Filter |
Abstract base class for restricting which documents may
be returned during searching.
|
| FilteredDocIdSet |
Abstract decorator class for a DocIdSet implementation
that provides on-demand filtering/validation
mechanism on a given DocIdSet.
|
| FilteredDocIdSetIterator |
Abstract decorator class of a DocIdSetIterator
implementation that provides on-demand filter/validation
mechanism on an underlying DocIdSetIterator.
|
| FilteredQuery |
A query that applies a filter to the results of another query.
|
| FilteredTermEnum |
Abstract class for enumerating a subset of all terms.
|
| FilterIndexReader |
A FilterIndexReader contains another IndexReader, which it
uses as its basic source of data, possibly transforming the data along the
way or providing additional functionality.
|
| FilterIndexReader.FilterTermDocs |
Base class for filtering TermDocs implementations.
|
| FilterIndexReader.FilterTermEnum |
Base class for filtering TermEnum implementations.
|
| FilterIndexReader.FilterTermPositions |
|
| FilteringTokenFilter |
Abstract base class for TokenFilters that may remove tokens.
|
| FilterManager |
Deprecated.
|
| FixedBitSet |
BitSet of fixed length (numBits), backed by an accessible
long[] (FixedBitSet.getBits()), accessed with an int index,
implementing Bits and DocIdSet.
|
| FlagsAttribute |
This attribute can be used to pass different flags down the Tokenizer chain,
e.g. from one TokenFilter to another one.
|
| FlagsAttributeImpl |
This attribute can be used to pass different flags down the tokenizer chain,
e.g. from one TokenFilter to another one.
|
| FloatFieldSource |
Expert: obtains float field values from the
FieldCache
using getFloats() and makes those values
available as other numeric types, casting as needed.
|
| FSDirectory |
Base class for Directory implementations that store index
files in the file system.
|
| FSDirectory.FSIndexOutput |
|
| FSLockFactory |
Base class for file system based locking implementation.
|
| FST<T> |
Represents a finite state transducer (FST), using a
compact byte[] format.
|
| FST.Arc<T> |
Represents a single arc.
|
| FST.BytesReader |
Reads the bytes from this FST.
|
| FST.INPUT_TYPE |
Specifies allowed range of each int input label for
this FST.
|
| FuzzyQuery |
Implements the fuzzy search query.
|
| FuzzyTermEnum |
Subclass of FilteredTermEnum for enumerating all terms that are similar
to the specified filter term.
|
| GrowableWriter |
Implements PackedInts.Mutable, but grows the
bit count of the underlying packed ints on-demand.
|
| IndexableBinaryStringTools |
Provides support for converting byte sequences to Strings and back again.
|
| IndexCommit |
|
| IndexDeletionPolicy |
|
| IndexFileNameFilter |
Filename filter that accepts only filenames and extensions created by Lucene.
|
| IndexFileNames |
This class contains useful constants representing filenames and extensions
used by Lucene, as well as convenience methods for querying whether a file
name matches an extension (matchesExtension), and for generating file names
from a segment name, generation and extension (fileNameFromGeneration,
segmentFileName).
|
| IndexFormatTooNewException |
This exception is thrown when Lucene detects
an index that is newer than this Lucene version.
|
| IndexFormatTooOldException |
This exception is thrown when Lucene detects
an index that is too old for this Lucene version.
|
| IndexInput |
Abstract base class for input from a file in a Directory.
|
| IndexNotFoundException |
Signals that no index was found in the Directory.
|
| IndexOutput |
Abstract base class for output to a file in a Directory.
|
| IndexReader |
IndexReader is an abstract class, providing an interface for accessing an
index.
|
| IndexReader.ReaderClosedListener |
A custom listener that's invoked when the IndexReader
is closed.
|
| IndexSearcher |
Implements search over a single IndexReader.
|
| IndexUpgrader |
This is an easy-to-use tool that upgrades all segments of an index from previous Lucene versions
to the current segment file format.
|
| IndexWriter |
An IndexWriter creates and maintains an index.
|
| IndexWriter.IndexReaderWarmer |
If IndexWriter.getReader() has been called (ie, this writer
is in near real-time mode), then after a merge
completes, this class can be invoked to warm the
reader on the newly merged segment, before the merge
commits.
|
| IndexWriter.MaxFieldLength |
Deprecated.
|
| IndexWriterConfig |
|
| IndexWriterConfig.OpenMode |
|
| InputStreamDataInput |
|
| IntFieldSource |
Expert: obtains int field values from the
FieldCache
using getInts() and makes those values
available as other numeric types, casting as needed.
|
| IntSequenceOutputs |
An FST Outputs implementation where each output
is a sequence of ints.
|
| IntsRef |
Represents int[], as a slice (offset + length) into an
existing int[].
|
| IntsRefFSTEnum<T> |
Enumerates all input (IntsRef) + output pairs in an
FST.
|
| IntsRefFSTEnum.InputOutput<T> |
Holds a single input (IntsRef) + output pair.
|
| IOUtils |
This class emulates the new Java 7 "Try-With-Resources" statement.
|
| ISOLatin1AccentFilter |
Deprecated.
|
| KeepOnlyLastCommitDeletionPolicy |
An IndexDeletionPolicy implementation that
keeps only the most recent commit and immediately removes
all prior commits after a new commit is done.
|
| KeywordAnalyzer |
"Tokenizes" the entire stream as a single token.
|
| KeywordAttribute |
This attribute can be used to mark a token as a keyword.
|
| KeywordAttributeImpl |
This attribute can be used to mark a token as a keyword.
|
| KeywordMarkerFilter |
|
| KeywordTokenizer |
Emits the entire input as a single token.
|
| LengthFilter |
Removes words that are too long or too short from the stream.
|
| LetterTokenizer |
A LetterTokenizer is a tokenizer that divides text at non-letters.
|
| LimitTokenCountAnalyzer |
This Analyzer limits the number of tokens while indexing.
|
| LimitTokenCountFilter |
This TokenFilter limits the number of tokens while indexing.
|
| LoadFirstFieldSelector |
Load the First field and break.
|
| Lock |
An interprocess mutex lock.
|
| Lock.With |
Utility class for executing code with exclusive access.
|
| LockFactory |
Base class for Locking implementation.
|
| LockObtainFailedException |
This exception is thrown when the write.lock
could not be acquired.
|
| LockReleaseFailedException |
This exception is thrown when the write.lock
could not be released.
|
| LockStressTest |
Simple standalone tool that forever acquires & releases a
lock using a specific LockFactory.
|
| LockVerifyServer |
|
| LogByteSizeMergePolicy |
This is a LogMergePolicy that measures size of a
segment as the total byte size of the segment's files.
|
| LogDocMergePolicy |
This is a LogMergePolicy that measures size of a
segment as the number of documents (not taking deletions
into account).
|
| LogMergePolicy |
This class implements a MergePolicy that tries
to merge segments into levels of exponentially
increasing size, where each level has fewer segments than
the value of the merge factor.
|
| LowerCaseFilter |
Normalizes token text to lower case.
|
| LowerCaseTokenizer |
LowerCaseTokenizer performs the function of LetterTokenizer
and LowerCaseFilter together.
|
| LucenePackage |
Lucene's package information, including version.
|
| MapBackedSet<E> |
A Set implementation that wraps an actual Map based
implementation.
|
| MapFieldSelector |
|
| MapOfSets<K,V> |
Helper class for keeping Lists of Objects associated with keys.
|
| MappingCharFilter |
Simplistic CharFilter that applies the mappings
contained in a NormalizeCharMap to the character
stream, correcting the resulting changes to the
offsets.
|
| MatchAllDocsQuery |
A query that matches all documents.
|
| MaxPayloadFunction |
Returns the maximum payload score seen, else 1 if there are no payloads on the doc.
|
| MergePolicy |
Expert: a MergePolicy determines the sequence of
primitive merge operations.
|
| MergePolicy.MergeAbortedException |
|
| MergePolicy.MergeException |
Exception thrown if there are any problems while
executing a merge.
|
| MergePolicy.MergeSpecification |
A MergeSpecification instance provides the information
necessary to perform multiple merges.
|
| MergePolicy.OneMerge |
OneMerge provides the information necessary to perform
an individual primitive merge operation, resulting in
a single new segment.
|
| MergeScheduler |
Expert: IndexWriter uses an instance
implementing this interface to execute the merges
selected by a MergePolicy.
|
| MinPayloadFunction |
Calculates the minimum payload seen
|
| MMapDirectory |
|
| MultiCollector |
|
| MultiFieldQueryParser |
A QueryParser which constructs queries to search multiple fields.
|
| MultiPhraseQuery |
|
| MultipleTermPositions |
|
| MultiReader |
An IndexReader which reads multiple indexes, appending
their content.
|
| MultiSearcher |
Deprecated.
|
| MultiTermQuery |
An abstract Query that matches documents
containing a subset of terms provided by a FilteredTermEnum enumeration.
|
| MultiTermQuery.ConstantScoreAutoRewrite |
A rewrite method that tries to pick the best
constant-score rewrite method based on term and
document counts from the query.
|
| MultiTermQuery.RewriteMethod |
Abstract class that defines how the query is rewritten.
|
| MultiTermQuery.TopTermsBoostOnlyBooleanQueryRewrite |
A rewrite method that first translates each term into a
BooleanClause.Occur.SHOULD clause in a BooleanQuery, but the scores
are only computed as the boost.
|
| MultiTermQuery.TopTermsScoringBooleanQueryRewrite |
A rewrite method that first translates each term into a
BooleanClause.Occur.SHOULD clause in a BooleanQuery, and keeps the
scores as computed by the query.
|
| MultiTermQueryWrapperFilter<Q extends MultiTermQuery> |
|
| NamedThreadFactory |
A default ThreadFactory implementation that accepts the name prefix
of the created threads as a constructor argument.
|
| NativeFSLockFactory |
|
| NearSpansOrdered |
A Spans that is formed from the ordered subspans of a SpanNearQuery
where the subspans do not overlap and have a maximum slop between them.
|
| NearSpansUnordered |
|
| NGramPhraseQuery |
This is a PhraseQuery which is optimized for n-gram phrase queries.
|
| NIOFSDirectory |
An FSDirectory implementation that uses java.nio's FileChannel's
positional read, which allows multiple threads to read from the same file
without synchronizing.
|
| NIOFSDirectory.NIOFSIndexInput |
|
| NoDeletionPolicy |
|
| NoLockFactory |
|
| NoMergePolicy |
A MergePolicy which never returns merges to execute (hence its
name).
|
| NoMergeScheduler |
|
| NoOutputs |
A null FST Outputs implementation; use this if
you just want to build an FSA.
|
| NormalizeCharMap |
|
| NoSuchDirectoryException |
This exception is thrown when you try to list a
non-existent directory.
|
| NRTCachingDirectory |
Wraps a RAMDirectory
around any provided delegate directory, to
be used during NRT search.
|
| NRTManager |
Utility class to manage sharing near-real-time searchers
across multiple searching threads.
|
| NRTManager.TrackingIndexWriter |
Class that tracks changes to a delegated
IndexWriter.
|
| NRTManager.WaitingListener |
NRTManager invokes this interface to notify it when a
caller is waiting for a specific generation searcher
to be visible.
|
| NRTManagerReopenThread |
Utility class that runs a reopen thread to periodically
reopen the NRT searchers in the provided NRTManager.
|
| NumberTools |
Deprecated.
|
| NumericField |
This class provides a Field that enables indexing
of numeric values for efficient range filtering and
sorting.
|
| NumericField.DataType |
|
| NumericRangeFilter<T extends Number> |
A Filter that only accepts numeric values within
a specified range.
|
| NumericRangeQuery<T extends Number> |
A Query that matches numeric values within a
specified range.
|
| NumericTokenStream |
|
| NumericUtils |
This is a helper class to generate prefix-encoded representations for numerical values
and supplies converters to represent float/double values as sortable integers/longs.
|
| NumericUtils.IntRangeBuilder |
|
| NumericUtils.LongRangeBuilder |
|
| OffsetAttribute |
The start and end character offset of a Token.
|
| OffsetAttributeImpl |
The start and end character offset of a Token.
|
| OpenBitSet |
An "open" BitSet implementation that allows direct access to the array of words
storing the bits.
|
| OpenBitSetDISI |
|
| OpenBitSetIterator |
An iterator to iterate over set bits in an OpenBitSet.
|
| OrdFieldSource |
Expert: obtains the ordinal of the field value from the default Lucene
FieldCache using getStringIndex().
|
| Outputs<T> |
Represents the outputs for an FST, providing the basic
algebra required for building and traversing the FST.
|
| OutputStreamDataOutput |
|
| PackedInts |
Simplistic compression for arrays of unsigned long values.
|
| PackedInts.Mutable |
A packed integer array that can be modified.
|
| PackedInts.Reader |
A read-only random access array of positive integers.
|
| PackedInts.ReaderImpl |
A simple base for Readers that keeps track of valueCount and bitsPerValue.
|
| PackedInts.Writer |
A write-once Writer.
|
| PagedBytes |
Represents a logical byte[] as a series of pages.
|
| PagedBytes.Reader |
Provides methods to read BytesRefs from a frozen
PagedBytes.
|
| PairOutputs<A,B> |
An FST Outputs implementation, holding two other outputs.
|
| PairOutputs.Pair<A,B> |
Holds a single pair of two outputs.
|
| ParallelMultiSearcher |
Deprecated.
|
| ParallelReader |
An IndexReader which reads multiple, parallel indexes.
|
| Parameter |
Deprecated.
|
| ParseException |
This exception is thrown when parse errors are encountered.
|
| Payload |
A Payload is metadata that can be stored together with each occurrence
of a term.
|
| PayloadAttribute |
The payload of a Token.
|
| PayloadAttributeImpl |
The payload of a Token.
|
| PayloadFunction |
An abstract class that defines a way for Payload*Query instances to transform
the cumulative effects of payload scores for a document.
|
| PayloadNearQuery |
This class is very similar to
SpanNearQuery except that it factors
in the value of the payloads located at each of the positions where the
TermSpans occurs.
|
| PayloadProcessorProvider |
|
| PayloadProcessorProvider.DirPayloadProcessor |
Deprecated.
|
| PayloadProcessorProvider.PayloadProcessor |
Processes the given payload.
|
| PayloadProcessorProvider.ReaderPayloadProcessor |
|
| PayloadSpanUtil |
Experimental class to get set of payloads for most standard Lucene queries.
|
| PayloadTermQuery |
This class is very similar to
SpanTermQuery except that it factors
in the value of the payload located at each of the positions where the
Term occurs.
|
| PerFieldAnalyzerWrapper |
This analyzer is used to facilitate scenarios where different
fields require different analysis techniques.
|
| PersistentSnapshotDeletionPolicy |
A SnapshotDeletionPolicy which adds a persistence layer so that
snapshots can be maintained across the life of an application.
|
| PhraseQuery |
A Query that matches documents containing a particular sequence of terms.
|
| PorterStemFilter |
Transforms the token stream as per the Porter stemming algorithm.
|
| PositionBasedTermVectorMapper |
For each Field, store position by position information.
|
| PositionBasedTermVectorMapper.TVPositionInfo |
Container for a term at a position
|
| PositionIncrementAttribute |
The positionIncrement determines the position of this token
relative to the previous Token in a TokenStream, used in phrase
searching.
|
| PositionIncrementAttributeImpl |
The positionIncrement determines the position of this token
relative to the previous Token in a TokenStream, used in phrase
searching.
|
| PositionLengthAttribute |
The positionLength determines how many positions this
token spans.
|
| PositionLengthAttributeImpl |
|
| PositiveIntOutputs |
An FST Outputs implementation where each output
is a non-negative long value.
|
| PositiveScoresOnlyCollector |
A Collector implementation which wraps another
Collector and makes sure only documents with
scores > 0 are collected.
|
| PrefixFilter |
A Filter that restricts search results to values that have a matching prefix in a given
field.
|
| PrefixQuery |
A Query that matches documents containing terms with a specified prefix.
|
| PrefixTermEnum |
Subclass of FilteredTermEnum for enumerating all terms that match the
specified prefix filter term.
|
| PriorityQueue<T> |
A PriorityQueue maintains a partial ordering of its elements such that the
least element can always be found in constant time.
|
| Query |
The abstract base class for queries.
|
| QueryParser |
This class is generated by JavaCC.
|
| QueryParser.Operator |
The default operator for parsing queries.
|
| QueryParserConstants |
Token literal values and constants.
|
| QueryParserTokenManager |
Token Manager.
|
| QueryTermVector |
|
| QueryWrapperFilter |
Constrains search results to only match those which also match a provided
query.
|
| RAMDirectory |
|
| RAMFile |
|
| RAMInputStream |
|
| RAMOutputStream |
|
| RamUsageEstimator |
Estimates the size (memory representation) of Java objects.
|
| RamUsageEstimator.JvmFeature |
JVM diagnostic features.
|
| ReaderUtil |
|
| ReaderUtil.Gather |
Recursively visits all sub-readers of a reader.
|
| RecyclingByteBlockAllocator |
|
| ReferenceManager<G> |
Utility class to safely share instances of a certain type across multiple
threads, while periodically refreshing them.
|
| ReusableAnalyzerBase |
A convenience subclass of Analyzer that makes it easy to implement
TokenStream reuse.
|
| ReusableAnalyzerBase.TokenStreamComponents |
This class encapsulates the outer components of a token stream.
|
| ReverseOrdFieldSource |
Expert: obtains the ordinal of the field value from the default Lucene
FieldCache using getStringIndex()
and reverses the order.
|
| RollingCharBuffer |
Acts like a forever growing char[] as you read
characters into it from the provided reader, but
internally it uses a circular buffer to only hold the
characters that haven't been freed yet.
|
| ScoreCachingWrappingScorer |
A Scorer which wraps another scorer and caches the score of the
current document.
|
| ScoreDoc |
|
| Scorer |
Expert: Common scoring functionality for different types of queries.
|
| Scorer.ScorerVisitor<P extends Query,C extends Query,S extends Scorer> |
A callback to gather information from a scorer and its sub-scorers.
|
| ScorerDocQueue |
Deprecated.
|
| ScoringRewrite<Q extends Query> |
|
| Searchable |
Deprecated.
|
| Searcher |
Deprecated.
|
| SearcherFactory |
|
| SearcherLifetimeManager |
Keeps track of current plus old IndexSearchers, closing
the old ones once they have timed out.
|
| SearcherLifetimeManager.PruneByAge |
Simple pruner that drops any searcher that is more than the
specified number of seconds older than the newest
searcher.
|
| SearcherLifetimeManager.Pruner |
|
| SearcherManager |
Utility class to safely share IndexSearcher instances across multiple
threads, while periodically reopening.
|
| SegmentInfo |
Information about a segment, such as its name, directory, and files related
to the segment.
|
| SegmentInfos |
A collection of segmentInfo objects with methods for operating on
those segments in relation to the file system.
|
| SegmentInfos.FindSegmentsFile |
Utility class for executing code that needs to do
something with the current segments file.
|
| SegmentReader |
IndexReader implementation over a single segment.
|
| SegmentReader.CoreClosedListener |
Called when the shared core for this SegmentReader
is closed.
|
| SegmentWriteState |
Holder class for common parameters used during write.
|
| SerialMergeScheduler |
A MergeScheduler that simply does each merge
sequentially, using the current thread.
|
| SetBasedFieldSelector |
Declares which fields to load normally and which fields to load lazily.
|
| SetOnce<T> |
A convenient class which offers a semi-immutable object wrapper
implementation which allows one to set the value of an object exactly once,
and retrieve it many times.
|
| SetOnce.AlreadySetException |
|
| ShortFieldSource |
Expert: obtains short field values from the
FieldCache
using getShorts() and makes those values
available as other numeric types, casting as needed.
|
| Similarity |
Expert: Scoring API.
|
| SimilarityDelegator |
Deprecated.
|
| SimpleAnalyzer |
|
| SimpleFSDirectory |
A straightforward implementation of FSDirectory
using java.io.RandomAccessFile.
|
| SimpleFSDirectory.SimpleFSIndexInput |
|
| SimpleFSDirectory.SimpleFSIndexInput.Descriptor |
|
| SimpleFSLockFactory |
|
| SimpleStringInterner |
Simple lockless and memory-barrier-free String intern cache that is guaranteed
to return the same String instance as String.intern()
does.
|
| SingleInstanceLockFactory |
Implements LockFactory for a single in-process instance,
meaning all locking will take place through this one instance.
|
| SingleTermEnum |
Subclass of FilteredTermEnum for enumerating a single term.
|
| SmallFloat |
Floating point numbers smaller than 32 bits.
|
| SnapshotDeletionPolicy |
|
| Sort |
Encapsulates sort criteria for returned hits.
|
| SortedTermVectorMapper |
|
| SortedVIntList |
Stores and iterates over sorted integers in compressed form in RAM.
|
| SorterTemplate |
This class was inspired by CGLIB, but provides a better
QuickSort algorithm without additional InsertionSort
at the end.
|
| SortField |
Stores information about how to sort documents by terms in an individual
field.
|
| SpanFilter |
Abstract base class that provides a mechanism to restrict searches to a subset
of an index while also maintaining and returning position information.
|
| SpanFilterResult |
The results of a SpanQueryFilter.
|
| SpanFilterResult.PositionInfo |
|
| SpanFilterResult.StartEnd |
|
| SpanFirstQuery |
Matches spans near the beginning of a field.
|
| SpanMultiTermQueryWrapper<Q extends MultiTermQuery> |
|
| SpanMultiTermQueryWrapper.SpanRewriteMethod |
Abstract class that defines how the query is rewritten.
|
| SpanMultiTermQueryWrapper.TopTermsSpanBooleanQueryRewrite |
A rewrite method that first translates each term into a SpanTermQuery in a
BooleanClause.Occur.SHOULD clause in a BooleanQuery, and keeps the
scores as computed by the query.
|
| SpanNearPayloadCheckQuery |
Only return those matches that have a specific payload at
the given position.
|
| SpanNearQuery |
Matches spans which are near one another.
|
| SpanNotQuery |
Removes matches which overlap with another SpanQuery.
|
| SpanOrQuery |
Matches the union of its clauses.
|
| SpanPayloadCheckQuery |
Only return those matches that have a specific payload at
the given position.
|
| SpanPositionCheckQuery |
Base class for filtering a SpanQuery based on the position of a match.
|
| SpanPositionCheckQuery.AcceptStatus |
Return value indicating whether the match should be accepted (YES), rejected (NO),
or rejected with the enumeration advancing to the next document (NO_AND_ADVANCE).
|
| SpanPositionRangeQuery |
|
| SpanQuery |
Base class for span-based queries.
|
| SpanQueryFilter |
Constrains search results to only match those which also match a provided
query.
|
| Spans |
Expert: an enumeration of span matches.
|
| SpanScorer |
Public for extension only.
|
| SpanTermQuery |
Matches spans containing a term.
|
| SpanWeight |
Expert-only.
|
| StaleReaderException |
|
| StandardAnalyzer |
|
| StandardFilter |
|
| StandardTokenizer |
A grammar-based tokenizer constructed with JFlex.
|
| StandardTokenizerImpl |
|
| StandardTokenizerImpl31 |
Deprecated.
|
| StandardTokenizerInterface |
Internal interface for supporting versioned grammars.
|
| StopAnalyzer |
|
| StopFilter |
Removes stop words from a token stream.
|
| StopwordAnalyzerBase |
Base class for Analyzers that need to make use of stopword sets.
|
| StringHelper |
Methods for manipulating strings.
|
| StringInterner |
Subclasses of StringInterner are required to
return the same single String object for all equal strings.
|
| TeeSinkTokenFilter |
This TokenFilter provides the ability to set aside attribute states
that have already been analyzed.
|
| TeeSinkTokenFilter.SinkFilter |
|
| TeeSinkTokenFilter.SinkTokenStream |
TokenStream output from a tee with optional filtering.
|
| Term |
A Term represents a word from text.
|
| TermAttribute |
Deprecated.
|
| TermAttributeImpl |
Deprecated.
|
| TermDocs |
TermDocs provides an interface for enumerating <document, frequency>
pairs for a term.
|
| TermEnum |
Abstract class for enumerating terms.
|
| TermFreqVector |
Provides access to stored term vector of
a document field.
|
| TermPositions |
TermPositions provides an interface for enumerating the <document,
frequency, <position>* > tuples for a term.
|
| TermPositionVector |
Extends TermFreqVector to provide additional information about
positions in which each of the terms is found.
|
| TermQuery |
A Query that matches documents containing a term.
|
| TermRangeFilter |
A Filter that restricts search results to a range of term
values in a given field.
|
| TermRangeQuery |
A Query that matches documents within a range of terms.
|
| TermRangeTermEnum |
Subclass of FilteredTermEnum for enumerating all terms that match the
specified range parameters.
|
| TermSpans |
Expert: public for extension only.
|
| TermVectorEntry |
Convenience class for holding TermVector information.
|
| TermVectorEntryFreqSortedComparator |
Compares TermVectorEntrys first by frequency and then by
the term (case-sensitive).
|
| TermVectorMapper |
|
| TermVectorOffsetInfo |
Holds offset information pertaining to a Term in a TermPositionVector.
|
| ThreadInterruptedException |
Thrown by Lucene upon detecting that Thread.interrupt() has
been called.
|
| TieredMergePolicy |
Merges segments of approximately equal size, subject to
an allowed number of segments per tier.
|
| TieredMergePolicy.MergeScore |
Holds score and explanation for a single candidate
merge.
|
| TimeLimitingCollector |
The TimeLimitingCollector is used to timeout search requests that
take longer than the maximum allowed search time limit.
|
| TimeLimitingCollector.TimeExceededException |
Thrown when elapsed search time exceeds allowed search time.
|
| TimeLimitingCollector.TimerThread |
|
| Token |
A Token is an occurrence of a term from the text of a field.
|
| Token |
Describes the input token stream.
|
| Token.TokenAttributeFactory |
Expert: Creates a TokenAttributeFactory returning Token as instance for the basic attributes
and for all other attributes calls the given delegate factory.
|
| TokenFilter |
A TokenFilter is a TokenStream whose input is another TokenStream.
|
| Tokenizer |
A Tokenizer is a TokenStream whose input is a Reader.
|
| TokenMgrError |
Token Manager Error.
|
| TokenStream |
A TokenStream enumerates the sequence of tokens, either from
Fields of a Document or from query text.
|
| TopDocs |
|
| TopDocsCollector<T extends ScoreDoc> |
A base class for all collectors that return a TopDocs output.
|
| TopFieldCollector |
|
| TopFieldDocs |
|
| TopScoreDocCollector |
A Collector implementation that collects the top-scoring hits,
returning them as a TopDocs.
|
| TopTermsRewrite<Q extends Query> |
Base rewrite method for collecting only the top terms
via a priority queue.
|
| ToStringUtils |
|
| TotalHitCountCollector |
Just counts the total number of hits.
|
| TwoPhaseCommit |
An interface for implementations that support 2-phase commit.
|
| TwoPhaseCommitTool |
A utility for executing 2-phase commit on several objects.
|
| TwoPhaseCommitTool.CommitFailException |
|
| TwoPhaseCommitTool.PrepareCommitFailException |
|
| TwoPhaseCommitTool.TwoPhaseCommitWrapper |
A wrapper of a TwoPhaseCommit, which delegates all calls to the
wrapped object, passing the specified commitData.
|
| TypeAttribute |
A Token's lexical type.
|
| TypeAttributeImpl |
A Token's lexical type.
|
| TypeTokenFilter |
Removes tokens whose types appear in a set of blocked types from a token stream.
|
| UAX29URLEmailAnalyzer |
|
| UAX29URLEmailTokenizer |
This class implements Word Break rules from the Unicode Text Segmentation
algorithm, as specified in
Unicode Standard Annex #29.
URLs and email addresses are also tokenized according to the relevant RFCs.
|
| UAX29URLEmailTokenizerImpl |
This class implements Word Break rules from the Unicode Text Segmentation
algorithm, as specified in
Unicode Standard Annex #29.
URLs and email addresses are also tokenized according to the relevant RFCs.
|
| UAX29URLEmailTokenizerImpl31 |
Deprecated.
|
| UAX29URLEmailTokenizerImpl34 |
Deprecated.
|
| UnicodeUtil |
Class to encode Java's UTF-16 char[] into UTF-8 byte[]
without always allocating a new byte[] as
String.getBytes("UTF-8") does.
|
| UnicodeUtil.UTF16Result |
Holds decoded UTF-16 code units.
|
| UnicodeUtil.UTF8Result |
Holds decoded UTF-8 code units.
|
| UpgradeIndexMergePolicy |
|
| UpToTwoPositiveIntOutputs |
An FST Outputs implementation where each output
is one or two non-negative long values.
|
| UpToTwoPositiveIntOutputs.TwoLongs |
Holds two long outputs.
|
| Util |
Static helper methods.
|
| Util.MinResult<T> |
|
| ValueSource |
Expert: source of values for basic function queries.
|
| ValueSourceQuery |
Expert: A Query that sets the scores of documents to the
values obtained from a ValueSource.
|
| VerifyingLockFactory |
A LockFactory that wraps another LockFactory and verifies that each lock obtain/release
is "correct" (never results in two processes holding the
lock at the same time).
|
| Version |
Used by certain classes to match version compatibility
across releases of Lucene.
|
| VirtualMethod<C> |
A utility for keeping backwards compatibility on previously abstract methods
(or similar replacements).
|
| WeakIdentityMap<K,V> |
|
| Weight |
Expert: Calculate query weights and build query scorers.
|
| WhitespaceAnalyzer |
|
| WhitespaceTokenizer |
A WhitespaceTokenizer is a tokenizer that divides text at whitespace.
|
| WildcardQuery |
Implements the wildcard search query.
|
| WildcardTermEnum |
Subclass of FilteredTermEnum for enumerating all terms that match the
specified wildcard filter term.
|
| WordlistLoader |
Loader for text files that represent a list of stopwords.
|
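The entries above describe each class only by its contract. As one concrete illustration of the PriorityQueue contract (a partial ordering in which the least element can always be found in constant time), here is a minimal sketch using the JDK's java.util.PriorityQueue. Lucene's own org.apache.lucene.util.PriorityQueue maintains the same heap invariant but, in this release, is an abstract class whose subclasses supply the ordering by overriding lessThan().

```java
import java.util.PriorityQueue;

public class HeapSketch {
    public static void main(String[] args) {
        // A heap maintains only a *partial* ordering: the least element
        // sits at the head, but siblings are not ordered relative to
        // each other. peek() is O(1); add() and poll() are O(log n).
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        pq.add(42);
        pq.add(7);
        pq.add(19);
        System.out.println(pq.peek());  // least element, found in O(1): 7
        System.out.println(pq.poll());  // removes and returns the least: 7
        System.out.println(pq.peek());  // next least element: 19
    }
}
```

The same invariant is what lets collectors such as TopDocsCollector keep only the current top-N hits: a new hit is compared against the head and, if it scores higher, replaces it in O(log n).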