public abstract class CompoundWordTokenFilterBase extends TokenFilter
You must specify the required Version compatibility when creating CompoundWordTokenFilterBase. Note that CompoundWordTokenFilterBase doesn't update offsets.
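For orientation, here is a minimal sketch of how a concrete subclass that ships with Lucene (DictionaryCompoundWordTokenFilter) is typically wired into an analysis chain together with a matchVersion. The Version constant, the CharArraySet and tokenizer packages, and the exact constructor signatures are assumptions that vary between Lucene releases, so treat the snippet as illustrative rather than authoritative.

```java
import java.io.StringReader;
import java.util.Arrays;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.util.CharArraySet;
import org.apache.lucene.util.Version;

public class DecompoundExample {
  public static void main(String[] args) throws Exception {
    // Assumed Version constant; pick the one matching your Lucene release.
    Version matchVersion = Version.LUCENE_47;

    // Dictionary of known word parts used for decompounding (ignoreCase = true).
    CharArraySet dictionary = new CharArraySet(
        matchVersion, Arrays.asList("soft", "ball", "team"), true);

    TokenStream ts = new WhitespaceTokenizer(matchVersion, new StringReader("softballteam"));
    ts = new DictionaryCompoundWordTokenFilter(matchVersion, ts, dictionary);

    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
      // Prints the original token followed by the decompounded subwords.
      System.out.println(term.toString());
    }
    ts.end();
    ts.close();
  }
}
```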
Modifier and Type | Class and Description |
---|---|
protected class | CompoundWordTokenFilterBase.CompoundToken: Helper class to hold decompounded token information |

Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource: AttributeSource.AttributeFactory, AttributeSource.State
Modifier and Type | Field and Description |
---|---|
static int | DEFAULT_MAX_SUBWORD_SIZE: The default for maximal length of subwords that get propagated to the output of this filter |
static int | DEFAULT_MIN_SUBWORD_SIZE: The default for minimal length of subwords that get propagated to the output of this filter |
static int | DEFAULT_MIN_WORD_SIZE: The default for minimal word length that gets decomposed |
protected CharArraySet | dictionary |
protected Version | matchVersion |
protected int | maxSubwordSize |
protected int | minSubwordSize |
protected int | minWordSize |
protected OffsetAttribute | offsetAtt |
protected boolean | onlyLongestMatch |
protected CharTermAttribute | termAtt |
protected LinkedList<CompoundWordTokenFilterBase.CompoundToken> | tokens |

Fields inherited from class org.apache.lucene.analysis.TokenFilter: input
Modifier | Constructor and Description |
---|---|
protected | CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, CharArraySet dictionary) |
protected | CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, CharArraySet dictionary, boolean onlyLongestMatch) |
protected | CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch) |
Modifier and Type | Method and Description |
---|---|
protected abstract void | decompose(): Decomposes the current termAtt and places CompoundWordTokenFilterBase.CompoundToken instances in the tokens list. |
boolean | incrementToken(): Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. |
void | reset(): This method is called by a consumer before it begins consumption using TokenStream.incrementToken(). |

Methods inherited from class org.apache.lucene.analysis.TokenFilter: close, end

Methods inherited from class org.apache.lucene.util.AttributeSource: addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState, toString
public static final int DEFAULT_MIN_WORD_SIZE
public static final int DEFAULT_MIN_SUBWORD_SIZE
public static final int DEFAULT_MAX_SUBWORD_SIZE
protected final Version matchVersion
protected final CharArraySet dictionary
protected final LinkedList<CompoundWordTokenFilterBase.CompoundToken> tokens
protected final int minWordSize
protected final int minSubwordSize
protected final int maxSubwordSize
protected final boolean onlyLongestMatch
protected final CharTermAttribute termAtt
protected final OffsetAttribute offsetAtt
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, CharArraySet dictionary, boolean onlyLongestMatch)
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, CharArraySet dictionary)
protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
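Because these constructors are protected, they are only reachable from subclasses. The following hedged sketch shows how a hypothetical subclass might forward its arguments to the fully parameterized base constructor using the DEFAULT_* constants; the class name and its behavior are illustrative assumptions, not part of Lucene.

```java
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase;
import org.apache.lucene.analysis.util.CharArraySet;
import org.apache.lucene.util.Version;

// Hypothetical subclass: forwards its arguments to the protected base constructor.
public final class SimpleDictionaryDecompounder extends CompoundWordTokenFilterBase {

  public SimpleDictionaryDecompounder(Version matchVersion, TokenStream input, CharArraySet dictionary) {
    // Uses the defaults for word/subword sizes and emits all matches, not only the longest.
    super(matchVersion, input, dictionary,
        DEFAULT_MIN_WORD_SIZE, DEFAULT_MIN_SUBWORD_SIZE, DEFAULT_MAX_SUBWORD_SIZE, false);
  }

  @Override
  protected void decompose() {
    // Left empty here; see the decompose() sketch further down.
  }
}
```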
public final boolean incrementToken() throws IOException
Description copied from class: TokenStream
Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.

The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change it. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.

This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.

To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().
Specified by: incrementToken in class TokenStream
Throws: IOException
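As a hedged illustration of that contract (attribute references obtained once at instantiation, per-token work inside incrementToken()), here is a small generic TokenFilter that lower-cases ASCII terms. The filter is made up for this example and is unrelated to CompoundWordTokenFilterBase.

```java
import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// Hypothetical filter, only meant to illustrate the incrementToken() contract.
public final class AsciiLowerCaseFilter extends TokenFilter {

  // Attribute reference retrieved once at instantiation, as the contract recommends.
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

  public AsciiLowerCaseFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false; // end of stream
    }
    final char[] buffer = termAtt.buffer();
    final int length = termAtt.length();
    for (int i = 0; i < length; i++) {
      final char c = buffer[i];
      if (c >= 'A' && c <= 'Z') {
        buffer[i] = (char) (c + ('a' - 'A'));
      }
    }
    return true;
  }
}
```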
protected abstract void decompose()
Decomposes the current termAtt and places CompoundWordTokenFilterBase.CompoundToken instances in the tokens list. The original token may not be placed in the list, as it is automatically passed through this filter.
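Under the same assumptions as the constructor sketch above, the empty decompose() stub of the hypothetical SimpleDictionaryDecompounder might be filled in with a greedy dictionary scan along the following lines. The CompoundToken(offset, length) constructor is assumed to build a token from a slice of the current termAtt; this is not the strategy used by Lucene's shipping subclasses.

```java
// Hedged sketch only: a greedy scan over termAtt. Assumes CompoundToken(offset, length)
// copies a slice of the current term (offset/length relative to termAtt).
@Override
protected void decompose() {
  final int len = termAtt.length();
  if (len < minWordSize) {
    return; // word too short to decompose
  }
  for (int start = 0; start <= len - minSubwordSize; start++) {
    int longest = 0;
    final int maxLen = Math.min(maxSubwordSize, len - start);
    for (int subLen = minSubwordSize; subLen <= maxLen; subLen++) {
      if (dictionary.contains(termAtt.buffer(), start, subLen)) {
        if (onlyLongestMatch) {
          longest = subLen; // remember it, emit only the longest below
        } else {
          tokens.add(new CompoundToken(start, subLen)); // emit every match
        }
      }
    }
    if (onlyLongestMatch && longest > 0) {
      tokens.add(new CompoundToken(start, longest));
    }
  }
}
```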
public void reset() throws IOException
Description copied from class: TokenFilter
This method is called by a consumer before it begins consumption using TokenStream.incrementToken().

Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh.

If you override this method, always call super.reset(), otherwise some internal state will not be correctly reset (e.g., Tokenizer will throw IllegalStateException on further usage).

NOTE: The default implementation chains the call to the input TokenStream, so be sure to call super.reset() when overriding this method.
Overrides: reset in class TokenFilter
Throws: IOException
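To make the super.reset() requirement concrete, here is a hedged sketch of how a stateful filter might override reset(); the buffered-state field is an illustrative assumption, not part of this class.

```java
import java.io.IOException;
import java.util.LinkedList;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

// Hypothetical stateful filter showing the recommended reset() pattern.
public abstract class BufferingFilter extends TokenFilter {

  // Example of internal state that must be cleared between reuses of the stream.
  protected final LinkedList<String> pending = new LinkedList<>();

  protected BufferingFilter(TokenStream input) {
    super(input);
  }

  @Override
  public void reset() throws IOException {
    super.reset();   // always chain to the input stream first
    pending.clear(); // then discard any state left over from the previous use
  }
}
```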