[Performance Improvement] Add custom bulk scorer for hybrid query (2-3x faster) #1289
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
               main   #1289     +/-
- Coverage     81.55%     0   -81.56%
  Files           133     0      -133
  Lines          5920     0     -5920
  Branches        951     0      -951
- Hits           4828     0     -4828
+ Misses          726     0      -726
+ Partials        366     0      -366

View full report in Codecov by Sentry.
Overall, the code and logic look good. A few minor comments.
 * @throws IOException in case of IO exception
 */
@Override
public void forEach(CheckedIntConsumer<IOException> consumer) throws IOException {
Great way of adding comments in this method. Really helpful in understanding.
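For context, here is a minimal sketch of the pattern such a forEach typically follows: walk the set bits of the window's match bitset and hand each absolute doc id to the consumer. The field names (matching, docBase) and the consumer interface are assumptions based on the snippets quoted in this review, not the PR's exact code.

```java
import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.FixedBitSet;

final class WindowDocIdStreamSketch {
    private final FixedBitSet matching; // bits set for docs matched in the current window
    private final int docBase;          // absolute doc id of the window's first slot

    WindowDocIdStreamSketch(FixedBitSet matching, int docBase) {
        this.matching = matching;
        this.docBase = docBase;
    }

    /** Calls the consumer once per matched doc, converting window-relative bits to absolute doc ids. */
    void forEach(CheckedIntConsumer consumer) throws IOException {
        int bit = matching.nextSetBit(0);
        while (bit != DocIdSetIterator.NO_MORE_DOCS) {
            consumer.accept(docBase + bit);
            bit = bit + 1 < matching.length() ? matching.nextSetBit(bit + 1) : DocIdSetIterator.NO_MORE_DOCS;
        }
    }

    /** Stand-in for Lucene's checked int consumer used by DocIdStream#forEach. */
    @FunctionalInterface
    interface CheckedIntConsumer {
        void accept(int doc) throws IOException;
    }
}
```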
@@ -0,0 +1,189 @@
/*
At a high level, please add a BWC test.
What kind of BWC test do you have in mind? The whole idea is that the logic and results remain unchanged after this PR, so I'm relying on the existing BWC tests. I don't see a special scenario introduced by this change that requires a new BWC test case.
I just want to check whether there is any impact on search when two nodes are on 2.19 and one node is on 3.0. Since some new classes are introduced, I wanted to make sure nothing breaks.
You can add a BWC test verifying that the search results on the old cluster are equal to the search results on the new cluster, with the old cluster on 2.19 and the new one on 3.0.
Oh, I see what you mean now; that makes sense. I'll work on adding such a test.
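For illustration, a hedged sketch of the kind of check discussed above: run the same hybrid query before and after the rolling upgrade and assert the ranked hits are identical. searchHybridQuery is a hypothetical helper; a real test would build on the repository's existing BWC test infrastructure and split the two phases across the old and upgraded clusters.

```java
import java.util.List;

import static org.junit.Assert.assertEquals;

abstract class HybridQueryBwcSketch {

    // Hypothetical helper: runs the given hybrid query and returns the hit ids in ranked order.
    protected abstract List<String> searchHybridQuery(String index, String queryBody);

    public void testHybridQueryResultsUnchangedAfterUpgrade() {
        String index = "bwc-hybrid-index";
        String query = "{\"hybrid\":{\"queries\":[{\"match\":{\"text\":\"sample\"}}]}}";

        // Captured while the cluster still runs 2.19 nodes.
        List<String> oldClusterHits = searchHybridQuery(index, query);
        // Captured again after the nodes are upgraded to 3.0.
        List<String> newClusterHits = searchHybridQuery(index, query);

        // The PR only changes how scores are collected, not ranking, so the ordered hits must match.
        assertEquals(oldClusterHits, newClusterHits);
    }
}
```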
Added a new test. CI is failing now for an unrelated reason, either an ml-commons or core change:
» WARNING: Using incubator modules: jdk.incubator.vector
» WARNING: Unknown module: org.apache.arrow.memory.core specified to --add-opens
» fatal error in thread [main], exiting
» java.lang.NoClassDefFoundError: com/google/common/collect/ImmutableSet
» at org.opensearch.ml.task.MLTaskManager.<clinit>(MLTaskManager.java:84)
» at org.opensearch.ml.plugin.MachineLearningPlugin.createComponents(MachineLearningPlugin.java:589)
» at org.opensearch.node.Node.lambda$new$21(Node.java:1040)
» at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:273)
» at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708)
» at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
» at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
» at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
» at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
» at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
» at org.opensearch.node.Node.<init>(Node.java:1054)
» at org.opensearch.node.Node.<init>(Node.java:461)
» at org.opensearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:243)
» at org.opensearch.bootstrap.Bootstrap.setup(Bootstrap.java:243)
» at org.opensearch.bootstrap.Bootstrap.init(Bootstrap.java:405)
» at org.opensearch.bootstrap.OpenSearch.init(OpenSearch.java:168)
» at org.opensearch.bootstrap.OpenSearch.execute(OpenSearch.java:159)
» at org.opensearch.common.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:110)
» at org.opensearch.cli.Command.mainWithoutErrorHandling(Command.java:138)
» at org.opensearch.cli.Command.main(Command.java:101)
» at org.opensearch.bootstrap.OpenSearch.main(OpenSearch.java:125)
» at org.opensearch.bootstrap.OpenSearch.main(OpenSearch.java:91)
» Caused by: java.lang.ClassNotFoundException: com.google.common.collect.ImmutableSet
» at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:445)
» at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:593)
» at java.base/java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:872)
» at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:526)
» ... 22 more
Looks good to me. Great job Martin 👏
This is a great improvement and I'm looking forward to it.
Great work here @martin-gaievski, and the numbers look promising. I didn't review the tests yet.
 */
public class HybridBulkScorer extends BulkScorer {
    private static final int SHIFT = 12;
    private static final int WINDOW_SIZE = 1 << SHIFT;
Shouldn't we make this a dynamic cluster setting? The default can be 4096.
I'm running benchmarks to see how big the impact of different window sizes is. But even leaving that aside, I think it's far too expert-level a setting to expose to end users. I would prefer that we come up with an optimal default rather than exposing such a potentially unsafe knob.
+1 on this. Let's not move this to a cluster setting. We should pick a good default.
@owaiskazi19 I gathered data from experiments on the impact of different window size values on resource utilization and response times. In short, there is not much impact, and the default value of 4096 is the most balanced one.
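As a small, hypothetical illustration of the constants under discussion: a doc id maps to a window by shifting and to a slot inside the window by masking, with the benchmarked default of 4096 docs per window.

```java
final class WindowMathSketch {
    static final int SHIFT = 12;
    static final int WINDOW_SIZE = 1 << SHIFT; // 4096 docs per scoring window (the benchmarked default)
    static final int MASK = WINDOW_SIZE - 1;

    /** First absolute doc id of the window that contains {@code doc}. */
    static int windowStart(int doc) {
        return (doc >> SHIFT) << SHIFT;
    }

    /** Index of {@code doc} inside per-window score and match arrays. */
    static int slotInWindow(int doc) {
        return doc & MASK;
    }
}
```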
for (int subQueryIndex = 0; subQueryIndex < scorers.size(); subQueryIndex++) {
    Scorer scorer = scorers.get(subQueryIndex);
    if (Objects.isNull(scorer)) {
        continue;
    }
    cost += scorer.iterator().cost();
    this.scorers[subQueryIndex] = scorer;
}
Can L49-56 be moved to a different method?
I don't like the resulting code; we would have to either return two values (the array of scorers and the cost) or mutate a class-level variable (the array of scorers) inside that method. I want to keep initialization of class-level variables in the constructor itself.
int doc = it.docID();
if (doc < min) {
    doc = it.advance(min);
}
docIds[subQueryIndex] = doc;
Suggested change:
docIds[subQueryIndex] = (it.docID() < min)
    ? it.advance(min)
    : it.docID();
Not sure about this one; I think it's a matter of syntax preference. I like the current version a bit more.
}
DocIdSetIterator it = scorers[subQueryIndex].iterator();
int doc = docIds[subQueryIndex];
if (doc < windowMin) {
Should we add validation for the window bounds, e.g. if (windowMin >= windowMax || max < windowMax)?
I think it's redundant; we calculate both the min and max of the window in the scoreWindow method, and this is performance-critical code.
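A hedged sketch of why the extra check is redundant: if the window upper bound is derived as below (an assumption based on this discussion, not the PR's exact code), then windowMin < windowMax <= max holds by construction inside the window loop.

```java
final class WindowBoundsSketch {
    /** Upper bound of the current window; it can never exceed the collector-supplied max. */
    static int windowMax(int windowMin, int max, int windowSize) {
        return Math.min(windowMin + windowSize, max);
    }
}
```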
 * Reset the internal state for the next window of documents
 */
private void resetWindowState() {
    matching.clear();
Suggested change:
matching.clear(0, WINDOW_SIZE);
The implementation of the clear() method is simpler; essentially it calls the same code but with fewer checks (clear() vs clear(int, int)).
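A minimal sketch of the per-window reset being discussed, assuming a FixedBitSet of matches and a float[][] of per-sub-query scores (names follow the snippets quoted in this review; the PR's fields may differ).

```java
import java.util.Arrays;

import org.apache.lucene.util.FixedBitSet;

final class WindowStateSketch {
    private static final int WINDOW_SIZE = 1 << 12;

    private final FixedBitSet matching = new FixedBitSet(WINDOW_SIZE);
    private final float[][] windowScores = new float[2][WINDOW_SIZE]; // one row per sub-query

    /** Reset the internal state for the next window of documents. */
    void resetWindowState() {
        matching.clear(); // clear() covers the whole bitset with fewer checks than clear(0, WINDOW_SIZE)
        for (float[] scores : windowScores) {
            if (scores != null) {
                Arrays.fill(scores, 0f); // primitive float slots go back to 0.0
            }
        }
    }
}
```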
if (Objects.isNull(windowScores[subQueryIndex])) {
    continue;
}
float scoreOfDocIdForSubQuery = windowScores[subQueryIndex][docIndexInWindow];
While there's a null check for the array windowScores[subQueryIndex] itself, there's no check for individual elements within it before accessing windowScores[subQueryIndex][docIndexInWindow]?
There is a valid scenario where windowScores[subQueryIndex] can be null: when a sub-query has no matches for one window. Since this is an array of primitive float, individual elements are initialized with 0.0 values. Maybe I misunderstood the question.
Thanks for the clarification!
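A tiny hedged example of the point above: a whole row of windowScores may be null when its sub-query matched nothing in the window, while rows that do exist hold primitive floats defaulting to 0.0f, so no per-element null check is possible or needed. The sizes and indices here are illustrative only.

```java
final class WindowScoresNullRowSketch {
    public static void main(String[] args) {
        float[][] windowScores = new float[3][];
        windowScores[0] = new float[4096]; // sub-query 0 had matches in this window
        // windowScores[1] and windowScores[2] stay null: those sub-queries matched nothing here

        for (int subQueryIndex = 0; subQueryIndex < windowScores.length; subQueryIndex++) {
            if (windowScores[subQueryIndex] == null) {
                continue; // skip sub-queries with no matches in this window
            }
            float score = windowScores[subQueryIndex][42]; // 0.0f unless a score was recorded
            System.out.println("sub-query " + subQueryIndex + " score: " + score);
        }
    }
}
```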
if (Objects.isNull(acceptDocs) || acceptDocs.get(doc)) {
    int d = doc & MASK;
    if (needsScores) {
        float score = scorers[subQueryIndex].score();
Question for learning: does the cost calculation happen only once, during initialization?
Yes, the contract for the cost() method is quite relaxed, so in most scorers it's calculated once to save resources. From the Lucene documentation: "Returns the estimated cost of this DocIdSetIterator. This is generally an upper bound of the number of documents this iterator might match, but may be a rough heuristic, hardcoded value, or otherwise completely inaccurate."
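As an illustration of that contract, here is a minimal sketch of summing the sub-scorers' iterator costs once in the constructor and returning the cached value afterwards. It mirrors the constructor snippet quoted earlier; the class and field names are assumptions, not the PR's code.

```java
import java.util.List;
import java.util.Objects;

import org.apache.lucene.search.Scorer;

final class CostSketch {
    private final long cost;

    CostSketch(List<Scorer> scorers) {
        long total = 0;
        for (Scorer scorer : scorers) {
            if (Objects.isNull(scorer)) {
                continue; // a null entry means the sub-query produced no scorer
            }
            total += scorer.iterator().cost(); // upper-bound estimate, read once per scorer
        }
        this.cost = total;
    }

    /** Cheap, cached estimate, in line with the relaxed cost() contract described above. */
    long cost() {
        return cost;
    }
}
```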
LGTM, @martin-gaievski, great work on the performance improvement!
Description
This PR adds a custom implementation of the bulk scorer for hybrid query, mainly following the design highlighted in this RFC. It improves the score collection process and overall query response times.
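For readers new to the design, here is a condensed, hypothetical sketch of the windowed scoring idea: advance every sub-query scorer through one fixed-size window at a time, record matches and per-sub-query scores for that window, then replay them to the collector. Names follow the snippets quoted in the review; this is not the PR's actual code, and the score arrays are assumed pre-allocated.

```java
import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.util.FixedBitSet;

final class WindowedHybridScoringSketch {
    private static final int SHIFT = 12;
    private static final int WINDOW_SIZE = 1 << SHIFT;
    private static final int MASK = WINDOW_SIZE - 1;

    /** Scores one window [min, windowMax) for all sub-queries and returns the start of the next window. */
    int scoreWindow(Scorer[] scorers, float[][] windowScores, FixedBitSet matching, int min, int max)
        throws IOException {
        int windowMin = min;
        int windowMax = Math.min(windowMin + WINDOW_SIZE, max);
        for (int subQuery = 0; subQuery < scorers.length; subQuery++) {
            if (scorers[subQuery] == null) {
                continue; // sub-query produced no scorer for this segment
            }
            DocIdSetIterator it = scorers[subQuery].iterator();
            int doc = it.docID();
            if (doc < windowMin) {
                doc = it.advance(windowMin);
            }
            while (doc < windowMax) {
                matching.set(doc & MASK);                                        // remember the match
                windowScores[subQuery][doc & MASK] = scorers[subQuery].score();  // remember its score
                doc = it.nextDoc();
            }
        }
        return windowMax; // the next window starts here
    }
}
```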
After running benchmarks using the released 3.0 as the baseline, here are the results in terms of response time and system throughput:
NOAA dataset, keyword search
https://github.com/opensearch-project/opensearch-benchmark-workloads/tree/main/noaa_semantic_search
[Tables: benchmark mode performance, response times]
Quora dataset, keyword + semantic search
Based on neural_search and PR
[Tables: benchmark mode performance, response time metrics]
Cluster setup:
Related Issues
#1234
#1236 (partially)
Check List
[ ] New functionality has been documented.
[ ] API changes companion pull request created.
[ ] Commits are signed per the DCO using --signoff.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.