Support retrying non-finished async tasks on startup and periodically #1585
Conversation
```diff
@@ -152,6 +156,7 @@ public void testTableCleanup() throws IOException {

     handler.handleTask(task, callContext);

+    timeSource.add(Duration.ofMinutes(10));
```
Previously, a task entity might be missing the LAST_ATTEMPT_START_TIME property, so loading tasks without a time-out check could succeed. Now that every task entity is created with this property, we need to manipulate the time to make loadTasks work.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Can you explain this further - I'm not sure why the tests need this 10m jump? Is it so that tasks are "recovered" by the Quarkus Scheduled method?
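For context, the time-out filtering under discussion presumably reduces to a timestamp comparison along these lines (a sketch only; the accessor and timeout values are assumptions, not the PR's actual code):

```java
import java.time.Clock;
import java.time.Duration;

final class TaskTimeoutFilter {
  // A task becomes eligible for recovery only once its last attempt started
  // more than `timeout` ago; advancing the test clock by 10 minutes is what
  // pushes freshly stamped tasks past this threshold so loadTasks returns them.
  static boolean eligibleForRecovery(long lastAttemptStartMillis, Clock clock, Duration timeout) {
    return clock.millis() - lastAttemptStartMillis > timeout.toMillis();
  }
}
```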
```diff
@@ -172,6 +172,11 @@ public Map<String, BaseResult> purgeRealms(Iterable<String> realms) {
     return Map.copyOf(results);
   }

+  @Override
+  public Map<String, PolarisMetaStoreManager> getMetaStoreManagerMap() {
```
To make this a bit more defensively coded, I might recommend making this into an iterator of Map.Entry objects, given that this is a public method and we wouldn't want any code path to be able to modify this mapping?
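A minimal sketch of that defensive alternative (assuming a private `metaStoreManagerMap` field; returning `Map.copyOf(...)` would achieve the same isolation):

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.Map;

// Expose the realm-to-manager mapping without handing out a mutable reference;
// entries obtained through an unmodifiable view reject setValue, so no caller
// can turn this public method into a mutation path.
public Iterator<Map.Entry<String, PolarisMetaStoreManager>> getMetaStoreManagers() {
  return Collections.unmodifiableMap(metaStoreManagerMap).entrySet().iterator();
}
```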
```java
  }

  private void addTaskLocation(TaskEntity task) {
    Map<String, String> internalPropertiesAsMap = new HashMap<>(task.getInternalPropertiesAsMap());
```
addInternalProperty
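Presumably the one-word suggestion means replacing the copy-modify-write of the internal-properties map with a single helper call; a hypothetical sketch (both the helper and the key name are assumptions, not the project's actual API):

```java
// Hypothetical: let the entity manage its own internal-properties map instead
// of copying it out, mutating it, and writing it back by hand.
private void addTaskLocation(TaskEntity task) {
  task.addInternalProperty("task-location", computeTaskLocation(task)); // assumed names
}
```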
```java
    try {
      ManifestReader<DataFile> dataFiles = ManifestFiles.read(manifestFile, fileIO);
```
What's the reason behind this change?
```diff
@@ -193,6 +198,9 @@ private Stream<TaskEntity> getManifestTaskStream(
             .withData(
                 new ManifestFileCleanupTaskHandler.ManifestCleanupTask(
                     tableEntity.getTableIdentifier(), TaskUtils.encodeManifestFile(mf)))
+            .withLastAttemptExecutorId(executorId)
+            .withAttemptCount(1)
```
How can we assume this?
```diff
@@ -235,6 +247,9 @@ private Stream<TaskEntity> getMetadataTaskStream(
             .withData(
                 new BatchFileCleanupTaskHandler.BatchFileCleanupTask(
                     tableEntity.getTableIdentifier(), metadataBatch))
+            .withLastAttemptExecutorId(executorId)
+            .withAttemptCount(1)
```
Ditto as above.
```java
    PolarisCallContext polarisCallContext =
        new PolarisCallContext(
            metastore, new PolarisDefaultDiagServiceImpl(), configurationStore, clock);
    EntitiesResult entitiesResult =
```
I'm not sure I'm understanding the logic here: we are asking for 20 tasks here - but what if there are more than 20 tasks that need recovery?
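One way to address this, sketched under the assumption that `loadTasks` honors a limit parameter and leases everything it returns (names and signatures are illustrative, not the PR's actual API):

```java
import java.util.List;

// Drain recovery candidates in pages rather than with a single fixed-size
// load, so a backlog larger than one page is still fully recovered.
static final int PAGE_SIZE = 20;

void recoverAll(PolarisCallContext ctx, String executorId, TaskRecoveryManager manager) {
  List<TaskEntity> page;
  do {
    page = manager.loadTasks(ctx, executorId, PAGE_SIZE); // assumed signature
    page.forEach(manager::recover);                       // assumed method
  } while (page.size() == PAGE_SIZE);                     // a short page means the backlog is drained
}
```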
```java
    tableCleanupTaskHandler.handleTask(task, callCtx);

    // Step 3: Verify that the generated child tasks were registered, ATTEMPT_COUNT = 2
    timeSource.add(Duration.ofMinutes(10));
```
I, personally, found this very hard to follow - even with the comments. I would highly recommend making the comments much more verbose here to allow the full flow of logic (what is happening with which task and why) to be communicated to a reader who may not be an expert at this particular type of task or tasks in general.
Fix #774
Context
Polaris uses async tasks to perform operations such as table and manifest file cleanup. These tasks are executed asynchronously in a separate thread within the same JVM, and retries are handled inline within the task execution. However, this mechanism does not guarantee eventual execution: in particular, if the service crashes or restarts before a task (and its inline retries) completes, the task is never run again.
Implementation Plan
Stage 1 (potential improvement: #1523):
Introduce per-task transactional leasing in the metastore layer via `loadTasks(...)`.
Stage 2 (current PR):
Persist failed tasks and introduce a retry mechanism triggered during Polaris startup and via periodic background checks. Changes include:
- `getMetaStoreManagerMap`: new method on the metastore manager factory exposing the realm-to-`PolarisMetaStoreManager` mapping.
- `LAST_ATTEMPT_START_TIME`: now set for each task entity at creation, which is important for the time-out filtering applied when `loadTasks()` reads from the metastore, preventing multiple executors from picking up the same task.
- `TaskRecoveryManager`: new class responsible for the task recovery logic, including setting up the `PolarisCallContext` used to load and re-dispatch unfinished tasks.
- `QuarkusTaskExecutorImpl`: hooks into the application lifecycle to initiate task recovery (see the sketch after the review order below).

Recommended Review Order
1. `TaskRecoveryManager`
2. `QuarkusTaskExecutorImpl` and `TaskExecutorImpl`
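For readers unfamiliar with the Quarkus side, the startup and periodic hooks described above typically take roughly this shape (a sketch using standard Quarkus lifecycle and scheduler APIs; the class and method names are illustrative, not this PR's actual code):

```java
import io.quarkus.runtime.StartupEvent;
import io.quarkus.scheduler.Scheduled;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Observes;

@ApplicationScoped
public class TaskRecoveryLifecycle {
  private final TaskRecoveryManager recoveryManager; // assumed collaborator

  TaskRecoveryLifecycle(TaskRecoveryManager recoveryManager) {
    this.recoveryManager = recoveryManager;
  }

  // On startup, pick up tasks left unfinished by a previous process.
  void onStart(@Observes StartupEvent event) {
    recoveryManager.recoverPendingTasks(); // assumed method name
  }

  // Periodically re-check for tasks whose last attempt has timed out.
  @Scheduled(every = "10m")
  void periodicRecovery() {
    recoveryManager.recoverPendingTasks();
  }
}
```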