
Search stability improvements#1033

Merged
mdisibio merged 7 commits into grafana:main from mdisibio:search-contention
Oct 18, 2021

Conversation

@mdisibio
Contributor

@mdisibio mdisibio commented Oct 13, 2021

What this PR does:
This PR fixes several rough spots:

  • Fixes a mutex deadlock that could occur between search and instance.writeTraceToHeadBlock, by holding i.blocksMtx in instance.Search until all search goroutines are started
  • Fixes the panics "channel already closed" and "send on closed channel" in search.Results by reworking the cleanup around a sync.WaitGroup
  • Reduces overall search contention by using read-locks where possible
  • Fixes the WAL search error "file already closed", which occurred when the WAL block was deleted after a search was initiated but before the block was opened, by tracking closed state in the block
  • Fixes a flaky test by avoiding t.TempDir; behavior discussed in golang/go#43547
  • Fixes a flaky data race by guarding StreamingSearchBlock.FlushBuffer with a mutex

Which issue(s) this PR fixes:
Fixes n/a

Checklist

  • Tests updated
  • Documentation added
  • CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]

Contributor

@mapno mapno left a comment


LGTM. Probably another couple of eyes would be good.

Comment thread tempodb/search/backend_search_block.go
Contributor

@annanay25 annanay25 left a comment


Really nice batch of changes; just one comment on trying to understand the deadlock state better.

Comment thread modules/ingester/instance_search.go
Contributor

@annanay25 annanay25 left a comment


Can add a bunch of bugfixes to the Changelog :)

@joe-elliott
Collaborator

Looks good to me, but we are getting a test timeout in integration tests.

panic: test timed out after 10m0s

@mdisibio
Contributor Author

The test that timed out is TestScalableSingleBinary. This test pushes 1 trace and is somewhat new. Wondering if it is related or if the test is flaky. Will review.

@zalegrala
Contributor

zalegrala commented Oct 14, 2021

If the e2e tests are timing out, this might have to do with waiting for the metric to show up in the /metrics endpoint.

@mdisibio mdisibio mentioned this pull request Oct 15, 2021
@mdisibio
Contributor Author

Unable to reproduce the flaky tests locally, and consensus is they are likely not caused by these changes. Merging.

@mdisibio mdisibio merged commit 0662f16 into grafana:main Oct 18, 2021
@mdisibio mdisibio deleted the search-contention branch April 25, 2023 18:48
