GODRIVER-3132 Use dst buf pool for zstd encoding. #1577
qingyang-hu merged 4 commits into mongodb:v1
Conversation
API Change Report: No changes found.
matthewdale
left a comment
This change doesn't directly address the problem described in GODRIVER-3132, which is that each zstd.Encoder allocates a large amount of memory and never releases it. The best way to limit that "permanent" allocated memory seems to be using WithWindowSize to cap the maximum window size. Additionally, klauspost/compress v1.17.5 limits the maximum window size for the "better"/"best" zstd compression levels to 8MB, so it may be worth updating to that version or newer.
```go
ptr := zstdBufPool.Get().(*[]byte)
b := encoder.EncodeAll(in, *ptr)
```
Instead of adding an additional pool of buffers just for Zstd compression, can we leverage the existing op buffer pool to improve memory usage for all compression types?
For example, in compression.go, change CompressPayload to accept a destination byte slice:

```go
func CompressPayload(src, dst []byte, opts CompressionOpts) ([]byte, error) {
	// ...
	return encoder.EncodeAll(src, dst)
	// ...
}
```

Then in connection.go, pass the existing destination byte slice into CompressPayload:

```go
func (c *Connection) CompressWireMessage(src, dst []byte) ([]byte, error) {
	// ...
	compressed, err := driver.CompressPayload(rem, dst, opts)
	// ...
}
```

Then remove the following call to AppendCompressedCompressedMessage, which is just an alias for append.
The benchmark shows no improvement after passing the dst slice to CompressPayload. I also tried replacing zstdBufPool with the existing memoryPool, and applying zstdBufPool to the other compression types, but neither improved memory consumption.
Does this need to be ported to …?

Yes.
(cherry picked from commit b10dc34)
GODRIVER-3132
Summary
benchstat results for zstd.EncodeAll: