Description
The current implementation of `BinaryHeap::append` uses a heuristic based on the worst-case number of comparisons to choose between two strategies (a rough sketch of the choice follows the list):

1. Calling `self.extend(other.drain())`, equivalent to pushing all elements of the other heap.
2. Appending the vector of the other heap and rebuilding the heap from scratch.
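For context, the following is a minimal sketch of roughly what such a comparison-count heuristic looks like. It is illustrative rather than the actual standard-library source: `append_sketch` is a made-up name, the constants follow the worst-case counts discussed later in this thread (about 2·(len1 + len2) comparisons for a rebuild versus about len2·log2(len1) for repeated pushes), and the rebuild branch simply collects into a new heap instead of reusing the private sift-down machinery.

```rust
use std::collections::BinaryHeap;
use std::mem;

/// Illustrative sketch of an `append` that picks between the two strategies
/// based on worst-case comparison counts (not the actual std implementation).
fn append_sketch<T: Ord>(this: &mut BinaryHeap<T>, other: &mut BinaryHeap<T>) {
    // Always merge the smaller heap into the larger one.
    if this.len() < other.len() {
        mem::swap(this, other);
    }
    if other.is_empty() {
        return;
    }

    let (len1, len2) = (this.len(), other.len());
    // floor(log2(len1)); len1 > 0 here because len1 >= len2 > 0.
    let log2_len1 = (usize::BITS - 1 - len1.leading_zeros()) as usize;

    // Worst case: rebuilding costs roughly 2 * (len1 + len2) comparisons,
    // repeated pushes roughly len2 * log2(len1).
    if 2 * (len1 + len2) < len2 * log2_len1 {
        // Strategy 2: concatenate the backing data and heapify from scratch.
        let mut data = mem::take(this).into_vec();
        data.append(&mut mem::take(other).into_vec());
        *this = BinaryHeap::from(data); // From<Vec<T>> heapifies in O(n)
    } else {
        // Strategy 1: push every element of the smaller heap.
        this.extend(other.drain());
    }
}
```

The question raised in this issue is essentially whether the rebuild branch of such a check ever pays off in practice, and if so, with what constants.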
I've done some benchmarking, and it turns out that method 1 (based on `extend`) is always faster for two heaps containing randomly shuffled data – on the computers I've tested it on, anyway. I've included images of the benchmarks below: first the current `append` strategy, then the `extend` strategy (method 1). In the benchmarks, two heaps are merged, one containing the number of elements on the x-axis, the other containing 100,000 minus that number (for the rightmost data points both heaps are equal in size). The red line (sometimes hiding behind the green one) is, in both images, the one corresponding to `std::collections::BinaryHeap`; the other lines are not relevant to this issue (they aren't standard library types).

From the jump in performance in the first graph, you can clearly see where the switch between the two approaches happens. The graphs are similar for smaller heaps; I didn't test larger ones. It's possible method 2 is faster under other circumstances (maybe if one of the heaps contains mostly very small/large elements?), especially if both heaps are (almost) equal in size. However, I think the benchmark is closer to the "average" real-world use case than such a case would be (I'd be happy to be proven wrong about this, though).
A simplified version of the benchmark that only runs the parts relevant to this issue:
Cargo.toml:

```toml
[package]
name = "binary_heap_bench"
version = "0.1.0"
edition = "2018"

[dependencies]
criterion = "0.3"
rand = "0.7"

[[bench]]
name = "benchmark"
harness = false
```
benches/benchmark.rs:

```rust
use criterion::*;
use rand::{seq::SliceRandom, thread_rng};
use std::collections::BinaryHeap;
use std::iter::FromIterator;
use std::mem;

/// Data type we want to use
type T = u32;

/// Produce shuffled vector containing values 0..n
fn random_data(n: T) -> Vec<T> {
    let mut data = Vec::from_iter(0..n);
    data.shuffle(&mut thread_rng());
    data
}

fn two_heaps<H: From<Vec<T>>>(mut vec1: Vec<T>, i: usize) -> (H, H) {
    let vec2 = vec1.split_off(i);
    (vec1.into(), vec2.into())
}

fn std_heap_append((mut heap1, mut heap2): (BinaryHeap<T>, BinaryHeap<T>)) -> BinaryHeap<T> {
    heap1.append(&mut heap2);
    heap1
}

fn std_heap_extend((mut heap1, mut heap2): (BinaryHeap<T>, BinaryHeap<T>)) -> BinaryHeap<T> {
    if heap1.len() < heap2.len() {
        mem::swap(&mut heap1, &mut heap2);
    }
    heap1.extend(heap2.drain());
    heap1
}

fn benchmark(c: &mut Criterion) {
    let mut group = c.benchmark_group("append/extend");
    let base = 100000;
    let step = 2500;
    for i in (step..=base as usize / 2).step_by(step) {
        let size = BatchSize::SmallInput;
        group.bench_function(BenchmarkId::new("append", i), |b| {
            b.iter_batched(|| two_heaps(random_data(base), i), std_heap_append, size)
        });
        group.bench_function(BenchmarkId::new("extend", i), |b| {
            b.iter_batched(|| two_heaps(random_data(base), i), std_heap_extend, size)
        });
    }
}

criterion_group!(benches, benchmark);
criterion_main!(benches);
```
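Assuming a standard Cargo layout (and, if Cargo complains about a missing target, an empty `src/lib.rs`), the comparison can be run with `cargo bench`; Criterion then reports a timing for the `append` and `extend` variants at every split point `i`.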
Activity
the8472 commented on Oct 2, 2020
The comment suggests that the rebuild strategy is more efficient if at least one of the following is true:
You can see how the slope of the append method is lower, so its advantage would show if you extended your benchmark by an order of magnitude. If you used a more expensive comparison function it would probably show earlier too.
hanmertens commented on Oct 2, 2020
The code for `append` swaps the heaps if `other` is larger than `self`, so the appended heap is always smaller than or equal to the `self` heap. Indeed, the slope of the rebuild method is smaller, but I can't find conditions where that is outweighed by the higher base cost of the method.

To simulate more expensive comparisons I've made a benchmark where each comparison involves some global atomic operations. Below that I've included a benchmark with ten times the number of items in the heaps in total (an order of magnitude more than that makes the benchmark very slow). The trends remain similar.
The original implementation (#32526 and #32987) seems to base its choice of heuristic mostly on the expectation that it is reasonable. However, I think reasoning from the worst-case scenario (O(n log(m))) is pessimistic for the `extend` method, as a large number of `push`es require only one or two comparisons (instead of log(m)) before ending up in the bottom two layers (which contain 75% of all items in the end). Maybe for even larger heaps the `rebuild` method actually is advantageous again, but I'm not sure how to determine a heuristic for that.

Changed code for atomic benchmark
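The collapsed "Changed code for atomic benchmark" is not reproduced above, but a comparison that involves global atomic operations could look roughly like the hypothetical wrapper below; `Expensive` and `COMPARISONS` are illustrative names, not taken from the original benchmark. Substituting it for `u32` in the benchmark (with `random_data` adjusted to wrap its values) also makes the actual number of comparisons observable.

```rust
use std::cmp::Ordering;
use std::sync::atomic::{AtomicU64, Ordering as AtomicOrdering};

/// Global counter that every comparison has to touch, making comparisons
/// both more expensive and countable.
static COMPARISONS: AtomicU64 = AtomicU64::new(0);

/// Wrapper ordered like `u32`, but whose comparisons perform a global
/// atomic read-modify-write.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Expensive(u32);

impl Ord for Expensive {
    fn cmp(&self, other: &Self) -> Ordering {
        COMPARISONS.fetch_add(1, AtomicOrdering::SeqCst);
        self.0.cmp(&other.0)
    }
}

impl PartialOrd for Expensive {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
```

Reading `COMPARISONS` before and after a merge then gives actual comparison counts rather than the worst-case bound, which is handy for checking the claim that most pushes stop in the bottom layers.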
the8472 commented on Oct 2, 2020
Ah, I missed that. That indeed limits the opportunities to realize its advantage.
The x-axis still says 50k though?
First we would need a benchmark demonstrating that advantage anyway.
hanmertens commented on Oct 2, 2020
Only for the first image; that one has the same number of elements as before but a more expensive comparison implementation. The x-axis of the second image goes up to 500k; that one has normal "cheap" integer comparison but a larger number of items in the heaps.
Certainly.
scottmcm commented on Oct 3, 2020
Naively, when I look at the graphs in the OP, what I see isn't that `extend` is always going to be better -- the slopes seem to be clearly different, though two graphs with different scales and extra lines make that comparison harder than it ought to be -- but that the current heuristic is doing a bad job of accounting for constant factors when picking the switchover point.
Why isn't the answer here just to use a number other than two in the heuristic?
hanmertens commented on Oct 4, 2020
Here are the two lines from a different measurement of the same benchmark, in the same graph. It has a weird jump between 25,000 and 27,500 (an artifact?); that's why I included the messier pictures initially.
The slopes are indeed very different, and extrapolating them would lead to a crossover point. However, this crossover point can never actually be reached, because you can't extrapolate beyond the rightmost data point. The benchmark is set up such that two heaps are merged, one containing x elements and the other 100,000 - x. The rightmost data point at x = 50,000 corresponds to merging two equally sized heaps. The data point that would be to the right of that (x = 52,500) would give the same result as the data point to the left (x = 47,500), because both involve merging a heap of 47,500 elements and a heap of 52,500 elements, and the heaps are swapped in one of them to make sure the smaller heap is appended to the larger one. In other words, beyond the rightmost data point the slopes change (become negative) and the data would mirror the data to the left of that point (which is why I didn't include it). The crossover point is thus only reachable (in this benchmark) if you removed the swap optimization (theoretically, I mean; practically that would worsen performance, of course).
The TL;DR is that it appears the number would have to be such that the condition always evaluates to `false`.
On another note, I've now also benchmarked merging very large heaps (100 million elements after the merge), and `extend` is always more favorable there as well.

SkiFire13 commented on Nov 8, 2020
Just tried the benchmark with the changes from PR #78857, and `append` scores significantly better. The heuristic could still be improved, though.

scottmcm commented on Nov 8, 2020
Thanks for the update to remind me about this 🙂
One thing that I'd missed last time: `extend`ing from the contents of a `BinaryHeap` can never hit the worst case (needing to bubble to the root every time), since it's being extended by something that's in roughly descending order.

Do you want to wait on #78857 before looking at #77435 more?
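To make this concrete (my own illustration, reusing the comparison-counting idea from the sketch further up rather than anything posted in this thread): extending a heap from another heap's `drain()` pushes elements in roughly descending order, whereas strictly ascending input that exceeds everything already in the heap forces every push to bubble all the way to the root.

```rust
use std::cell::Cell;
use std::cmp::Ordering;
use std::collections::BinaryHeap;

thread_local! {
    /// Comparison counter (thread-local to keep this sketch dependency-free).
    static CMPS: Cell<u64> = Cell::new(0);
}

/// Wrapper ordered like `u32` that counts how often it is compared.
#[derive(Clone, Copy, PartialEq, Eq)]
struct Counted(u32);

impl Ord for Counted {
    fn cmp(&self, other: &Self) -> Ordering {
        CMPS.with(|c| c.set(c.get() + 1));
        self.0.cmp(&other.0)
    }
}

impl PartialOrd for Counted {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

/// Run `f` and return the number of comparisons it performed.
fn count_comparisons(f: impl FnOnce()) -> u64 {
    CMPS.with(|c| c.set(0));
    f();
    CMPS.with(|c| c.get())
}

fn main() {
    let n = 100_000u32;

    // A large heap of even values and a small heap of odd values spread
    // over the same range, mimicking the shuffled-data benchmark.
    let mut big: BinaryHeap<Counted> = (0..n).map(|x| Counted(2 * x)).collect();
    let mut small: BinaryHeap<Counted> = (0..n / 10).map(|x| Counted(20 * x + 1)).collect();

    // `drain()` yields the heap's backing array, which is in roughly
    // descending order, so sift-up tends to stop close to the leaves.
    let from_drain = count_comparisons(|| big.extend(small.drain()));

    // Strictly ascending input larger than everything in the heap:
    // every push bubbles to the root, roughly log2(len) comparisons each.
    let mut big2: BinaryHeap<Counted> = (0..n).map(|x| Counted(2 * x)).collect();
    let ascending = count_comparisons(|| big2.extend((0..n / 10).map(|x| Counted(2 * n + x))));

    println!("comparisons: from drain = {from_drain}, ascending worst case = {ascending}");
}
```

The gap between those two counts is the slack that the worst-case heuristic doesn't account for.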
SkiFire13 commented on Nov 8, 2020
I think for now it would be better to wait and/or work towards a better heuristic (even though `append` can perform better, it's still up to 25% slower when the second heap is much smaller).

PS/Disclaimer: I'm the author of #78857 and not a reviewer, so I may be a little biased; however, the benchmarks should speak for themselves. It would be nice if someone else could confirm my claims.