refactor(trimming): implement pre-trimming #687
Conversation
This commit refactors trimming such that the staking-miner is responsible for ensuring that trimming doesn't have to be done on-chain. Instead, the staking-miner iteratively removes the votes of one voter at a time and re-computes the solution. Once a solution that requires no trimming is found, that solution is submitted to the chain.
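The one-voter-at-a-time loop described above can be sketched roughly as follows. This is a hypothetical, simplified illustration, not the staking-miner's real code: `Voter` is a stand-in type, and the "weight" of a solution is faked as the voter count instead of a real re-computation of the election result.

```rust
// Stand-in voter type; the real miner works on snapshot voters with
// stake, targets, etc.
struct Voter {
    stake: u64,
}

// Repeatedly drop the voter with the least stake and re-check the
// solution until it fits within the (here: faked) on-chain limit.
fn pre_trim(mut voters: Vec<Voter>, max_weight: u64) -> Option<Vec<Voter>> {
    // Sort descending by stake so the least-stake voter is cheap to pop
    // off the end of the vector.
    voters.sort_by(|a, b| b.stake.cmp(&a.stake));
    loop {
        // Stand-in for re-computing the solution from the remaining
        // voters: here the "weight" is just the number of voters.
        let weight = voters.len() as u64;
        if weight <= max_weight {
            return Some(voters); // solution fits: this one gets submitted
        }
        voters.pop()?; // drop the least-stake voter and try again
    }
}

fn main() {
    let voters = (1u64..=10).map(|s| Voter { stake: s }).collect::<Vec<_>>();
    let trimmed = pre_trim(voters, 7).unwrap();
    // The three lowest-stake voters (1, 2, 3) were trimmed away.
    assert_eq!(trimmed.len(), 7);
    assert_eq!(trimmed.last().unwrap().stake, 4);
}
```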
```rust
static ref NOMINATORS: u32 = std::env::var("N").unwrap_or("1000".to_string()).parse().unwrap();
static ref CANDIDATES: u32 = std::env::var("C").unwrap_or("500".to_string()).parse().unwrap();
static ref VALIDATORS: u32 = std::env::var("V").unwrap_or("100".to_string()).parse().unwrap();
static ref NOMINATION_DEGREE: NominationDegree = NominationDegree::from_str(std::env::var("ND").unwrap_or("full".to_string()).as_ref()).unwrap();
```
I removed this nomination degree because I don't want to have randomness in how many validators a nominator supports, and I don't understand partial and full.
My assumption was that full and partial would be the other way around, but I understand why it was added.
```rust
}
```
```rust
- const SESSION: BlockNumber = 1 * MINUTES;
+ const SESSION: BlockNumber = 3 * MINUTES;
```
Because the trimming is slower with "pre-trimming", I increased the SignedPhase a bit...
```rust
Perbill::from_rational(8u32, 10) * <Runtime as frame_system::Config>::BlockWeights::get().get(DispatchClass::Normal).max_total.unwrap(),
WeightTrimming::get()
);
pub MinerMaxLength: u32 = Perbill::from_percent(60) * *(<<Runtime as frame_system::Config>::BlockLength as Get<limits::BlockLength>>::get()).max.get(DispatchClass::Normal);
```
Modified these values to ensure that trimming always occurs when we run it.
gpestana
left a comment
Added a few comments that could optimise the number of trimming rounds before reaching the appropriate solution weight/length, but I think that should not be a blocker for this PR if it's too much work.
Thanks for implementing this!
src/epm.rs
Outdated
```rust
Self { voters, voters_by_stake }
}

/// Trim the next voter with the least amount of stake.
```
Have we tried to trim more than one voter in each iteration? This may reduce the number of iterations required to finalise the trimming. Maybe we could have a heuristic where, depending on how much the solution exceeds the limit, we trim one or more voters in one go.
This probably should not block this PR, but it could perhaps be addressed in a subsequent PR if helpful.
We could also calculate the difference between the weight_trimmed/length_trimmed calculated in prepare_election_result_with_snapshot and use that as a heuristic of how much trimming will be required. Then we can use that to request the trimming of N voters in one go, if there's still a long way to trim.
That's a good point. I think we should merge this PR and create an issue for your idea.
If there's much trimming to do then yeah, this would be very slow, as it does it one by one.
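The multi-voter heuristic discussed above could look roughly like this. This is a hypothetical sketch, not the PR's code: `voters_to_trim` is an invented helper that assumes solution weight scales roughly linearly with the number of voters, and drops a proportional chunk per round instead of always dropping one.

```rust
// Hypothetical heuristic: given the current solution weight, the on-chain
// limit, and the number of voters, decide how many least-stake voters to
// drop in this trimming round (at least 1 while over the limit).
fn voters_to_trim(current_weight: u64, max_weight: u64, n_voters: usize) -> usize {
    if current_weight <= max_weight || n_voters == 0 {
        return 0; // already fits: nothing to trim
    }
    let overshoot = current_weight - max_weight;
    // Assumed: each voter contributes roughly the same weight on average.
    let per_voter = (current_weight / n_voters as u64).max(1);
    ((overshoot / per_voter) as usize).max(1).min(n_voters)
}

fn main() {
    // Solution is 25% over the limit with 1000 voters: drop ~200 in one go.
    assert_eq!(voters_to_trim(1_250_000, 1_000_000, 1_000), 200);
    // Already within the limit: nothing to trim.
    assert_eq!(voters_to_trim(900, 1_000, 1_000), 0);
}
```

The per-round count shrinks as the solution approaches the limit, so the final rounds degrade gracefully back to one-voter steps.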
Co-authored-by: Gonçalo Pestana <g6pestana@gmail.com>
```rust
let desired_targets = desired_targets;
let targets = targets.len() as u32;

let est_weight: Weight = tokio::task::spawn_blocking(move || {
```
Decided to add this in the constructor/`new` because it only needs to be done once.
```rust
let mut h = voters.len();
let mut best_solution = None;

while l <= h {
```
binary search to find the best "pre-trim"
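The binary search over the trim count can be sketched like this. This is a hypothetical illustration of the idea, not the PR's implementation: `fits(n)` is an invented closure standing in for "re-compute the solution with the `n` smallest voters removed and check it against the weight/length limits", and it is assumed to be monotone (removing more voters never makes the solution stop fitting).

```rust
// Find the smallest number of least-stake voters to remove so that the
// re-computed solution fits within the on-chain limits. Returns None if
// no trim count makes it fit.
fn min_voters_to_trim(total_voters: usize, fits: impl Fn(usize) -> bool) -> Option<usize> {
    let (mut l, mut h) = (0usize, total_voters);
    let mut best = None;
    while l <= h {
        let mid = l + (h - l) / 2;
        if fits(mid) {
            best = Some(mid); // feasible: remember it, try trimming fewer
            if mid == 0 {
                break;
            }
            h = mid - 1;
        } else {
            l = mid + 1; // infeasible: need to trim more voters
        }
    }
    best
}

fn main() {
    // Toy limit: the solution fits once at least 300 of 1000 voters are gone.
    assert_eq!(min_voters_to_trim(1000, |n| n >= 300), Some(300));
    // Fits without any trimming.
    assert_eq!(min_voters_to_trim(1000, |_| true), Some(0));
    // Never fits.
    assert_eq!(min_voters_to_trim(1000, |_| false), None);
}
```

This takes O(log n) solution re-computations instead of O(n) with the one-by-one approach, which matches the speed-up described in the PR summary below.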
This commit refactors trimming such that the staking-miner is responsible for ensuring that trimming doesn't have to be done on-chain.
The pre-trimming is implemented as follows:
By “binary search manner” I mean:
This makes things much faster than the iterative approach I implemented first.
This PR also adds a Prometheus metric to count the number of "pre-trimming length ops".