BatchVerifier #66
Conversation
let (challenges, mut rng) = self.verifier.plonk_verifier.restore_challenges(
    &result,
    &proof,
    // '1' accounts for the quotient polynomial that is aggregated together with the columns
    PiopVerifier::<E::ScalarField, <KZG<E> as PCS<_>>::C, Affine<J>>::N_COLUMNS + 1,
    PiopVerifier::<E::ScalarField, <KZG<E> as PCS<_>>::C, Affine<J>>::N_CONSTRAINTS,
);
let seed = self.verifier.piop_params.seed;
let seed_plus_result = (seed + result).into_affine();
let domain_at_zeta = self.verifier.piop_params.domain.evaluate(challenges.zeta);
let piop = PiopVerifier::<_, _, Affine<J>>::init(
    domain_at_zeta,
    self.verifier.fixed_columns_committed.clone(),
    proof.column_commitments.clone(),
    proof.columns_at_zeta.clone(),
    (seed.x, seed.y),
    (seed_plus_result.x, seed_plus_result.y),
);
Maybe you can pull this out into a prepare_proof if it's slow? Then we can run it in parallel for all proofs.
And if the accumulate is commutative, then maybe we can do it in log2(n) depth with a binary reduction instead of a normal loop, to parallelize it.
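A minimal sketch of that shape, with toy stand-ins rather than this crate's API (rayon, the Acc type, and the prepare/merge functions below are all assumptions): par_iter runs the per-proof preparation in parallel, and reduce folds the results as a binary tree, which is only sound if the accumulation step is commutative and associative.

use rayon::prelude::*;

#[derive(Clone, Default)]
struct Acc(u64); // toy stand-in for the accumulator state

// Toy stand-in for the expensive per-proof work (challenge restoration, PIOP init, ...).
fn prepare(proof: u64) -> Acc {
    Acc(proof.wrapping_mul(0x9E37_79B9_7F4A_7C15))
}

// Toy stand-in for a commutative, associative accumulation step.
fn merge(a: Acc, b: Acc) -> Acc {
    Acc(a.0 ^ b.0)
}

fn main() {
    let proofs: Vec<u64> = (0..100).collect();
    // Parallel prepare, then a tree-shaped reduction: O(log2 n) depth instead of a
    // sequential loop. Valid only because `merge` is commutative and associative.
    let acc = proofs
        .par_iter()
        .map(|&p| prepare(p))
        .reduce(Acc::default, merge);
    println!("accumulated: {:#x}", acc.0);
}

Note that rayon's reduce already combines partial results across its work-stealing splits in a tree, so no hand-rolled log2(n) loop is needed.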
Great idea! The timings have been reduced by half.
Batch size = 100

Sequential:
- Proofs push: 59.26215ms
- Unprepared batch verification: 38.723946ms
- Total time: 98.002929ms

With parallel prepare:
- Proofs prepare: 2.816633ms
- Proofs push prepared: 739.797µs
- Prepared batch verification: 38.691304ms
- Total time: 42.267331ms
Nice. I guess we won't be able to use this from the runtime, but at least we know the theoretical optimum and can use it to inform our decisions about host calls.
Yeah, not from the runtime.
But this is useful for JAM ticket verification :-)
    // Pick some entropy from plonk verifier for later usage
    let mut entropy = [0_u8; 32];
    rng.fill_bytes(&mut entropy);

    PreparedBatchItem {
        piop,
        proof,
        challenges,
        entropy,
    }
}
@swasilyev @burdges @drskalman This needs some extra attention.
In practice, instead of immediately using the returned rng, we pick some randomness from it to be used later in push_prepared.
pub fn push_prepared(&mut self, item: PreparedBatchItem<E, J>) {
    let mut ts = self.verifier.plonk_verifier.transcript_prelude.clone();
    ts._add_serializable(b"batch-entropy", &item.entropy);
    self.acc
        .accumulate(item.piop, item.proof, item.challenges, &mut ts.to_rng());
}
@swasilyev @burdges @drskalman here I feed that randomness back in to:
- extend the verifier transcript
- and use the derived rng in accumulate
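A minimal sketch of that entropy hand-off, using merlin and rand_chacha as stand-ins for the crate's own transcript and rng (the names, labels, and signatures here are assumptions, not the real _add_serializable/to_rng API): prepare draws 32 bytes from the challenge rng, and push_prepared later absorbs them into a clone of the transcript prelude before deriving the rng that randomizes accumulation.

use merlin::Transcript;
use rand_chacha::rand_core::{RngCore, SeedableRng};
use rand_chacha::ChaCha20Rng;

fn prepare(challenge_rng: &mut impl RngCore) -> [u8; 32] {
    // Drawn during preparation, used only later at accumulation time.
    let mut entropy = [0u8; 32];
    challenge_rng.fill_bytes(&mut entropy);
    entropy
}

fn push_prepared(transcript_prelude: &Transcript, entropy: &[u8; 32]) -> ChaCha20Rng {
    // Re-bind the saved entropy to the verifier transcript, then derive the rng
    // that randomizes the accumulation step.
    let mut ts = transcript_prelude.clone();
    ts.append_message(b"batch-entropy", entropy);
    let mut seed = [0u8; 32];
    ts.challenge_bytes(b"accumulation-rng", &mut seed);
    ChaCha20Rng::from_seed(seed)
}

fn main() {
    // Stand-in for the rng returned by the plonk verifier's challenge restoration.
    let mut challenge_rng = ChaCha20Rng::from_seed([7u8; 32]);
    let entropy = prepare(&mut challenge_rng);

    let prelude = Transcript::new(b"ring-proof-batch");
    let mut acc_rng = push_prepared(&prelude, &entropy);
    println!("first accumulation word: {}", acc_rng.next_u32());
}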
# TODO: restore w3f once https://github.com/w3f/fflonk/pull/46 gets merged
# w3f-pcs = { git = "https://github.com/w3f/fflonk", default-features = false }
w3f-pcs = { git = "https://github.com/davxy/fflonk", default-features = false }
TODO before we merge
This PR targets the batch-ring-proof branch.
Introduce a batching structure for ring proofs that bundles the KzgAccumulator and the RingVerifier, providing greater flexibility for downstream users. This would be further improved if KzgAccumulator implemented CanonicalSerialize and CanonicalDeserialize, enabling batching across blocks.
Proofs can be prepared for batch verification in parallel. Prepared proofs can then be accumulated.
Performance boost: ~2x

Some benches. Batch vs sequential verification times (ms):

Sequential prepare + accumulate
Sequential verification scales linearly with proof count; batch verification scales sub-linearly.

Parallel prepare + final sequential accumulate
NOTE: parallel preparation yields roughly an extra 2x speedup.
The parallel crate feature does not enable this; downstream users can perform parallel preparation themselves.
Each prepared proof consumes ~3K, which may introduce significant hidden overhead when preparing big batches, so it may be preferable to accumulate every X proofs rather than the entire batch at once to save memory.
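A rough sketch of that chunked usage, with toy stubs standing in for the real BatchVerifier (the prepare/push_prepared/verify signatures, the accumulator contents, and the chunk size are assumptions): each chunk is prepared in parallel and accumulated immediately, so only CHUNK prepared items (~3K each) are held in memory at a time.

use rayon::prelude::*;

// Toy stubs that mirror the PR's shape (prepare -> PreparedBatchItem -> push_prepared);
// the real types and the final verification call are assumptions here.
struct PreparedBatchItem(u64);

#[derive(Default)]
struct BatchVerifier {
    acc: u64,
}

impl BatchVerifier {
    fn prepare(&self, proof: u64) -> PreparedBatchItem {
        // Expensive, parallelizable per-proof work.
        PreparedBatchItem(proof.wrapping_mul(31))
    }
    fn push_prepared(&mut self, item: PreparedBatchItem) {
        // Cheap sequential accumulation.
        self.acc = self.acc.wrapping_add(item.0);
    }
    fn verify(&self) -> bool {
        // Single final check over the accumulator (always true in this toy).
        true
    }
}

fn main() {
    const CHUNK: usize = 16; // bound on how many prepared items are alive at once
    let proofs: Vec<u64> = (0..100).collect();
    let mut verifier = BatchVerifier::default();

    for chunk in proofs.chunks(CHUNK) {
        // Prepare one chunk in parallel...
        let prepared: Vec<PreparedBatchItem> =
            chunk.par_iter().map(|&p| verifier.prepare(p)).collect();
        // ...then accumulate it right away so memory stays bounded by CHUNK items.
        for item in prepared {
            verifier.push_prepared(item);
        }
    }
    assert!(verifier.verify());
}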