Conversation

@davxy (Collaborator) commented Jan 28, 2026

This PR targets the `batching` branch.

Introduce a batching structure for ring proofs that bundles the `KzgAccumulator` and the `RingVerifier`, providing greater flexibility for downstream users.

This would be further improved if KzgAccumulator implemented CanonicalSerialize and CanonicalDeserialize, enabling batching across blocks.
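
For illustration, a sketch of what that would enable; the `save`/`restore` helpers are hypothetical and generic parameters are elided, while `serialize_compressed`/`deserialize_compressed` are the standard ark-serialize trait methods:

```rust
use ark_serialize::{CanonicalDeserialize, CanonicalSerialize};

// Persist the accumulator state at the end of block N...
fn save(acc: &KzgAccumulator) -> Vec<u8> {
    let mut bytes = Vec::new();
    acc.serialize_compressed(&mut bytes).expect("in-memory write cannot fail");
    bytes
}

// ...and restore it in block N+1 to keep accumulating across blocks.
fn restore(bytes: &[u8]) -> KzgAccumulator {
    KzgAccumulator::deserialize_compressed(bytes).expect("valid accumulator bytes")
}
```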


Proofs can be prepared for batch verification in parallel; prepared proofs can then be accumulated.
Performance boost: roughly 2x.
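
A minimal usage sketch of the intended flow. Here `new`, `prepare`, and `verify` are assumed names (only `push_prepared` appears verbatim in this PR), generic parameters are elided, and rayon stands in for whatever parallelism a downstream user prefers:

```rust
use rayon::prelude::*;

// Assumes `prepare(&self, ..)` and `BatchVerifier: Sync`, so that
// preparation can run concurrently over shared state.
fn verify_batch(ring_verifier: &RingVerifier, proofs: Vec<RingProof>) -> bool {
    // Hypothetical constructor bundling the KzgAccumulator and the RingVerifier.
    let mut batch = BatchVerifier::new(ring_verifier);

    // Preparation is independent per proof, so it parallelizes trivially.
    let prepared: Vec<_> = proofs
        .into_par_iter()
        .map(|proof| batch.prepare(proof)) // hypothetical; yields a PreparedBatchItem
        .collect();

    // Accumulation is cheap; run it sequentially.
    for item in prepared {
        batch.push_prepared(item);
    }

    // Single amortized check for the whole batch.
    batch.verify() // hypothetical
}
```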


Some benches

Batch vs sequential verification times (ms):

Sequential prepare + accumulate

| proofs | sequential | batch  | speedup |
|--------|------------|--------|---------|
| 1      | 3.032      | 2.790  | 1.09x   |
| 2      | 6.425      | 3.218  | 2.00x   |
| 4      | 11.968     | 5.122  | 2.34x   |
| 8      | 23.922     | 6.487  | 3.69x   |
| 16     | 47.773     | 10.002 | 4.78x   |
| 32     | 95.570     | 16.601 | 5.76x   |
| 64     | 210.959    | 29.484 | 7.15x   |
| 128    | 422.217    | 52.170 | 8.09x   |
| 256    | 762.874    | 85.164 | 8.96x   |

Sequential verification scales linearly with proof count.
Batch verification scales sub-linearly.

Parallel prepare + final sequential accumulate

NOTE: Parallel preparation can yield roughly an extra 2x speedup.
The `parallel` crate feature does not enable this; downstream users can perform parallel preparation themselves. Each prepared proof (`PreparedBatchItem`) takes ~3 KB of memory, which can become a significant hidden cost when preparing big batches, so to save memory it may be preferable to accumulate every X proofs rather than preparing the entire batch at once (see the sketch below).
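
A chunked variant of the earlier sketch, bounding peak memory to one chunk of prepared items; same assumed names as before:

```rust
use rayon::prelude::*;

const CHUNK: usize = 64; // tune to the available memory budget

fn verify_batch_chunked(ring_verifier: &RingVerifier, proofs: Vec<RingProof>) -> bool {
    let mut batch = BatchVerifier::new(ring_verifier); // hypothetical
    for chunk in proofs.chunks(CHUNK) {
        // Prepare one chunk in parallel (~3 KB per prepared item)...
        let prepared: Vec<_> = chunk
            .par_iter()
            .map(|proof| batch.prepare(proof.clone())) // hypothetical
            .collect();
        // ...then drain it into the accumulator before preparing the next one.
        for item in prepared {
            batch.push_prepared(item);
        }
    }
    batch.verify() // hypothetical
}
```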

@davxy requested review from drskalman and swasilyev (January 28, 2026, 17:48)
Comment on lines +111 to +128
let (challenges, mut rng) = self.verifier.plonk_verifier.restore_challenges(
&result,
&proof,
// '1' accounts for the quotient polynomial that is aggregated together with the columns
PiopVerifier::<E::ScalarField, <KZG<E> as PCS<_>>::C, Affine<J>>::N_COLUMNS + 1,
PiopVerifier::<E::ScalarField, <KZG<E> as PCS<_>>::C, Affine<J>>::N_CONSTRAINTS,
);
let seed = self.verifier.piop_params.seed;
let seed_plus_result = (seed + result).into_affine();
let domain_at_zeta = self.verifier.piop_params.domain.evaluate(challenges.zeta);
let piop = PiopVerifier::<_, _, Affine<J>>::init(
domain_at_zeta,
self.verifier.fixed_columns_committed.clone(),
proof.column_commitments.clone(),
proof.columns_at_zeta.clone(),
(seed.x, seed.y),
(seed_plus_result.x, seed_plus_result.y),
);
@ggwpez commented Jan 29, 2026

Maybe you can pull this out into a `prepare_proof` if it's slow? Then we can run it in parallel for all proofs.

And if the accumulate is commutative, then maybe we can do it in log2(n) depth with a binary reduction instead of a normal loop, to parallelize it.
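
(For reference, rayon's `reduce_with` already performs a tree-shaped reduction of exactly this kind; a minimal sketch, assuming a hypothetical commutative `merge` operation on the accumulator:)

```rust
use rayon::prelude::*;

// `merge` is a hypothetical commutative combine of two accumulators;
// rayon splits the reduction into a binary tree across worker threads.
fn reduce_accumulators<A, F>(accs: Vec<A>, merge: F) -> Option<A>
where
    A: Send,
    F: Fn(A, A) -> A + Sync + Send,
{
    accs.into_par_iter().reduce_with(merge)
}
```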

@davxy replied:

Great idea! The timings have been reduced by half.

@davxy commented Jan 29, 2026

Batch size = 100

Sequential

Proofs push: 59.26215ms
Unprepared batch verification: 38.723946ms
Total time: 98.002929ms

With parallel prepare

Proofs prepare: 2.816633ms
Proofs push prepared: 739.797µs
Prepared batch verification: 38.691304ms
Total time: 42.267331ms

@ggwpez commented Jan 29, 2026

Nice. I guess we won't be able to use this from the runtime, but at least we know the theoretical optimum and can use it to inform our decisions about host calls.

@davxy replied:

Yeah, not from the runtime.
But this is useful for JAM ticket verification :-)

@davxy requested a review from ggwpez (January 29, 2026, 17:56)
@davxy requested review from burdges and removed the request for ggwpez (January 29, 2026, 18:43)
Comment on lines +147 to +157
// Pick some entropy from plonk verifier for later usage
let mut entropy = [0_u8; 32];
rng.fill_bytes(&mut entropy);

PreparedBatchItem {
piop,
proof,
challenges,
entropy,
}
}
@davxy commented Jan 29, 2026

@swasilyev @burdges @drskalman Need some extra attention here.
In practice, instead of immediately using the returned rng, we pick some randomness from it to be used later in `push_prepared`.

Comment on lines +159 to +164
pub fn push_prepared(&mut self, item: PreparedBatchItem<E, J>) {
let mut ts = self.verifier.plonk_verifier.transcript_prelude.clone();
ts._add_serializable(b"batch-entropy", &item.entropy);
self.acc
.accumulate(item.piop, item.proof, item.challenges, &mut ts.to_rng());
}
@davxy commented Jan 29, 2026

@swasilyev @burdges @drskalman here I take that randomness back in order to:

  • extend the verifier transcript
  • use the derived rng in accumulate

Comment on lines +16 to +18
# TODO: restore w3f once https://github.com/w3f/fflonk/pull/46 gets merged
# w3f-pcs = { git = "https://github.com/w3f/fflonk", default-features = false }
w3f-pcs = { git = "https://github.com/davxy/fflonk", default-features = false }

TODO before we merge

@davxy changed the title from "BatchVerifier structure" to "BatchVerifier" on Feb 2, 2026