30 commits (changes shown from 7 commits)
5717045
Update commitment scheme trait to include compressed target field
JuI3s Nov 26, 2025
545301e
Compressed commitment implementation
JuI3s Nov 27, 2025
1965545
temp
JuI3s Nov 27, 2025
0b9169a
aggregate_chunks_compressed
JuI3s Nov 27, 2025
a49ebfa
Commit compressed in progress
JuI3s Nov 27, 2025
3e632f8
Fix Cargo.toml
JuI3s Nov 27, 2025
985e303
Dory Prover/Verifier setup
JuI3s Nov 27, 2025
2eb13d3
update toml
JuI3s Nov 27, 2025
b31cb94
fix toml
JuI3s Nov 27, 2025
3fd5217
fmt
JuI3s Nov 27, 2025
0da7b59
Disable public visibility of DoryBn254
JuI3s Nov 27, 2025
37f52ca
feat: customizable guest optimization level (#1135)
mathmasterzach Dec 1, 2025
148b6f8
Patch for dev
JuI3s Dec 1, 2025
3e65b6a
expect test
JuI3s Dec 1, 2025
fac5c10
temp
JuI3s Dec 1, 2025
b83d918
integration in progress
JuI3s Dec 2, 2025
f16a87b
Compression field size table summary.
JuI3s Dec 2, 2025
a485f03
Fix compressed prover/verifier transcript bug
JuI3s Dec 3, 2025
62bdecd
Implement DoryElement for compressed pairing GT
JuI3s Dec 3, 2025
7f7721b
feat(dory): Add GT element compression for proofs and commitments
JuI3s Dec 3, 2025
e5a21c7
Before deleting Jolt commitment
JuI3s Dec 3, 2025
01be1c4
temp
JuI3s Dec 3, 2025
dbee230
Temp
JuI3s Dec 4, 2025
2496a42
Refactor ProverOpeningAccumulator::reduce_and_prove_sumcheck_helper
JuI3s Dec 4, 2025
0b3ab94
temp
JuI3s Dec 5, 2025
c61027e
prove_compressed
JuI3s Dec 5, 2025
82f934e
Non compression e2e tests pass
JuI3s Dec 6, 2025
608c8f9
Temp added more compressed methods
JuI3s Dec 6, 2025
7b85f15
Temp add homomorphic reduction feature
JuI3s Dec 6, 2025
1ff23b7
Updated expected test results for compression
JuI3s Dec 7, 2025
169 changes: 83 additions & 86 deletions Cargo.lock

Large diffs are not rendered by default.

5 changes: 3 additions & 2 deletions Cargo.toml
@@ -127,14 +127,15 @@ ark-ff = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist
ark-ec = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout" }
ark-serialize = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout" }
allocative = { git = "https://github.com/facebookexperimental/allocative", rev = "85b773d85d526d068ce94724ff7a7b81203fc95e" }
dory-pcs = { git = "https://github.com/a16z/dory", branch = "dev/twist-shout" }
Inconsistency between Cargo.toml and Cargo.lock for the dory-pcs dependency. Cargo.toml adds a patch pointing at a16z/dory branch dev/twist-shout, but Cargo.lock shows the dependency resolved from JuI3s/dory branch compression-integration. Because the lockfile no longer matches the manifest, Cargo will re-resolve the dependency on the next build, which can fail or silently pull a source other than the one that was tested.

# Cargo.toml line 130 adds:
dory-pcs = { git = "https://github.com/a16z/dory", branch = "dev/twist-shout" }

# But Cargo.lock shows:
source = "git+https://github.com/JuI3s/dory?branch=compression-integration#..."

Update Cargo.toml to match the actual dependency being used:

dory-pcs = { git = "https://github.com/JuI3s/dory", branch = "compression-integration" }
Suggested change
dory-pcs = { git = "https://github.com/a16z/dory", branch = "dev/twist-shout" }
dory-pcs = { git = "https://github.com/JuI3s/dory", branch = "compression-integration" }

Spotted by Graphite Agent
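A cheap guard against this kind of drift is to check the source recorded in Cargo.lock against the expected repository before building. A sketch in Rust, operating on a hypothetical inline lockfile excerpt rather than the real file (`lock_source_matches` is illustrative, not part of any crate):

```rust
// Check whether a Cargo.lock excerpt resolves a package from the expected
// git source. The excerpt below is hypothetical, inlined for illustration.
fn lock_source_matches(lock_excerpt: &str, package: &str, expected_repo: &str) -> bool {
    let mut in_package = false;
    for line in lock_excerpt.lines() {
        let line = line.trim();
        if line.starts_with("name = ") {
            in_package = line == format!("name = \"{package}\"");
        }
        if in_package && line.starts_with("source = ") {
            return line.contains(expected_repo);
        }
    }
    false
}

fn main() {
    let excerpt = r#"
name = "dory-pcs"
source = "git+https://github.com/JuI3s/dory?branch=compression-integration#abc123"
"#;
    // The manifest points at a16z/dory, so this reports a mismatch.
    assert!(!lock_source_matches(excerpt, "dory-pcs", "a16z/dory"));
    assert!(lock_source_matches(excerpt, "dory-pcs", "JuI3s/dory"));
}
```

In a real workflow the same check falls out of `cargo update -p dory-pcs`, which refuses stale sources and rewrites the lockfile to match the manifest.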


[workspace.metadata.cargo-machete]
ignored = ["jolt-sdk"]

[workspace.dependencies]
# Cryptography and Math
ark-bn254 = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout", default-features = false }
ark-grumpkin = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout", default-features = false }
ark-grumpkin = { git = "https://github.com/JuI3s/arkworks-algebra", branch = "compression", default-features = false }
ark-ec = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout", default-features = false }
ark-ff = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout", default-features = false }
Inconsistent arkworks-algebra sources will cause version conflicts. Most dependencies use a16z/arkworks-algebra branch dev/twist-shout (lines 136, 138-140) while ark-grumpkin uses JuI3s/arkworks-algebra branch compression (line 137). Since these packages are tightly coupled, mixing sources will lead to type incompatibilities and compilation failures.

# Line 136: a16z repo
ark-bn254 = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout", ... }
# Line 137: JuI3s repo (different!)
ark-grumpkin = { git = "https://github.com/JuI3s/arkworks-algebra", branch = "compression", ... }
# Lines 138-140: back to a16z repo
ark-ec = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout", ... }

All arkworks dependencies should use the same repository and branch to ensure compatibility.

Suggested change
# Cryptography and Math
ark-bn254 = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout", default-features = false }
ark-grumpkin = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout", default-features = false }
ark-grumpkin = { git = "https://github.com/JuI3s/arkworks-algebra", branch = "compression", default-features = false }
ark-ec = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout", default-features = false }
ark-ff = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout", default-features = false }
# Cryptography and Math
ark-bn254 = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout", default-features = false }
ark-grumpkin = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout", default-features = false }
ark-ec = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout", default-features = false }
ark-ff = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout", default-features = false }

Spotted by Graphite Agent
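The type-incompatibility failure mode is worth spelling out: two builds of the same crate from different git sources are distinct crates to rustc, so their types never unify even when the source code is identical. A minimal self-contained sketch, with local modules standing in for the two forks (`takes_a16z` is hypothetical):

```rust
// Two local modules stand in for the two git sources of the same crate:
// identical definitions, but distinct types to the compiler.
mod a16z_fork {
    #[derive(Debug, PartialEq)]
    pub struct Fr(pub u64);
}

mod jui3s_fork {
    #[derive(Debug, PartialEq)]
    pub struct Fr(pub u64);
}

fn takes_a16z(x: a16z_fork::Fr) -> u64 {
    x.0
}

fn main() {
    assert_eq!(takes_a16z(a16z_fork::Fr(7)), 7);
    let _other = jui3s_fork::Fr(7);
    // takes_a16z(jui3s_fork::Fr(7)); // error[E0308]: expected `a16z_fork::Fr`
}
```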

ark-serialize = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout", default-features = false, features = [
@@ -145,7 +146,7 @@ ark-std = { version = "0.5.0", default-features = false }
sha3 = "0.10.8"
blake2 = "0.10"
blake3 = { version = "1.5.0" }
jolt-optimizations = { git = "https://github.com/a16z/arkworks-algebra", branch = "dev/twist-shout" }
jolt-optimizations = { git = "https://github.com/JuI3s/arkworks-algebra", branch = "compression" }
dory = { package = "dory-pcs", version = "0.1.0", features = ["backends", "cache", "disk-persistence"] }

# Core Utilities
34 changes: 34 additions & 0 deletions jolt-core/src/poly/commitment/commitment_scheme.rs
@@ -13,6 +13,15 @@ pub trait CommitmentScheme: Clone + Sync + Send + 'static {
type Field: JoltField + Sized;
type ProverSetup: Clone + Sync + Send + Debug + CanonicalSerialize + CanonicalDeserialize;
type VerifierSetup: Clone + Sync + Send + Debug + CanonicalSerialize + CanonicalDeserialize;
type CompressedCommitment: Default
+ Debug
+ Sync
+ Send
+ PartialEq
+ CanonicalSerialize
+ CanonicalDeserialize
+ AppendToTranscript
+ Clone;
type Commitment: Default
+ Debug
+ Sync
@@ -50,6 +59,22 @@ pub trait CommitmentScheme: Clone + Sync + Send + 'static {
setup: &Self::ProverSetup,
) -> (Self::Commitment, Self::OpeningProofHint);

/// Commits to a multilinear polynomial using the provided setup, where the commitment is compressed.
///
/// # Arguments
/// * `poly` - The multilinear polynomial to commit to
/// * `setup` - The prover setup for the commitment scheme
///
/// # Returns
/// A tuple containing the compressed commitment to the polynomial and a hint that can be used
/// to optimize opening proof generation
fn commit_compressed(
_poly: &MultilinearPolynomial<Self::Field>,
_setup: &Self::ProverSetup,
) -> (Self::CompressedCommitment, Self::OpeningProofHint) {
panic!("`commit_compressed` is not implemented for this commitment scheme. CompressedCommitment of type `{}` not supported.", std::any::type_name::<Self::CompressedCommitment>());
}
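The default-panic pattern used here lets every existing scheme compile unchanged while compression support stays opt-in per implementor. A self-contained miniature of the same pattern (the `Scheme` trait and `MockScheme` below are hypothetical stand-ins, not the real Jolt types):

```rust
// Simplified stand-in for the CommitmentScheme pattern above: the trait
// ships a panicking default so schemes without compression still compile,
// and an implementor opts in by overriding it.
use std::fmt::Debug;

trait Scheme {
    type Commitment: Debug;
    type CompressedCommitment: Debug;

    fn commit(data: &[u64]) -> Self::Commitment;

    // Mirrors the panicking `commit_compressed` default above.
    fn commit_compressed(_data: &[u64]) -> Self::CompressedCommitment {
        panic!(
            "`commit_compressed` is not implemented for this commitment scheme. \
             CompressedCommitment of type `{}` not supported.",
            std::any::type_name::<Self::CompressedCommitment>()
        );
    }
}

struct MockScheme;

impl Scheme for MockScheme {
    type Commitment = u64;
    type CompressedCommitment = u32;

    fn commit(data: &[u64]) -> u64 {
        data.iter().sum()
    }

    // Opting in: the "compressed" commitment is just a narrower encoding here.
    fn commit_compressed(data: &[u64]) -> u32 {
        Self::commit(data) as u32
    }
}

fn main() {
    assert_eq!(MockScheme::commit(&[1, 2, 3]), 6);
    assert_eq!(MockScheme::commit_compressed(&[1, 2, 3]), 6u32);
}
```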

/// Commits to multiple multilinear polynomials in batch.
///
/// # Arguments
@@ -147,4 +172,13 @@ pub trait StreamingCommitmentScheme: CommitmentScheme {
onehot_k: Option<usize>,
tier1_commitments: &[Self::ChunkState],
) -> (Self::Commitment, Self::OpeningProofHint);

/// Compute tier 2 commitment from accumulated tier 1 commitments, where the output commitment is compressed.
fn aggregate_chunks_compressed(
_setup: &Self::ProverSetup,
_onehot_k: Option<usize>,
_tier1_commitments: &[Self::ChunkState],
) -> (Self::CompressedCommitment, Self::OpeningProofHint) {
panic!("`aggregate_chunks_compressed` is not implemented for this commitment scheme. CompressedCommitment of type `{}` not supported.", std::any::type_name::<Self::CompressedCommitment>());
}
}
161 changes: 148 additions & 13 deletions jolt-core/src/poly/commitment/dory/commitment_scheme.rs
@@ -3,19 +3,21 @@
use super::dory_globals::DoryGlobals;
use super::jolt_dory_routines::{JoltG1Routines, JoltG2Routines};
use super::wrappers::{
jolt_to_ark, ArkDoryProof, ArkFr, ArkG1, ArkGT, ArkworksProverSetup, ArkworksVerifierSetup,
JoltToDoryTranscript, BN254,
jolt_to_ark, ArkDoryProof, ArkFr, ArkG1, ArkG2, ArkGT, ArkGTCompressed, JoltBn254,
JoltToDoryTranscript,
};
use crate::poly::commitment::dory::setup::{DoryProverSetup, DoryVerifierSetup};
use crate::{
field::JoltField,
poly::commitment::commitment_scheme::{CommitmentScheme, StreamingCommitmentScheme},
poly::multilinear_polynomial::MultilinearPolynomial,
transcripts::Transcript,
utils::{errors::ProofVerifyError, math::Math, small_scalar::SmallScalar},
};
use ark_bn254::{G1Affine, G1Projective};
use ark_bn254::{Bn254 as ArkBn254, G1Affine, G1Projective};
use ark_ec::pairing::{CompressedPairing, MillerLoopOutput, Pairing};
use ark_ec::CurveGroup;
use ark_ff::Zero;
use ark_ff::{One, Zero};
use dory::primitives::{
arithmetic::{Group, PairingCurve},
poly::Polynomial,
@@ -26,20 +28,21 @@ use std::borrow::Borrow;
use tracing::trace_span;

#[derive(Clone)]
pub struct DoryCommitmentScheme;
pub struct DoryCommitmentScheme {}

impl CommitmentScheme for DoryCommitmentScheme {
type Field = ark_bn254::Fr;
type ProverSetup = ArkworksProverSetup;
type VerifierSetup = ArkworksVerifierSetup;
type ProverSetup = DoryProverSetup;
type VerifierSetup = DoryVerifierSetup;
type Commitment = ArkGT;
type Proof = ArkDoryProof;
type BatchedProof = Vec<ArkDoryProof>;
type OpeningProofHint = Vec<ArkG1>;
type CompressedCommitment = ArkGTCompressed;

fn setup_prover(max_num_vars: usize) -> Self::ProverSetup {
let _span = trace_span!("DoryCommitmentScheme::setup_prover").entered();
let setup = ArkworksProverSetup::new_from_urs(&mut OsRng, max_num_vars);
let setup = DoryProverSetup::new_from_urs(&mut OsRng, max_num_vars);

DoryGlobals::init_prepared_cache(&setup.g1_vec, &setup.g2_vec);

Expand All @@ -64,7 +67,28 @@ impl CommitmentScheme for DoryCommitmentScheme {

let (tier_2, row_commitments) = <MultilinearPolynomial<ark_bn254::Fr> as Polynomial<
ArkFr,
>>::commit::<BN254, JoltG1Routines>(
>>::commit::<JoltBn254, JoltG1Routines>(
poly, nu, sigma, setup
)
.expect("commitment should succeed");

(tier_2, row_commitments)
}

fn commit_compressed(
poly: &MultilinearPolynomial<ark_bn254::Fr>,
setup: &Self::ProverSetup,
) -> (Self::CompressedCommitment, Self::OpeningProofHint) {
let _span = trace_span!("DoryCommitmentScheme::commit_compressed").entered();

let num_cols = DoryGlobals::get_num_columns();
let num_rows = DoryGlobals::get_max_num_rows();
let sigma = num_cols.log_2();
let nu = num_rows.log_2();

let (tier_2, row_commitments) = <MultilinearPolynomial<ark_bn254::Fr> as Polynomial<
ArkFr,
>>::commit_compressed::<JoltBn254, JoltG1Routines>(
poly, nu, sigma, setup
)
.expect("commitment should succeed");
@@ -118,7 +142,7 @@ impl CommitmentScheme for DoryCommitmentScheme {

let mut dory_transcript = JoltToDoryTranscript::<ProofTranscript>::new(transcript);

dory::prove::<ArkFr, BN254, JoltG1Routines, JoltG2Routines, _, _>(
dory::prove::<ArkFr, JoltBn254, JoltG1Routines, JoltG2Routines, _, _>(
poly,
&ark_point,
row_commitments,
@@ -153,7 +177,7 @@ impl CommitmentScheme for DoryCommitmentScheme {

let mut dory_transcript = JoltToDoryTranscript::<ProofTranscript>::new(transcript);

dory::verify::<ArkFr, BN254, JoltG1Routines, JoltG2Routines, _>(
dory::verify::<ArkFr, JoltBn254, JoltG1Routines, JoltG2Routines, _>(
*commitment,
ark_eval,
&ark_point,
@@ -315,17 +339,128 @@ impl StreamingCommitmentScheme for DoryCommitmentScheme {
}

let g2_bases = &setup.g2_vec[..row_commitments.len()];
let tier_2 = <BN254 as PairingCurve>::multi_pair_g2_setup(&row_commitments, g2_bases);
let tier_2 = JoltBn254::multi_pair_g2_setup(&row_commitments, g2_bases);

(tier_2, row_commitments)
} else {
let row_commitments: Vec<ArkG1> =
chunks.iter().flat_map(|chunk| chunk.clone()).collect();

let g2_bases = &setup.g2_vec[..row_commitments.len()];
let tier_2 = JoltBn254::multi_pair_g2_setup(&row_commitments, g2_bases);

(tier_2, row_commitments)
}
}

#[tracing::instrument(
skip_all,
name = "DoryCommitmentScheme::aggregate_chunks_compressed"
)]
fn aggregate_chunks_compressed(
setup: &Self::ProverSetup,
onehot_k: Option<usize>,
chunks: &[Self::ChunkState],
) -> (Self::CompressedCommitment, Self::OpeningProofHint) {
if let Some(K) = onehot_k {
let row_len = DoryGlobals::get_num_columns();
let T = DoryGlobals::get_T();
let rows_per_k = T / row_len;
let num_rows = K * T / row_len;

let mut row_commitments = vec![ArkG1(G1Projective::zero()); num_rows];
for (chunk_index, commitments) in chunks.iter().enumerate() {
row_commitments
.par_iter_mut()
.skip(chunk_index)
.step_by(rows_per_k)
.zip(commitments.par_iter())
.for_each(|(dest, src)| *dest = *src);
}

let g2_bases = &setup.g2_vec[..row_commitments.len()];
let tier_2 = multi_pair_g2_setup_optimized_compressed(&row_commitments, g2_bases);

(tier_2, row_commitments)
} else {
let row_commitments: Vec<ArkG1> =
chunks.iter().flat_map(|chunk| chunk.clone()).collect();

let g2_bases = &setup.g2_vec[..row_commitments.len()];
let tier_2 = <BN254 as PairingCurve>::multi_pair_g2_setup(&row_commitments, g2_bases);
let tier_2 = multi_pair_g2_setup_optimized_compressed(&row_commitments, g2_bases);

(tier_2, row_commitments)
}
}
}

fn determine_chunk_size(total: usize) -> usize {
const MIN_CHUNK: usize = 32;
const MAX_CHUNK: usize = 128;

if total < MIN_CHUNK {
return total;
}

let num_threads = rayon::current_num_threads();
let chunk = total.div_ceil(num_threads);
chunk.clamp(MIN_CHUNK, MAX_CHUNK)
}
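The heuristic above aims for roughly one chunk per rayon thread while clamping chunk sizes to the [32, 128] range. A standalone copy with the thread count passed explicitly (a hypothetical signature, chosen to make the behavior deterministic) shows the three regimes:

```rust
// Self-contained copy of the chunk-size heuristic above, with the thread
// count as an explicit parameter instead of rayon::current_num_threads().
fn determine_chunk_size(total: usize, num_threads: usize) -> usize {
    const MIN_CHUNK: usize = 32;
    const MAX_CHUNK: usize = 128;

    if total < MIN_CHUNK {
        return total;
    }

    let chunk = total.div_ceil(num_threads);
    chunk.clamp(MIN_CHUNK, MAX_CHUNK)
}

fn main() {
    assert_eq!(determine_chunk_size(10, 8), 10); // below MIN_CHUNK: take everything
    assert_eq!(determine_chunk_size(100, 8), 32); // ceil(100/8) = 13, clamped up to 32
    assert_eq!(determine_chunk_size(2048, 8), 128); // ceil(2048/8) = 256, clamped down
}
```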

/// Optimized multi-pairing dispatch for G2 from setup
fn multi_pair_g2_setup_optimized_compressed(ps: &[ArkG1], qs: &[ArkG2]) -> ArkGTCompressed {
let combined = multi_pair_g1_setup_parallel(ps, qs);

let result = ArkBn254::compressed_final_exponentiation(combined)
.expect("Final exponentiation should not fail");
ArkGTCompressed(result)
}

/// Parallel multi-pairing with G1 from setup (uses cache if available)
#[tracing::instrument(skip_all, name = "multi_pair_g1_setup_parallel", fields(len = ps.len(), chunk_size = determine_chunk_size(ps.len())))]
fn multi_pair_g1_setup_parallel(
ps: &[ArkG1],
qs: &[ArkG2],
) -> MillerLoopOutput<ark_ec::bn::Bn<ark_bn254::Config>> {
use ark_bn254::G1Affine;
use ark_bn254::G2Affine;
use rayon::prelude::*;

let chunk_size = determine_chunk_size(ps.len());

// NOTE: unlike the dory arkworks implementation, no prepared-point cache is used here.

qs.par_chunks(chunk_size)
.enumerate()
.map(|(chunk_idx, qs_chunk)| {
let start_idx = chunk_idx * chunk_size;
let end_idx = start_idx + qs_chunk.len();

let qs_prep: Vec<<ArkBn254 as ark_ec::pairing::Pairing>::G2Prepared> = qs_chunk
.iter()
.map(|q| {
let affine: G2Affine = q.0.into();
affine.into()
})
.collect();

let ps_prep: Vec<<ArkBn254 as ark_ec::pairing::Pairing>::G1Prepared> = ps
[start_idx..end_idx]
.iter()
.map(|p| {
let affine: G1Affine = p.0.into();
affine.into()
})
.collect();

ArkBn254::multi_miller_loop(ps_prep, qs_prep)
})
.reduce(
|| {
ark_ec::pairing::MillerLoopOutput(
<<ArkBn254 as ark_ec::pairing::Pairing>::TargetField>::one(),
)
},
|a, b| ark_ec::pairing::MillerLoopOutput(a.0 * b.0),
)
}
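The `reduce` above combines per-chunk Miller loop outputs multiplicatively, seeding each rayon worker with the identity element of the target field. The same map-reduce-with-identity shape in miniature, with plain integers standing in for field elements and sequential iterators for simplicity (`chunked_product` is illustrative only):

```rust
// Chunked map-reduce with a multiplicative identity, mirroring the
// par_chunks().map().reduce() structure above.
fn chunked_product(values: &[u64], chunk_size: usize) -> u64 {
    values
        .chunks(chunk_size)
        .map(|chunk| chunk.iter().product::<u64>()) // per-chunk partial result
        .fold(1u64, |acc, partial| acc * partial) // 1 plays the identity role
}

fn main() {
    assert_eq!(chunked_product(&[2, 3, 4, 5], 2), 120);
    assert_eq!(chunked_product(&[], 2), 1); // empty input yields the identity
}
```

Seeding with the identity matters because rayon's `reduce` may call the identity closure once per worker and fold partial results in any order; the combine step must therefore be associative with a true unit, which `MillerLoopOutput(one)` provides.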
5 changes: 4 additions & 1 deletion jolt-core/src/poly/commitment/dory/mod.rs
@@ -6,15 +6,18 @@
mod commitment_scheme;
mod dory_globals;
mod jolt_dory_routines;
mod serde;
mod setup;
mod wrappers;

#[cfg(test)]
mod tests;

pub use commitment_scheme::DoryCommitmentScheme;

Check warning on line 16 in jolt-core/src/poly/commitment/dory/mod.rs — GitHub Actions / fmt: Diff in /home/runner/work/jolt/jolt/jolt-core/src/poly/commitment/dory/mod.rs
pub use dory_globals::{DoryContext, DoryGlobals};
pub use jolt_dory_routines::{JoltG1Routines, JoltG2Routines};
pub use wrappers::{
ArkDoryProof, ArkFr, ArkG1, ArkG2, ArkGT, ArkworksProverSetup, ArkworksVerifierSetup,
JoltFieldWrapper, BN254,
DoryBN254, JoltBn254, JoltFieldWrapper,
};

Check warning on line 22 in jolt-core/src/poly/commitment/dory/mod.rs — GitHub Actions / fmt: Diff in /home/runner/work/jolt/jolt/jolt-core/src/poly/commitment/dory/mod.rs
pub use setup::{DoryProverSetup, DoryVerifierSetup};