feat: Final docs pass #49


Merged: 15 commits, Jun 24, 2024
6 changes: 4 additions & 2 deletions .github/actions/setup/action.yml
@@ -14,13 +14,15 @@ runs:
- name: Set env
shell: bash
run: |
echo "RUSTFLAGS=${{env.RUSTFLAGS}} --cfg tokio_unstable" | tee -a $GITHUB_ENV
echo "RUSTFLAGS=${{env.RUSTFLAGS}} --cfg tokio_unstable -C opt-level=3" | tee -a $GITHUB_ENV
echo "RUST_LOG=info" | tee -a $GITHUB_ENV
- uses: actions/setup-go@v5
with:
go-version: '1.22'
cache-dependency-path: "**/go.sum"
- uses: dtolnay/rust-toolchain@stable
- uses: dtolnay/rust-toolchain@master
with:
toolchain: nightly-2024-05-31
- uses: Swatinem/rust-cache@v2
with:
workspaces: "aptos -> target"
4 changes: 3 additions & 1 deletion .github/workflows/bench.yml
@@ -59,7 +59,9 @@ jobs:
uses: ./.github/actions/setup
with:
pull_token: ${{ secrets.REPO_TOKEN }}
- uses: dtolnay/rust-toolchain@nightly
- uses: dtolnay/rust-toolchain@master
with:
toolchain: nightly-2024-05-31
- name: Install extra deps
run: |
sudo apt-get update && sudo apt-get install -y python3-pip
4 changes: 3 additions & 1 deletion .github/workflows/bump-version-PR.yml
@@ -32,7 +32,9 @@ jobs:
uses: actions/checkout@v4

- name: Install Rust
uses: dtolnay/rust-toolchain@stable
uses: dtolnay/rust-toolchain@master
with:
toolchain: nightly-2024-05-31

- name: Install `tq-rs`
run: cargo install tq-rs
2 changes: 2 additions & 0 deletions .github/workflows/tag-release.yml
@@ -26,6 +26,8 @@ jobs:

- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0

- name: Get version
id: get-version
10 changes: 6 additions & 4 deletions aptos/core/README.md
@@ -22,19 +22,21 @@ chain. The code for this is located in the `aptos_test_utils` module.
To run tests, we recommend the following command:

```shell
cargo +nightly nextest run --verbose --release --profile ci --features aptos --package aptos-lc --no-capture
SHARD_BATCH_SIZE=0 cargo +nightly-2024-05-31 nextest run --verbose --release --profile ci --features aptos --package aptos-lc --no-capture
```

This command should be run with the following environment variable:

- `RUSTFLAGS="-C target-cpu=native --cfg tokio_unstable"`:
- `RUSTFLAGS="-C target-cpu=native --cfg tokio_unstable -C opt-level=3"`:
- `-C target-cpu=native`: This will ensure that the binary is optimized
for the CPU it is running on. This is very important
for [plonky3](https://github.com/plonky3/plonky3?tab=readme-ov-file#cpu-features) performance.
- `--cfg tokio_unstable`: This will enable the unstable features of the
Tokio runtime. This is necessary for aptos dependencies.
- `-C opt-level=3`: This turns on the maximum level of compiler optimizations.
- This can also be configured in `~/.cargo/config.toml` instead by adding:
```toml
[target.'cfg(all())']
rustflags = ["--cfg", "tokio_unstable", "-C", "target-cpu=native"]
```
rustflags = ["--cfg", "tokio_unstable", "-C", "target-cpu=native", "-C", "opt-level=3"]
```
- `SHARD_BATCH_SIZE=0`: Disables some checkpointing for faster proving at the cost of RAM.
4 changes: 2 additions & 2 deletions aptos/core/src/merkle/node.rs
@@ -78,7 +78,7 @@ impl SparseMerkleLeafNode {
///
/// * `key: HashValue` - The key of the leaf node.
/// * `value_hash: HashValue` - The hash of the value
/// stored in the leaf node.
/// stored in the leaf node.
///
/// # Returns
///
@@ -109,7 +109,7 @@ impl SparseMerkleLeafNode {
/// # Arguments
///
/// * `bytes: &[u8]` - A byte slice from which to create
/// the `SparseMerkleLeafNode`.
/// the `SparseMerkleLeafNode`.
///
/// # Returns
///
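A constructor like the `from_bytes` documented above can be sketched roughly as follows. This is an illustration only: the 64-byte layout (a 32-byte key followed by a 32-byte value hash) is an assumption made for the sketch, not the authoritative Aptos encoding.

```rust
// Hypothetical sketch: the 64-byte layout (key then value hash) is an
// assumption for illustration, not the authoritative Aptos encoding.
#[derive(Debug)]
pub struct SparseMerkleLeafNode {
    key: [u8; 32],        // key of the leaf node
    value_hash: [u8; 32], // hash of the value stored in the leaf node
}

impl SparseMerkleLeafNode {
    pub fn from_bytes(bytes: &[u8]) -> Result<Self, String> {
        if bytes.len() != 64 {
            return Err(format!("expected 64 bytes, got {}", bytes.len()));
        }
        let mut key = [0u8; 32];
        let mut value_hash = [0u8; 32];
        key.copy_from_slice(&bytes[..32]);
        value_hash.copy_from_slice(&bytes[32..]);
        Ok(Self { key, value_hash })
    }
}

fn main() {
    let mut bytes = [0u8; 64];
    bytes[0] = 0xAB; // first byte of the key
    bytes[63] = 0xCD; // last byte of the value hash
    let node = SparseMerkleLeafNode::from_bytes(&bytes).unwrap();
    assert_eq!(node.key[0], 0xAB);
    assert_eq!(node.value_hash[31], 0xCD);
    // Wrong-sized input is rejected rather than silently truncated.
    assert!(SparseMerkleLeafNode::from_bytes(&[0u8; 10]).is_err());
    println!("ok");
}
```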
4 changes: 2 additions & 2 deletions aptos/core/src/merkle/sparse_proof.rs
@@ -155,7 +155,7 @@ impl SparseMerkleProof {
/// # Arguments
///
/// * `bytes: &[u8]` - A byte slice from which to create
/// the `SparseMerkleProof`.
/// the `SparseMerkleProof`.
///
/// # Returns
///
@@ -211,7 +211,7 @@ impl SparseMerkleProof {
///
/// * `acc_hash: HashValue` - The current accumulator hash.
/// * `(sibling_hash, bit): (&HashValue, bool)` - The hash of the
/// sibling node and a boolean indicating whether the sibling is on the right.
/// sibling node and a boolean indicating whether the sibling is on the right.
///
/// # Returns
///
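The step documented here is the standard Merkle-path fold: the accumulator is combined with each sibling hash, with the pair ordered by the side bit. A minimal sketch of the idea (using `DefaultHasher` as a stand-in for the real cryptographic hash, so only the shape of the fold is meaningful):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for the real cryptographic hash; only the fold shape matters here.
fn combine(left: u64, right: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (left, right).hash(&mut h);
    h.finish()
}

fn verify_path(leaf_hash: u64, siblings: &[(u64, bool)]) -> u64 {
    // `bit == true` means the sibling sits on the right of the accumulator.
    siblings.iter().fold(leaf_hash, |acc, &(sibling, sibling_on_right)| {
        if sibling_on_right {
            combine(acc, sibling)
        } else {
            combine(sibling, acc)
        }
    })
}

fn main() {
    let leaf = 0xAA;
    let siblings = [(0x01, false), (0x02, true), (0x03, false)];
    let root = verify_path(leaf, &siblings);
    // The same leaf and path always reproduce the same root.
    assert_eq!(root, verify_path(leaf, &siblings));
    // A different leaf yields a different root (with overwhelming probability).
    assert_ne!(root, verify_path(0xBB, &siblings));
    println!("root: {root:x}");
}
```

The computed root is then compared against the expected root hash to accept or reject the proof.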
2 changes: 1 addition & 1 deletion aptos/core/src/types/epoch_state.rs
@@ -62,7 +62,7 @@ impl EpochState {
/// # Arguments
///
/// * `ledger_info: &LedgerInfoWithSignatures` - The ledger
/// info with signatures to verify.
/// info with signatures to verify.
///
/// # Returns
///
32 changes: 16 additions & 16 deletions aptos/core/src/types/mod.rs
@@ -8,28 +8,28 @@
//! ## Sub-modules
//!
//! - `block_info`: This sub-module contains the `BlockInfo`
//! structure and associated methods. It is used to represent
//! the block information in the blockchain.
//! structure and associated methods. It is used to represent
//! the block information in the blockchain.
//! - `epoch_state`: This sub-module contains the `EpochState`
//! structure and associated methods. It is used to represent
//! the epoch state in the blockchain.
//! structure and associated methods. It is used to represent
//! the epoch state in the blockchain.
//! - `ledger_info`: This sub-module contains the `LedgerInfo`
//! structure and associated methods. It is used to represent
//! the ledger information from the blockchain.
//! structure and associated methods. It is used to represent
//! the ledger information from the blockchain.
//! - `transaction`: This sub-module contains the `Transaction`
//! structure and associated methods. It is used to represent
//! the transactions in the blockchain.
//! structure and associated methods. It is used to represent
//! the transactions in the blockchain.
//! - `trusted_state`: This sub-module contains the `TrustedState`
//! structure and associated methods. It is used to represent the
//! trusted state for the blockchain from the Light Client perspective.
//! structure and associated methods. It is used to represent the
//! trusted state for the blockchain from the Light Client perspective.
//! - `validator`: This sub-module contains the `ValidatorConsensusInfo`
//! and `ValidatorVerifier` structures and associated methods. They are
//! used to represent the validator information from the blockchain
//! consensus.
//! and `ValidatorVerifier` structures and associated methods. They are
//! used to represent the validator information from the blockchain
//! consensus.
//! - `waypoint`: This sub-module contains the `Waypoint` and
//! `Ledger2WaypointConverter` structures and associated methods.
//! They are used to represent the waypoints over the blockchain
//! state that can be leveraged for bootstrapping securely.
//! `Ledger2WaypointConverter` structures and associated methods.
//! They are used to represent the waypoints over the blockchain
//! state that can be leveraged for bootstrapping securely.
//!
//! For more detailed information, users should refer to the specific
//! documentation for each sub-module.
2 changes: 1 addition & 1 deletion aptos/core/src/types/trusted_state.rs
@@ -122,7 +122,7 @@ impl TrustedState {
/// # Arguments
///
/// * `ledger_info: &LedgerInfoWithSignatures` - The
/// ledger info with signatures to verify.
/// ledger info with signatures to verify.
///
/// # Returns
///
5 changes: 3 additions & 2 deletions aptos/docs/src/SUMMARY.md
@@ -10,6 +10,7 @@ This section goes over the high-level concept behind the Aptos Light Client.
- [Epoch change proof](./design/epoch_change_proof.md)
- [Inclusion proof](./design/inclusion_proof.md)
- [Edge cases](./design/edge_cases.md)
- [Security considerations](./design/security.md)

# Components

@@ -36,10 +37,10 @@ This section goes over how to run the benchmarks to measure the performances of

- [Overview](./benchmark/overview.md)
- [Configuration](./benchmark/configuration.md)
- [Benchmarks individual proofs](./benchmark/proof.md)
- [Benchmark individual proofs](./benchmark/proof.md)
- [E2E benchmarks](./benchmark/e2e.md)
- [On-chain verification benchmarks](./benchmark/on_chain.md)

# Miscellaneous

- [Release / Hotfix process](./misc/release.md)
- [Release / Hotfix process](./misc/release.md)
38 changes: 24 additions & 14 deletions aptos/docs/src/benchmark/configuration.md
@@ -4,11 +4,23 @@ In this section we will cover the configuration that should be set to run the be
important to run the benchmarks on proper machines, such as the one described for the Proof Server in
the [Run the Light Client](../run/overview.md) section.

## Settings
## Requirements

The requirements to run the benchmarks are the same as the ones for the client. You will need to follow
the instructions listed [here](../run/configuration.md).

## Other settings

Here are the standard config variables that are worth setting for any benchmark:

- `RUSTFLAGS="-C target-cpu=native --cfg tokio_unstable"`
- `RUSTFLAGS="-C target-cpu=native --cfg tokio_unstable -C opt-level=3"`

This can also be configured in `~/.cargo/config.toml` by adding:
```toml
[target.'cfg(all())']
rustflags = ["--cfg", "tokio_unstable", "-C", "target-cpu=native", "-C", "opt-level=3"]
```

- `SHARD_SIZE=4194304`

The highest possible setting, giving the fewest shards. Because the compression phase dominates the timing of the
@@ -18,27 +30,25 @@ Here are the standard config variables that are worth setting for any benchmark:

This disables checkpointing, making proving faster at the expense of higher memory usage.

- `cargo +nightly`
- `cargo +nightly-2024-05-31`

This ensures you are on a nightly toolchain, overriding the local `rust-toolchain.toml` file. Nightly allows usage
of AVX512 instructions which is crucial for performance.
This ensures you are on a nightly toolchain. Nightly allows usage of AVX512 instructions, which is crucial for performance.
This is the same version set in `rust-toolchain.toml`. It's pinned to a specific release (`v1.80.0-nightly`) to prevent
unexpected issues caused by newer Rust versions.

- `cargo bench --release <...>`

Or otherwise specify compiler options via `RUSTFLAGS="-Copt-level=3 lto=true <...>"` or Cargo profiles
Make sure to always run in release mode with `--release`. Alternatively, specify the proper compiler options via
`RUSTFLAGS="-C opt-level=3 <...>"`, `~/.cargo/config.toml`, or Cargo profiles.

- `RUST_LOG=debug` _(optional)_

This prints out useful Sphinx metrics, such as cycle counts, iteration speed, proof size, etc.

## Requirements

The requirements to run the benchmarks are the same as the ones for the client. You can find those instructions
in [their dedicated section](../run/configuration.md).

## SNARK proofs

When running any tests or benchmarks that make Plonk proofs over BN254, the prover leverages some pre-built circuit
artifacts. Those circuits artifacts are generated when we release new versions of Sphinx and are made avaialble on a
remote storage. The current address for the storage can be
found [here](https://github.com/lurk-lab/sphinx/blob/dev/prover/src/install.rs).
artifacts. Those circuit artifacts are generated when we release new versions of Sphinx and are automatically
downloaded on first use. The current address for downloading the artifacts can be found
[here](https://github.com/lurk-lab/sphinx/blob/dev/prover/src/install.rs), but it should not be necessary to download
them manually.
72 changes: 45 additions & 27 deletions aptos/docs/src/benchmark/e2e.md
@@ -2,55 +2,73 @@

The end-to-end benchmark measures the time taken to send both proof generation requests to the Proof
Server, have a parallel computation happen, and receive the two proofs back. This benchmark is meant to simulate the
worst case scenario where the client has to generate two proofs in parallel.
worst case scenario where the client has to generate two proofs in parallel. It can run the proofs in sequence if the
benchmark is running on a single machine, to prevent resource exhaustion.

The benchmark can be found in
the [`proof-server`](https://github.com/lurk-lab/zk-light-clients/blob/dev/aptos/proof-server/benches/proof_server.rs)
crate. It can be run with the following command:

```bash
RUST_LOG="debug" RUSTFLAGS="-C target-cpu=native --cfg tokio_unstable" PRIMARY_ADDR="127.0.0.1:8080" SECONDARY_ADDR="127.0.0.1:8081" cargo +nightly bench --bench proof_server
SHARD_BATCH_SIZE=0 RUSTFLAGS="-C target-cpu=native --cfg tokio_unstable -C opt-level=3" PRIMARY_ADDR="127.0.0.1:8080" SECONDARY_ADDR="127.0.0.1:8081" cargo +nightly-2024-05-31 bench --bench proof_server
```

This benchmark will spawn the two servers locally and make two requests in parallel to them. This generates both proofs
at the same time in the same machine. In a production setting, the two prover servers will be in different machines, and
the two proofs will be generated in parallel.
To run the proofs serially instead, pass the `RUN_SERIAL=1` environment variable to the test. This report times that are
closer to a production setting where each proof is generated in parallel by a different machine.
This benchmark will spawn the two servers locally and make two requests in sequence to them. This generates both proofs
on the same machine, one after the other. In a production setting, the two prover servers would be on different machines,
and the two proofs would be generated in parallel. This returns times that are closer to a production setting, without
any resource exhaustion when generating the proofs.

It measures two main metrics for each proof:
To run the proofs in parallel even though they are running on a single machine, pass the `RUN_PARALLEL=1` environment variable
when running the benchmark. This is not recommended, as the total reported time for generating both proofs at the same time
on the same machine will be longer than in a properly configured production setting where the proofs are generated by different
machines.

- `e2e_proving_time`: Time taken to send both request to the Proof Server and generate both proofs.
The benchmark returns two main metrics for each proof:

- `e2e_proving_time`: Time in milliseconds taken to send both requests to the Proof Server and generate both proofs.
- `inclusion_proof`:
- `proving_time`: Time taken to generate the inclusion proof.
- `request_response_proof_size`: Size of the proof returned by the server.
- `proving_time`: Time in milliseconds taken to generate the inclusion proof.
- `request_response_proof_size`: Size of the proof in bytes returned by the server.
- `epoch_change_proof`:
- `proving_time`: Time taken to generate the epoch change proof.
- `request_response_proof_size`: Size of the proof returned by the server.
- `proving_time`: Time in milliseconds taken to generate the epoch change proof.
- `request_response_proof_size`: Size of the proof in bytes returned by the server.

For our [production configuration](../run/overview.md), we currently get the following results:

```json
{
e2e_proving_time: 107678,
inclusion_proof: {
proving_time: 107678,
request_response_proof_size: 20823443
"e2e_proving_time": 51489,
"inclusion_proof": {
"proving_time": 46636,
"request_response_proof_size": 22830628
},
epoch_change_proof: {
proving_time: 125169,
request_response_proof_size: 23088485
"epoch_change_proof": {
"proving_time": 51489,
"request_response_proof_size": 25482668
}
}
```
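As a rough sanity check on figures like the ones above (an illustration, not part of the benchmark harness): when the two proofs are generated in parallel on separate machines, the end-to-end time should approach the slower of the two, whereas a serial run approaches their sum:

```rust
fn main() {
    // Proving times in milliseconds, taken from the report above.
    let inclusion_ms: u64 = 46_636;
    let epoch_change_ms: u64 = 51_489;

    // Two machines (or RUN_PARALLEL=1): bounded by the slower proof.
    let parallel_ms = inclusion_ms.max(epoch_change_ms);
    // One machine, proofs generated one after the other: the sum of both.
    let serial_ms = inclusion_ms + epoch_change_ms;

    assert_eq!(parallel_ms, 51_489);
    assert_eq!(serial_ms, 98_125);
    println!("parallel: {parallel_ms} ms, serial: {serial_ms} ms");
}
```

Note that the reported `e2e_proving_time` above equals the slower of the two proving times, which suggests these particular figures were collected with the proofs running in parallel.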

> **Note**
>
> As the proof server is run with the `RUST_LOG=debug` environment variable, it is also possible to grab the inner
> metrics from Sphinx.

## SNARK proofs

To enable SNARK proving, just pass the environment variable `SNARK=1`:
To enable SNARK proving, just pass the environment variable `SNARK=1` when running:

```bash
RUN_SERIAL=1 SNARK=1 RUST_LOG="debug" RUSTFLAGS="-C target-cpu=native --cfg tokio_unstable" PRIMARY_ADDR="127.0.0.1:8080" SECONDARY_ADDR="127.0.0.1:8081" cargo +nightly bench --bench proof_server
SNARK=1 SHARD_BATCH_SIZE=0 RUSTFLAGS="-C target-cpu=native --cfg tokio_unstable -C opt-level=3" PRIMARY_ADDR="127.0.0.1:8080" SECONDARY_ADDR="127.0.0.1:8081" cargo +nightly-2024-05-31 bench --bench proof_server
```

For our [production configuration](../run/overview.md), we currently get the following results:

```json
{
"e2e_proving_time": 694809,
"inclusion_proof": {
"proving_time": 689228,
"request_response_proof_size": 18454
},
"epoch_change_proof": {
"proving_time": 694809,
"request_response_proof_size": 28661
}
}
```
11 changes: 6 additions & 5 deletions aptos/docs/src/benchmark/on_chain.md
@@ -3,10 +3,11 @@
Our Light Client is able to produce SNARK proofs that can be verified on-chain. This section will cover how to run the
benchmarks for the on-chain verification.

To be able to execute such tests our repository contains a project called `solidity` is based
To be able to execute such tests, the repository contains a project called `solidity` that is based
off [Foundry](https://github.com/foundry-rs/foundry) which demonstrates the Solidity verification using so-called
fixtures (JSON files) containing the proof data (proof itself, public values and verification key) required for running
the verification for both epoch-change and inclusion programs.
the verification for both epoch-change and inclusion programs. These fixtures are generated from a SNARK proof produced
by the proof servers, but they are currently meant for simple testing only.

The contracts used for testing can be found in the [sphinx-contracts](https://github.com/lurk-lab/sphinx-contracts)
repository which is used as a dependency.
@@ -59,7 +60,7 @@ export the fixture file to the relevant place (`solidity/contracts/src/plonk_fix
To run the `fixture-generator` for the inclusion program, execute the following command:

```bash
RUST_LOG=info RUSTFLAGS="-C target-cpu=native --cfg tokio_unstable" SHARD_SIZE=4194304 SHARD_BATCH_SIZE=0 cargo +nightly run --release --features aptos --bin generate-fixture -- --program inclusion
RUST_LOG=info RUSTFLAGS="-C target-cpu=native --cfg tokio_unstable -C opt-level=3" SHARD_SIZE=4194304 SHARD_BATCH_SIZE=0 cargo +nightly-2024-05-31 run --release --features aptos --bin generate-fixture -- --program inclusion
```

> **Tips**
@@ -69,7 +70,7 @@ RUST_LOG=info RUSTFLAGS="-C target-cpu=native --cfg tokio_unstable" SHARD_SIZE=4
> **Note**
>
> You might encounter an issue when updating the `sphinx-contracts` Foundry dependency; in this case, try manually
> specifying accessing the submodule via SSH
> specifying access to the submodule via SSH like this:
> ```
> git config submodule.aptos/solidity/contracts/lib/sphinx-contracts.url [email protected]:lurk-lab/sphinx-contracts
> ```
> ```