
Commit 77a858c

drozdziak1 and jayantk authored
Accumulator Support and Integration Testing (#62)
* adding stuff
* add accumulator thing
* tests: New oracle.so, add accumulator *.so, gen accumulator key
* exporter: add transaction statuses logging
* uh oh
* stuff
* update oracle
* Update oracle.so, re-enable pub tx failure, more updates in tests
* aggregate now updates
* CPI is invoked
* tests: fix wrong *.so for accumulator, bypassed checks in binaries
* integration_tests: WIP accumulator initialize() call
* update stuff
* agent: oracle auth, tests: regen client, setup auth, new oracle.so
* add anchorpy
* it works
* stuff
* exporter: re-enable preflight, tests: hardcoding my thing this time!
* Clean integration tests, new accumulator address, agent logging
* exporter.rs: restore rpc calls to their former infallible glory
* exporter: fix missing UPDATE_PRICE_NO_FAIL_ON_ERROR
* test_integration.py: bring back solana logs
* message_buffer -> message_buffer_client_codegen
* move prebuilt artifacts to `program-binaries`, add md5 verification
* README.md: replace other README with testing section, config docs
* exporter: Remove code comment, oracle PDA log statement
* integration-tests/pyproject.toml: Point at the root readme

---------

Co-authored-by: Jayant Krishnamurthy <[email protected]>
1 parent c0a3cd1 · commit 77a858c

34 files changed: +2873 −271 lines

.dockerignore

Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
+target
+Dockerfile

.pre-commit-config.yaml

Lines changed: 6 additions & 0 deletions
@@ -12,3 +12,9 @@ repos:
         language: "rust"
         entry: cargo +nightly fmt
         pass_filenames: false
+      - id: integration-test-checksums
+        name: Integration Test Artifact Checksums
+        language: "system"
+        files: integration-tests/program-binaries/.*\.(json|so|md5sum)$
+        entry: md5sum --check canary.md5sum
+        pass_filenames: false
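This hook can also be exercised by hand. A quick sketch (assuming `pre-commit` is installed; the hook id comes from the config above):

```shell
# Run only the checksum hook against the whole tree
$ pre-commit run integration-test-checksums --all-files

# Or invoke the underlying check directly from the repository root
$ md5sum --check canary.md5sum
```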

README.md

Lines changed: 95 additions & 13 deletions
@@ -10,20 +10,27 @@ Note that only permissioned publishers can publish data to the network. Please r
 
 Prerequisites: Rust 1.68 or higher. A Unix system is recommended.
 
-```bash
-# Install OpenSSL
-apt install libssl-dev
+```shell
+# Install OpenSSL (Debian-based systems)
+$ apt install libssl-dev
+
+# Install Rust
+$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+$ rustup default 1.68 # Optional
 
 # Build the project. This will produce a binary at target/release/agent
-cargo build --release
+$ cargo build --release
 ```
 
 ## Configure
-Configuration is managed through a configuration file. An example configuration file with sensible defaults can be found at [config/config.toml](config/config.toml).
+The agent takes a single `--config` CLI option, pointing at
+`config/config.toml` by default. An example configuration is provided
+there, containing a minimal set of mandatory options and documentation
+comments for optional settings. **The config file must exist.**
 
-The logging level can be configured at runtime through the `RUST_LOG` environment variable, using [env_logger](https://docs.rs/env_logger/latest/env_logger/)'s scheme. For example, to log at `debug` instead of the default `info` level, set `RUST_LOG=debug`.
-
-## Run
+The logging level can be configured at runtime
+through the `RUST_LOG` environment variable using the standard
+`error|warn|info|debug|trace` levels.
 
 ### Key Store
 If you already have a key store set up, you can skip this step. If you haven't, you will need to create one before publishing data. A key store contains the cryptographic keys needed to publish data. Once you have a key store set up, please ensure that the configuration file mentioned above contains the correct path to your key store.
@@ -44,11 +51,86 @@ PYTH_KEY_ENV=devnet # Can be devnet, testnet or mainnet
 ./scripts/init_key_store.sh $PYTH_KEY_ENV $PYTH_KEY_STORE
 ```
 
-### API Server
+## Run
+`cargo run --release -- --config <your_config.toml>` will build and run the agent in a single step.
+
+## Publishing API
+A running agent will expose a WebSocket serving the JRPC publishing API documented [here](https://docs.pyth.network/publish-data/pyth-client-websocket-api). See `config/config.toml` for related settings.
+
+# Development
+## Unit Testing
+A collection of Rust unit tests is provided, run with `cargo test`.
+
+## Integration Testing
+In `integration-tests`, we provide end-to-end tests for the Pyth
+`agent` binary against a running `solana-test-validator` with the
+Pyth oracle deployed to it. Optionally, the accumulator message buffer
+program can be deployed and used to validate accumulator CPI
+correctness end-to-end (see configuration options below). Prebuilt
+binaries are provided manually in `integration-tests/program-binaries`
+- see below for more context.
+
+### Running Integration Tests
+The tests are implemented as a Python package containing a `pytest`
+test suite, managed with [Poetry](https://python-poetry.org/) under
+Python >3.10. Use the following commands to install and run them:
 
 ```bash
-# Run the agent binary, which will start a JRPC websocket API server.
-./target/release/agent --config config/config.toml
+cd integration-tests/
+poetry install
+poetry run pytest -s --log-cli-level=debug
 ```
 
-### Publish Data
-You can now publish data to the Pyth Network using the JRPC websocket API documented [here](https://docs.pyth.network/publish-data/pyth-client-websocket-api).
+### Optional Integration Test Configuration
+* `USE_ACCUMULATOR`, off by default - when this env is set, the test
+  framework also deploys the accumulator program
+  (`message_buffer.so`), initializes it and configures the agent to
+  make accumulator-enabled calls into the oracle
+* `SOLANA_TEST_VALIDATOR`, systemwide `solana-test-validator` by
+  default - when this env is set, the specified binary is used as the
+  test validator. This is especially useful with `USE_ACCUMULATOR`,
+  enabling lifelike accumulator output from the `pythnet` validator.
+
+### Testing Setup Overview
+For each test's setup in `integration-tests/tests/test_integration.py`, we:
+* Start `solana-test-validator` with prebuilt Solana programs deployed
+* Generate and fund test Solana keypairs
+* Initialize the oracle program - allocate test price feeds, assign
+  publishing permissions. This is done using the dedicated [`program-admin`](https://github.com/pyth-network/program-admin) Python package.
+* (Optionally) Initialize the accumulator message buffer program -
+  initialize a test authority, preallocate message buffers, assign
+  allowed program permissions to the oracle. This is done using a
+  generated client package in
+  `integration-tests/message_buffer_client_codegen`, created using
+  [AnchorPy](https://github.com/kevinheavey/anchorpy).
+* Build and run the agent
+
+This is followed by a specific test scenario,
+e.g. `test_update_price_simple` - a couple of publishing attempts with
+assertions of the expected on-chain state.
+
+### Prebuilt Artifact Safety
+In `integration-tests/program-binaries` we store the oracle and
+accumulator `*.so`s as well as the accumulator program's Anchor IDL
+JSON file. These artifacts are guarded against unexpected updates with
+a commit hook verifying `md5sum --check canary.md5sum`. Changes to the
+`integration-tests/message_buffer_client_codegen` package are much
+harder to miss in review and are tracked manually.
+
+### Updating Artifacts
+While you are free to experiment with the contents of
+`program-binaries`, commits for new or changed artifacts must include
+updated checksums in `canary.md5sum`. This can be done
+by running `md5sum` in the repository root:
+```shell
+$ md5sum integration-tests/program-binaries/*.json > canary.md5sum
+$ md5sum integration-tests/program-binaries/*.so >> canary.md5sum # NOTE: Mind the ">>" for appending
+```
+
+### Updating `message_buffer_client_codegen`
+After obtaining an updated `message_buffer.so` and `message_buffer_idl.json`, run:
+```shell
+$ cd integration-tests/
+$ poetry install # If you haven't run this already
+$ poetry run anchorpy client-gen --pdas program-binaries/message_buffer_idl.json message_buffer_client_codegen
+```
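Tying the new Run and logging sections together, a minimal launch sketch (paths are the repository defaults; `RUST_LOG` accepts the standard levels listed above):

```shell
# Build first with `cargo build --release`, then run at debug verbosity
$ RUST_LOG=debug ./target/release/agent --config config/config.toml
```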

canary.md5sum

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+b213ae5b2a4137238c47bdc5951fc95d  integration-tests/program-binaries/message_buffer_idl.json
+1d5b5e43be31e10f6e747b20ef77f4e9  integration-tests/program-binaries/message_buffer.so
+7c2782f6f58e9c91a95ce7c310a47927  integration-tests/program-binaries/oracle.so
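For reference, the commit hook's `md5sum --check` consumes entries in this `<digest>  <path>` format and prints a per-file verdict; a passing run looks roughly like this (illustrative output):

```shell
$ md5sum --check canary.md5sum
integration-tests/program-binaries/message_buffer_idl.json: OK
integration-tests/program-binaries/message_buffer.so: OK
integration-tests/program-binaries/oracle.so: OK
```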

config/config.toml

Lines changed: 14 additions & 9 deletions
@@ -73,27 +73,32 @@ key_store.root_path = "/path/to/keystore"
 # channel_capacities.logger_buffer = 10000
 
 
-# Path to publisher identity keypair. When the specified path is not
-# found on startup, the relevant primary/secondary network will expect
-# a remote-loaded keypair. See remote_keypair_loader options for
+# Relative path to publisher identity keypair
+# w.r.t. `key_store.root_path`. When the specified file is not found
+# on startup, the relevant primary/secondary network will expect a
+# remote-loaded keypair. See remote_keypair_loader options for
 # details.
 # key_store.publish_keypair_path = "publish_key_pair.json" # I exist, remote loading disabled
 # key_store.publish_keypair_path = "none" # I do not exist, remote loading activated for the network
 
+# Relative path to accumulator message buffer program ID. Setting this
+# value enables accumulator support on publishing transactions.
+# key_store.accumulator_program_key = <not set by default>
+
 # The interval with which to poll account information.
 # oracle.poll_interval_duration = "2m"
 
 # Whether subscribing to account updates over websocket is enabled
 # oracle.subscriber_enabled = true
 
-# Ask the RPC for up to this many product/price accounts in a
-# single request. Tune this setting if you're experiencing
-# timeouts on data fetching. In order to keep concurrent open
-# socket count at bay, the batches are looked up sequentially,
-# trading off overall time it takes to fetch all symbols.
+# Ask the Solana RPC for up to this many product/price accounts in a
+# single request. Tune this setting if you're experiencing timeouts on
+# data fetching. In order to keep concurrent open socket count at bay,
+# the batches are looked up sequentially, trading off overall time it
+# takes to fetch all symbols.
 # oracle.max_lookup_batch_size = 100
 
-# Duration of the interval at which to refresh the cached network state (current slot and blockhash).
+# How often to refresh the cached network state (current slot and blockhash).
 # It is recommended to set this to slightly less than the network's block time,
 # as the slot fetched will be used as the time of the price update.
 # exporter.refresh_network_state_interval_duration = "200ms"
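As a sketch of what an accumulator-enabled key store section might look like (the file name below is hypothetical; per the comments above, the value is a path relative to `key_store.root_path` referencing the message buffer program ID):

```toml
[primary_network]
key_store.root_path = "/path/to/keystore"
# Hypothetical file name - setting any value here enables accumulator
# support on publishing transactions.
key_store.accumulator_program_key = "accumulator_program_key.json"
```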

integration-tests/README.md

Lines changed: 0 additions & 20 deletions
This file was deleted.

integration-tests/agent_conf.toml

Lines changed: 3 additions & 3 deletions
@@ -1,7 +1,7 @@
+[metrics_server]
+bind_address="0.0.0.0:8888"
+
 [primary_network]
 key_store.root_path = "keystore"
 oracle.poll_interval_duration = "1s"
 exporter.transaction_monitor.poll_interval_duration = "1s"
-
-[metrics_server]
-bind_address="0.0.0.0:8888"

integration-tests/message_buffer_client_codegen/__init__.py

Whitespace-only changes.
Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
+from .message_buffer import MessageBuffer, MessageBufferJSON
+from .whitelist import Whitelist, WhitelistJSON
integration-tests/message_buffer_client_codegen/accounts/message_buffer.py

Lines changed: 99 additions & 0 deletions
@@ -0,0 +1,99 @@
+import typing
+from dataclasses import dataclass
+from solana.publickey import PublicKey
+from solana.rpc.async_api import AsyncClient
+from solana.rpc.commitment import Commitment
+import borsh_construct as borsh
+from anchorpy.coder.accounts import ACCOUNT_DISCRIMINATOR_SIZE
+from anchorpy.error import AccountInvalidDiscriminator
+from anchorpy.utils.rpc import get_multiple_accounts
+from ..program_id import PROGRAM_ID
+
+
+class MessageBufferJSON(typing.TypedDict):
+    bump: int
+    version: int
+    header_len: int
+    end_offsets: list[int]
+
+
+@dataclass
+class MessageBuffer:
+    discriminator: typing.ClassVar = b"\x19\xf4\x03\x05\xe1\xa5\x1d\xfa"
+    layout: typing.ClassVar = borsh.CStruct(
+        "bump" / borsh.U8,
+        "version" / borsh.U8,
+        "header_len" / borsh.U16,
+        "end_offsets" / borsh.U16[255],
+    )
+    bump: int
+    version: int
+    header_len: int
+    end_offsets: list[int]
+
+    @classmethod
+    async def fetch(
+        cls,
+        conn: AsyncClient,
+        address: PublicKey,
+        commitment: typing.Optional[Commitment] = None,
+        program_id: PublicKey = PROGRAM_ID,
+    ) -> typing.Optional["MessageBuffer"]:
+        resp = await conn.get_account_info(address, commitment=commitment)
+        info = resp.value
+        if info is None:
+            return None
+        if info.owner != program_id.to_solders():
+            raise ValueError("Account does not belong to this program")
+        bytes_data = info.data
+        return cls.decode(bytes_data)
+
+    @classmethod
+    async def fetch_multiple(
+        cls,
+        conn: AsyncClient,
+        addresses: list[PublicKey],
+        commitment: typing.Optional[Commitment] = None,
+        program_id: PublicKey = PROGRAM_ID,
+    ) -> typing.List[typing.Optional["MessageBuffer"]]:
+        infos = await get_multiple_accounts(conn, addresses, commitment=commitment)
+        res: typing.List[typing.Optional["MessageBuffer"]] = []
+        for info in infos:
+            if info is None:
+                res.append(None)
+                continue
+            if info.account.owner != program_id:
+                raise ValueError("Account does not belong to this program")
+            res.append(cls.decode(info.account.data))
+        return res
+
+    @classmethod
+    def decode(cls, data: bytes) -> "MessageBuffer":
+        if data[:ACCOUNT_DISCRIMINATOR_SIZE] != cls.discriminator:
+            raise AccountInvalidDiscriminator(
+                "The discriminator for this account is invalid"
+            )
+        dec = MessageBuffer.layout.parse(data[ACCOUNT_DISCRIMINATOR_SIZE:])
+        return cls(
+            bump=dec.bump,
+            version=dec.version,
+            header_len=dec.header_len,
+            end_offsets=dec.end_offsets,
+        )
+
+    def to_json(self) -> MessageBufferJSON:
+        return {
+            "bump": self.bump,
+            "version": self.version,
+            "header_len": self.header_len,
+            "end_offsets": self.end_offsets,
+        }
+
+    @classmethod
+    def from_json(cls, obj: MessageBufferJSON) -> "MessageBuffer":
+        return cls(
+            bump=obj["bump"],
+            version=obj["version"],
+            header_len=obj["header_len"],
+            end_offsets=obj["end_offsets"],
+        )
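A minimal usage sketch for this generated account client follows; the import path mirrors the generated package layout above, and the RPC URL and address are placeholders (a local test validator and the system program ID, respectively):

```python
import asyncio

from solana.publickey import PublicKey
from solana.rpc.async_api import AsyncClient

# Assumed import path, following the generated package layout
from message_buffer_client_codegen.accounts import MessageBuffer


async def main() -> None:
    conn = AsyncClient("http://127.0.0.1:8899")  # assumed local test validator
    # Placeholder address - substitute a real message buffer PDA
    address = PublicKey("11111111111111111111111111111111")
    buf = await MessageBuffer.fetch(conn, address)
    if buf is None:
        print("No message buffer account found at this address")
    else:
        # header_len and end_offsets describe where each message ends
        print(buf.to_json())
    await conn.close()


asyncio.run(main())
```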
