Summary
Index all ERC-20 Transfer events related to each pool that is discovered from a factory’s PoolCreated events—in a single table, without duplication—and be able to filter those transfers by source/destination address (e.g., only rows where from==pool or to==pool). Today I can subscribe to tokens by address, but I can’t easily constrain results to transfers involving the pool, nor merge token0+token1 streams into one sink table cleanly.
Motivation / Use case
For Uniswap/HyperSwap-like DEXes, each pool has two tokens (token0/token1) and a pool address.
TVL/flow analytics require:
- listing all transfers where the pool is from (withdrawal) or to (deposit),
- across both token0 and token1,
- deduplicated and stored in one table with a uniform schema.
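The deposit/withdrawal distinction above follows directly from which side of the transfer the pool sits on. A minimal sketch (made-up addresses, hypothetical row shape):

```python
def classify_flow(row, pool):
    """Label a transfer relative to a pool address.

    A transfer TO the pool is a deposit, FROM the pool a withdrawal;
    anything else does not affect the pool's TVL.
    """
    if row["to"] == pool:
        return "deposit"
    if row["from"] == pool:
        return "withdrawal"
    return "unrelated"

print(classify_flow({"from": "0xlp", "to": "0xpool"}, "0xpool"))   # deposit
print(classify_flow({"from": "0xpool", "to": "0xlp"}, "0xpool"))   # withdrawal
```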
Right now, it’s clunky to:
- discover tokens via PoolCreated and then subscribe to their ERC-20 Transfer events, filtered to only those rows where from/to equals the discovered pool;
- merge token0 and token1 results into a single table without building custom plumbing.
Feature request (two parts)
1. Discovered address filters
Allow event subscriptions to reference addresses discovered upstream (factory outputs) and then apply field-level filters on downstream events.
2. Union multiple subscriptions into one sink table
Let multiple “child” subscriptions (e.g., token0 and token1 ERC-20 streams) write into one table with a shared schema, with built-in dedup on (tx_hash, log_index) and optional unique_keys override.
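As a rough sketch of the intended union-plus-dedup semantics (the row shape and function are hypothetical, not an existing indexer API):

```python
def union_transfers(*streams, unique_keys=("tx_hash", "log_index")):
    """Merge several transfer streams into one table, dropping duplicates.

    Each stream is an iterable of dict rows sharing a uniform schema.
    Rows with the same unique key (default: (tx_hash, log_index)) are
    kept only once, mirroring the requested built-in dedup with an
    optional unique_keys override.
    """
    seen = set()
    merged = []
    for stream in streams:
        for row in stream:
            key = tuple(row[k] for k in unique_keys)
            if key not in seen:
                seen.add(key)
                merged.append(row)
    return merged

# The same Transfer log can land in both child streams (e.g. when a token
# is token0 of one pool and token1 of another); it must be emitted once.
token0_rows = [{"tx_hash": "0xaa", "log_index": 1, "from": "0xpool", "to": "0xlp"}]
token1_rows = [{"tx_hash": "0xaa", "log_index": 1, "from": "0xpool", "to": "0xlp"},
               {"tx_hash": "0xbb", "log_index": 3, "from": "0xlp", "to": "0xpool"}]
print(len(union_transfers(token0_rows, token1_rows)))  # 2
```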
The current implementation can capture only one input per factory and creates a separate table for each captured input. As a result we end up with two near-identical transfers tables (one for token0, one for token1), and we can't filter the Transfers by tx source/destination:
```yaml
- name: HyperswapV3TransfersToken0
  details:
    - network: hyperevm
      start_block: 11648
      end_block: 100000
      factory:
        name: HyperswapV3Factory
        address: 0xb1c0fa0b789320044a6f623cfe5ebda9562602e3
        abi: ./abis/HyperswapV3Factory.abi.json
        event_name: PoolCreated
        input_name: "token0"
      abi: ./abis/ERC20.abi.json
      include_events:
        - Transfer
- name: HyperswapV3TransfersToken1
  details:
    - network: hyperevm
      start_block: 11648
      end_block: 100000
      factory:
        name: HyperswapV3Factory
        address: 0xb1c0fa0b789320044a6f623cfe5ebda9562602e3
        abi: ./abis/HyperswapV3Factory.abi.json
        event_name: PoolCreated
        input_name: "token1"
      abi: ./abis/ERC20.abi.json
      include_events:
        - Transfer
```
Suggested YAML design (a sketch of the idea):
```yaml
- name: HyperswapV3Transfers
  details:
    - network: hyperevm
      start_block: 11648
      end_block: 100000
      factory:
        name: HyperswapV3Factory
        address: 0xb1c0fa0b789320044a6f623cfe5ebda9562602e3
        abi: ./abis/HyperswapV3Factory.abi.json
        event_name: PoolCreated
        input_name:
          - "token0"
          - "token1"
      abi: ./abis/ERC20.abi.json
      include_events:
        - name: Transfer
          # constrain contract to discovered tokens
          filter_by_contract:
            address: input_name
          # keep only rows involving the discovered pool addresses
          filter_by_fields:
            any:
              - eq: { field: "from", value: "pool_created.pool" }
              - eq: { field: "to", value: "pool_created.pool" }
```
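To make the intended `filter_by_fields` semantics concrete, here is a minimal evaluator for the `any`/`eq` predicate shape proposed above. This is a hypothetical helper for illustration only; the `context` mapping stands in for however the indexer would resolve symbolic values such as `pool_created.pool` to the address discovered from the factory event:

```python
def matches(row, filter_spec, context):
    """Evaluate a filter_by_fields-style predicate against one event row.

    filter_spec has the shape:
      {"any": [{"eq": {"field": ..., "value": ...}}, ...]}
    Values are first looked up in context so that symbolic references
    like "pool_created.pool" resolve to the discovered pool address;
    unknown values are compared literally.
    """
    def resolve(value):
        return context.get(value, value)

    return any(row[clause["eq"]["field"]] == resolve(clause["eq"]["value"])
               for clause in filter_spec["any"])

spec = {"any": [{"eq": {"field": "from", "value": "pool_created.pool"}},
                {"eq": {"field": "to", "value": "pool_created.pool"}}]}
ctx = {"pool_created.pool": "0xpool"}
print(matches({"from": "0xpool", "to": "0xlp"}, spec, ctx))  # True
print(matches({"from": "0xa", "to": "0xb"}, spec, ctx))      # False
```

Rows failing the predicate would simply be skipped before reaching the sink table.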