123 changes: 123 additions & 0 deletions apps/balances-bench/PERFORMANCE_TESTING.md
@@ -0,0 +1,123 @@
# Performance Testing Guide for subscribeBalances Refactor

## Current Situation

**❌ The current `dev:bittensor` test is NOT suitable for measuring subscribeBalances performance**

The current test (`pnpm run --filter balances-bench dev:bittensor`):

- only exercises `fetchBalances()`, a one-time, promise-based call
- does NOT exercise `subscribeBalances()`, the Observable-based subscription pattern

The performance bottleneck we're fixing is in `subscribeBalances`, which uses a polling pattern with blocking `await` calls.

## What We Need to Measure

To properly measure the impact of converting `subscribeBalances` from blocking awaits to RxJS Observables, we need to measure the following (a sketch of these metrics as a record follows the list):

1. **Time to first emission** - How long until the first balance data arrives
2. **Time between emissions** - Poll interval consistency
3. **Blocking time** - How long the thread is blocked during await
4. **Total test time** - Overall subscription performance
5. **Cancellation responsiveness** - How quickly unsubscribe works
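
As a rough guide, these numbers could be collected into one record per run — a minimal sketch with hypothetical field names, not an existing type in the benchmark:

```typescript
// Hypothetical shape for the numbers worth recording per benchmark run —
// the field names are illustrative, not part of the existing balances-bench code.
export interface SubscribeBalancesMetrics {
  timeToFirstEmissionMs: number // subscribe() -> first balance emission
  emissionIntervalsMs: number[] // gaps between consecutive emissions (poll consistency)
  blockedTimeMs: number // cumulative time spent inside blocking awaits
  totalTimeMs: number // wall-clock duration of the whole test
  cancellationTimeMs: number // unsubscribe() -> polling actually stopped
}
```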

## Recommended Testing Approach

### Option 1: Create a Dedicated Subscription Test (Recommended)

I've created a `testSubscribeBalances.ts` utility that:

- Sets up a subscription using `subscribeBalances()`
- Measures multiple poll iterations
- Tracks blocking time and emission intervals
- Provides detailed performance metrics

**To use it:**

1. Integrate it with the existing test setup (see the sketch after this list)
2. Run before and after the refactor
3. Compare metrics
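
Abbreviated from the `src/bittensor-subscribe.ts` script added in this PR (and assuming a `NETWORK_CONFIG` defined as in that file), the wiring looks roughly like this:

```typescript
import { setupModule } from "./common/setupModule"
import { testSubscribeBalances } from "./common/testSubscribeBalances"

// Reuse the existing setup (connector, metadata, tokens) for the network under test
const setup = await setupModule(NETWORK_CONFIG, "substrate-dtao")

// Run the same subscription benchmark before and after the refactor, then compare the metrics
await testSubscribeBalances(NETWORK_CONFIG, setup.miniMetadata, setup.tokens, {
  module: "substrate-dtao",
  iterations: 3, // number of poll iterations to measure
  iterationTimeout: 30000, // per-iteration timeout in ms
})
```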

### Option 2: Use BalancesProvider (More Realistic)

Test through `BalancesProvider.getBalances$()`, which internally uses `subscribeBalances` (see the sketch after this list):

- More realistic usage pattern
- Measures the full stack performance
- Harder to isolate subscribeBalances-specific improvements
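
A minimal sketch of that measurement, assuming an already-constructed `provider` and that `getBalances$()` returns an RxJS Observable — check the real signature before using this:

```typescript
import { firstValueFrom } from "rxjs"

// Time-to-first-emission through the full stack (provider -> balance module -> connector).
// The argument to getBalances$() is a placeholder — pass whatever the real API expects.
const start = performance.now()
const firstBalances = await firstValueFrom(provider.getBalances$(/* addresses / tokens */))

console.log(`first balances after ${(performance.now() - start).toFixed(0)} ms`, firstBalances)
```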

### Option 3: Micro-benchmark subscribeBalances Directly

Create a simple test (sketched after this list) that:

- Calls `mod.subscribeBalances()` directly
- Measures time to first emission
- Measures multiple emissions
- Tests cancellation
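
A sketch of such a test, assuming the refactored `subscribeBalances()` returns an RxJS Observable and accepts roughly these arguments (the setup values come from `setupModule`; the exact signature and the address map are assumptions):

```typescript
import { performance } from "node:perf_hooks"

const start = performance.now()
let lastEmission = start
const intervalsMs: number[] = []

// mod, networkId, addressesByToken, connector and miniMetadata come from the test setup
const subscription = mod
  .subscribeBalances({ networkId, addressesByToken, connector, miniMetadata })
  .subscribe((balances) => {
    const now = performance.now()
    intervalsMs.push(now - lastEmission) // the first entry approximates time to first emission
    lastEmission = now
    console.log(`emission #${intervalsMs.length} after ${(now - start).toFixed(0)} ms`)
  })

// After a few poll iterations, check how quickly cancellation takes effect
setTimeout(() => {
  const cancelStart = performance.now()
  subscription.unsubscribe()
  console.log(`unsubscribe returned after ${(performance.now() - cancelStart).toFixed(2)} ms`)
}, 60_000)
```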

## Quick Test Implementation

Since the full integration would require refactoring `testNetworkDot`, here's a simpler approach:

**Create a minimal test file that reuses setup from testNetworkDot:**

```typescript
// bittensor-subscribe-performance.ts
// 1. Run testNetworkDot to get tokens and metadata
// 2. Then run subscribeBalances test
// 3. Compare before/after metrics
```

## Metrics to Compare

When comparing before/after:

| Metric | Before (Blocking) | After (Observable) | Expected Improvement |
| ----------------------------- | ----------------- | ------------------ | --------------------------------- |
| Time to first emission | X ms | Y ms | Similar (network bound) |
| Blocking percentage | X% | Y% | **Should decrease significantly** |
| Emission interval consistency | Variable | More consistent | Better |
| Cancellation time | Slow | Fast | **Much faster** |
| Thread blocking | High | Low | **Significant reduction** |
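
One reasonable way to define the blocking percentage, assuming the benchmark records both numbers (the real benchmark may define it differently):

```typescript
// Share of the run's wall-clock time spent inside blocking awaits,
// using the hypothetical metrics record sketched earlier.
const blockingPercentage = (metrics.blockedTimeMs / metrics.totalTimeMs) * 100
```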

## Key Insight

The main difference won't be in total time (network calls are still network calls), but in:

- **Thread blocking**: Observable pattern allows other work during network waits
- **Cancellation**: Observable unsubscription takes effect immediately, instead of waiting for in-flight async calls to settle
- **Error handling**: per-poll errors can be caught (e.g. with `catchError`) without tearing down the whole stream (see the sketch below)
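
A generic sketch of that polling shape (not the actual refactor): per-poll failures are caught so the stream keeps going, and unsubscribing tears the timer down immediately:

```typescript
import { EMPTY, from, timer } from "rxjs"
import { catchError, switchMap } from "rxjs/operators"

// Poll immediately and then every `intervalMs`; swallow per-poll failures so one
// bad RPC round-trip skips a tick instead of erroring the whole stream.
const poll$ = (fetchOnce: () => Promise<unknown>, intervalMs: number) =>
  timer(0, intervalMs).pipe(
    switchMap(() =>
      from(fetchOnce()).pipe(
        catchError((error) => {
          console.warn("poll failed, retrying on the next tick", error)
          return EMPTY
        }),
      ),
    ),
  )

// Unsubscribing cancels the timer right away — nothing waits on an in-flight await loop
const subscription = poll$(() => Promise.resolve("balances"), 30_000).subscribe(console.log)
subscription.unsubscribe()
```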

## Recommendation

For the Phase 1 refactor (subscribeBalances), you can:

1. **Start with a simple manual test:**

- Make the refactor
- Run the existing test to ensure it still works
- Manually verify no regressions

2. **Add performance instrumentation:**

- Add timing logs in `subscribeBalances` (see the sketch after this list)
- Compare console output before/after
- Look for reduced blocking time

3. **Use the mobile app as the real test:**

- The UI freeze is the actual problem we're solving
- Test in the mobile app before/after
- Measure UI responsiveness

4. **Create proper benchmark later:**
- After verifying the refactor works
- Add comprehensive benchmarks
- Use for regression testing
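
As an illustration of step 2 above, the timing logs could be as simple as bracketing the existing awaited RPC round-trip with timestamps (exactly where that call sits inside `subscribeBalances` is an assumption here):

```typescript
// Inside the existing polling loop, bracket the awaited RPC call with timestamps
const before = performance.now()
const response = await connector.send(networkId, method, params) // the existing blocking round-trip
const blockedMs = performance.now() - before

log.log(`[subscribeBalances] RPC ${method} blocked the loop for ${blockedMs.toFixed(0)} ms`)
```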

## Next Steps

1. ✅ Implement the Observable refactor
2. ✅ Run existing tests to ensure no breakage
3. ✅ Test in mobile app to verify UI freeze improvement
4. ⏭️ Add comprehensive benchmarks for future testing
1 change: 1 addition & 0 deletions apps/balances-bench/package.json
@@ -10,6 +10,7 @@
"dev:astar": "tsx watch src/astar.ts",
"dev:azero": "tsx watch src/aleph-zero.ts",
"dev:bittensor": "tsx watch src/bittensor.ts",
"dev:bittensor-subscribe": "tsx watch src/bittensor-subscribe.ts",
"dev:polkadot": "tsx watch src/polkadot.ts",
"dev:neuroweb": "tsx watch src/neuroweb.ts",
"dev:acala": "tsx watch src/acala.ts",
42 changes: 42 additions & 0 deletions apps/balances-bench/src/bittensor-subscribe.ts
@@ -0,0 +1,42 @@
import { webcrypto } from "crypto"

import { log } from "extension-shared"

import { setupModule } from "./common/setupModule"
import { testSubscribeBalances } from "./common/testSubscribeBalances"

// Ensure globalThis.crypto is available (for Node.js)
if (typeof globalThis.crypto === "undefined") {
globalThis.crypto = webcrypto
}

const NETWORK_CONFIG = {
id: "bittensor",
rpcs: ["wss://entrypoint-finney.opentensor.ai"],
nativeCurrency: { coingeckoId: "bittensor" },
tokens: {},
}

// Set up the module and run performance test
setupModule(NETWORK_CONFIG, "substrate-dtao")
.then(async (setup) => {
log.log()
log.log("=".repeat(80))
log.log("Running subscribeBalances performance test...")
log.log("=".repeat(80))
log.log()

// Run the performance test
await testSubscribeBalances(NETWORK_CONFIG, setup.miniMetadata, setup.tokens, {
module: "substrate-dtao",
iterations: 3, // Measure 3 poll iterations
iterationTimeout: 30000, // 30 second timeout per iteration
})

log.log("Performance test completed successfully")
process.exit(0)
})
.catch((error) => {
log.error("Error:", error)
process.exit(1)
})
89 changes: 89 additions & 0 deletions apps/balances-bench/src/common/setupModule.ts
Original file line number Diff line number Diff line change
@@ -0,0 +1,89 @@
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "fs"
import { dirname } from "path"

import { BALANCE_MODULES, MiniMetadata } from "@talismn/balances"
import { ChainConnectorDotStub, IChainConnectorDot } from "@talismn/chain-connectors"
import { DotNetwork, Token, TokenType } from "@talismn/chaindata-provider"
import { fetchBestMetadata } from "@talismn/sapi"
import { decAnyMetadata, unifyMetadata } from "@talismn/scale"

import { DotNetworkConfig } from "./testSubscribeBalances"

export type ModuleSetup = {
connector: IChainConnectorDot
miniMetadata: MiniMetadata
tokens: Token[]
networkId: string
specVersion: number
}

/**
* Sets up a balance module for testing by fetching metadata and tokens
*/
export const setupModule = async (
network: DotNetworkConfig,
moduleType: TokenType,
): Promise<ModuleSetup> => {
const connector = new ChainConnectorDotStub(network as unknown as DotNetwork)

const { specVersion } = await connector.send<{ specVersion: number }>(
network.id,
"state_getRuntimeVersion",
[],
)

const networkId = network.id

// Load or fetch metadata
const metadataFilePath = `./cache/metadata/${network.id}-${specVersion}.scale`
if (!existsSync(metadataFilePath)) {
const dir = dirname(metadataFilePath)
if (!existsSync(dir)) mkdirSync(dir, { recursive: true })

const metadataRpc = await fetchBestMetadata(
(...args) => connector.send(networkId, ...args),
false,
)
writeFileSync(metadataFilePath, metadataRpc)
}

const metadataRpc = readFileSync(metadataFilePath, "ascii") as `0x${string}`
const anyMetadata = decAnyMetadata(metadataRpc)
unifyMetadata(anyMetadata)

// Get the module
const mod = BALANCE_MODULES.find((m) => m.type === moduleType && m.platform === "polkadot")
if (!mod) {
throw new Error(`Module ${moduleType} not found`)
}

// Get mini metadata
const miniMetadata = mod.getMiniMetadata({
networkId,
specVersion,
metadataRpc,
config: network.balancesConfig?.[moduleType],
})

// Fetch tokens
const tokenConfigs =
moduleType === "substrate-native"
? [network.nativeCurrency]
: (network.tokens[moduleType] ?? [])

const tokens = await mod.fetchTokens({
networkId,
tokens: tokenConfigs as never,
connector,
miniMetadata: miniMetadata as never,
cache: {},
})

return {
connector,
miniMetadata,
tokens,
networkId,
specVersion,
}
}