
add multiple fallback pools #119

Open
xyephy wants to merge 3 commits into stratum-mining:main from xyephy:2026-04-22-multiple-fallback-pools

Conversation

@xyephy

@xyephy xyephy commented Apr 23, 2026

Adds support for configuring fallback pools with automatic failover in both the setup wizard and the settings tab.

  • "+ Add fallback pool" picker in the wizard
  • Editable card list in settings with reorder, remove, Save & Restart
  • TCP reachability check on save catches typos before the restart
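The save-time reachability check described above could be a plain TCP dial with a timeout. A minimal sketch in Rust using only `std::net` (the function name and error format are illustrative, not the PR's actual code):

```rust
use std::net::{TcpStream, ToSocketAddrs};
use std::time::Duration;

/// Check that a pool endpoint accepts TCP connections before committing
/// the config and restarting. Returns an error string suitable for
/// surfacing in the settings UI.
fn check_pool_reachable(host: &str, port: u16, timeout: Duration) -> Result<(), String> {
    // Resolve first, so a typo in the hostname fails fast with a clear message.
    let addrs: Vec<_> = (host, port)
        .to_socket_addrs()
        .map_err(|e| format!("cannot resolve {host}:{port}: {e}"))?
        .collect();
    // Accept if any resolved address answers within the timeout.
    for addr in &addrs {
        if TcpStream::connect_timeout(addr, timeout).is_ok() {
            return Ok(());
        }
    }
    Err(format!("{host}:{port} is not reachable"))
}
```

A check like this only proves the socket opens; it does not validate that the endpoint speaks SV2, so it catches typos rather than misconfigured pools.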

Closes #45.
01-initial
02-form-opens-directly
06-duplicate-warning

@pavlenex
Contributor

pavlenex commented Apr 23, 2026

At a quick glance, this PR touches too many things, making it hard to review. 1,300 lines of code just for adding a simple fallback-pool option seems too much. My biggest concern here is that you touched all configuration steps with this as well, so the flow is now completely disrupted. For example, in setup the user can't add a custom pool anymore but instead has to add a fallback pool. I don't know what approach @GitGab19 had in mind, but this is a pretty large scope for a simple feature. On top of that, it needs a rebase and conflict resolution.

@xyephy
Author

xyephy commented Apr 23, 2026

At a quick glance, this PR touches too many things, making it hard to review. 1,300 lines of code just for adding a simple fallback-pool option seems too much. My biggest concern here is that you touched all configuration steps with this as well, so the flow is now completely disrupted. For example, in setup the user can't add a custom pool anymore but instead has to add a fallback pool. I don't know what approach @GitGab19 had in mind, but this is a pretty large scope for a simple feature. On top of that, it needs a rebase and conflict resolution.

I'm simplifying; I realised some things are already implemented. Now doing cleanup.

@xyephy xyephy marked this pull request as draft April 23, 2026 09:02
@xyephy xyephy force-pushed the 2026-04-22-multiple-fallback-pools branch from 656ba31 to 57df22d Compare April 23, 2026 10:37
@xyephy xyephy marked this pull request as ready for review April 23, 2026 10:42
@xyephy
Author

xyephy commented Apr 23, 2026

I've pushed a simplified version.

@pavlenex pavlenex force-pushed the 2026-04-22-multiple-fallback-pools branch from 57df22d to 66b5d6c Compare April 24, 2026 06:45
@pavlenex
Contributor

pavlenex commented Apr 24, 2026

@xyephy Much better and cleaner direction. Thanks for addressing. I was able to test it, and my pool correctly fell back to a secondary pool.

On my first test I encountered an issue where, on the settings page, I wasn't able to edit or add more fallback pools; the fallback pools completely disappeared, see video.

Screen.Recording.2026-04-24.at.11.56.50.mov

Also, CI is failing. It also doesn't seem I can add more than one fallback pool; earlier you had a way to add and drag-and-drop them, which IMO was good UX.

@plebhash

This comment was marked as off-topic.

@pavlenex

This comment was marked as off-topic.

@pavlenex
Contributor

The current flow works if the SV2 UI offers only one pool; then the user obviously needs to enter a fallback pool manually. What's now clear to me, and where I'd like to see an elegant solution for the more likely scenario as we move forward toward more pool adoption, is how a user can select a primary and then assign fallback pool(s), second, third, even fourth, from a pre-selected pool list or by entering a custom one.

Here's a concrete scenario where we have 2 pools but the user needs to manually enter details.
Screenshot 2026-04-25 at 12 53 08

@xyephy xyephy force-pushed the 2026-04-22-multiple-fallback-pools branch from 66b5d6c to de8d579 Compare April 29, 2026 14:45
@xyephy
Author

xyephy commented Apr 29, 2026

@pavlenex addressed everything from your review. Thank you; let me know if the fallbacks work on your end.

@pavlenex
Contributor

Hi @xyephy I've looked into the PR, and I like the new flow.

Found a few concerns:

  1. Can you explain why the selected primary pool is always listed in the fallback? Is there a UX improvement I am missing? It may look a bit odd to show a pool that's already primary among the fallback options. Should we exclude the primary pool if the user has already selected it in the flow?
Screenshot 2026-04-30 at 10 32 28
  2. @GitGab19 can you confirm that the **fallback only happens in JD mode**? In that case, how come we have it in the non-JD pool mode in the UI? And if it's at the tProxy level, then it should also be available in non-JD mode for pool solo mining?
Screenshot 2026-04-30 at 10 35 08

@pavlenex
Contributor

pavlenex commented Apr 30, 2026

@GitGab19 I am really confused about how fallback works; so it apparently exists in tProxy, but just doesn't work?

2026-04-30 10:54:37.122 | 2026-04-30T05:54:37.122178Z  INFO translator_sv2: Starting Translator Proxy...
2026-04-30 10:54:37.122 | 2026-04-30T05:54:37.122234Z  INFO translator_sv2: Initializing upstream connection...
2026-04-30 10:54:37.122 | 2026-04-30T05:54:37.122237Z  INFO translator_sv2: Trying upstream 1 of 2: 54.251.17.13:3333
2026-04-30 10:54:37.122 | 2026-04-30T05:54:37.122240Z  INFO translator_sv2: Connection attempt 1/3...
2026-04-30 10:54:38.128 | 2026-04-30T05:54:38.128640Z  INFO translator_sv2::sv2::upstream: Trying to connect to upstream at 54.251.17.13:3333
2026-04-30 10:55:53.131 | 2026-04-30T05:55:53.131580Z ERROR translator_sv2::sv2::upstream: Failed to connect to 54.251.17.13:3333: Connection refused (os error 111).
2026-04-30 10:55:53.132 | 2026-04-30T05:55:53.131697Z ERROR translator_sv2::sv2::upstream: Failed to connect to any configured upstream.
2026-04-30 10:55:53.132 | 2026-04-30T05:55:53.131712Z  WARN translator_sv2: Attempt 1/3 failed for 54.251.17.13:3333: CouldNotInitiateSystem
2026-04-30 10:55:53.132 | 2026-04-30T05:55:53.131728Z  INFO translator_sv2: Connection attempt 2/3...
2026-04-30 10:55:54.136 | 2026-04-30T05:55:54.136319Z  INFO translator_sv2::sv2::upstream: Trying to connect to upstream at 54.251.17.13:3333
2026-04-30 10:57:09.141 | 2026-04-30T05:57:09.141074Z ERROR translator_sv2::sv2::upstream: Failed to connect to 54.251.17.13:3333: Connection refused (os error 111).
2026-04-30 10:57:09.141 | 2026-04-30T05:57:09.141225Z ERROR translator_sv2::sv2::upstream: Failed to connect to any configured upstream.
2026-04-30 10:57:09.141 | 2026-04-30T05:57:09.141237Z  WARN translator_sv2: Attempt 2/3 failed for 54.251.17.13:3333: CouldNotInitiateSystem
2026-04-30 10:57:09.141 | 2026-04-30T05:57:09.141244Z  INFO translator_sv2: Connection attempt 3/3...
2026-04-30 10:57:10.147 | 2026-04-30T05:57:10.147044Z  INFO translator_sv2::sv2::upstream: Trying to connect to upstream at 54.251.17.13:3333
2026-04-30 10:58:25.151 | 2026-04-30T05:58:25.151341Z ERROR translator_sv2::sv2::upstream: Failed to connect to 54.251.17.13:3333: Connection refused (os error 111).
2026-04-30 10:58:25.151 | 2026-04-30T05:58:25.151438Z ERROR translator_sv2::sv2::upstream: Failed to connect to any configured upstream.
2026-04-30 10:58:25.151 | 2026-04-30T05:58:25.151450Z  WARN translator_sv2: Attempt 3/3 failed for 54.251.17.13:3333: CouldNotInitiateSystem
2026-04-30 10:58:25.151 | 2026-04-30T05:58:25.151458Z  WARN translator_sv2: Max retries reached for 54.251.17.13:3333, moving to next upstream
2026-04-30 10:58:25.151 | 2026-04-30T05:58:25.151461Z  INFO translator_sv2: Trying upstream 2 of 2: stratum.braiins.com:3333
2026-04-30 10:58:25.151 | 2026-04-30T05:58:25.151476Z  INFO translator_sv2: Connection attempt 1/3...
2026-04-30 10:58:26.158 | 2026-04-30T05:58:26.157978Z  INFO translator_sv2::sv2::upstream: Trying to connect to upstream at stratum.braiins.com:3333
2026-04-30 10:58:26.158 | 2026-04-30T05:58:26.158062Z  INFO stratum_apps::network_helpers::resolve_hostname: Resolving hostname 'stratum.braiins.com' via DNS...
2026-04-30 10:58:26.390 | 2026-04-30T05:58:26.389584Z  INFO translator_sv2::sv2::upstream: Connected to upstream at 172.65.65.63:3333
2026-04-30 10:58:26.567 | 2026-04-30T05:58:26.567045Z  INFO translator_sv2::sv2::upstream::common_message_handler: Received: SetupConnectionSuccess(used_version: 2, flags: 0x00000000)
2026-04-30 10:58:26.567 | 2026-04-30T05:58:26.567128Z  INFO translator_sv2::sv1::sv1_server: Starting SV1 server on 0.0.0.0:34255
2026-04-30 10:58:26.567 | 2026-04-30T05:58:26.567339Z  INFO translator_sv2::sv1::sv1_server: Translator Proxy: listening on 0.0.0.0:34255
2026-04-30 10:58:26.567 | 2026-04-30T05:58:26.567445Z  INFO translator_sv2: Launching ChannelManager tasks...
2026-04-30 10:58:26.567 | 2026-04-30T05:58:26.567500Z  INFO translator_sv2: Initializing monitoring server on http://0.0.0.0:9092
2026-04-30 10:58:26.567 | 2026-04-30T05:58:26.567512Z  INFO translator_sv2::sv1::sv1_server::difficulty_manager: Variable difficulty adjustment enabled - starting vardiff loop
2026-04-30 10:58:26.567 | 2026-04-30T05:58:26.567530Z  INFO translator_sv2::sv1::sv1_server: Starting job keepalive loop with interval of 60 seconds
2026-04-30 10:58:26.567 | 2026-04-30T05:58:26.567784Z  INFO stratum_apps::monitoring::http_server: Starting monitoring server on http://0.0.0.0:9092
2026-04-30 10:58:26.567 | 2026-04-30T05:58:26.567800Z  INFO stratum_apps::monitoring::http_server: Cache refresh interval: 15s
2026-04-30 10:58:26.570 | 2026-04-30T05:58:26.570252Z  INFO translator_sv2::sv1::sv1_server::difficulty_manager: Starting vardiff loop for downstreams
2026-04-30 10:58:26.570 | 2026-04-30T05:58:26.570634Z  INFO stratum_apps::monitoring::http_server: Swagger UI available at http://0.0.0.0:9092/swagger-ui
2026-04-30 10:58:26.570 | 2026-04-30T05:58:26.570651Z  INFO stratum_apps::monitoring::http_server: Prometheus metrics available at http://0.0.0.0:9092/metrics
2026-04-30 10:58:26.721 | 2026-04-30T05:58:26.721404Z  INFO translator_sv2::sv1::sv1_server: New SV1 downstream connection from 172.19.0.1:64484
2026-04-30 10:58:26.721 | 2026-04-30T05:58:26.721595Z  INFO translator_sv2::sv1::sv1_server: Downstream 1 registered successfully (channel will be opened after first message)
2026-04-30 10:58:26.750 | 2026-04-30T05:58:26.749975Z  INFO translator_sv2::sv1::sv1_server: SV1 server: opening extended mining channel for downstream 1 after first message
2026-04-30 10:58:26.750 | 2026-04-30T05:58:26.750192Z  INFO translator_sv2::sv2::channel_manager: Sending OpenExtendedMiningChannel message to upstream: OpenExtendedMiningChannel(request_id: 1, user_identity: test.miner1, nominal_hash_rate: 100000000000000, max_target: U256(000000000000480ebe770f1b30ac02f530ceed6b3bc5c7c3cf1fb75f9e8d3e48), min_extranonce_size: 4)
2026-04-30 10:58:26.912 | 2026-04-30T05:58:26.911788Z  INFO translator_sv2::sv2::channel_manager::mining_message_handler: Received: OpenExtendedMiningChannelSuccess(request_id: 1, channel_id: 1, target: U256(0000000000003fffc00000000000000000000000000000000000000000000000), extranonce_size: 6, extranonce_prefix: B032(), group_channel_id: 0), user_identity: test.miner1, nominal_hashrate: 100000000000000
2026-04-30 10:58:26.912 | 2026-04-30T05:58:26.911906Z  INFO translator_sv2::sv1::sv1_server: Processing 2 queued Sv1 messages for downstream 1
2026-04-30 10:58:26.912 | 2026-04-30T05:58:26.911948Z  INFO translator_sv2::sv1::sv1_server::downstream_message_handler: Received mining.subscribe from Sv1 downstream
2026-04-30 10:58:26.912 | 2026-04-30T05:58:26.912009Z  INFO translator_sv2::sv1::sv1_server::downstream_message_handler: Received mining.authorize from Sv1 downstream 1
2026-04-30 10:58:26.912 | 2026-04-30T05:58:26.912046Z  INFO translator_sv2::sv1::sv1_server: Down: Handling mining.authorize after upstream channel is open
2026-04-30 10:58:27.008 | 2026-04-30T05:58:27.007635Z ERROR translator_sv2::sv1::downstream: Error receiving downstream message: RecvError
2026-04-30 10:58:27.008 | 2026-04-30T05:58:27.007673Z ERROR translator_sv2::sv1::downstream: Downstream 1: error in downstream message handler: TproxyError { kind: ChannelErrorReceiver(RecvError), action: Disconnect(1), _owner: PhantomData<translator_sv2::error::Downstream> }
2026-04-30 10:58:27.008 | 2026-04-30T05:58:27.007703Z  WARN translator_sv2::sv1::downstream: Downstream 1: unified task shutting down
2026-04-30 10:58:27.008 | 2026-04-30T05:58:27.007886Z  WARN translator_sv2: Downstream 1 disconnected — cleaning up sv1_server state.
2026-04-30 10:58:27.008 | 2026-04-30T05:58:27.007901Z  INFO translator_sv2::sv1::sv1_server: 🔌 Downstream: 1 disconnected and removed from sv1 server downstreams
2026-04-30 10:58:27.008 | 2026-04-30T05:58:27.007908Z  INFO translator_sv2::sv1::sv1_server: Sending CloseChannel message: 1 for downstream: 1
2026-04-30 10:58:27.441 | 2026-04-30T05:58:27.441661Z  INFO translator_sv2::sv1::sv1_server: New SV1 downstream connection from 172.19.0.1:64930
2026-04-30 10:58:27.441 | 2026-04-30T05:58:27.441690Z  INFO translator_sv2::sv1::sv1_server: Downstream 2 registered successfully (channel will be opened after first message)
2026-04-30 10:58:27.441 | 2026-04-30T05:58:27.441784Z  INFO translator_sv2::sv1::sv1_server: SV1 server: opening extended mining channel for downstream 2 after first message
2026-04-30 10:58:27.441 | 2026-04-30T05:58:27.441800Z  INFO translator_sv2::sv2::channel_manager: Sending OpenExtendedMiningChannel message to upstream: OpenExtendedMiningChannel(request_id: 2, user_identity: test.miner2, nominal_hash_rate: 100000000000000, max_target: U256(000000000000480ebe770f1b30ac02f530ceed6b3bc5c7c3cf1fb75f9e8d3e48), min_extranonce_size: 4)
2026-04-30 10:58:27.514 | 2026-04-30T05:58:27.514341Z  INFO translator_sv2::sv2::channel_manager::mining_message_handler: Received: NewExtendedMiningJob(channel_id: 1, job_id: 0, min_ntime: Sv2Option(None), version: 0x20000000, version_rolling_allowed: true, merkle_path: Seq0255<len=12: [e1e188962c18bfa8cf3ecb840bb4bf8cc36ade32fc1bd99786c67d30919351e6, c3c127551f91108b90cc57ac3250cbf131e9aa2cf8c5ad3137c52782efe7a41d, ... , ebfb1e9480b4dec5a1b5e9f586f8d3d1b7fa5c1e58cc1d8fe4d7b4a0f059c511, 53ea41a3722e66595b2b170f5d9d603ecc1ca239604ea4564d87832be5052734], coinbase_tx_prefix: B064K(01000000010000000000000000000000000000000000000000000000000000000000000000ffffffff4c0330740e0f2f736c7573682f6a00a00150a4b229fabe6d6d451632554e74e713744c70ee5c81ae5dd2268dd1be2cc51c2aa13fa78384434110000000000000000000451e1700), coinbase_tx_suffix: B064K(ffffffff03ff1bc0120000000017a9141f0cbbec8bc4c945e4e16249b11eee911eded55f870000000000000000266a24aa21a9edaf2890becd364498253f7e5932a4df9ce5b8633ddbddecfec54543e0aca080c500000000000000002b6a2952534b424c4f434b3aceea6dd65a706bf27a4ab0138cc5c74a65f90f282cfbe4e38ea790160086144b00000000))
2026-04-30 10:58:27.515 | 2026-04-30T05:58:27.514533Z ERROR translator_sv2::sv2::channel_manager::mining_message_handler: Channel not found: 1, ignoring NewExtendedMiningJob message
2026-04-30 10:58:27.515 | 2026-04-30T05:58:27.514557Z  WARN translator_sv2::status: Log-only error from ChannelManager(Sender { .. }): ChannelNotFound
2026-04-30 10:58:27.515 | 2026-04-30T05:58:27.514653Z  INFO translator_sv2::sv2::channel_manager::mining_message_handler: Received: SetNewPrevHash(channel_id=0, job_id=0, prev_hash=U256(00000000000000000000fbd448b62a610b877baae24bdc101382b060818344ef), min_ntime=1777528694, nbits=0x17021369)
2026-04-30 10:58:27.515 | 2026-04-30T05:58:27.514677Z ERROR translator_sv2::sv2::channel_manager::mining_message_handler: Failed to set new prev hash: JobIdNotFound
2026-04-30 10:58:27.515 | 2026-04-30T05:58:27.514772Z  WARN translator_sv2: Upstream connection dropped: FailedToProcessSetNewPrevHash — attempting reconnection...
2026-04-30 10:58:27.515 | 2026-04-30T05:58:27.514895Z  INFO translator_sv2: Monitoring server: fallback triggered.
2026-04-30 10:58:27.515 | 2026-04-30T05:58:27.514918Z  INFO stratum_apps::monitoring::http_server: Monitoring server received shutdown signal, stopping...
2026-04-30 10:58:27.515 | 2026-04-30T05:58:27.514963Z  INFO translator_sv2::sv1::downstream: Downstream 2: fallback triggered
2026-04-30 10:58:27.515 | 2026-04-30T05:58:27.514973Z  WARN translator_sv2::sv1::downstream: Downstream 2: unified task shutting down
2026-04-30 10:58:27.516 | 2026-04-30T05:58:27.515357Z  INFO translator_sv2::sv1::sv1_server: SV1 Server: fallback triggered, clearing state
2026-04-30 10:58:27.516 | 2026-04-30T05:58:27.515366Z  INFO stratum_apps::monitoring::http_server: Monitoring server stopped
2026-04-30 10:58:27.516 | 2026-04-30T05:58:27.515415Z  INFO translator_sv2: Monitoring server task exited and signaled fallback coordinator
2026-04-30 10:58:27.516 | 2026-04-30T05:58:27.515450Z  WARN translator_sv2::io_task: Reader task exited.
2026-04-30 10:58:27.516 | 2026-04-30T05:58:27.515453Z ERROR translator_sv2::sv2::upstream: Upstream: receiver channel closed unexpectedly: receiving from an empty and closed channel
2026-04-30 10:58:27.516 | 2026-04-30T05:58:27.515466Z  WARN translator_sv2::sv2::upstream: Upstream: task shutting down cleanly.
2026-04-30 10:58:27.516 | 2026-04-30T05:58:27.515479Z  WARN translator_sv2::status: Channel Manager shutdown requested due to error: ChannelErrorReceiver(RecvError)
2026-04-30 10:58:27.516 | 2026-04-30T05:58:27.515489Z  WARN translator_sv2::sv2::channel_manager: ChannelManager: unified message loop exited.
2026-04-30 10:58:27.516 | 2026-04-30T05:58:27.515805Z  WARN translator_sv2::io_task: Writer task exited.
2026-04-30 10:58:27.517 | 2026-04-30T05:58:27.515992Z  INFO translator_sv2: All components finished fallback cleanup
2026-04-30 10:58:27.517 | 2026-04-30T05:58:27.516113Z ERROR translator_sv2: All upstreams failed after 3 retries each
2026-04-30 10:58:27.517 | 2026-04-30T05:58:27.516123Z ERROR translator_sv2: Couldn't perform fallback, shutting system down: CouldNotInitiateSystem
2026-04-30 10:58:27.517 | 2026-04-30T05:58:27.516136Z  WARN translator_sv2: Graceful shutdown: waiting 5 seconds for tasks to finish
2026-04-30 10:58:27.517 | 2026-04-30T05:58:27.516164Z  INFO translator_sv2: All tasks joined cleanly
2026-04-30 10:58:27.517 | 2026-04-30T05:58:27.516168Z  INFO translator_sv2: TranslatorSv2 shutdown complete.
2026-04-30 10:58:27.517 | 2026-04-30T05:58:27.516192Z  INFO translator_sv2: TranslatorSv2 dropped

My understanding was that failover happens only in JDC and that such functionality doesn't exist in tProxy. But now I see in the logs that tProxy in non-JD mode has that functionality, and if the functionality exists in both (great), then we can do pool fallback for solo pooled mining, JD mining, and even non-JD pool mining.

@xyephy
Author

xyephy commented Apr 30, 2026

Hi @xyephy I've looked into the PR, and I like the new flow.

Found a few concerns:

  1. Can you explain why the selected primary pool is always listed in the fallback? Is there a UX improvement I am missing? It may look a bit odd to show a pool that's already primary among the fallback options. Should we exclude the primary pool if the user has already selected it in the flow?

It's a UX bug; I just need to add validation to prevent the primary pool from appearing in the fallback options. Yes, we should exclude the primary pool if the user has already selected it.

@GitGab19
Member

@pavlenex fallback exists in both JDC and tProxy; more precisely, here's what happens:

  • in JD mode, JDC falls back to backup pools, if any, and to solo mining as a last resort
  • in non-JD mode, tProxy falls back to backup pools, if any
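The retry-then-advance pattern visible in the logs ("Trying upstream 1 of 2", "Connection attempt 1/3", "Max retries reached ... moving to next upstream") can be sketched as a generic failover loop. This is a minimal illustration, not the actual translator_sv2 code; `connect` stands in for the real SV2 connection setup:

```rust
/// Try each configured upstream in order, retrying each up to
/// `max_retries` times before moving on. Returns the index of the
/// upstream that succeeded together with the connection it produced.
fn select_upstream<T>(
    upstreams: &[String],
    max_retries: u32,
    mut connect: impl FnMut(&str) -> Result<T, String>,
) -> Result<(usize, T), String> {
    for (i, addr) in upstreams.iter().enumerate() {
        for attempt in 1..=max_retries {
            println!(
                "Trying upstream {} of {}: {addr}, attempt {attempt}/{max_retries}",
                i + 1,
                upstreams.len()
            );
            match connect(addr) {
                Ok(conn) => return Ok((i, conn)),
                Err(e) => eprintln!("Attempt {attempt}/{max_retries} failed for {addr}: {e}"),
            }
        }
        // Max retries reached for this upstream; fall through to the next one.
    }
    Err("All upstreams failed".to_string())
}
```

Exhausting every upstream corresponds to the "All upstreams failed after 3 retries each" error in the log, at which point the proxy shuts down.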

@pavlenex
Contributor

pavlenex commented Apr 30, 2026

In that case this PR needs to be done properly; it seems to be missing critical structural elements. For example, in solo pool mode we should have fallback; currently we don't. It seems fallback in tProxy doesn't work either, though I doubt that's because of this PR.

@xyephy
Author

xyephy commented May 1, 2026

@pavlenex thanks @GitGab19 for confirming.

Pushed ec0175a addressing the structural concerns:

  1. Primary excluded from the fallback picker (filter plus duplicate-triplet validation if a user types matching custom values). A match is now the full address+port+pubkey triplet, not just host:port.
  2. Solo + non-JD now exposes the fallback section: it was hidden by !isSoloMode in the wizard while settings already used !isSovereignSolo. Aligned both to !isSovereignSolo, matching GitGab's confirmation that tProxy supports failover in non-JD mode.
  3. Reconfigure no longer wipes existing fallbacks when the user picks the same mode.
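The duplicate-triplet rule in point 1 can be sketched as a small comparison helper. The struct and function names here are hypothetical, not the PR's actual code; the point is that only a full address+port+pubkey match counts as a duplicate:

```rust
/// A pool endpoint as entered in the wizard (illustrative shape only).
struct PoolEndpoint {
    address: String,
    port: u16,
    pubkey: String,
}

/// A fallback entry duplicates the primary only when the full
/// address+port+pubkey triplet matches, so two entries sharing a
/// host:port but presenting different authority keys are still allowed.
fn is_duplicate_of_primary(primary: &PoolEndpoint, candidate: &PoolEndpoint) -> bool {
    // Hostnames are case-insensitive; ports and keys are compared exactly.
    primary.address.eq_ignore_ascii_case(&candidate.address)
        && primary.port == candidate.port
        && primary.pubkey == candidate.pubkey
}
```

The picker would filter its list with this predicate and show the duplicate warning only when a typed custom entry matches on all three fields.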

@GitGab19
Member

GitGab19 commented May 1, 2026

I like the progress, but I noticed some issues.

I selected the SRI pool with Blitzpool as fallback, running in SOLO POOL mode.

The fallback on tProxy happened, but the UI is still showing "Connected to SRI Community Solo Pool":
image

@xyephy xyephy force-pushed the 2026-04-22-multiple-fallback-pools branch from ec0175a to 5c666c5 Compare May 4, 2026 19:24
@xyephy
Author

xyephy commented May 4, 2026

I like the progress, but I noticed some issues.

I selected the SRI pool with Blitzpool as fallback, running in SOLO POOL mode.

The fallback on tProxy happened, but the UI is still showing "Connected to SRI Community Solo Pool": image

It should now display the correct fallback connection; thank you for pointing it out.

@pavlenex
Contributor

pavlenex commented May 5, 2026

I tried Solo with SRI as default and BlitzPool as fallback, and it crashed:

2026-05-05T07:12:24.861690793Z [translator] [stdout] 2026-05-05T07:12:24.861493Z  INFO translator_sv2: Starting Translator Proxy...
2026-05-05T07:12:24.861706335Z [translator] [stdout] 2026-05-05T07:12:24.861540Z  INFO translator_sv2: Initializing upstream connection...
2026-05-05T07:12:24.861707543Z [translator] [stdout] 2026-05-05T07:12:24.861542Z  INFO translator_sv2: Trying upstream 1 of 2: 75.119.150.111:3333
2026-05-05T07:12:24.861708251Z [translator] [stdout] 2026-05-05T07:12:24.861543Z  INFO translator_sv2: Connection attempt 1/3...
2026-05-05T07:12:25.865733960Z [translator] [stdout] 2026-05-05T07:12:25.864978Z  INFO translator_sv2::sv2::upstream: Trying to connect to upstream at 75.119.150.111:3333
2026-05-05T07:12:25.950044252Z [translator] [stdout] 2026-05-05T07:12:25.949738Z ERROR translator_sv2::sv2::upstream: Failed to connect to 75.119.150.111:3333: Connection refused (os error 111).
2026-05-05T07:12:25.950073835Z [translator] [stdout] 2026-05-05T07:12:25.949835Z  WARN translator_sv2: Attempt 1/3 failed for 75.119.150.111:3333: Io(Os { code: 111, kind: ConnectionRefused, message: "Connection refused" })
2026-05-05T07:12:25.950076210Z [translator] [stdout] 2026-05-05T07:12:25.949844Z  INFO translator_sv2: Connection attempt 2/3...
2026-05-05T07:12:26.956843336Z [translator] [stdout] 2026-05-05T07:12:26.956462Z  INFO translator_sv2::sv2::upstream: Trying to connect to upstream at 75.119.150.111:3333
2026-05-05T07:12:27.044275419Z [translator] [stdout] 2026-05-05T07:12:27.043846Z ERROR translator_sv2::sv2::upstream: Failed to connect to 75.119.150.111:3333: Connection refused (os error 111).
2026-05-05T07:12:27.044346419Z [translator] [stdout] 2026-05-05T07:12:27.043930Z  WARN translator_sv2: Attempt 2/3 failed for 75.119.150.111:3333: Io(Os { code: 111, kind: ConnectionRefused, message: "Connection refused" })
2026-05-05T07:12:27.044352877Z [translator] [stdout] 2026-05-05T07:12:27.043966Z  INFO translator_sv2: Connection attempt 3/3...
2026-05-05T07:12:28.046886128Z [translator] [stdout] 2026-05-05T07:12:28.046470Z  INFO translator_sv2::sv2::upstream: Trying to connect to upstream at 75.119.150.111:3333
2026-05-05T07:12:28.140134669Z [translator] [stdout] 2026-05-05T07:12:28.139972Z ERROR translator_sv2::sv2::upstream: Failed to connect to 75.119.150.111:3333: Connection refused (os error 111).
2026-05-05T07:12:28.140160128Z [translator] [stdout] 2026-05-05T07:12:28.140000Z  WARN translator_sv2: Attempt 3/3 failed for 75.119.150.111:3333: Io(Os { code: 111, kind: ConnectionRefused, message: "Connection refused" })
2026-05-05T07:12:28.140162253Z [translator] [stdout] 2026-05-05T07:12:28.140008Z  WARN translator_sv2: Max retries reached for 75.119.150.111:3333, moving to next upstream
2026-05-05T07:12:28.140163586Z [translator] [stdout] 2026-05-05T07:12:28.140010Z  INFO translator_sv2: Trying upstream 2 of 2: blitzpool.yourdevice.ch:3333
2026-05-05T07:12:28.140164794Z [translator] [stdout] 2026-05-05T07:12:28.140012Z  INFO translator_sv2: Connection attempt 1/3...
2026-05-05T07:12:29.142996545Z [translator] [stdout] 2026-05-05T07:12:29.142565Z  INFO translator_sv2::sv2::upstream: Trying to connect to upstream at blitzpool.yourdevice.ch:3333
2026-05-05T07:12:29.143076003Z [translator] [stdout] 2026-05-05T07:12:29.142630Z  INFO stratum_apps::network_helpers::resolve_hostname: Resolving hostname 'blitzpool.yourdevice.ch' via DNS...
2026-05-05T07:12:29.259985753Z [translator] [stdout] 2026-05-05T07:12:29.259639Z  INFO translator_sv2::sv2::upstream: Connected to upstream at 62.2.188.226:3333
2026-05-05T07:12:29.490527170Z [translator] [stdout] 2026-05-05T07:12:29.490069Z  INFO translator_sv2::sv2::upstream::common_message_handler: Received: SetupConnectionSuccess(used_version: 2, flags: 0x00000000)
2026-05-05T07:12:29.490645628Z [translator] [stdout] 2026-05-05T07:12:29.490165Z  INFO translator_sv2::sv1::sv1_server: Starting SV1 server on 0.0.0.0:34255
2026-05-05T07:12:29.490651420Z [translator] [stdout] 2026-05-05T07:12:29.490335Z  INFO translator_sv2::sv1::sv1_server: Translator Proxy: listening on 0.0.0.0:34255
2026-05-05T07:12:29.490896170Z [translator] [stdout] 2026-05-05T07:12:29.490490Z  INFO translator_sv2: Launching ChannelManager tasks...
2026-05-05T07:12:29.490911212Z [translator] [stdout] 2026-05-05T07:12:29.490534Z  INFO translator_sv2: Initializing monitoring server on http://0.0.0.0:9092
2026-05-05T07:12:29.490920128Z [translator] [stdout] 2026-05-05T07:12:29.490540Z  INFO translator_sv2::sv1::sv1_server::difficulty_manager: Variable difficulty adjustment enabled - starting vardiff loop
2026-05-05T07:12:29.490924920Z [translator] [stdout] 2026-05-05T07:12:29.490568Z  INFO translator_sv2::sv1::sv1_server: Starting job keepalive loop with interval of 60 seconds
2026-05-05T07:12:29.490928087Z [translator] [stdout] 2026-05-05T07:12:29.490737Z  INFO stratum_apps::monitoring::http_server: Starting monitoring server on http://0.0.0.0:9092
2026-05-05T07:12:29.490931128Z [translator] [stdout] 2026-05-05T07:12:29.490750Z  INFO stratum_apps::monitoring::http_server: Cache refresh interval: 15s
2026-05-05T07:12:29.490964920Z [translator] [stdout] 2026-05-05T07:12:29.490828Z  INFO translator_sv2::sv1::sv1_server::difficulty_manager: Starting vardiff loop for downstreams
2026-05-05T07:12:29.492450462Z [translator] [stdout] 2026-05-05T07:12:29.492287Z  INFO stratum_apps::monitoring::http_server: Swagger UI available at http://0.0.0.0:9092/swagger-ui
2026-05-05T07:12:29.492471253Z [translator] [stdout] 2026-05-05T07:12:29.492310Z  INFO stratum_apps::monitoring::http_server: Prometheus metrics available at http://0.0.0.0:9092/metrics
2026-05-05T07:12:34.949311673Z [translator] [stdout] 2026-05-05T07:12:34.949136Z  INFO translator_sv2::sv1::sv1_server: New SV1 downstream connection from 192.168.65.1:46394
2026-05-05T07:12:34.949341006Z [translator] [stdout] 2026-05-05T07:12:34.949182Z  INFO translator_sv2::sv1::sv1_server: Downstream 1 registered successfully (channel will be opened after first message)
2026-05-05T07:12:34.963988006Z [translator] [stdout] 2026-05-05T07:12:34.963802Z  INFO translator_sv2::sv1::sv1_server: SV1 server: opening extended mining channel for downstream 1 after first message
2026-05-05T07:12:34.964017923Z [translator] [stdout] 2026-05-05T07:12:34.963850Z  INFO translator_sv2::sv2::channel_manager: Sending OpenExtendedMiningChannel message to upstream: OpenExtendedMiningChannel(request_id: 1, user_identity: sri/solo/bc1qj658gf9c2kltge5k2x0a2p6cqme5fceacavwst, nominal_hash_rate: 500000000000, max_target: U256(0000000000384b84d4713ac163c43e9b0dddd2725a1947c189b7d325336a7bb5), min_extranonce_size: 4)
2026-05-05T07:12:35.139319923Z [translator] [stdout] 2026-05-05T07:12:35.138754Z ERROR translator_sv2::io_task: Reader error error=SocketClosed
2026-05-05T07:12:35.139355839Z [translator] [stdout] 2026-05-05T07:12:35.138903Z  WARN translator_sv2::io_task: Reader task exited.
2026-05-05T07:12:35.139977839Z [translator] [stdout] 2026-05-05T07:12:35.139116Z  WARN translator_sv2::sv2::upstream: Upstream::handle_upstream_message requested fallback error_kind=ChannelErrorReceiver(RecvError)
2026-05-05T07:12:35.140007589Z [translator] [stdout] 2026-05-05T07:12:35.139149Z  WARN translator_sv2::sv2::upstream: Upstream: task shutting down cleanly.
2026-05-05T07:12:35.140012798Z [translator] [stdout] 2026-05-05T07:12:35.139163Z  INFO translator_sv2::sv1::sv1_server: SV1 Server: fallback triggered, clearing state
2026-05-05T07:12:35.140016339Z [translator] [stdout] 2026-05-05T07:12:35.139238Z  INFO translator_sv2::sv2::channel_manager: ChannelManager: fallback triggered, resetting state
2026-05-05T07:12:35.140034131Z [translator] [stdout] 2026-05-05T07:12:35.139248Z  WARN translator_sv2::sv2::channel_manager: ChannelManager: unified message loop exited.
2026-05-05T07:12:35.140038006Z [translator] [stdout] 2026-05-05T07:12:35.139255Z  INFO translator_sv2: Monitoring server: fallback triggered.
2026-05-05T07:12:35.140041006Z [translator] [stdout] 2026-05-05T07:12:35.139259Z  INFO stratum_apps::monitoring::http_server: Monitoring server received shutdown signal, stopping...
2026-05-05T07:12:35.140044173Z [translator] [stdout] 2026-05-05T07:12:35.139341Z  INFO translator_sv2::sv1::downstream: Downstream 1: fallback triggered
2026-05-05T07:12:35.140047089Z [translator] [stdout] 2026-05-05T07:12:35.139352Z  WARN translator_sv2::sv1::downstream: Downstream 1: unified task shutting down
2026-05-05T07:12:35.140050006Z [translator] [stdout] 2026-05-05T07:12:35.139382Z  INFO translator_sv2: Preparing fallback
2026-05-05T07:12:35.140052756Z [translator] [stdout] 2026-05-05T07:12:35.139580Z  INFO stratum_apps::monitoring::http_server: Monitoring server stopped
2026-05-05T07:12:35.140055673Z [translator] [stdout] 2026-05-05T07:12:35.139588Z  INFO translator_sv2: Monitoring server task exited and signaled fallback coordinator

This may be due to our SV2-App logic, I'm not sure, but setting the user identity to an address is a requirement in BlitzPool, so I used it in SRI Pool as well. Unsure what went wrong, but hopefully the logs are helpful.

@pavlenex
Contributor

pavlenex commented May 5, 2026

The same happened vice versa (Blitz default, SRI secondary); something is fishy here:

2026-05-05T07:18:49.289320Z INFO translator_sv2: Starting Translator Proxy...
2026-05-05T07:18:49.289406Z INFO translator_sv2: Initializing upstream connection...
2026-05-05T07:18:49.289414Z INFO translator_sv2: Trying upstream 1 of 2: blitzpool.yourdevice.ch:3333
2026-05-05T07:18:49.289416Z INFO translator_sv2: Connection attempt 1/3...
2026-05-05T07:18:50.294462Z INFO translator_sv2::sv2::upstream: Trying to connect to upstream at blitzpool.yourdevice.ch:3333
2026-05-05T07:18:50.294515Z INFO stratum_apps::network_helpers::resolve_hostname: Resolving hostname 'blitzpool.yourdevice.ch' via DNS...
2026-05-05T07:18:50.410182Z INFO translator_sv2::sv2::upstream: Connected to upstream at 62.2.188.226:3333
2026-05-05T07:18:50.625375Z INFO translator_sv2::sv2::upstream::common_message_handler: Received: SetupConnectionSuccess(used_version: 2, flags: 0x00000000)
2026-05-05T07:18:50.625463Z INFO translator_sv2::sv1::sv1_server: Starting SV1 server on 0.0.0.0:34255
2026-05-05T07:18:50.625620Z INFO translator_sv2::sv1::sv1_server: Translator Proxy: listening on 0.0.0.0:34255
2026-05-05T07:18:50.625774Z INFO translator_sv2: Launching ChannelManager tasks...
2026-05-05T07:18:50.625811Z INFO translator_sv2: Initializing monitoring server on http://0.0.0.0:9092
2026-05-05T07:18:50.625826Z INFO translator_sv2::sv1::sv1_server::difficulty_manager: Variable difficulty adjustment enabled - starting vardiff loop
2026-05-05T07:18:50.625852Z INFO translator_sv2::sv1::sv1_server: Starting job keepalive loop with interval of 60 seconds
2026-05-05T07:18:50.626089Z INFO stratum_apps::monitoring::http_server: Starting monitoring server on http://0.0.0.0:9092
2026-05-05T07:18:50.626116Z INFO stratum_apps::monitoring::http_server: Cache refresh interval: 15s
2026-05-05T07:18:50.627581Z INFO stratum_apps::monitoring::http_server: Swagger UI available at http://0.0.0.0:9092/swagger-ui
2026-05-05T07:18:50.627606Z INFO stratum_apps::monitoring::http_server: Prometheus metrics available at http://0.0.0.0:9092/metrics
2026-05-05T07:18:50.627617Z INFO translator_sv2::sv1::sv1_server::difficulty_manager: Starting vardiff loop for downstreams
2026-05-05T07:19:08.684059Z INFO translator_sv2::sv1::sv1_server: New SV1 downstream connection from 192.168.65.1:44236
2026-05-05T07:19:08.684128Z INFO translator_sv2::sv1::sv1_server: Downstream 1 registered successfully (channel will be opened after first message)
2026-05-05T07:19:08.689001Z INFO translator_sv2::sv1::sv1_server: SV1 server: opening extended mining channel for downstream 1 after first message
2026-05-05T07:19:08.689073Z INFO translator_sv2::sv2::channel_manager: Sending OpenExtendedMiningChannel message to upstream: OpenExtendedMiningChannel(request_id: 1, user_identity: bc1qj658gf9c2kltge5k2x0a2p6cqme5fceacavwst.miner1, nominal_hash_rate: 500000000000, max_target: U256(0000000000bba6656ece6391342eb4b41716d2d8e0da6d36ab2087e29ad4922f), min_extranonce_size: 8)
2026-05-05T07:19:08.877471Z INFO translator_sv2::sv2::channel_manager::mining_message_handler: Received: OpenExtendedMiningChannelSuccess(request_id: 1, channel_id: 148, target: U256(00000000008cbccc12f83a0e9418223609426be0eaa8e6adb2cb8b5eabee610c), extranonce_size: 8, extranonce_prefix: B032(00000092), group_channel_id: 0), user_identity: bc1qj658gf9c2kltge5k2x0a2p6cqme5fceacavwst.miner1, nominal_hashrate: 500000000000
2026-05-05T07:19:08.877607Z INFO translator_sv2::sv1::sv1_server: Processing 2 queued Sv1 messages for downstream 1
2026-05-05T07:19:08.877626Z INFO translator_sv2::sv1::sv1_server::downstream_message_handler: Received mining.subscribe from Sv1 downstream
2026-05-05T07:19:08.877762Z INFO translator_sv2::sv1::sv1_server::downstream_message_handler: Received mining.authorize from Sv1 downstream 1
2026-05-05T07:19:08.877980Z INFO translator_sv2::sv1::sv1_server: Down: Handling mining.authorize after upstream channel is open
2026-05-05T07:19:08.878300Z INFO translator_sv2::sv2::channel_manager::mining_message_handler: Received: NewExtendedMiningJob(channel_id: 148, job_id: 6526162, min_ntime: Sv2Option(None), version: 0x20000000, version_rolling_allowed: true, merkle_path: Seq0255<len=9: [9feaafb4e01a4b8687d6b2ea8d2b0625a15b8d3dc5d6d8f5d84b367d9f1ae6d0, 6580cfdb8eb7b4298473c60acae948fb439235b7b319444dcac59f4d0c5822c6, ... , 00976da76a078e1e38a6fda3f43d07f79707e7b33043491ff298ecc7b5689a21, 3aa38005ad7ad7e6d0bbf709db8c8410b1724fae07ce7bb1e58c0a5f0daa7214], coinbase_tx_prefix: B064K(02000000010000000000000000000000000000000000000000000000000000000000000000ffffffff190314770e626c69747a706f6f6c), coinbase_tx_suffix: B064K(ffffffff02c8a2a3120000000016001496a87424b855beb46696519fd5075806f344e33d0000000000000000266a24aa21a9ed48783647bca4e8079e97eab7b828a1e42551eece2ffb3a064835970d7f6e044500000000))
2026-05-05T07:19:08.878418Z INFO translator_sv2::sv2::channel_manager::mining_message_handler: Received: SetNewPrevHash(channel_id=148, job_id=6526162, prev_hash=U256(000000000000000000019d344b9e89785259d08a4fd469a236440233bc026f5e), min_ntime=1777965519, nbits=0x17021ff0)
2026-05-05T07:19:08.878697Z INFO translator_sv2::sv2::channel_manager::mining_message_handler: Received: NewExtendedMiningJob(channel_id: 148, job_id: 6526163, min_ntime: Sv2Option(1777965519), version: 0x20000000, version_rolling_allowed: true, merkle_path: Seq0255<len=9: [9feaafb4e01a4b8687d6b2ea8d2b0625a15b8d3dc5d6d8f5d84b367d9f1ae6d0, 6580cfdb8eb7b4298473c60acae948fb439235b7b319444dcac59f4d0c5822c6, ... , 00976da76a078e1e38a6fda3f43d07f79707e7b33043491ff298ecc7b5689a21, 3aa38005ad7ad7e6d0bbf709db8c8410b1724fae07ce7bb1e58c0a5f0daa7214], coinbase_tx_prefix: B064K(02000000010000000000000000000000000000000000000000000000000000000000000000ffffffff190314770e626c69747a706f6f6c), coinbase_tx_suffix: B064K(ffffffff02c8a2a3120000000016001496a87424b855beb46696519fd5075806f344e33d0000000000000000266a24aa21a9ed48783647bca4e8079e97eab7b828a1e42551eece2ffb3a064835970d7f6e044500000000))
2026-05-05T07:19:08.975003Z ERROR translator_sv2::sv1::downstream: Error receiving downstream message: RecvError
2026-05-05T07:19:08.975066Z ERROR translator_sv2::sv1::downstream: Downstream 1: error in downstream message handler: TproxyError { kind: ChannelErrorReceiver(RecvError), action: Disconnect(1), _owner: PhantomData<translator_sv2::error::Downstream> }
2026-05-05T07:19:08.975086Z WARN translator_sv2::sv1::downstream: Downstream::handle_downstream_message requested disconnect; cancelling downstream token downstream_id=1 error_kind=ChannelErrorReceiver(RecvError)
2026-05-05T07:19:08.975100Z WARN translator_sv2::sv1::downstream: Downstream 1: unified task shutting down
2026-05-05T07:19:08.975117Z INFO translator_sv2::sv1::sv1_server: 🔌 Downstream: 1 disconnected and removed from sv1 server downstreams
2026-05-05T07:19:08.975126Z INFO translator_sv2::sv1::sv1_server: Sending CloseChannel message: 148 for downstream: 1
2026-05-05T07:19:09.087026Z ERROR translator_sv2::io_task: Reader error error=SocketClosed
2026-05-05T07:19:09.087105Z WARN translator_sv2::io_task: Reader task exited.
2026-05-05T07:19:09.087200Z WARN translator_sv2::io_task: Outbound channel closed
2026-05-05T07:19:09.087203Z WARN translator_sv2::io_task: Writer task exited.
2026-05-05T07:19:09.087356Z WARN translator_sv2::sv2::upstream: Upstream::handle_upstream_message requested fallback error_kind=ChannelErrorReceiver(RecvError)
2026-05-05T07:19:09.087495Z WARN translator_sv2::sv2::upstream: Upstream: task shutting down cleanly.
2026-05-05T07:19:09.087514Z INFO translator_sv2::sv2::channel_manager: ChannelManager: fallback triggered, resetting state
2026-05-05T07:19:09.087549Z WARN translator_sv2::sv2::channel_manager: ChannelManager: unified message loop exited.
2026-05-05T07:19:09.087559Z INFO translator_sv2: Monitoring server: fallback triggered.
2026-05-05T07:19:09.087563Z INFO stratum_apps::monitoring::http_server: Monitoring server received shutdown signal, stopping...
2026-05-05T07:19:09.087647Z INFO translator_sv2::sv1::sv1_server: SV1 Server: fallback triggered, clearing state
2026-05-05T07:19:09.087818Z INFO translator_sv2: Preparing fallback
2026-05-05T07:19:09.087845Z INFO stratum_apps::monitoring::http_server: Monitoring server stopped
2026-05-05T07:19:09.087850Z INFO translator_sv2: Monitoring server task exited and signaled fallback coordinator
2026-05-05T07:19:09.087910Z INFO translator_sv2: All components finished fallback cleanup
2026-05-05T07:19:09.088104Z INFO translator_sv2: Trying upstream 2 of 2: 75.119.150.111:3333
2026-05-05T07:19:09.088117Z INFO translator_sv2: Connection attempt 1/3...
2026-05-05T07:19:10.094996Z INFO translator_sv2::sv2::upstream: Trying to connect to upstream at 75.119.150.111:3333
2026-05-05T07:19:10.179036Z ERROR translator_sv2::sv2::upstream: Failed to connect to 75.119.150.111:3333: Connection refused (os error 111).
2026-05-05T07:19:10.179181Z WARN translator_sv2: Attempt 1/3 failed for 75.119.150.111:3333: Io(Os { code: 111, kind: ConnectionRefused, message: "Connection refused" })
2026-05-05T07:19:10.179215Z INFO translator_sv2: Connection attempt 2/3...
2026-05-05T07:19:11.184212Z INFO translator_sv2::sv2::upstream: Trying to connect to upstream at 75.119.150.111:3333
2026-05-05T07:19:11.267743Z ERROR translator_sv2::sv2::upstream: Failed to connect to 75.119.150.111:3333: Connection refused (os error 111).
2026-05-05T07:19:11.267829Z WARN translator_sv2: Attempt 2/3 failed for 75.119.150.111:3333: Io(Os { code: 111, kind: ConnectionRefused, message: "Connection refused" })
2026-05-05T07:19:11.267848Z INFO translator_sv2: Connection attempt 3/3...
2026-05-05T07:19:12.268675Z INFO translator_sv2::sv2::upstream: Trying to connect to upstream at 75.119.150.111:3333
2026-05-05T07:19:12.352914Z ERROR translator_sv2::sv2::upstream: Failed to connect to 75.119.150.111:3333: Connection refused (os error 111).
2026-05-05T07:19:12.352947Z WARN translator_sv2: Attempt 3/3 failed for 75.119.150.111:3333: Io(Os { code: 111, kind: ConnectionRefused, message: "Connection refused" })
2026-05-05T07:19:12.352955Z WARN translator_sv2: Max retries reached for 75.119.150.111:3333, moving to next upstream
2026-05-05T07:19:12.352957Z ERROR translator_sv2: All upstreams failed after 3 retries each
2026-05-05T07:19:12.352960Z ERROR translator_sv2: Couldn't perform fallback, shutting system down: CouldNotInitiateSystem
2026-05-05T07:19:12.352964Z WARN translator_sv2: Graceful shutdown: waiting 5 seconds for tasks to finish
2026-05-05T07:19:12.352979Z INFO translator_sv2: All tasks joined cleanly
2026-05-05T07:19:12.352981Z INFO translator_sv2: TranslatorSv2 shutdown complete.
2026-05-05T07:19:12.352999Z INFO translator_sv2: TranslatorSv2 dropped

@pavlenex
Contributor

pavlenex commented May 5, 2026

When fallback isn't enabled in the UI I can connect to either pool without any issues. Seems like we should test this some more.

@Shourya742
Member

Having a look

@pavlenex
Contributor

pavlenex commented May 5, 2026

@Shourya742 Not really sure what's triggering this, but I am sure you'll figure it out :) One thing to narrow down the search could be:

  • Payout Address
  • Worker Name

It could be that this happens when the worker name is empty: from what I can naively tell, BlitzPool expects a payout address and treats it as the worker name. That's just an assumption, though; this config (using the same value for worker and payout, with SRI as the primary pool) made things work for me. It could just as well be a wider incompatibility issue between the pools.

Screenshot 2026-05-05 at 12 36 16

@xyephy
Author

xyephy commented May 5, 2026

@pavlenex you might be right about what's causing the failure.
The two pools use completely different identity conventions:

  • Scenario 1 (SRI primary): identity sent was sri/solo/bc1qj658gf9c2kltge5k2x0a2p6cqme5fceacavwst
  • Scenario 2 (BlitzPool primary): identity sent was bc1qj658gf9c2kltge5k2x0a2p6cqme5fceacavwst.miner1

Why it breaks with fallback: the translator only has one user_identity field globally (server/src/config-generator.ts:139). Whatever format we built for the primary gets sent verbatim to whichever upstream is currently connected.

So when BlitzPool gets sri/solo/<addr> (Scenario 1), it kicks the connection within 200ms — and that's exactly what the log shows: Reader error error=SocketClosed immediately after the OpenExtendedMiningChannel was sent.

@xyephy
Author

xyephy commented May 5, 2026

The fix would be to make user_identity per-upstream instead of a single global value in sv2-apps. Since the translator already iterates upstreams for failover, it just needs to read the matching identity from the active block. The same applies to JDC.
Until sv2-apps supports per-upstream identity, anything we do here is just UX guarding. I don't see an existing sv2-apps issue for this; happy to open one and pick it up, unless there's one already that I might have missed.

@pavlenex
Contributor

pavlenex commented May 5, 2026

@xyephy Were you able to replicate this locally?

@Shourya742
Member

@pavlenex I was able to replicate this in apps. It seems a writer task isn’t undergoing fallback, which is causing the executor to hang. With the local fix, I was able to connect to our SRI pool after the fallback.

2026-05-05T09:34:05.343331Z  INFO translator_sv2: Starting Translator Proxy...
2026-05-05T09:34:05.343472Z  INFO translator_sv2: Initializing upstream connection...
2026-05-05T09:34:05.343495Z  INFO translator_sv2: Trying upstream 1 of 2: blitzpool.yourdevice.ch:3333
2026-05-05T09:34:05.343503Z  INFO translator_sv2: Connection attempt 1/3...
2026-05-05T09:34:06.345518Z  INFO translator_sv2::sv2::upstream: Trying to connect to upstream at blitzpool.yourdevice.ch:3333
2026-05-05T09:34:06.345586Z  INFO stratum_apps::network_helpers::resolve_hostname: Resolving hostname 'blitzpool.yourdevice.ch' via DNS...
2026-05-05T09:34:06.701409Z  INFO translator_sv2::sv2::upstream: Connected to upstream at 62.2.188.226:3333
2026-05-05T09:34:07.108148Z  INFO translator_sv2::sv2::upstream::common_message_handler: Received: SetupConnectionSuccess(used_version: 2, flags: 0x00000000)
2026-05-05T09:34:07.108279Z  INFO translator_sv2::sv1::sv1_server: Starting SV1 server on 0.0.0.0:34255
2026-05-05T09:34:07.108427Z  INFO translator_sv2::sv1::sv1_server: Translator Proxy: listening on 0.0.0.0:34255
2026-05-05T09:34:07.108636Z  INFO translator_sv2: Launching ChannelManager tasks...
2026-05-05T09:34:07.108706Z  INFO translator_sv2: Initializing monitoring server on http://0.0.0.0:9092
2026-05-05T09:34:07.108797Z  INFO translator_sv2::sv1::sv1_server::difficulty_manager: Variable difficulty adjustment enabled - starting vardiff loop
2026-05-05T09:34:07.108898Z  INFO translator_sv2::sv1::sv1_server: Starting job keepalive loop with interval of 60 seconds
2026-05-05T09:34:07.109359Z  INFO stratum_apps::monitoring::http_server: Starting monitoring server on http://0.0.0.0:9092
2026-05-05T09:34:07.109402Z  INFO stratum_apps::monitoring::http_server: Cache refresh interval: 15s
2026-05-05T09:34:07.110122Z  INFO translator_sv2::sv1::sv1_server::difficulty_manager: Starting vardiff loop for downstreams
2026-05-05T09:34:07.113040Z  INFO stratum_apps::monitoring::http_server: Swagger UI available at http://0.0.0.0:9092/swagger-ui
2026-05-05T09:34:07.113077Z  INFO stratum_apps::monitoring::http_server: Prometheus metrics available at http://0.0.0.0:9092/metrics
2026-05-05T09:34:13.677080Z  INFO translator_sv2::sv1::sv1_server: New SV1 downstream connection from 127.0.0.1:50146
2026-05-05T09:34:13.677180Z  INFO translator_sv2::sv1::sv1_server: Downstream 1 registered successfully (channel will be opened after first message)
2026-05-05T09:34:13.677320Z  INFO translator_sv2::sv1::sv1_server: SV1 server: opening extended mining channel for downstream 1 after first message
2026-05-05T09:34:13.677378Z  INFO translator_sv2::sv2::channel_manager: Sending OpenExtendedMiningChannel message to upstream: OpenExtendedMiningChannel(request_id: 1, user_identity: sri/solo/bc1q9f9vj7spn8h7qda6pn8d4g4j99f0mn9lhwz55j, nominal_hash_rate: 10000000000000, max_target: U256(000000000002d09371a41ebef0f5ac163c7c5d80571129ae7f7d75deba45e0c0), min_extranonce_size: 6)
2026-05-05T09:34:13.996069Z  WARN translator_sv2::sv2::channel_manager::mining_message_handler: Received: OpenMiningChannelError(request_id: 1, error_code: unknown-user)
2026-05-05T09:34:13.996161Z  WARN translator_sv2::sv2::channel_manager: ChannelManager::handle_upstream_frame requested fallback error_kind=OpenMiningChannelError
2026-05-05T09:34:13.996218Z  WARN translator_sv2::sv2::channel_manager: ChannelManager: unified message loop exited.
2026-05-05T09:34:13.996250Z  INFO translator_sv2: Preparing fallback
2026-05-05T09:34:13.996266Z  INFO translator_sv2::sv2::upstream: Upstream: fallback triggered
2026-05-05T09:34:13.996396Z  WARN translator_sv2::sv2::upstream: Upstream: task shutting down cleanly.
2026-05-05T09:34:13.996440Z  WARN translator_sv2::io_task: Writer task exited.
2026-05-05T09:34:13.996432Z  INFO translator_sv2::sv1::downstream: Downstream 1: fallback triggered
2026-05-05T09:34:13.996279Z  INFO translator_sv2::sv1::sv1_server: SV1 Server: fallback triggered, clearing state
2026-05-05T09:34:13.996355Z  INFO translator_sv2: Monitoring server: fallback triggered.
2026-05-05T09:34:13.996552Z  INFO stratum_apps::monitoring::http_server: Monitoring server received shutdown signal, stopping...
2026-05-05T09:34:13.996552Z  WARN translator_sv2::io_task: Reader task exited.
2026-05-05T09:34:13.996862Z  INFO stratum_apps::monitoring::http_server: Monitoring server stopped
2026-05-05T09:34:13.996892Z  INFO translator_sv2: Monitoring server task exited and signaled fallback coordinator
2026-05-05T09:34:13.996471Z  WARN translator_sv2::sv1::downstream: Downstream 1: unified task shutting down
2026-05-05T09:34:13.997059Z  INFO translator_sv2: All components finished fallback cleanup
2026-05-05T09:34:13.997252Z  INFO translator_sv2: Trying upstream 2 of 2: 75.119.150.111:3333
2026-05-05T09:34:13.997286Z  INFO translator_sv2: Connection attempt 1/3...
2026-05-05T09:34:14.998440Z  INFO translator_sv2::sv2::upstream: Trying to connect to upstream at 75.119.150.111:3333
2026-05-05T09:34:15.297641Z  INFO translator_sv2::sv2::upstream: Connected to upstream at 75.119.150.111:3333
2026-05-05T09:34:15.708197Z  INFO translator_sv2::sv2::upstream::common_message_handler: Received: SetupConnectionSuccess(used_version: 2, flags: 0x00000000)
2026-05-05T09:34:15.708287Z  INFO translator_sv2::sv1::sv1_server: Starting SV1 server on 0.0.0.0:34255
2026-05-05T09:34:15.708399Z  INFO translator_sv2::sv1::sv1_server: Translator Proxy: listening on 0.0.0.0:34255
2026-05-05T09:34:15.708540Z  INFO translator_sv2: Launching ChannelManager tasks...
2026-05-05T09:34:15.708574Z  INFO translator_sv2: Reinitializing monitoring server on http://0.0.0.0:9092
2026-05-05T09:34:15.708595Z  INFO translator_sv2::sv1::sv1_server::difficulty_manager: Variable difficulty adjustment enabled - starting vardiff loop
2026-05-05T09:34:15.708719Z  INFO translator_sv2::sv1::sv1_server: Starting job keepalive loop with interval of 60 seconds
2026-05-05T09:34:15.708872Z  INFO translator_sv2: Upstream and ChannelManager restarted successfully.
2026-05-05T09:34:15.708969Z  INFO stratum_apps::monitoring::http_server: Starting monitoring server on http://0.0.0.0:9092
2026-05-05T09:34:15.709007Z  INFO stratum_apps::monitoring::http_server: Cache refresh interval: 15s
2026-05-05T09:34:15.709859Z  INFO translator_sv2::sv1::sv1_server::difficulty_manager: Starting vardiff loop for downstreams
2026-05-05T09:34:15.712064Z  INFO stratum_apps::monitoring::http_server: Swagger UI available at http://0.0.0.0:9092/swagger-ui
2026-05-05T09:34:15.712096Z  INFO stratum_apps::monitoring::http_server: Prometheus metrics available at http://0.0.0.0:9092/metrics
2026-05-05T09:34:30.694962Z  INFO translator_sv2::sv1::sv1_server: New SV1 downstream connection from 127.0.0.1:50480
2026-05-05T09:34:30.695067Z  INFO translator_sv2::sv1::sv1_server: Downstream 1 registered successfully (channel will be opened after first message)


Successfully merging this pull request may close these issues.

Add ability for users to add multiple fallback pools through the UI
