forked from ElementsProject/lightning
Update remote hsmd to v23.08rc1 #94
Draft
ksedgwic wants to merge 584 commits into master from 2023-08-remote-hsmd-v23.08rc1
Conversation
We also document how the grpc test mode works, and why it currently lacks coverage.
Changelog-Changed: pyln-testing: The grpc dependencies are now optional.
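A minimal sketch, assuming nothing about pyln-testing's real module layout, of the optional-dependency pattern this commit describes: probe for the grpc packages at import time and only enable the grpc test mode when they are present.
```python
# Hypothetical sketch; names like HAVE_GRPC are illustrative, not
# pyln-testing's actual identifiers.
try:
    import grpc  # noqa: F401  (only needed for the grpc test mode)
    HAVE_GRPC = True
except ImportError:
    HAVE_GRPC = False


def grpc_mode_enabled(requested: bool) -> bool:
    """The grpc test mode runs only if requested and importable."""
    return requested and HAVE_GRPC
```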
Signed-off-by: Rusty Russell <[email protected]>
Signed-off-by: Rusty Russell <[email protected]>
Most code stolen from commando, but it uses the db directly, not the datastore. Signed-off-by: Rusty Russell <[email protected]>
This looks suspiciously like `commando-rune`! Signed-off-by: Rusty Russell <[email protected]>
Signed-off-by: Rusty Russell <[email protected]>
This extracts the core checking functionality for a rune, so it can easily be used more widely than just by commando. Signed-off-by: Rusty Russell <[email protected]>
…ed commando ones. Rune functionality moved into core from the commando plugin.
Changelog-Added: JSON-RPC: `checkrune`: check rune validity for authorization; `createrune` to create/modify a rune; `listrunes` to list existing runes; `blacklistrune` to revoke permission of a rune.
Changelog-Deprecated: JSON-RPC: `commando-rune`, `commando-listrunes` and `commando-blacklist`.
No-schema-diff-check
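As a rough usage sketch of the new rune commands (the exact parameter spellings and restriction syntax are assumptions here; the manpages are authoritative), driving them through pyln-client's generic `call`:
```python
from pyln.client import LightningRpc

# Path is illustrative; point this at a real lightning-rpc socket.
rpc = LightningRpc("/tmp/l1-regtest/regtest/lightning-rpc")

# Create a rune restricted to one method (restriction format assumed).
rune = rpc.call("createrune", {"restrictions": [["method=listpeers"]]})["rune"]

print(rpc.call("listrunes"))
print(rpc.call("checkrune", {"rune": rune, "method": "listpeers"}))
rpc.call("blacklistrune", {"start": 0, "end": 0})  # revoke rune id 0 (assumed)
```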
This is neater anyway, but we still need to tell memleak code about the htable. Signed-off-by: Rusty Russell <[email protected]>
Writing to the gossip_store file is not explicitly synchronized, so it seems that connectd has not caught up with the dying flags we've set. Simply wait for a second. Hacky, but should work.
```
def test_gossip_not_dying(node_factory, bitcoind):
    l1 = node_factory.get_node()
    l2, l3 = node_factory.line_graph(2, wait_for_announce=True)

    l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
    # Wait until it sees all the updates, node announcements.
    wait_for(lambda: len([n for n in l1.rpc.listnodes()['nodes']
                          if 'alias' in n])
             + len(l1.rpc.listchannels()['channels']) == 4)

    def get_gossip(node):
        out = subprocess.run(['devtools/gossipwith',
                              '--initial-sync',
                              '--timeout-after=2',
                              '{}@localhost:{}'.format(node.info['id'], node.port)],
                             check=True,
                             timeout=TIMEOUT, stdout=subprocess.PIPE).stdout

        msgs = []
        while len(out):
            l, t = struct.unpack('>HH', out[0:4])
            msg = out[2:2 + l]
            out = out[2 + l:]

            # Ignore pings, timestamp_filter
            if t == 265 or t == 18:
                continue
            # channel_announcement, node_announcement or channel_update
            assert t == 256 or t == 257 or t == 258
            msgs.append(msg)
        return msgs

    assert len(get_gossip(l1)) == 5

    # Close l2->l3, mine block.
    l2.rpc.close(l3.info['id'])
    bitcoind.generate_block(1, wait_for_mempool=1)

    l1.daemon.wait_for_log("closing soon due to the funding outpoint being spent")

    # We won't gossip the dead channel any more (but we still propagate node_announcement)
>   assert len(get_gossip(l1)) == 2
E   assert 4 == 2
E    +  where 4 = len([b'\x01\x01L\xc2\xbe\x08\xbb\xa8~\x8f\x80R\x9e`J\x1cS\x18|\x12\n\xe5_6\xb0\xa6S\x9fU\xae\x19\x9c\x1fXB\xab\x81N\x13\xdc\x8e}\xb9\xb0\xb6\xe6\x14h\xd4:\x90\xce\xc3\xad\x9ezR`~\xba@\xc9\x91e\x89\xab\x00\x07\x88\xa0\x00\n\x02i\xa2d\xb8\xa9`\x02-"6 \xa3Y\xa4\x7f\xf7\xf7\xacD|\x85\xc4l\x92=\xa53\x89"\x1a\x00T\xc1\x1c\x1e<\xa3\x1dY\x02-"SILENTARTIST-27fc801-modded\x00\x00\x00\x00\x00\x00\x00',
E                        b'\x01\x01M\x00\x86\x8e4\xc8\x90p\n\x98\xf7\xce4\x1e\xd9\xd6-6\xfb(\xf0\xe4\xb7\x90\x7f\x89\xb9\xfa\x00\x82\x1b\xeb\x1fY\x93\x1e\xe0c\xb2\x0e<\xe6\x06x\xb7\xe54};\xfbd\xa0\x01S\xcf\xe8{\xf8\x8f/\xa7\xc0\xe2h\x00\x07\x88\xa0\x00\n\x02i\xa2d\xb8\xa9`\x03]+\x11\x92\xdf\xba\x13N\x10\xe5@\x87]6n\xbc\x8b\xc3S\xd5\xaavk\x80\xc0\x90\xb3\x9c:]\x88]\x03]+HOPPINGFIRE-27fc801-modded\x00\x00\x00\x00\x00\x00\x00\x00',
E                        b'\x01\x02~\xe0\x13\xb4\x84Gz\xcf(\xd4w\xa7\x9bZ\x1a\xe82\xd1\xe1\x1bLm\xc8\n\xcd\xd4\xfb\x88\xf8\xc6\xdbt\\v\x89~\xd1.e\xc8\xa8o\x9c`\xd5\xa8\x97\x11l\xf2g\xcb\xa8\xcf\r\x869\xd3\xb5\xd5\x9a\xa0my\x9f\x87\xebX\x0b\x9e_\x11\xdc!\x1e\x9f\xb6j\xbb6\x99\x99\x90D\xf8\xfe\x14h\x01\x16#\x936B\x86\xc6\x00\x00g\x00\x00\x01\x00\x00d\xb8\xa9d\x01\x02\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\n\x00\x00\x00\x00;\x023\x80',
E                        b"\x01\x0284\xf1a\x86z\x8e\xf2\xe5'\xf7\xfe1\x8d\x96R\x0c\xe7\x1fj#\xaf\xbd/\xba\x10e\xd1\xccQ-\xcf/>\xa5g\xc6\xd8\x9cO \xe7~\xb3\xda\xe0\\vg\xfb\x02&T\x93\xa0\xd4\x95\x8e\xd5L\x12\x9a\xf7\xe6\x9f\x87\xebX\x0b\x9e_\x11\xdc!\x1e\x9f\xb6j\xbb6\x99\x99\x90D\xf8\xfe\x14h\x01\x16#\x936B\x86\xc6\x00\x00g\x00\x00\x01\x00\x00d\xb8\xa9d\x01\x03\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\n\x00\x00\x00\x00;\x023\x80"])
```
Signed-off-by: Rusty Russell <[email protected]>
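The "wait for a second" workaround described above would amount to something like this sketch just before the failing assertion (the exact placement in the real commit may differ):
```python
import time

# Hacky: gossip_store writes aren't synchronized, so give connectd a
# moment to notice the dying flag before re-querying the gossip.
time.sleep(1)
assert len(get_gossip(l1)) == 2
```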
We left their_features dangling, allocated off the hook_payload, whereas we explicitly move it to the peer, so steal/take it onto that.
```
- Node /tmp/ltests-gh55a_h2/test_connection_moved_1/lightning-2/ has memory leaks: [
    {
        "backtrace": [
            "/home/runner/work/lightning/lightning/ccan/ccan/tal/tal.c:477 (tal_alloc_)",
            "/home/runner/work/lightning/lightning/ccan/ccan/tal/tal.c:506 (tal_alloc_arr_)",
            "/home/runner/work/lightning/lightning/connectd/connectd_wiregen.c:410 (fromwire_connectd_peer_connected)",
            "/home/runner/work/lightning/lightning/lightningd/peer_control.c:1418 (peer_connected)",
            "/home/runner/work/lightning/lightning/lightningd/connect_control.c:592 (connectd_msg)",
            "/home/runner/work/lightning/lightning/lightningd/subd.c:557 (sd_msg_read)",
            "/home/runner/work/lightning/lightning/ccan/ccan/io/io.c:59 (next_plan)",
            "/home/runner/work/lightning/lightning/ccan/ccan/io/io.c:407 (do_plan)",
            "/home/runner/work/lightning/lightning/ccan/ccan/io/io.c:417 (io_ready)",
            "/home/runner/work/lightning/lightning/ccan/ccan/io/poll.c:453 (io_loop)",
            "/home/runner/work/lightning/lightning/lightningd/io_loop_with_timers.c:22 (io_loop_with_timers)",
            "/home/runner/work/lightning/lightning/lightningd/lightningd.c:1243 (main)"
        ],
        "label": "connectd/connectd_wiregen.c:410:u8[]",
        "parents": [
            "lightningd/peer_control.c:1415:struct peer_connected_hook_payload",
            "lightningd/plugin_hook.c:260:struct plugin_hook_request **NOTLEAK**",
            "lightningd/plugin_hook.c:87:struct hook_instance *[] **NOTLEAK**"
        ],
        "value": "0x5582622b1ff8"
    }
]
```
Signed-off-by: Rusty Russell <[email protected]>
l1 shuts down too fast: channeld doesn't get to tell gossipd about the channel, so when l2 sends a (private) channel_update we complain:
```
lightningd-2 2023-07-20T03:42:37.744Z DEBUG gossipd: received private channel announcement from channeld for 103x1x0
lightningd-2 2023-07-20T03:42:37.791Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-hsmd: Got WIRE_HSMD_CUPDATE_SIG_REQ
lightningd-2 2023-07-20T03:42:37.796Z DEBUG hsmd: Client: Received message 3 from client
lightningd-1 2023-07-20T03:42:37.857Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-gossipd: Bad gossip order: WIRE_CHANNEL_UPDATE before announcement 103x1x0/0
lightningd-1 2023-07-20T03:42:37.864Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-gossipd: Bad gossip order: WIRE_CHANNEL_UPDATE before announcement 103x1x0/0
```
Signed-off-by: Rusty Russell <[email protected]>
Sometimes syncing 4032 blocks can take too long:
```
    # This can timeout, so do it in easy stages.
    for i in range(16):
        bitcoind.generate_block(4032 // 16)
>       sync_blockheight(bitcoind, [l2, l3])

tests/test_closing.py:996:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
contrib/pyln-testing/pyln/testing/utils.py:135: in sync_blockheight
    wait_for(lambda: n.rpc.getinfo()['blockheight'] == height)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

success = <function sync_blockheight.<locals>.<lambda> at 0x7ffb00227670>
timeout = 180

    def wait_for(success, timeout=TIMEOUT):
        start_time = time.time()
        interval = 0.25
        while not success():
            time_left = start_time + timeout - time.time()
            if time_left <= 0:
>               raise ValueError("Timeout while waiting for {}".format(success))
E               ValueError: Timeout while waiting for <function sync_blockheight.<locals>.<lambda> at 0x7ffb00227670>
```
Signed-off-by: Rusty Russell <[email protected]>
h->ss is allocated, but the previous one is not freed. It will be freed as soon as the `struct handshake` is freed, but in the meantime a temporary "leak" gets reported:
```
**BROKEN** connectd: MEMLEAK: 0x55adfcff2f48
**BROKEN** connectd:   label=connectd/handshake.c:647:struct secret
**BROKEN** connectd:   backtrace:
**BROKEN** connectd:     ccan/ccan/tal/tal.c:477 (tal_alloc_)
**BROKEN** connectd:     connectd/handshake.c:647 (act_one_initiator)
**BROKEN** connectd:     connectd/handshake.c:1023 (initiator_handshake_)
**BROKEN** connectd:     connectd/connectd.c:615 (connection_out)
**BROKEN** connectd:     ccan/ccan/io/io.c:59 (next_plan)
**BROKEN** connectd:     ccan/ccan/io/io.c:407 (do_plan)
**BROKEN** connectd:     ccan/ccan/io/io.c:423 (io_ready)
**BROKEN** connectd:     ccan/ccan/io/poll.c:453 (io_loop)
**BROKEN** connectd:     connectd/connectd.c:2215 (main)
**BROKEN** connectd:   parents:
**BROKEN** connectd:     connectd/handshake.c:402:struct handshake
**BROKEN** connectd:     connectd/connectd.c:1774:struct connecting
```
Signed-off-by: Rusty Russell <[email protected]>
We tried to fix this flake before, but now that it has actually happened again, it's clear that b5845af wasn't correct.
```
        # If this happens fast enough, connect fails with "disconnected
        # during connection"
        try:
            l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
        except RpcError as err:
>           assert "disconnected during connection" in err.error
E           assert 'disconnected during connection' in {'code': 402, 'message': 'disconnected during connection'}
E            +  where {'code': 402, 'message': 'disconnected during connection'} = RpcError("RPC call failed: method: connect, payload: {'id': '022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59', 'host': 'localhost', 'port': 41849}, error: {'code': 402, 'message': 'disconnected during connection'}").error

tests/test_misc.py:2728: AssertionError
```
Signed-off-by: Rusty Russell <[email protected]>
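The traceback shows `err.error` is a dict, not a string, so the fix presumably matches on its `message` field; a sketch of that shape:
```python
# Continues the test context above (l1, l2, RpcError from pyln.client).
try:
    l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
except RpcError as err:
    # err.error is {'code': 402, 'message': '...'}: check the message field.
    assert "disconnected during connection" in err.error['message']
```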
The logs show we're not waiting for the right transaction, so be explicit.
```
        # Check that channels gets refreshed!
        scid = l1.get_channel_scid(l2)
        l1.rpc.setchannel(scid, feebase=123)
        wait_for(lambda: l3.rpc.sql("SELECT short_channel_id FROM channels WHERE base_fee_millisatoshi = 123;")['rows'] == [[scid]])
        l3.daemon.wait_for_log("Refreshing channels...")
        l3.daemon.wait_for_log("Refreshing channel: {}".format(scid))

        # This has to wait for the hold_invoice plugin to let go!
        l1.rpc.close(l2.info['id'])
        bitcoind.generate_block(13, wait_for_mempool=1)
>       wait_for(lambda: len(l3.rpc.listchannels()['channels']) == 2)

tests/test_plugin.py:4143:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

success = <function test_sql.<locals>.<lambda> at 0x7fde3b9cd3a0>, timeout = 180

    def wait_for(success, timeout=TIMEOUT):
        start_time = time.time()
        interval = 0.25
        while not success():
            time_left = start_time + timeout - time.time()
            if time_left <= 0:
>               raise ValueError("Timeout while waiting for {}".format(success))
E               ValueError: Timeout while waiting for <function test_sql.<locals>.<lambda> at 0x7fde3b9cd3a0>
```
Signed-off-by: Rusty Russell <[email protected]>
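Being explicit could look like the following sketch, mining only once the specific close transaction is in the mempool rather than "any one tx" (the `txid` field of `close`, and passing a txid list to `wait_for_mempool`, are assumptions here):
```python
# Continues the test context above (l1, l2, l3, bitcoind, wait_for).
resp = l1.rpc.close(l2.info['id'])
bitcoind.generate_block(13, wait_for_mempool=[resp['txid']])
wait_for(lambda: len(l3.rpc.listchannels()['channels']) == 2)
```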
Recent cppcheck doesn't like our code; until we fix that, make it easy to run every other source check. Signed-off-by: Rusty Russell <[email protected]>
This way you can run pytest with `-k 'not _tor_'`. Signed-off-by: Rusty Russell <[email protected]>
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.1 to 41.0.2.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](pyca/cryptography@41.0.1...41.0.2)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
For now, it's set from the global config. Signed-off-by: Rusty Russell <[email protected]>
…llers. Since it's going to be per-channel, this makes more sense. Signed-off-by: Rusty Russell <[email protected]>
Changelog-Added: JSON-RPC: `listpeerchannels` has a new field `ignore_fee_limits` Signed-off-by: Rusty Russell <[email protected]>
It's always true, now that the deprecated APIs are removed. Signed-off-by: Rusty Russell <[email protected]>
Signed-off-by: Rusty Russell <[email protected]>
Changelog-Added: JSON-RPC: `setchannel` adds a new `ignorefeelimits` parameter to allow the peer to set arbitrary commitment transaction fees on a per-channel basis.
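A hedged usage sketch of the two changelog entries above (field spellings taken from the changelog text; `rpc`, `scid` and `peer_id` are placeholders):
```python
# Stop enforcing commitment fee limits on one channel, then read the
# flag back via listpeerchannels.
rpc.call("setchannel", {"id": scid, "ignorefeelimits": True})
channels = rpc.call("listpeerchannels", {"id": peer_id})["channels"]
assert channels[0]["ignore_fee_limits"] is True
```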
Signed-off-by: Rusty Russell <[email protected]>
Signed-off-by: Rusty Russell <[email protected]>
…rk, and vlsd job control
Skip test with bolt12 invoice for now
- "channel stub can only return point for commitment number zero" - likely caused by CLN commit c0cc285 - possibly we relax the check, may not be security critical - See issue https://gitlab.com/lightning-signer/validating-lightning-signer/-/issues/245
DON'T MERGE THIS, give it a good branch / tag when it is done