sugondat-shim #24
Comments
I don't understand where the shim you're talking about is placed. In the introduction and in the first paragraph (UX/DX) it seems to be just another binary that implements all the ad-hoc/common-divisor RPCs that will be used by the adapters, but just after that you talk about key management. As far as I understood, the shim should be agnostic to the rollup: the adapters use the shim to easily communicate with the sugondat node, and the shim should not do key management. Using the correct keys is the responsibility of the rollup developers, who inject the keys into the adapters, which then sign and send transactions to the shim (or directly to the sugondat-node).
Yeah, I envisioned it as a binary that exposes an HTTP-based API and, for the near term, would communicate with a sugondat node.
It would still be rollup-agnostic. Blob submission requires signing the blobs sent to the DA layer, which is sugondat.
For me it's not obvious why the shim should not be able to do key management and why the adapters should be responsible for DA key management. It's certainly a design point, for sure, but I wonder why you think that would be the better approach? I am leaning towards pushing that complexity into the shim, because otherwise you would have to implement it in each adapter independently.
OK, I think I'm missing something then, because I don't think of it as the better approach, but more like the only one (not truly the only one, but the only one I can imagine). I will make sure to explain what I mean to avoid misunderstandings: in my idea the shim is some sort of middleman between the adapters and the sugondat nodes. But as I'm writing this, I start imagining how the shim could be placed not alongside the sugondat-nodes but rather alongside the DA adapter, so that each rollup would launch a rollup node and, alongside it, a shim that translates the communication towards the sugondat-nodes.
cool pic
I am not sure what you mean by parallel, but that sounds accurate. That is, each rollup full node would run three processes: sugondat-node, sugondat-shim and the rollup node. Although it is technically possible, and in some cases desirable, I don't think it would be common for a rollup node to connect to some public node/shim. If anything, I think it would be more probable for multiple instances of rollup+shim to connect to a public sugondat-node.
Mhh, OK, now I see why you talked about key management in the shim.
Perfect, here I was missing something. In my inexperience I thought: why should a rollup host also execute a sugondat-node? So I envisioned a structure where each rollup communicates with some shims that are executed alongside the sugondat-nodes. But you're saying that is not common, so having two (rollup + shim) or three (rollup + shim + sugondat-node) processes executing on the same machine is OK. I can't see the advantage of this over what I had in mind: asking demo-rollup full nodes to also run other processes seems like overhead for the rollup developer, while keeping the structure more separate and splitting the jobs among rollup and sugondat node maintainers seems cleaner.
I was lukewarm on the idea of the shim at first, but the testing/emulation benefits do seem fairly useful. My main concern is that we may end up with logic split halfway between the shim and the node. Certain operations are done more efficiently embedded directly alongside the node, and I wouldn't want to end up implementing custom RPCs on the node just so the shim can do its work, etc.
So let's say we implement #23
The question is: how? I'd like to propose that we consider implementing it as a shim/sidecar server running alongside the rollup and the sugondat-node, instead of implementing it as part of the RPC of sugondat-node.
There are several facets; let's go through each of them.
UX/DX
Yes, that would mean that the user will need to run another thing. IMO it's fine. I agree it would be better to run a single binary, but we are already past this: in the minimal deployment the rollup user would run the rollup node and the sugondat node as well. This already warrants using something like docker-compose. More realistically, the user will run some monitoring tools as well in a typical deployment. Adding another app here doesn't feel like much difference.
In a development environment, the dev would need to run the rollup node, sugondat-node and perhaps one or more polkadot validators, which already kinda assumes there would be some orchestration tooling (zombienet or docker-compose).
However, having a shim may actually improve the experience. Imagine that we could provide
sugondat-shim simulate --data /tmp
where, instead of depending on a full-fledged sugondat environment, it would simulate a DA layer. I expect this would improve the DX significantly.
Key management
There is a problem right now (#22): we completely ignore the transaction-signing aspect of blob submission in adapters. As I alluded to in #23, I think it's worthwhile to shift this complexity into the common API layer and make the adapters dumb. This would fit perfectly into the shim's set of responsibilities.
E.g. when running in non-simulation mode, the user would be able to specify:
sugondat-shim --submit-private-key=/var/secrets/my_key
(or sugondat-shim --submit-dev-alice
to preserve the existing behavior) and that would enable the blob submission endpoint.
Flexibility
We would decouple running sugondat-node from the rollup client. That:
Shortcomings of Substrate RPC
Funny thing, but this approach doesn't address the issue we discussed in Slack: we still have to request the full blocks. This is fine for the initial period: for the local use case it works; for the remote use case it's worse. In the future, though, sugondat-node should expose something more efficient.
Embedding
In case embedding is badly needed, it will be possible to arrange it anyway at relatively low cost through an hourglass pattern. Here is how we could achieve that: we link the shim into the adapter directly. The shim publishes a very slim FFI API that configures and sets up a server, plus a few FFI functions to send a message to the server and receive a result, much like an in-process HTTP server (although the API would be more complex if websockets/JSON-RPC are used for the shim transport).