Fee mechanism #1217
-
Thank you for the great write-up! This covers pretty much everything. A few comments below:
The general mechanism would be the same as on Ethereum: we define a "base" level of resources, and if a given block consumes more resources than the base, the base fee increases; if a block consumes fewer resources, the base fee decreases. For example, let's say we set the base for the number of output notes at 256 per block. If a block produces 512 notes, the fee per note will increase in the next block (the exact parameters TBD).
On Ethereum, the base fee can change by at most 12.5% from block to block (and this is the worst case if a block consumed 100% of allowed gas). We can use something similar or come up with a different value (but would probably need to think through the rationale).
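A minimal sketch of such an EIP-1559-style update rule, with purely illustrative parameters (a per-block output-note target and a change denominator of 8, i.e. a 12.5% max change - these are not decided values):

```python
# EIP-1559-style base fee adjustment, sketched with made-up parameters:
# `target` is the "base" resource level (e.g., 256 output notes per block)
# and `change_denominator` = 8 caps the per-block change at 12.5%.
def next_base_fee(base_fee: int, used: int, target: int, change_denominator: int = 8) -> int:
    if used == target:
        return base_fee
    # Scale the max change by how far utilization is from the target.
    delta = base_fee * abs(used - target) // target // change_denominator
    if used > target:
        return base_fee + max(delta, 1)  # always move up by at least 1
    return max(base_fee - delta, 0)
```

With a target of 256 notes, a block producing 512 notes pushes the fee up by the full 12.5% (e.g., 1000 → 1125), while an empty block drops it by the same fraction.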
On Ethereum, the base fee is burned and the "tip" part of the fee goes to the operator. In our case, the "tip" would be w/e remains after the base fee is satisfied. I think the tip can always go to the operator, but with the base fee we could have the following options:
In the centralized setting, the operator and batch builders would be the same - so, options 2 and 3 are basically equivalent.
Yes, but the dependency is logarithmic. For example, it takes a bit more effort to validate a proof of 2^16 cycles vs a proof for 2^20 cycles - but the difference between these is like 10%.
In the long run, we'd probably enforce something like that it is a valid (and maybe fungible) asset. In the short run, we'd probably dictate a specific asset (e.g., POL).
I think there are two parts here:
DOS attacks for network transactions are still an open question. A while back we discussed it in #18 - but some things may have changed since then. One mitigating circumstance is that the attack is not "free" - i.e., a network transaction consumes notes which are already on-chain, and so someone had to pay to put these notes there. Though, I think this is a relatively weak defense.

The second point also needs to be fleshed out in more detail. One approach could be to have the account against which the notes are executed pay the fee (i.e., the account would expose a procedure for this). This procedure could calculate the fee based on the current base fee levels - and so, it may make sense to record base fee levels in a block header somehow.

One other open question which I think is important to think through is: how do fees work in the context of transaction batches? Specifically, batches are atomic, so we either include all transactions from a batch in a block or none. Thus, from a block's standpoint, fees are collected per batch rather than per transaction. And the issue is that ideally we don't want to put a transaction that pays high fees relative to the resources it consumes into the same batch as a transaction which pays low fees. But I'm not sure if there is a good way to enforce that.
-
One potential way to address this is to "anchor" each batch to some block already existing in the chain which is not too far from the tip. Then, transactions in the batch would need to satisfy the base fees of the referenced block. The delta between the anchored block and the chain tip could be pretty low (e.g., 8 blocks). If a batch references a block which is older than this delta, it cannot be included in the chain. This still leaves the following undesirable effects:
-
Wouldn't this still allow an attacker to post many transactions that consume those notes but fail close to the end of execution to make the node/batch builder waste cycles on failing transactions? Saw this in #18:
Indeed, this would be ideal. Other chains require the fee to be known "upfront", so to speak. For example, in Sui you need to attach a "gas object" that you own to the transaction, from which the fees will be paid, and the fees will be deducted from it whether the transaction fails or not. Although, after writing this, I've realized that it's a problem here that there is no separation between the transaction kernel and note scripts in terms of aborting. I.e., if a note script fails and aborts, the kernel does too, right? So we would not get to
Why do we want to enforce it in the first place, instead of encouraging it? We don't want to put high- and low-fee transactions into the same batch because then the batch's fees and tips will average out across those transactions, meaning that the batch won't be as likely to be included as a batch containing just high-fee transactions - is that the motivation? Isn't this best left as an optimization problem for the batch builder, which is what I mean by "encourage"?

**Sponsored Transactions**

I'm also wondering if we have to design for transaction sponsoring, i.e., how can transactions be sponsored by a third party? Even though we have client-side execution, local transactions still incur a fee, and for many use cases transactions (local or network) may want to be sponsored, because the transaction creator might not have gas tokens themselves and/or might not want to own tokens for legal reasons - so sponsoring seems essential in such cases.

The first idea that came to mind was that a gas sponsor could create many notes ("gas notes") with fungible assets containing the gas token. A user would ask the sponsor for a gas note and create their transaction with the gas note to pay the fee from. The sponsor provides a signature for the gas note which allows the user to consume a certain amount from it, according to the note script, which is defined by the sponsor. The note script would create a new note out of the remaining gas tokens after the gas fee is paid. There may be a circular dependency here between creating the transaction on the user side and giving the sponsor the transaction to produce a signature for its note (haven't thought it through in detail), but from a high level it looks reasonable. A downside is that the gas notes must be "locked" on the sponsor side to prevent handing them out to different users, and I recall this being a tricky approach to get right from previous work.
-
This is in part why Starknet had the huge Cairo → Sierra code migration: to allow for provable failure, motivated largely by collecting fees from aborted transactions. I think at the time roughly 80% of transactions were invalid and being rejected post-execution. These often weren't even malicious, but simply bad dApp logic, poor fee estimation mechanisms, or fee fluctuations on L1. Effectively this meant that sequencer throughput was 5x what was seen on-chain. What Sierra does is essentially make every execution path provable, including failing ones. Gas-remaining checks are also injected intelligently, allowing the kernel to abort once gas has been depleted in a provable way.
-
This wouldn't be a problem for the batch builder because by the time we get to the point of putting transactions into a batch, the transactions would be individually proven (and so they would be valid). It would be a problem for building a single transaction which consumes many notes against a single account, where some note may fail after a bunch of notes have already been executed. This is a more general problem though (executing a note may fail for legitimate reasons) and to handle it robustly we probably need to implement "checkpointing" in the VM. This would allow us, in case of a failure, to roll back the VM state to some previously set checkpoint without having to re-run the whole program from scratch.
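To illustrate the checkpointing idea in miniature (the state-as-dict and note-as-callable model here is purely illustrative, not the actual VM API):

```python
import copy

# Illustrative model: VM state is a dict, and each "note" is a callable
# that mutates it (and may raise on failure). Before executing a note we
# snapshot the state; on failure we roll back to the snapshot instead of
# re-running the whole program from scratch.
def execute_notes(state: dict, notes) -> int:
    applied = 0
    for note in notes:
        checkpoint = copy.deepcopy(state)  # set a checkpoint
        try:
            note(state)
            applied += 1
        except Exception:
            state.clear()
            state.update(checkpoint)  # roll back to the checkpoint
    return applied
```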
I think the problem is actually a bit broader here: it is very difficult to tell whether a note execution fails for legitimate or illegitimate reasons. For example, let's say that we have a note which can be consumed only after some block height.

But also, this is a problem only for notes where the node cannot check "statically" (i.e., without execution) if a given note can be applied to a given account. Maybe a long-term solution here is for the node to somehow analyze incoming notes intended for public execution (based on note tags) to see under which conditions they can be executed. This will probably limit how complex such notes can get, though.
I think this should work, but agree that the user is basically relying on the sponsor not to "double-use" the note with someone else. Another possible solution in the long run is to allow transactions to pay fees in any asset. Under this design, batches would still need to pay fees to the block in the native token, but transactions could pay fees to the batch in anything. It then would be the job of the batch builder to convert the fees paid by the transaction into w/e fees are needed to make the batch go through. To give an example, let's say the native fee is POL and I want to pay transaction fee in USDC. I would include some amount in USDC as a fee with my transaction. The batch builder in the process of building the batch would pocket this USDC fee and would put up POL that would be necessary for including the batch into the block. Here again we run into the issue of "averaging fees across all transactions in a batch" though.
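The conversion step could be as simple as the batch builder quoting its own exchange rate and margin. A sketch with hypothetical numbers (nothing here is a protocol rule; rate and margin would be chosen by each builder):

```python
# The batch builder accepts a fee paid in an arbitrary asset (e.g., USDC)
# and fronts the native fee (e.g., POL) itself. Rate and margin are the
# builder's own choice, not protocol parameters.
def builder_accepts(offered: float, rate_to_native: float,
                    native_fee_due: float, margin: float = 0.02) -> bool:
    # Include the tx only if the offered asset, converted at the builder's
    # rate, covers the native fee plus a profit margin.
    return offered * rate_to_native >= native_fee_due * (1 + margin)
```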
Yeah, maybe that's not a bad idea but I haven't thought through the potential implications. For example, would this allow batch builders to include their (or someone else's) transactions into the batch "for free"? And is that a bad thing?
-
I think one thing that seems fairly certain so far is that we should update the transaction kernel outputs to look something like this:
-
I have one question about the block kernel and it seems to be related to fees (if not, we can continue the discussion elsewhere too). The batch kernel implements this "note erasure" mechanism by which a note created by a transaction in the batch, which is also consumed as an unauthenticated note by another transaction, is erased - in the sense that it doesn't appear in the input notes commitment or the output notes SMT. Should the block kernel implement the same mechanism? Technically it would be possible to track input and output notes across all batches, but I wonder if this is useful from a "protocol perspective".

**Incentive for unauthenticated notes on batch-level**

I basically wonder how likely it is that this mechanism would be used in the block kernel in the first place. Batch builders are incentivized to build batches in such a way, as it maximizes their rewards. Hence I assume they would rarely or never defer "note erasure" to the block kernel.
Which is all to say that it seems somewhat unlikely that "note erasure" would be used a lot in the block kernel. Overall, that raises the question of whether it makes sense to implement this feature in the block kernel at all. Or am I missing an obvious common case? One other question this brings up is why transaction creators would make use of unauthenticated notes in the first place, but that's another thing to discuss.
-
This just happens naturally when you submit multiple sequential transactions - I imagine this would happen all the time? As long as the transaction is accepted by the RPC, the caller either has to wait until it is on-chain (latency), or just proceeds as if the transaction will be included and submits the next one.
This will happen even in the single-batch-producer scenario just due to timing (iiuc). A batch gets built and sent off for proving, then the next transaction rolls in and would have been optimally included in the prior batch - but alas. This also means we're speculating that note erasure will be a major optimal strategy.
-
Mh, could be. Maybe I can't think of the right use cases.
That's a good point, thanks. Yeah, that does seem like a common enough case for it to make sense to include note erasure in the block kernel.
-
Yes, "note erasure" at the block level is possible (for the reasons discussed above) - so, ideally, we should support it. The only reason maybe not to consider this is the potential complexity. That is, if the process of figuring out which notes cancel out adds too much complexity to the block kernel, we could consider making simplifying assumptions (at least for now). One thing to note: by delegating note authentication to the batch/block producers, the users give up some privacy. Specifically, they reveal exactly which notes are being consumed in a given transaction (though, the contents of the notes could still remain private).
-
There are a lot of interesting resources on multi-dimensional fee design. For example:
And there are many others. There is no optimal solution, and so it seems fairly likely that we will not come up with the ideal solution in the first implementation.

**Transient and Non-Transient Costs**

Still, one goal I think would be to make sure the design allows for sustainable network operation, even if parameters have to be tweaked. Because of that, on a high level, what some other networks are doing is to split fees into two categories: transient and non-transient costs to the network.
The rationale for this is from the medium post above:
Examples of where this is/was done:
For public accounts and notes I think this applies to Miden as well, since the network has to assume it must store that data in perpetuity, and so its costs should be covered. I think the Sui model of storage pricing could be interesting to follow, though ideally we'd look at other chains that do this to get a bigger picture. I think the main point is that storage costs are a separate dimension from the transient costs, since they must inherently be priced differently. This is a good summary of the mechanism (but this is just one possibility):
What do you think about this on a high level (the exact mechanism in Miden is TBD)? I mainly wanted to bring this up here so that, if you roughly agree, we can exclude this topic from this task. So at first, I would go for fees for transient costs, which seems closer to what this issue is about, and that way it's a slightly more manageable task.
-
**EIP-1559 in Miden**

In EIP-1559 there are basically three parameters:
Users specify the tip and the max fee for a given transaction, and the base fee is protocol-defined. While building a block, if the current base fee exceeds the max fee, the transaction is not included; otherwise the tip is capped so that the base fee plus the tip never exceed the max fee. This gives users the ability to set a limit on the transaction fee they are willing to pay. One of the tricky things with adapting EIP-1559 directly to Miden is that so far we said a transaction should pay an arbitrary asset as an output of the transaction. This is processed by the batch builder, who will convert this into the fee token (let's say POL). The question is: how should a transaction creator determine the amount of the fee asset, and can they specify an equivalent of "max fee"? A previously mentioned approach could be:
The main concern I have with this approach is a scenario like this: Account A creates a transaction with an old reference block and pays 200 POL as the entire fee, based on that reference block's base fee of 190 POL plus a tip of 10 POL. The batch builder computes the base fee from its reference block, which results in 180 POL, i.e., the base fee has gone down. Now account A has already taken 200 POL out of their vault, and because the account may be private, there is no way for the batch builder to refund 10 POL to account A. This is because, to refund the account, at least the delta of the account would have to be known. So with this approach, the transaction creator would essentially simply pay 200 POL, i.e., 180 POL in base fee and 20 POL in tip, even though the tip was supposed to be just 10. In other words, transactions cannot specify a "max fee". A worse scenario would be if the base fee goes up instead and the transaction no longer includes a high enough fee to be included at all.

Now we could argue that the transaction creator should just pick a more recent reference block to avoid this situation and be closer to the chain tip, but I think this would limit Miden's model somewhat arbitrarily. That is, if transactions can generally be executed against old reference blocks, but the only reason to stay close to the chain tip is to be up to date on fees, then this feels limiting. It also makes certain scenarios like offline signing, which may take some time, harder. So what I take away from this is:
**Fee computation**

In Ethereum, the base fee changes with every block, making it react immediately to changes in demand. In Sui, the "reference price" is set once per epoch (~24h). In Sui, it is therefore possible to compute the required fee, minus the tip, for a long period of time. One of the major annoyances with fees is unpredictability, as explored above - this was one of the motivations for EIP-1559 - and if we were to make it possible to compute fees for a stable period of time, that would help the UX a lot. Granted, the presence of the base fee alone makes it generally more predictable to estimate the total fees than in a complete first-price auction. However, I wonder if we can improve this situation further. Could we instead allow transactions to compute their fee from their reference block? Yes, this would allow them to choose the price to a certain degree, and we could limit this by setting an oldest-block limit. For example, we could make the protocol compute a new base fee every 100 blocks:
Transactions can then choose any block within the allowed window to compute their fee from. This overall trades off two sides:
Note: How exactly the base fee is updated as well as the exact numbers used here is something that can be discussed later, if this idea should be explored further. In the scenarios above, this approach would mean the transaction fee payment could work something like this:
That way, the transaction now pays exactly what the user intended or it won't succeed, and it should be fairly straightforward to work with programmatically. Moreover, the scenario where a transaction pays too much or too little and cannot be included in a batch is very unlikely, if not impossible, at least when using the "native" fee asset. The only new condition that can happen is that a transaction becomes outdated.

Aside: There is a circular dependency with this approach if we start signing the delta (#1198). That is, we cannot refund the account some asset if the delta was already signed, but we can't compute fees if the most expensive part (signature verification) is not yet completed. Not sure yet how to resolve this, but there may be a way.

**Base Fee Burn**

As a mostly independent topic, based on this analysis of EIP-1559, I think we should also burn the base fee, even though collusion is not a concern in the centralized setting yet. If we know this will become a problem eventually, then we might as well design it to be compatible with that from the start.
(The extended argument is in Section 8.1 of that paper.)
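For reference, Ethereum's EIP-1559 per-gas accounting - which the "max fee" discussion above is about - can be summarized as:

```python
# EIP-1559 per-gas fee accounting as on Ethereum: the user signs
# `max_fee` and `max_priority_fee`; the protocol supplies `base_fee`.
def effective_fees(base_fee: int, max_fee: int, max_priority_fee: int):
    """Return (burned base fee, tip to producer) per gas unit, or None if
    the transaction cannot be included under the current base fee."""
    if base_fee > max_fee:
        return None  # not includable until the base fee drops
    tip = min(max_priority_fee, max_fee - base_fee)
    return base_fee, tip
```

This is exactly the refund-like behavior that is hard to replicate with private accounts: the difference between the max fee and what is actually charged is simply never taken, whereas in the 200 POL scenario above the funds have already left the vault.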
-
I am not against splitting fees into transient vs. non-transient, but the benefit is not immediately clear to me yet (I probably need to think this through a bit more). Basically, we'll need to compensate validators for storing data (though, in Miden's case, validators may not need to store much as they would be able to check the validity of blocks by verifying ZK proofs - so, theoretically, we may even be able to have "stateless validators"), but why could this not come from inflation rather than a storage fund?
I think we could try to be somewhere between Ethereum and Sui. The main issue I see with Sui's model is that in periods of high demand, the fee won't be able to adjust fast enough, which would result in a lot of dropped or delayed transactions (i.e., we won't be able to use the fee as an anti-congestion mechanism). At the same time, changing the base fee with every block by 12.5% could lead to a very rapid change in fees, which would make fee prediction much less accurate. So, what we could do is set this limit to be lower - e.g., a fee can change by at most 1% (or some other small value) per block. This may be predictable enough, and if a transaction wanted to increase the probability of making it into the block it could pay a slightly higher fee - e.g., paying 25% over the current base fee would guarantee that it would not fall below the base fee for the next 22 blocks (since 1.01^22 ≈ 1.245 < 1.25 < 1.01^23). Overall, I think we could keep the mechanism of setting the fee pretty flexible at the transaction level to accommodate 2 scenarios (this is similar to what you've proposed but there are also some differences):
Yeah, it is not clear to me if there is a good way to support "max fee". My current thinking is that the overall model could work as follows:
A couple of open questions:
-
I'm not sure yet what entity exactly would store public account and note data, but I assume those who store it should be compensated for it, to have an incentive for storage in the first place. I don't have a clear picture of Miden post-centralization yet, which makes this a bit difficult.
It definitely can come from inflation. The storage fund is just one option and inflation is another. My main point was about separating storage costs out from transient costs.
In my mind, Ethereum's model works best if the "max fee" mechanism is in place. Unfortunately, I don't see a reasonable way to support refunds at the protocol level, unless we calculate the base fee in the transaction. If we don't want to do this, then indeed the second-best option to make things more predictable is to change the "base fee change" parameter, as you mentioned. If we assume a 15s block time in Ethereum and a 3s block time in Miden, then Miden is five times faster than Ethereum. So, to get the equivalent "reactiveness" to changes in demand of EIP-1559 in Miden, we would have to set that parameter to 12.5% / 5 = 2.5%. We could start with something like that, or lower (like 1%) to make things more predictable. In any case, I agree this would be a parameter we could tweak to improve UX at the cost of reactiveness. Still, this would mean that transactions can technically reference any block, but some block header close to the chain tip would have to be fetched as well to get an up-to-date base fee reference from which to set the transaction's fee (to be clear, this recent header would be used for informational purposes only).
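A quick way to sanity-check these parameters: how many blocks of headroom does paying a premium over the current base fee buy when the base fee can rise by at most some fraction per block? (The 1% and 12.5% figures below are the examples from this thread, not decided values.)

```python
import math

def headroom_blocks(premium: float, max_rise_per_block: float) -> int:
    """Largest k such that (1 + max_rise_per_block)**k <= 1 + premium,
    i.e., how long a fee `premium` over the current base fee stays
    sufficient in the worst case of maximal per-block increases."""
    return math.floor(math.log(1 + premium) / math.log(1 + max_rise_per_block))
```

Paying 25% over the base fee survives 22 worst-case blocks at a 1% per-block cap, but only a single block at Ethereum's 12.5% cap.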
Yeah, I think implicitly requiring that network transactions reference a recent block is fine, so that it can calculate the fees in the most precise way possible. I assume such transactions would not be offline signed or in other ways slow from creation to reaching the mempool.
I thought that the block builder does not necessarily have to get the entire tip. The batches could be in competition with each other too and keep some of the tip I imagine? Or is there a reason why the entire tip should be paid forward? As long as the batch kernel ensures that the base fee of each included transaction was burnt, there could be a parameter that specifies what fraction or absolute number of the tip the batch should output for the block builder. Since building a block in Miden is basically an effort supported by the batch and block builder, my initial thought was that if we have some kind of "block reward" it would be split amongst those two. Not necessarily in equal parts, since building blocks requires more infrastructure to be maintained and data to be stored, but technically both could get their fair share of that reward (e.g. from inflation).
This would have to be handled as part of the batch kernel. And this question could be answered in the larger context of how the batch or block builder would receive rewards, since the mechanism for the batch builder's account to receive new funds as part of the batch might use a similar mechanism to getting some funds from the account "into the batch". Maybe with a regular transaction, or maybe with a more built-in mechanism. I'll think about this.
-
I was thinking this info could be provided in a variety of ways. For example, a wallet provider could have an API to retrieve the current base fee that would not require the user to sync to the chain tip (or download the latest block) to get this info.
Yes, we can definitely split the tip part between the block producer and the batch producer. In fact, we could let the batch producer claim the entire tip from the transaction fees and then decide separately how much of a tip they can include with the batch. This would effectively mean that transactions pay fees to batch builders and batch builders pay fees to block producers, but these two are determined/set separately. We should probably think through this a bit more holistically, though, in case there are some unintended consequences. Splitting block rewards is also possible, but we'll need to think about this more carefully so that there is no incentive for batch builders to produce empty batches.

Another thing to note is that batch builders could benefit from note erasure: since at the time a transaction is created, the user cannot know whether an unauthenticated note will be erased, they would need to cover the base fee associated with consuming the note. In case the note can be erased, this fee can then be claimed by the batch producer. Whether this ends up being a material component of batch builder compensation is probably something we won't know until the system is running and we have some "real-world" data.
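A sketch of that accounting under the simplifying assumption of a flat per-note base fee (illustrative only): transactions pay base fees on all the notes they touch, the batch owes base fees only on the notes that survive erasure, and the batch producer keeps the difference.

```python
def batch_excess(per_note_base_fee: int, notes_paid_for: int, notes_erased: int) -> int:
    """Base fee collected from transactions minus base fee the batch owes
    after erasure; the difference is claimable by the batch producer."""
    assert notes_erased <= notes_paid_for
    paid = per_note_base_fee * notes_paid_for
    owed = per_note_base_fee * (notes_paid_for - notes_erased)
    return paid - owed
```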
-
**Incentives**

**Block Producer**

We allow empty blocks so the chain can make progress even in the absence of transactions or batches. Block producers receive a block reward (TBD) independent of the block size. If the reward were tied to the block size, it would just incentivize the block producer to produce useless transactions and batches themselves to fill it up in the absence of useful transactions, which would be bad for the network. Including batches comes with more work than an empty block, but the tip should be high enough to compensate for that additional work and the risk that another block producer would produce a block faster (i.e., the uncle risk in Ethereum) - although since there is just a single block producer for now, that is an irrelevant point.

There is also a weak incentive for block producers to have an interest in a functioning network - although individual block producers may still exploit this for a quick profit, which is why it is weak. Another weak incentive is that block producers would benefit in tiny ways from burnt fees through overall deflation. So the higher the burnt base fees the better, which is an incentive to include transactions/batches that are not created by the block producer itself, because burning its own fees would be a net loss.

**Batch Producer**

Contrary to blocks, we don't have to allow empty batches, and the current Rust batch kernel already disallows those. So at least one transaction must be included. Batch producers are incentivized to fill batches, i.e., include transactions, through the tip of the transaction.
Good point, thanks for bringing that up. It seems to me like giving rewards to batch producers would be a problematic incentive and I can't come up with a remedy. If each batch is eligible for a reward independent of the transaction content, then by default blocks would be filled with as many batches as possible and transactions would be distributed over these batches as much as possible (e.g., one transaction per batch, and 64 batches per block), since that would maximize the reward of batch and block producers. We don't need to incentivize empty batch creation in the same way as empty block creation, so this does not seem like a useful incentive. Making the batch reward dependent on the number of transactions would mean batch producers would create useless transactions themselves to fill up their own batches, so this is not a good incentive either.

Because of the aforementioned necessity for the block producer to get a reward independent of block size, we can't incentivize the block producer to maximize the ratio of transactions to batches, or some similar metric, which seems like it would help mitigate the above problem on the batch producer side. This is a complex problem, so I might easily be missing something, but at this time I don't see a viable option for protocol-generated batch rewards. The most natural option is for the batch and block producer to split the tips in some way (again, not necessarily in equal parts). I'm not sure how to estimate whether this would be significant enough to incentivize batch and block producers without making transaction tips very high.
I agree, there should be an incentive for batch builders to erase notes. This implies we should calculate base fees in the batch based on the final number of input and output notes, so the batch producer has to burn fewer base fees than the sum of all transactions' base fees. The batch producer could pocket the excess.

The same may also apply to transactions executed against the same account that can be aggregated. For example, if there are ten transactions against the same account and each updates the same set of key-value pairs in a storage map, then the intermediate transactions can be erased. Because each transaction paid the base fee for the individual update, the batch producer can keep the excess after aggregation. This would only meaningfully apply to public account updates, and then only to those that can be meaningfully aggregated. But this is a good incentive for batch producers to aggregate as many transactions against the same account as possible, which is good for the network, as it lessens the load.

The note erasure incentive could also apply to the block builder. On the one hand, it should also be incentivized to erase notes; on the other hand, this puts the block producer somewhat in competition with the batch producers. The block producer would profit more from a set of batches that erase fewer notes than from a set of batches that erase more notes. I'm not sure how significant that is. Since the block producer is essentially forced to erase the notes from batches through the execution of the block kernel, and it receives a block reward anyway, I would err on the side of caution and not incentivize the block producer to erase notes in the same way as batch producers.

**Summary**

So to summarize:
**Pay Fee Procedure**

**Local Transactions**

For regular non-network transactions, users can call a pay-fee procedure.

**Network Transactions**

For network transactions there is an additional consideration. I'm assuming that network transactions are executed against accounts that do not require authentication, so calculating fees at the end of the tx script should be quite accurate in terms of the number of cycles of the transaction (that go into the fee calculation). This is accurate as long as the reference block is recent. Still, it could happen that a network transaction calculates the base fee from a reference that is already outdated.

**Block Producer Reward**

I think as part of building the block we can add a procedure in the block kernel that can be called at most once, which mints the block reward and adds it to a provided account's delta.
-
Is this not solved with a form of free market? Note erasure is opaque to the transaction, so the batch builder gets incentivized to build efficient batches and thereby pockets the excess, as you suggest. The batch builder should then pass some percentage of this excess on to the block builder to increase its odds of inclusion. Or, put differently: can the relationship between transactions and batches not be roughly the same as batches to blocks? You increase your odds of inclusion by increasing the "tip". Side note: efficient batches include stacking txs from the same account as well as note erasure, imo.
-
Is the incentive not simply that it is more time-efficient to build bigger blocks? An empty block would give some pittance that just about covers the proving cost (which itself is minimal for an empty block). But given that blocks are limited to one per slot, it makes sense for builders to assemble the maximum-profit block per slot, not necessarily the most efficient one. As in, it's better to sell a million compute units at a lower rate for a larger total profit than fewer units at a better rate but less total profit.
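A quick worked example of the "maximum profit per slot" point, with entirely hypothetical numbers:

```python
# A small block sold at a high per-unit rate vs. a full block at a lower rate.
small_block_profit = 200_000 * 12   # 200k compute units at 12 fee units each
big_block_profit = 1_000_000 * 9    # 1M compute units at 9 fee units each

# With one block per slot, the fuller block wins despite the worse rate.
assert big_block_profit > small_block_profit
```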
-
I think there could be other incentives as well. For example, there could be some consensus-specific incentives to include batches - e.g., batches carry validator signatures or some other "weight" such that including more batches means higher probability that the block will be accepted by the network (e.g., something like the FruitChain protocol).
Maybe we should skip block rewards entirely for now and introduce them only when we start working on decentralization/consensus. With a centralized block producer, there is a very strong incentive for the block producer to produce blocks and follow the protocol.
I agree with this with the only comment that (as mentioned above), I'd delay thinking about protocol-generated rewards for block producers till later (i.e., when we start thinking about decentralization).
Largely agree, but a couple of comments:
I think there could be a pretty accurate way of estimating how many cycles the epilogue for a given transaction takes (e.g., based on the number of output notes and account delta size). We could make sure that it is always a slight over-estimate - but overall, it shouldn't be too far off.
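A sketch of such an over-estimate, assuming the epilogue cost is roughly linear in the number of output notes and the account delta size; all constants here are hypothetical placeholders that would need calibrating against measured kernel cycle counts:

```python
EPILOGUE_FIXED_CYCLES = 2_000     # hypothetical fixed epilogue overhead
CYCLES_PER_OUTPUT_NOTE = 400      # hypothetical per-note cost
CYCLES_PER_DELTA_BYTE = 3         # hypothetical per-byte-of-delta cost

def estimate_epilogue_cycles(num_output_notes, delta_bytes, margin=1.10):
    """Estimate epilogue cycles with a slight over-estimate (10% margin)."""
    exact = (EPILOGUE_FIXED_CYCLES
             + num_output_notes * CYCLES_PER_OUTPUT_NOTE
             + delta_bytes * CYCLES_PER_DELTA_BYTE)
    return int(exact * margin)
```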
As I mentioned above, I'd probably skip the block rewards for now - but this does bring up a good point about how batch and block builders claim their portions of the tip. For block builders, it could be similar to what you described above. An alternative could be to create a note (probably a private note) in the block kernel which would include the tip and send it to whatever account the block producer chooses. Similar could be done at the batch level - i.e., as part of the batch kernel, batch producers would create a note which carries their portion of the tip to the account of their choosing. There could be other ways to do this too - e.g., keep track of "pending" rewards at the block header level and pay them out periodically (e.g., once per epoch). Though, this probably adds too much complexity.
-
I agree, that relationship should be similar. The difference is that transactions pay the base fee, while batches don't. In terms of incentives for the batch to include TXs and the block to include batches, it does seem equivalent though (i.e. tip = incentive).
Fully agree, I did not mean those to be separate or in competition to one another.
That's an interesting angle and I think it's true. My main concern is that tips may not be significant enough by the time they get to the block producer, i.e. the sum of tips may be much smaller compared to the block reward. Once we have some more concrete ideas for how to calculate base fees and what the tip might be, we'll have a better picture of this.
Good point. I think having just a single kernel procedure for fee payment that can be used with assets from accounts and notes is preferable.
Right, and to support this we may need to be able to query the current fee asset as well, so that scripts can get the asset and add to it if the asset type matches, or not do anything otherwise (instead of aborting, as they would if they called the fee procedure with a non-matching asset).

Block Producer Tip Claiming
That would work, too. The implementation details on this are not quite clear to me yet.

On the other hand, if a block happens to already contain 2^16 account updates, that is not even a theoretical problem, because that is just a list instead of a tree, so appending another account update is just fine.

As discussed in our call, if we want to restrict access to the reward for block producers, then with notes we can very easily restrict the reward by using a time- or block-height-locked note.

In Ethereum, stake withdrawals cannot happen immediately. 16 withdrawal requests are processed in each block, so it would take days for all validators to withdraw fully. This, afaict, is done to prevent sudden mass exits and to avoid validators evading slashing or penalties. However, tips are available immediately to the validator. If we don't deal with staking / block rewards right now and only have the tips, then tips being spendable immediately is fine, so both the note and account approaches would work.

The account approach may be simpler from the implementation POV, but private notes would impose less burden on the network. Both mechanisms are exceptions to the rule: notes are created by TXs and accounts are modified by TXs. Either approach would violate one of those rules. I don't see a clear winner, but would propose going with the note option. That results in a somewhat messy implementation, but the overall protocol and network benefit a bit more, and it would fairly easily enable timelocking in the future.

Batch Producer Tip Claiming

The batch producer must also be able to claim the tips of transactions. This has basically the same considerations as the block producer claiming tips. Whatever we conclude there applies here as well, I believe. The note and account concerns are essentially the same.
Non-native assets in batches

When transactions pay fees in non-native assets, the batch producer must be able to 1) transfer the tips to itself somehow and 2) put up a large enough amount of the native fee asset to cover the base fees of the transactions. A solution that works out of the box would be for the batch producer to create and include a transaction itself in the batch that outputs a large enough amount of the native fee asset.
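The batch-level check this implies could be as simple as the following hypothetical sketch:

```python
def native_fees_covered(tx_base_fees, native_asset_from_producer_tx):
    """True if the batch producer's own transaction outputs enough of the
    native fee asset to cover all base fees in the batch."""
    return native_asset_from_producer_tx >= sum(tx_base_fees)
```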
Avoiding the base fee burn

A mechanism that avoids this base fee burn would still be desirable, so as not to disincentivize and penalize batch producers for offering this conversion service. One approach is to allow the batch producer to exempt a base fee from a transaction like above and prove that it is theirs via authentication (plus some other restrictions, similar to below). However, the authentication alone would add a lot of cycles, so maybe there's a better way. Another approach could be to allow batch producers (generally) to provide a
This should have the following desirable properties:
This approach makes the batch kernel more complicated. I think it might be worth it to not disincentivize conversion of non-native fees into native fees, which would be a nice feature.

One practical issue with using a transaction may be that the batch producer needs multiple accounts to choose from when building batches, since while a batch is in flight with some transaction against an account, that account cannot be used in another batch.

Here are some other ideas for completeness / future reference, but they're too complex or impossible (unformatted):

Executing a batch against an account is a messy, but probably possible option. This account must exist, so we don't have to deal with account creation here. The `ACCOUNT_INIT_STATE_COMMITMENT` could be another input of the batch kernel to facilitate this. We'd probably also need an equivalent `ACCOUNT_FINAL_STATE_COMMITMENT` output or a general `ACCOUNT_UPDATES_COMMITMENT` where this can be included (see also this). The only operations supported against the account would then be to remove an asset from its vault and call its authentication procedure. That would then imply supporting passing a `TransactionAuthenticator` to a batch executor to support getting that signature. We should be able to enforce in the batch kernel "epilogue" that only the native fee asset was removed from an account's vault and that its nonce was incremented. Anything else would abort. I'm probably still overlooking something about this - the complexity of this option is high.

Somehow letting the batch consume a note directly is an impossible option. Note consumption by definition requires an account to execute against, and since it's not there, the procedures from the note script won't be able to execute.
-
To calculate the actual fees of a transaction I would suggest the following approach. On the highest level, I think it makes sense to price computation and storage separately, as mentioned previously. This means we'll have a compute price and a storage price.

At first, let's deal with computation in isolation. The following transaction metrics add to the total computation units of a transaction:
All the public data processed by the transaction incurs a small computational cost for the network that we could reflect in the computational cost. We could also not do this and price it completely in the storage units. For now I included it.

I wrote a Python script (https://github.com/0xPolygonMiden/miden-base/blob/pgackst-fee-calculator/scripts/feecalc.py) to experiment with the parameters, but the quick overview is this:

Here are some examples of the total computation units for a variety of transactions. The number in parentheses is the computation units contributed by that column. To keep the combinatorial explosion reasonable, this always has account delta = 0, because the influence of the data is already reflected through the public notes.
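Under the two-price model described above, the total fee would reduce to something like the following; both prices are hypothetical placeholders:

```python
COMPUTE_PRICE = 2   # fee units per computation unit (hypothetical)
STORAGE_PRICE = 5   # fee units per storage unit (hypothetical)

def transaction_fee(compute_units, storage_units):
    """Price computation and storage separately and sum the two components."""
    return compute_units * COMPUTE_PRICE + storage_units * STORAGE_PRICE
```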
Main questions/thoughts:
Once we have an idea of the order of magnitude of computation units, we can come up with the appropriate base fee calculation.
-
Something I'm not entirely understanding and that I've probably missed - but why do we care about the difference between batch and block proving costs? Or rather, why do we care about teasing them apart. I'm ignoring token/fee burning because I feel like that's orthogonal to the problem at hand (but maybe this is where my misunderstanding comes in). Why does the entire tip not go to the batch producer, who then selects how much of this tip to forward to the block producer to include the batch? If the batch producer is greedy then the batch is never included and this is enough incentive to split according to effort. Where cost of effort is determined by the market. Is the issue that this depends on the consensus mechanism? e.g. if a block builder is pre-determined with staking then the builder has "complete" power for a block? |
-
I was thinking about network transactions, fees and incentives and wanted to write down some quick thoughts. Generally, each transaction needs to pay fees. Network transaction builders (NTB) are unlikely to build transactions and pay the fees themselves. Most likely, each note intended for consumption by a network transaction should include some payment that gets aggregated into the network transaction's fee, plus a tip for the NTB (the tip could be added later for the decentralized setting). I guess the NTB and the batch builder could be similar entities ultimately participating in building parts of blocks, and each must be incentivized to do so. The main challenges with this are:
-
I am a bit late to the party. I am trying to read everything. One thought, and please ignore if this was already discussed and dismissed: can we have a fee-share mechanism between the smart contract used and the network? I would love to incentivize products on Miden such that they get a share of the fee. That could mean that whenever a user transfers USDC, the USDC contract owner gets 10% of the tx fee. That mechanism would help applications monetize faster, which would help them raise due to an existing cashflow.
-
Here is a summary of the most important points from the discussion so far.

Base Fees

EIP-1559-style fee mechanism. The fee of a transaction consists of:
How does the base fee change?
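For reference, the EIP-1559-style update rule discussed here can be sketched as follows. Ethereum uses a change denominator of 8, which caps the per-block change at 12.5% because block consumption is capped at twice the target; our parameters are TBD:

```python
MAX_CHANGE_DENOMINATOR = 8  # Ethereum's value; caps the change at +/-12.5%

def next_base_fee(base_fee, units_used, units_target):
    """EIP-1559-style base fee update: rises when a block is over target,
    falls when under, unchanged when exactly on target."""
    delta = base_fee * (units_used - units_target) // (units_target * MAX_CHANGE_DENOMINATOR)
    return max(base_fee + delta, 1)  # keep the base fee positive
```

For example, a block that consumes twice the target raises the base fee by the maximum 12.5%, while an empty block lowers it by 12.5%.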
What is the target time?
This should make fees predictable because the base fee is protocol-defined and changes only according to protocol rules, while tips are only relevant under congestion.

What happens to the base fee?
What happens to the tip?

Given to the batch producer as an incentive to include transactions. So transactions are in competition with each other to be included in batches, while batches are in competition with each other to be included in blocks. However, this competition should only really occur and matter under congestion. Otherwise, the block producer will be happy to include any batches at all, since it can pocket their tips.

(Non-)Transient Costs
Dimensions
These reduce to the following three major components, with the primary cost driver of each mentioned below:
Fee Kernel APIs
Overview

Here is a visualization for a typical transaction inclusion in a block:
Misc Notes
-
I realize I’m quite late to this discussion, but after following the conversation, I wanted to share a perspective that might help simplify things. I can’t shake the feeling that we might be overthinking the fee model at this stage. The assumptions I’m working with are:
With that in mind, a simple approach might be: A. Prioritize transactions by the fee paid. As we approach decentralization and need to properly incentivize batch/block builders and other stakeholders, we can revisit a more nuanced, multi-dimensional fee model—possibly even Ethereum-style. But for now, I’d advocate for keeping it simple. A complex fee model won’t move the needle in the short term and only adds friction for developers and users. In these early years, our priority should be ease of use, not fee equilibrium. We're happy to subsidize—profitability of the centralized node isn't a concern right now. Adoption is. |
-
We need a fee mechanism in Miden and this issue is the start of a description of it.
As mentioned here (#525 (comment)), the mechanism is intended to be multi-dimensional and depend on a number of parameters:
On the implementation side, a transaction must be able to specify how it wants to pay its fees. One idea from @bobbinth was to have a procedure in the transaction kernel which users can call, likely from the transaction script, and pass a fungible asset. The fees will be paid with this fungible asset. The asset must be produced as a public output of the transaction kernel next to the current outputs [OUTPUT_NOTES_COMMITMENT, FINAL_ACCOUNT_HASH]. Presumably, this asset would continue to be processed by the operator in the block kernel and, for example, moved to their account's asset vault. The epilogue invariant that checks that no net asset creation or destruction happened must take this into account.
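As a sketch, the extended invariant would treat the fee asset as a third, explicitly accounted-for output. This is a simplification that ignores faucet minting and burning, and all names are hypothetical:

```python
def assets_balanced(input_asset_amounts, output_asset_amounts, fee_amount):
    """No net asset creation or destruction: everything consumed must show up
    either in the outputs or in the fee asset handed to the operator."""
    return sum(input_asset_amounts) == sum(output_asset_amounts) + fee_amount
```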
Open Questions