[DISCUSSION] - BRC-95 Atomic BEEF vs BRC-62 BEEF #84
Replies: 14 comments 9 replies
-
The BRC-62 (V1) BEEF spec was motivated by the need to support the validity of a single transaction. But the serialization format trivially extends to support multiple new transactions, in which case it becomes a rather arbitrary requirement to say which one is "special". The sort order does require dependencies to appear before dependents, but it explicitly does not require that transactions be no more than one generation from a BUMP. That is, from a "newness" perspective, a BEEF's purpose is to contain many transactions that are new. BRC-95 clarifies and makes explicit the originally assumed but not enforced use case of a BEEF in support of a single newest transaction. But in many use cases it makes a great deal of sense to aggregate and share sets of "new" transactions. Consider the ARC endpoint https://tapi.taal.com/arc/v1/tx to which a serialized BEEF may be posted:
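The ordering rule described above (dependencies serialized before dependents) can be sketched as a simple check. The shapes below are illustrative only, not the BRC-62 wire format:

```typescript
// Illustrative entry shape: a txid, the txids it spends from, and whether
// a BUMP merkle proof covers it. Not the BRC-62 wire format.
interface TxEntry {
  txid: string
  inputTxids: string[]
  hasBump: boolean
}

// True when every parent that appears in the list appears before its child.
// Parents absent from the list must be proven elsewhere (via a BUMP chain).
function isValidOrdering(entries: TxEntry[]): boolean {
  const all = new Set(entries.map(e => e.txid))
  const seen = new Set<string>()
  for (const e of entries) {
    for (const parent of e.inputTxids) {
      if (all.has(parent) && !seen.has(parent)) return false
    }
    seen.add(e.txid)
  }
  return true
}

// Mined A with a BUMP, plus two new children B and C, all in one BEEF.
const A = 'a'.repeat(64), B = 'b'.repeat(64), C = 'c'.repeat(64)
const beef: TxEntry[] = [
  { txid: A, inputTxids: [], hasBump: true },
  { txid: B, inputTxids: [A], hasBump: false },
  { txid: C, inputTxids: [A], hasBump: false }
]
console.log(isValidOrdering(beef)) // true
console.log(isValidOrdering([beef[1], beef[0], beef[2]])) // false: B before A
```

Note that both [A, B, C] and [A, C, B] pass the check, which is the sense in which the required sort order is not unique.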
A central point of Bitcoin is to support the validity of data encoded as transaction outputs. I believe the BEEF format can and should directly support this mission, where it is viewed as a list of transactions and BUMP proof data which together allow the validity of the data in those transaction outputs to be fully trusted. Applications and services should not be sending transactions; they should be sending BEEFs. Receive a BEEF, verify its validity, and all the transaction data it contains is good to go. The @bsv/sdk-ts Beef class implements the BEEF V1, V2, and Atomic specifications. It supports merging Beefs, raw transactions, and BUMPs. It implements validation and verification. This includes being able to filter down an arbitrary collection of BUMPs and transactions into an AtomicBEEF. Using the Beef class, here's a snapshot of the inner workings of a BEEF-enabled wallet:
To understand the motivation for the V2 Beef extension specification, consider the exchange of Beefs between the local wallet and its back end services:
-
The BEEF V2 spec is here: BEEF V2
-
Thank you for your response and for emphasizing the importance of Atomic BEEF. After revisiting the specifications and discussions, I’ve realized that you're referring to BEEF v2 (although the Atomic BEEF specification mentions BRC-62). I now understand that BEEF v2 introduces support for multiple transaction graphs. However, this has led me to reassess how BEEF, as originally defined, relates to Atomic BEEF and its motivations.

From my perspective, the issue Atomic BEEF aims to address, as described in its Motivation section, does not seem to exist in the context of the original BEEF (BRC-62). The Motivation section suggests that BEEF can contain multiple unrelated transactions, but due to the required Kahn sorting, which enforces a valid topological order, BEEF inherently operates on a single Directed Acyclic Graph (DAG). Transactions that are not part of the same graph as the last transaction can validly be ignored by any tool parsing the BEEF into a class/structure.

With regard to BEEF v2, while it introduces support for multiple transaction graphs, I believe this represents a significant departure from the single-transaction focus of the original BEEF. This divergence, while valuable for broader use cases, complicates the narrative that BEEF v2 is simply an iteration or extension of the original format. The reliance on Kahn's sorting in BEEF makes it inherently unsuitable for multiple DAGs, as attempting to use it for such purposes could result in unpredictable behavior or in unrelated graphs being ignored.

I fully recognize that you may have a different interpretation. To better understand your perspective, could you provide an example, ideally with a diagram, drawing, or ASCII art, demonstrating how multiple DAGs could be encoded in BEEF (BRC-62)? Furthermore, it would be helpful to see how Atomic BEEF specifically addresses issues that a correctly constructed BEEF (BRC-62) cannot resolve. Looking forward to your clarification.
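For concreteness, Kahn's topological sort over a txid dependency graph can be sketched as follows. The shapes and names are illustrative, not from the spec text:

```typescript
// Sketch: Kahn's topological sort over txids, where an edge runs from a
// parent transaction to each child that spends one of its outputs.
// deps maps txid -> txids of parents it spends from; parents may be absent
// from the map when they are proven externally (e.g. by a BUMP).
function kahnSort(deps: Map<string, string[]>): string[] {
  const inDegree = new Map<string, number>()
  const children = new Map<string, string[]>()
  for (const [txid, parents] of deps) {
    if (!inDegree.has(txid)) inDegree.set(txid, 0)
    for (const p of parents) {
      if (!deps.has(p)) continue // external parent, no edge inside the set
      inDegree.set(txid, (inDegree.get(txid) ?? 0) + 1)
      children.set(p, [...(children.get(p) ?? []), txid])
    }
  }
  const queue = [...inDegree].filter(([, d]) => d === 0).map(([t]) => t)
  const order: string[] = []
  while (queue.length > 0) {
    const t = queue.shift() as string
    order.push(t)
    for (const c of children.get(t) ?? []) {
      const d = (inDegree.get(c) ?? 1) - 1
      inDegree.set(c, d)
      if (d === 0) queue.push(c)
    }
  }
  return order // dependencies always precede dependents
}

const deps = new Map<string, string[]>([
  ['A', []],    // proven by a BUMP, no in-set parents
  ['B', ['A']], // spends A
  ['C', ['A']]  // also spends A
])
console.log(kahnSort(deps)) // [ 'A', 'B', 'C' ]
```

Whether this construction implies a single connected graph is exactly the point under discussion; the loop above processes whatever acyclic dependency set it is given.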
-
There are two distinct points here that I'll respond to:
Expanding on #2 first, "multiple-new-transactions". Even in the original single-DAG view of BEEF, sending a BEEF with multiple new transactions to ARC's post-BEEF endpoints currently works as it should: each new transaction (never broadcast before) gets processed into the chain. It would make no sense for a recipient to complain about an internal dependency transaction that they had not seen before but whose validity was fully supported by the BEEF. So even in the "only-one-newest" transaction case, single DAG, I argue it makes no sense to disallow multiple new transactions being sent in a single BEEF.

On to full "multiple-new-transactions": imagine we start with proven transaction A and create new transactions B and C, each of which consumes one output from A. Now I want to send B and C for processing. Am I going to create two BEEFs, each containing a full copy of A, just to adhere to "AtomicBEEF"?? What I want to do is throw all three in a single BEEF with a BUMP that supports A and send the single BEEF to ARC for processing. And this works... Why would ARC ignore valid transactions for which it can easily prove validity?

So, it seems to come down to sort order. As I see it, the critical requirement is that dependencies are serialized first: if B consumes outputs from A, then A must be serialized first. This requirement does not yield a unique sort order in the general case, but I see no reason for requiring one. The use of Kahn's algorithm seems like an example to me, not a requirement.

Expanding on #1, BEEF V2. When you start using BEEF to send transactions with validity data in any kind of multi-step transaction data exchange process, you rapidly realize that V1 often requires you to return data to its source, because that data is required to support the validity of yet-unmined new work. You could serialize incomplete, unverifiable BEEFs and send those, expecting the recipient to realize they possess the missing data. This choice greatly expands the potential work to invalidate actually invalid BEEFs. BEEF validity would also depend on context: do you possess an external validity store that has the missing data?

BEEF V2 extends V1 by allowing transactions to be listed in the BEEF in two ways: as a full serialized raw transaction OR as just its 32-byte TXID. When determining the validity of a V2 BEEF, a transaction is valid either in the V1 sense OR if it is just a TXID. The recipient of a V2 BEEF can verify validity efficiently, yielding an explicit list of TXIDs they are expected to already know are valid.
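The raw-or-TXID-only idea can be sketched like this (assumed shapes, not the actual V2 wire encoding):

```typescript
// Sketch of the V2 idea: each transaction slot carries either the full raw
// transaction or just its 32-byte TXID. Validating a V2 BEEF then yields an
// explicit list of TXIDs the recipient must already know are valid.
type V2Entry =
  | { kind: 'raw'; txid: string; rawTx: Uint8Array }
  | { kind: 'txidOnly'; txid: string }

// Collect the TXIDs whose validity must come from the recipient's own store.
function externallyTrustedTxids(entries: V2Entry[]): string[] {
  return entries.filter(e => e.kind === 'txidOnly').map(e => e.txid)
}

// A was exchanged with the counterparty earlier, so only its TXID is
// repeated; B and C are the new work being shared.
const entries: V2Entry[] = [
  { kind: 'txidOnly', txid: 'a'.repeat(64) },
  { kind: 'raw', txid: 'b'.repeat(64), rawTx: new Uint8Array() },
  { kind: 'raw', txid: 'c'.repeat(64), rawTx: new Uint8Array() }
]
console.log(externallyTrustedTxids(entries)) // only A's txid
```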
-
From BRC-62:
"Transactions..." plural. Reading this to mean only the last transaction really matters or that they must all form a single DAG is greatly narrowing the obvious intent and utility of the specification.
"Must ensure" that inputs appear as outputs before use as inputs... "DAG subset"... "recommended for complex transaction chains". The algorithm is explicitly not a requirement. The DAG already allows for multiple new transactions B and C depending on A but not on each other.
-
But why would you want to?? If you are a transaction processor and someone sends you a BEEF containing multiple DAGs with validity support for each one, why would you not process all the new transactions?? If you are a business data-processing service and someone sends you a BEEF with multiple DAGs, it either makes sense or doesn't based on service-level context. It may certainly be beneficial. Only in the case where you must consider the BEEF as supporting a single transaction are you justified in ignoring unrelated data. This is the precise point of AtomicBEEF.
-
I understand your point, and I never questioned the idea that we need some format for transferring transactions. But to get a better view, I would like to ask you for an example of how I can prepare a Transaction with ts-sdk so that toBEEF (BRC-62) will contain both B and C, because I would like to examine such a BEEF.
-
Thank you for your response and for elaborating on the purpose of AtomicBEEF. I’d like to better understand why introducing AtomicBEEF is necessary, given that BEEF v1, implemented with Kahn sorting, appears to solve the same problem without requiring additional metadata.

From my involvement in preparing BEEF v1, it was designed as an extension to EF, which itself extends the raw transaction format. Our intention with BEEF was to create a format for a single transaction that provides all the information required for SPV validation in a compact, efficient structure. Rather than introducing a new AtomicBEEF format, wouldn’t it make sense to simply add a requirement to BEEF that the last transaction is the subject of the BEEF and that all included transactions must belong to the strict subtree of this transaction? Any unrelated transactions would then be ignored. This approach eliminates the need for an additional subject-transaction field.

Regarding BEEF v2, I believe it could be more aptly named Aggregated BEEF to reflect its broader scope and differentiate it from the single-transaction focus of BEEF. Aggregated BEEF could use a different magic number while retaining most of the structure of BEEF, but without the single-DAG requirement (and with appropriate sorting rules). This approach preserves the understanding that has already been established in the BSV ecosystem around BEEF, while clearly delineating the new functionality.

To illustrate the established understanding of BEEF as a single-transaction format, I’d like to point to the ts-sdk API, where toBEEF belongs to a single Transaction.

I hope this perspective contributes constructively to the discussion, and I look forward to further clarifications and insights.
-
Certainly! The Beef class in ts-sdk supports all the formats. You could use something like the following:
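For example, a rough sketch of using the Beef class to carry A, B, and C in one BEEF. Method names here reflect the @bsv/sdk Beef class as I understand it; treat the exact signatures as assumptions rather than documentation:

```ts
// Rough sketch only: Beef method names are assumed, not authoritative.
import { Beef, Transaction } from '@bsv/sdk'

// Assume txA is a mined Transaction with its merkle path attached, and
// txB, txC are new Transactions that each spend an output of txA.
declare const txA: Transaction, txB: Transaction, txC: Transaction

const beef = new Beef()
beef.mergeTransaction(txA) // also carries in A's BUMP
beef.mergeTransaction(txB)
beef.mergeTransaction(txC)

// One blob supporting both new transactions, ready to POST to ARC.
const bin = beef.toBinary()

// On the receiving side:
const received = Beef.fromBinary(bin)
console.log(received.isValid())
```

Here mergeTransaction is assumed to pull in each transaction's attached merkle path; if the SDK's actual method names differ, the shape of the flow is the point.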
-
Note that Beef.fromBinary() will read the V1, V2, and AtomicBEEF serialized formats. You can force V1-only format. By default, it serializes to V2 only if you use mergeTxidOnly(txid) or if you use fromBinary() on data containing txid-only transactions.
-
Thank you for the example code! I will experiment with it to gain a better understanding. In the meantime, I’m still looking forward to your opinion or comment regarding my proposition to use BRC-62 BEEF instead of AtomicBEEF, as well as the suggestion to redefine BEEF V2 as a separate format—such as "Aggregated BEEF"—for handling a list of any transactions.
-
First paragraph of BRC-62:
If that last "transaction" had been "transactions", you wouldn't have a leg to stand on :-) As it is, the recipient of a BRC-62 BEEF for SPV payment will treat it differently than a transaction processor should.

From an SPV point of view, the last transaction is the purpose of the BEEF; the remaining transactions support its validity and serve only to generate the right hash values. A transaction processor, however, should treat any valid transaction in the BEEF as a new transaction to be added to the next block. It makes no sense to ignore valid transactions that support the validity of the latest transaction(s): if they weren't accepted, the last transaction wouldn't be mineable.

We have the BEEF BRC-62 format. It is eminently suitable for sending multiple new transactions with validity support to a transaction processor. Are you seriously proposing we bump the version number just to reflect that the BEEF contains multiple "newest" transactions? I would counter-argue that it makes more sense to start with a flexible base standard and then extend it to handle specific use cases. Or perhaps, let this standard cover the encoding of the data and let the context in which it is exchanged handle the nuances of how the data is to be used.
-
I really like the extension to the original idea of BEEF that @tonesnotes describes in BRC-96, in that it is indeed a trivial extension of the original idea to have completely separate transactions encapsulated in the same blob of binary. Why not? The reason I chose 0100BEEF as the version number was so that we could easily adopt 0200BEEF when the time came to extend or improve on the idea. Both of the main ideas included are welcome in my view:
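The version-number choice can be made concrete in a few lines. Assuming the version field is written as a little-endian u32, as is usual elsewhere in Bitcoin serialization, the two constants render on the wire as:

```typescript
// Render a u32 version constant as its little-endian wire bytes, in hex.
// Assumes the version is serialized little-endian, like other u32 fields
// in the Bitcoin transaction format.
function versionBytesLE(version: number): string {
  const b = new Uint8Array(4)
  new DataView(b.buffer).setUint32(0, version, true) // true = little-endian
  return [...b].map(x => x.toString(16).padStart(2, '0')).join(' ')
}

console.log(versionBytesLE(0x0100BEEF)) // ef be 00 01
console.log(versionBytesLE(0x0200BEEF)) // ef be 00 02
```

This is presumably what lets a reader like Beef.fromBinary() distinguish the formats from the first four bytes.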
-
The part I don't actually think is necessary is Atomic BEEF BRC-95. I'm going to go back again and check my premises and the intention, to try and re-evaluate having heard Damian's point that it might be redundant. BEEF_V1 might be just as valid a format for such an encoding... trying to determine what I've missed. Probably the txidOnly encoding of a singleton?
-
BRC ID: 95
Discussion:
It looks to me like there may be a misunderstanding regarding BRC-62.
The Ordering subsection of the BEEF specification requires Kahn's topological sorting, which by definition applies to a single DAG.
I would like @sirdeggen to confirm my understanding here.
Additionally, once sorting is applied, the final transaction naturally becomes the subject, removing the need for the extra field proposed in Atomic BEEF to specify it.