
Commit 9baef8b

Update Lighthouse book (#8284)

Co-authored-by: Tan Chee Keong <[email protected]>
Co-authored-by: chonghe <[email protected]>

Parent: d67ae92

11 files changed: +44 −60 lines

book/src/advanced.md

Lines changed: 1 addition & 1 deletion
@@ -19,4 +19,4 @@ tips about how things work under the hood.
 * [Release Candidates](./advanced_release_candidates.md): latest release of Lighthouse to get feedback from users.
 * [Maximal Extractable Value](./advanced_builders.md): use external builders for potentially higher rewards during block proposals
 * [Late Block Re-orgs](./advanced_re-orgs.md): read information about Lighthouse late block re-orgs.
-* [Blobs](./advanced_blobs.md): information about blobs in Deneb upgrade
+* [Blobs](./advanced_blobs.md): information about blobs

book/src/advanced_blobs.md

Lines changed: 24 additions & 1 deletion
@@ -1,4 +1,27 @@
-# Blobs
+# Data columns
+
+With the [Fusaka](https://ethereum.org/roadmap/fusaka) upgrade, the main feature, [PeerDAS](https://ethereum.org/roadmap/fusaka#peerdas), allows a node to store only a portion of blob data, known as data columns, thus reducing the storage and bandwidth requirements of a full node. However, this also means that a full node will not be able to serve blobs after Fusaka. To continue serving blobs, run the beacon node with `--semi-supernode` or `--supernode`. Note that this comes at a significant increase in storage and bandwidth requirements; see [this blog post about PeerDAS](https://blog.sigmaprime.io/peerdas-distributed-blob-building.html) and [Fusaka bandwidth estimation](https://ethpandaops.io/posts/fusaka-bandwidth-estimation/) for more details.
+
+> Note: the above assumes that the beacon node has no attached validators. If the beacon node has attached validators, it is required to custody (store) a number of data columns that increases with the amount of staked ETH. For example, if the staked ETH is ≥ 2048 ETH, the custody requirement makes the beacon node a semi-supernode; if ≥ 4096 ETH, it becomes a supernode without needing the flag.
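The custody thresholds in the note can be sketched as a small shell function. This is illustrative only, not a Lighthouse command; the 2048/4096 ETH thresholds are the ones stated above.

```bash
# Illustrative only (not a Lighthouse command): classify a beacon node's
# custody class from its validators' total stake, per the thresholds above.
node_class() {
  if [ "$1" -ge 4096 ]; then
    echo "supernode"
  elif [ "$1" -ge 2048 ]; then
    echo "semi-supernode"
  else
    echo "default custody"
  fi
}

node_class 2048   # semi-supernode
node_class 4096   # supernode
```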
+
+The table below summarizes the role of the relevant flags in the Lighthouse beacon node:
+
+| Flag | Pre-Fulu usage | Pre-Fulu: can serve blobs? | Post-Fulu usage | Post-Fulu: can serve blobs? |
+|------|----------------|----------------------------|-----------------|-----------------------------|
+| `--prune-blobs false` | Does not prune blobs since using the flag | Yes, for blobs since using the flag and for the past 18 days | Does not prune data columns since using the flag | No |
+| `--semi-supernode` | - | - | Stores half of the data columns | Yes, for blobs since using the flag, for a maximum of 18 days |
+| `--supernode` | - | - | Stores all data columns | Yes, for blobs since using the flag, for a maximum of 18 days |
+
+While both `--supernode` and `--semi-supernode` can serve blobs, a supernode responds to blob queries faster because it skips the blob reconstruction step. Running a supernode also helps the network by serving data columns to its peers.
+
+Combining `--prune-blobs false` with `--supernode` (or `--semi-supernode`) means that no data columns will be pruned, and the node will be able to serve blobs since the flags were set.
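As a sketch, a node keeping all data columns indefinitely might combine the flags like this (the network choice is illustrative; consult `lighthouse bn --help` for the flags available in your version):

```bash
# Sketch: store all data columns and disable pruning
lighthouse bn \
  --network mainnet \
  --supernode \
  --prune-blobs false
```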
+
+If you want historical blob data beyond the data availability period (18 days), you can backfill blobs or data columns with the experimental flag `--complete-blobs-backfill`. However, do note that this is an experimental feature and may cause some issues, e.g., the node may block most of its peers.
+
+**⚠️ The following section on blobs is archived and not maintained, as blobs are stored in the form of data columns after the Fulu fork ⚠️**
+
+## Blobs

 In the Deneb network upgrade, one of the changes is the implementation of EIP-4844, also known as [Proto-danksharding](https://blog.ethereum.org/2024/02/27/dencun-mainnet-announcement). Alongside this, a new term, `blob` (binary large object), is introduced. Blobs are "side-cars" carrying transaction data in a block. They are mainly used by Ethereum layer 2 operators. As far as stakers are concerned, the main difference with the introduction of blobs is the increased storage requirement.

book/src/advanced_checkpoint_sync.md

Lines changed: 4 additions & 4 deletions
@@ -82,7 +82,7 @@ Once backfill is complete, a `INFO Historical block download complete` log will
 1. What if I have an existing database? How can I use checkpoint sync?

    The existing beacon database needs to be deleted before Lighthouse will attempt checkpoint sync.
-   You can do this by providing the `--purge-db` flag, or by manually deleting `<DATADIR>/beacon`.
+   You can do this by providing the `--purge-db-force` flag, or by manually deleting `<DATADIR>/beacon`.

 1. Why is checkpoint sync faster?
@@ -92,7 +92,7 @@ Once backfill is complete, a `INFO Historical block download complete` log will

 No, in fact it is more secure! Checkpoint sync guards against long-range attacks that genesis sync does not. This is due to a property of Proof of Stake consensus known as [Weak Subjectivity][weak-subj].

-## Reconstructing States
+## How to run an archived node

 > This section is only relevant if you are interested in running an archival node for analysis
 > purposes.
@@ -101,7 +101,7 @@ After completing backfill sync the node's database will differ from a genesis-sy
 lack of historic states. _You do not need these states to run a staking node_, but they are required
 for historical API calls (as used by block explorers and researchers).

-You can opt-in to reconstructing all of the historic states by providing the
+To run an archived node, you can opt-in to reconstructing all of the historic states by providing the
 `--reconstruct-historic-states` flag to the beacon node at any point (before, during or after sync).

 The database keeps track of three markers to determine the availability of historic blocks and
@@ -155,7 +155,7 @@ The command is as follows:
 ```bash
 curl -H "Accept: application/octet-stream" "http://localhost:5052/eth/v2/debug/beacon/states/$SLOT" > state.ssz
 curl -H "Accept: application/octet-stream" "http://localhost:5052/eth/v2/beacon/blocks/$SLOT" > block.ssz
-curl -H "Accept: application/octet-stream" "http://localhost:5052/eth/v1/beacon/blob_sidecars/$SLOT" > blobs.ssz
+curl -H "Accept: application/octet-stream" "http://localhost:5052/eth/v1/beacon/blobs/$SLOT" > blobs.ssz
 ```

 where `$SLOT` is the slot number. A slot which is an epoch boundary slot (i.e., the first slot of an epoch) should always be used for manual checkpoint sync.
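A quick sketch of picking such a slot, assuming the mainnet preset of 32 slots per epoch:

```bash
# Sketch: the first slot of an epoch (mainnet preset: 32 slots per epoch)
epoch=12345
slot=$((epoch * 32))
echo "$slot"   # 395040
```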

book/src/api_lighthouse.md

Lines changed: 1 addition & 1 deletion
@@ -445,7 +445,7 @@ For archive nodes, the `anchor` will be:

 indicating that all states with slots `>= 0` are available, i.e., full state history. For more information
 on the specific meanings of these fields see the docs on [Checkpoint
-Sync](./advanced_checkpoint_sync.md#reconstructing-states).
+Sync](./advanced_checkpoint_sync.md#how-to-run-an-archived-node).

 ## `/lighthouse/merge_readiness`

book/src/faq.md

Lines changed: 1 addition & 27 deletions
@@ -2,7 +2,6 @@

 ## [Beacon Node](#beacon-node-1)

-- [I see a warning about "Syncing deposit contract block cache" or an error about "updating deposit contract cache", what should I do?](#bn-deposit-contract)
 - [I see beacon logs showing `WARN: Execution engine called failed`, what should I do?](#bn-ee)
 - [I see beacon logs showing `Error during execution engine upcheck`, what should I do?](#bn-upcheck)
 - [My beacon node is stuck at downloading historical block using checkpoint sync. What should I do?](#bn-download-historical)
@@ -51,31 +50,6 @@

 ## Beacon Node

-### <a name="bn-deposit-contract"></a> I see a warning about "Syncing deposit contract block cache" or an error about "updating deposit contract cache", what should I do?
-
-The error can be a warning:
-
-```text
-Nov 30 21:04:28.268 WARN Syncing deposit contract block cache est_blocks_remaining: initializing deposits, service: slot_notifier
-```
-
-or an error:
-
-```text
-ERRO Error updating deposit contract cache error: Failed to get remote head and new block ranges: EndpointError(FarBehind), retry_millis: 60000, service: deposit_contract_rpc
-```
-
-This log indicates that your beacon node is downloading blocks and deposits
-from your execution node. When the `est_blocks_remaining` is
-`initializing_deposits`, your node is downloading deposit logs. It may stay in
-this stage for several minutes. Once the deposits logs are finished
-downloading, the `est_blocks_remaining` value will start decreasing.
-
-It is perfectly normal to see this log when starting a node for the first time
-or after being off for more than several minutes.
-
-If this log continues appearing during operation, it means your execution client is still syncing and it cannot provide Lighthouse the information about the deposit contract yet. What you need to do is to make sure that the execution client is up and syncing. Once the execution client is synced, the error will disappear.
-
### <a name="bn-ee"></a> I see beacon logs showing `WARN: Execution engine called failed`, what should I do?
The `WARN Execution engine called failed` log is shown when the beacon node cannot reach the execution engine. When this warning occurs, it will be followed by a detailed message. A frequently encountered example of the error message is:
@@ -335,7 +309,7 @@ expect, there are a few things to check on:

 If you have incoming peers, it should return a lot of data containing information of peers. If the response is empty, it means that you have no incoming peers and that the ports are not open. You may want to double-check that the port forwarding was correctly set up.

-1. Check that you do not lower the number of peers using the flag `--target-peers`. The default is 100. A lower value set will lower the maximum number of peers your node can connect to, which may potentially interrupt the validator performance. We recommend users to leave the `--target peers` untouched to keep a diverse set of peers.
+1. Check that you have not lowered the number of peers using the flag `--target-peers`. The default is 200. A lower value will reduce the maximum number of peers your node can connect to, which may affect validator performance. We recommend leaving `--target-peers` untouched to keep a diverse set of peers.

 1. Ensure that you have a quality router for the internet connection. For example, if you connect the router to many devices including the node, it may be possible that the router cannot handle all routing tasks, hence struggling to keep up the number of peers. Therefore, using a quality router for the node is important to keep a healthy number of peers.

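To see how many peers your node currently has, one option is the standard beacon-node API (a sketch; the host and port assume Lighthouse defaults with `--http` enabled, and the query requires a running node):

```bash
# Sketch: query the beacon node's peer count via the standard beacon API
curl -s "http://localhost:5052/eth/v1/node/peer_count"
```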
book/src/installation_binaries.md

Lines changed: 1 addition & 2 deletions
@@ -6,11 +6,10 @@ on Github](https://github.com/sigp/lighthouse/releases).

 ## Platforms

-Binaries are supplied for five platforms:
+Binaries are supplied for the following platforms:

 - `x86_64-unknown-linux-gnu`: AMD/Intel 64-bit processors (most desktops, laptops, servers)
 - `aarch64-unknown-linux-gnu`: 64-bit ARM processors (Raspberry Pi 4)
-- `x86_64-apple-darwin`: macOS with Intel chips
 - `aarch64-apple-darwin`: macOS with ARM chips
 - `x86_64-windows`: Windows with 64-bit processors

book/src/run_a_node.md

Lines changed: 1 addition & 1 deletion
@@ -106,7 +106,7 @@ Once the checkpoint is loaded, Lighthouse will sync forwards to the head of the

 If a validator client is connected to the beacon node it will be able to start its duties as soon as forwards sync completes, which typically takes 1-2 minutes.

-> Note: If you have an existing Lighthouse database, you will need to delete the database by using the `--purge-db` flag or manually delete the database with `sudo rm -r /path_to_database/beacon`. If you do use a `--purge-db` flag, once checkpoint sync is complete, you can remove the flag upon a restart.
+> Note: If you have an existing Lighthouse database, you will need to delete it by using the `--purge-db-force` flag, or manually delete it with `sudo rm -r /path_to_database/beacon`. If you use the `--purge-db-force` flag, once checkpoint sync is complete, you can remove the flag upon restart.
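Putting the above together, a restart with a fresh database might look like this sketch (the checkpoint URL is a placeholder and the network is illustrative; pick a public endpoint you trust):

```bash
# Sketch: wipe the old database and checkpoint sync from scratch
lighthouse bn \
  --network mainnet \
  --checkpoint-sync-url https://beaconstate.example.com \
  --purge-db-force
```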

 > **Security Note**: You should cross-reference the `block_root` and `slot` of the loaded checkpoint
 > against a trusted source like another [public endpoint](https://eth-clients.github.io/checkpoint-sync-endpoints/),

book/src/ui_faqs.md

Lines changed: 2 additions & 2 deletions
@@ -30,9 +30,9 @@ Yes, if you need to access your beacon or validator from an address such as `htt

 If your graph is not showing data, it usually means your validator node is still caching data. The application must wait at least 3 epochs before it can render any graphical visualizations. This could take up to 20 min.

-## 8. How can I connect to Siren using Wallet Connect?
+## 8. How can I connect to Siren using Reown (previously WalletConnect)?

-Depending on your configuration, building with Docker or Local, you will need to include the `NEXT_PUBLIC_WALLET_CONNECT_ID` variable in your `.env` file. To obtain your Wallet Connect project ID, please follow the instructions on their [website](https://cloud.walletconnect.com/sign-in). After providing a valid project ID, the Wallet Connect option should appear in the wallet connector dropdown.
+Depending on your configuration, whether building with Docker or locally, you will need to include the `NEXT_PUBLIC_WALLET_CONNECT_ID` variable in your `.env` file. To obtain your project ID, please follow the instructions on the Reown [website](https://dashboard.reown.com/sign-in). After providing a valid project ID, the wallet connect option should appear in the wallet connector dropdown.
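An illustrative `.env` entry (the project ID value is a placeholder; substitute the ID from your Reown dashboard):

```bash
# Illustrative .env entry; replace the placeholder with your Reown project ID
NEXT_PUBLIC_WALLET_CONNECT_ID=your-project-id-here
```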

 ## 9. I can't log in to Siren even with correct credentials?

book/src/ui_installation.md

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ Siren requires a connection to both a Lighthouse Validator Client and a Lighthou
 Both the Beacon node and the Validator client need to have their HTTP APIs enabled.
 These ports should be accessible from Siren. This means adding the flag `--http` on both the beacon node and the validator client.

-To enable the HTTP API for the beacon node, utilize the `--gui` CLI flag. This action ensures that the HTTP API can be accessed by other software on the same machine.
+To enable the HTTP API for the beacon node, utilize the `--gui` CLI flag. This action ensures that the HTTP API can be accessed by other software on the same machine. It also enables validator monitoring.

 > The Beacon Node must be run with the `--gui` flag set.
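A sketch of the relevant flags (the ports in the comments are Lighthouse defaults):

```bash
# Sketch: enable the HTTP APIs that Siren connects to
lighthouse bn --http --gui    # beacon node API, default 127.0.0.1:5052
lighthouse vc --http          # validator client API, default 127.0.0.1:5062
```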

book/src/validator_voluntary_exit.md

Lines changed: 2 additions & 19 deletions
@@ -120,26 +120,9 @@ There are two types of withdrawal credentials, `0x00` and `0x01`. To check which

 - A fixed waiting period of 256 epochs (27.3 hours) for the validator's status to become withdrawable.

-- A varying time of "validator sweep" that can take up to _n_ days with _n_ listed in the table below. The "validator sweep" is the process of skimming through all eligible validators by index number for withdrawals (those with type `0x01` and balance above 32ETH). Once the "validator sweep" reaches your validator's index, your staked fund will be fully withdrawn to the withdrawal address set.
+- A varying time of "validator sweep" that takes a few days. The "validator sweep" is the process of skimming through all eligible validators by index number for withdrawals (those with type `0x01` and balance above 32 ETH). Once the "validator sweep" reaches your validator's index, your staked funds will be fully withdrawn to the withdrawal address set.

-<div align="center">
-
-| Number of eligible validators | Ideal scenario _n_ | Practical scenario _n_ |
-|:----------------:|:---------------------:|:----:|
-| 300000 | 2.60 | 2.63 |
-| 400000 | 3.47 | 3.51 |
-| 500000 | 4.34 | 4.38 |
-| 600000 | 5.21 | 5.26 |
-| 700000 | 6.08 | 6.14 |
-| 800000 | 6.94 | 7.01 |
-| 900000 | 7.81 | 7.89 |
-| 1000000 | 8.68 | 8.77 |
-
-</div>
-
-> Note: Ideal scenario assumes no block proposals are missed. This means a total of withdrawals of 7200 blocks/day * 16 withdrawals/block = 115200 withdrawals/day. Practical scenario assumes 1% of blocks are missed per day. As an example, if there are 700000 eligible validators, one would expect a waiting time of slightly more than 6 days.
-
- The total time taken is the summation of the above 3 waiting periods. After these waiting periods, you will receive the staked funds in your withdrawal address.
+The total time taken is the summation of the above 3 waiting periods. After these waiting periods, you will receive the staked funds in your withdrawal address.
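As a rough sketch of why the sweep takes a few days, using the figures in the note removed above (7200 blocks/day and 16 withdrawals/block, assuming no missed blocks):

```bash
# Rough sketch: days for the sweep to cycle through all eligible validators,
# using the figures from the removed note above (no missed blocks assumed)
eligible=700000
per_day=$((7200 * 16))   # 115200 withdrawals per day
awk "BEGIN { printf \"%.2f\n\", $eligible / $per_day }"   # 6.08
```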

 The voluntary exit and full withdrawal process is summarized in the Figure below.
