Description
The Transaction Log Store owns a series of sequential append-only binary-formatted log files. Once a log file reaches a maximum size, entries are written to the next file in the sequence.
To get a transaction log and add an entry, the code looks like:
```js
const log = db.useLog('foo');
await db.transaction(async (txn) => {
  await txn.put('key', Buffer.from('value'));
  log.addEntry(Buffer.from('put value'), txn.id);
});
```

Log entries are queued in the TransactionHandle (e.g. `txn`). There is no limit on the number of entries or on entry size.
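A minimal sketch of how this queuing might look internally; `openTransactions`, `TransactionHandle`, and `pendingLogEntries` are illustrative names and not the actual internals:

```js
// Sketch only: the queuing side of the API. openTransactions, TransactionHandle,
// and pendingLogEntries are assumptions for illustration.
const openTransactions = new Map(); // txn.id -> TransactionHandle

class TransactionHandle {
  constructor(id) {
    this.id = id;
    this.pendingLogEntries = []; // queued until commit; no count or size limit
    openTransactions.set(id, this);
  }
}

class LogHandle {
  constructor(name) {
    this.name = name;
  }

  // addEntry() only queues the buffer; nothing is written until commit.
  addEntry(data, txnId) {
    openTransactions.get(txnId).pendingLogEntries.push({ log: this.name, data });
  }
}
```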
When the transaction is committed, the queued transaction log entries are written to disk before the RocksDB transaction is committed. On POSIX systems the entries are written to the current transaction log file in batches using writev(). Windows has a WriteFileGather() function, but it comes with significant complexities, so the initial implementation should write each entry serially using multiple system calls. If the log file hits the maximum size, the remaining entries continue in the next file in the sequence. Each entry is written to the transaction log with the transaction's commit timestamp, which is a monotonic timestamp created when the transaction is created.
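A rough sketch of the commit-time flush under these rules; the entry framing (timestamp + length prefix), the `<name>.<seq>.log` naming, and `MAX_LOG_FILE_SIZE` are assumptions for illustration only:

```js
// Sketch only: framing, file naming, and the size limit are illustrative.
const fs = require('node:fs');

const MAX_LOG_FILE_SIZE = 64 * 1024 * 1024; // illustrative limit

function flushLogEntries(logName, seq, entries, commitTimestamp) {
  let fd = fs.openSync(`${logName}.${seq}.log`, 'a');
  let size = fs.fstatSync(fd).size;
  let batch = [];

  for (const entry of entries) {
    // Frame each entry as [commit timestamp][payload length][payload].
    const header = Buffer.alloc(12);
    header.writeBigUInt64LE(BigInt(commitTimestamp), 0);
    header.writeUInt32LE(entry.data.length, 8);
    const framedSize = header.length + entry.data.length;

    // Roll over to the next sequence file once the current one is full.
    if (size + framedSize > MAX_LOG_FILE_SIZE) {
      writeBatch(fd, batch);
      batch = [];
      fs.closeSync(fd);
      fd = fs.openSync(`${logName}.${++seq}.log`, 'a');
      size = 0;
    }
    batch.push(header, entry.data);
    size += framedSize;
  }
  writeBatch(fd, batch);
  fs.closeSync(fd);
}

function writeBatch(fd, buffers) {
  if (buffers.length === 0) return;
  if (process.platform === 'win32') {
    // Initial Windows implementation: one write per buffer, serially.
    for (const buf of buffers) fs.writeSync(fd, buf);
  } else {
    // POSIX: gather the whole batch into a single writev() call
    // (short writes are ignored here for brevity).
    fs.writevSync(fd, buffers);
  }
}
```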
Entries are released if the transaction is aborted.
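In terms of the hypothetical queue above, releasing on abort just drops the queued buffers without touching the log files:

```js
// Sketch only: pendingLogEntries is the hypothetical per-transaction queue
// from the earlier sketch; aborting discards it without any disk I/O.
function abortTransaction(txn) {
  txn.pendingLogEntries.length = 0;
}
```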