
[WIP] full-chain membership proof++ integration #9436

Draft · wants to merge 127 commits into master

Conversation

@j-berman (Collaborator) commented Aug 14, 2024

This is a WIP draft PR for the full-chain membership proof (fcmp++) integration. It's roughly following section 6 of the specification written by @kayabaNerve (paper, commit).

Checklist of items expected in this PR:

The above checklist does not include all items required to complete the integration.

I plan to divide the code into commits where each subsequent commit builds off the prior commit. I could eventually close this PR in favor of smaller PRs that can be reviewed sequentially and in isolation.

This PR description can function as living documentation for the code as work on the integration progresses (and audits/fcmp++ research progress in parallel). In this description, I highlight the most critical components of the code, aiming to make the PR as a whole easier to understand. Thoughts/feedback are welcome at any time.

A. Rust FFI

Since much of the full-chain membership proof++ code is written in Rust, this PR implements a Foreign Function Interface (FFI) to call the Rust code from C++. Using CMake, the Rust code is compiled into a static lib (libfcmp_pp_rust.a) when you run make from the root of the monero repo. The static lib's functions are exposed via the C++ src/fcmp_pp/fcmp++.h header file (generated with the help of cbindgen and modified slightly). The heavy lifting on the Rust side is done in @kayabaNerve's full-chain-membership-proofs Rust crate; the Rust handles the math on the Helios and Selene curves, as well as fcmp++ construction and verification.
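
For illustration, the FFI boundary follows the usual cbindgen pattern: the Rust side exports extern "C" functions, and the generated header declares them for C++. The type and function below are hypothetical stand-ins to show the shape, not the actual fcmp_pp_rust API:

// Hypothetical excerpt in the style of the cbindgen-generated fcmp++.h.
// The names and signature here are illustrative only.
extern "C"
{
    // Opaque handle to a Rust-side Selene point; C++ never inspects it.
    typedef void *SelenePoint;

    // Hash a chunk of serialized Selene scalars on the Rust side, returning
    // the resulting Selene point as an opaque handle.
    SelenePoint fcmp_pp_rust_hash_chunk(const uint8_t *scalars, uintptr_t n_scalars);
}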

Here is what the structure looks like at time of writing:

[Image: rust_ffi_diagram — diagram of the Rust FFI structure]

  • Major credit to @kayabaNerve for solid work getting cross-compilation and compilation on a number of architectures working.
  • I'm generally seeking to isolate all Rust types and minimize usage of the Rust lib to what's necessary for fcmp++s in this PR.
    • The C++ has clear responsibilities: manage tree state, update/read the db, ed25519 math, tx changes, RPC routes.
    • The Rust: math on Helios and Selene curves, fcmp++ construction, and fcmp++ verification.
    • I think this strategy has so far been successful in avoiding spaghetti of mixed C++ and Rust across the code. I've spent the vast majority of my time working on C++ algorithms, not dealing with the Rust. The heavy majority of code in this PR is C++.

B. Curve trees merkle tree

The curve trees merkle tree is a new store for spendable transaction outputs in the chain. fcmp++s work by proving you own (and can spend) an output in the tree, without revealing which output is yours. All existing valid cryptonote outputs will be inserted into the tree as soon as the outputs unlock. Once an output is in the tree, users can construct fcmp++s with that output. Thus, the anon set will roughly be the entire chain since genesis.

The leaves in the tree are composed of output tuples {O.x, I.x, C.x}, and each layer after the leaf layer is composed of hashes of chunks of the preceding layer, as follows:

leaves:  [O.x,I.x,C.x,O.x,I.x,C.x,...]
layer 0: [Hash(first chunk of leaves), Hash(next chunk of leaves),...]
layer 1: [Hash(first chunk of layer 0 hashes), Hash(next chunk of layer 0 hashes)...]
...

Each layer is composed of points alternating between two curves (@tevador's proposed Selene and Helios curves):

  • The leaves are Selene scalars (we convert ed25519 points to Selene scalars).
  • The layer after the leaves is composed of points on the Selene curve (we hash chunks of Selene scalars from the leaf layer to get this layer's Selene points).
  • The following layer is composed of points on the Helios curve (we convert the prior layer's Selene points to Helios scalars, and hash chunks of those Helios scalars to get this layer's Helios points).
  • The following layer is composed of points on the Selene curve (we convert the prior layer's Helios points to Selene scalars, and hash chunks of those Selene scalars to get this layer's Selene points).
  • And so on, alternating between the two curves, until there is just one chunk left in a layer to hash, leaving us with the tree root.

Each curve has a defined chunk width used when hashing the children in the preceding layer. The final layer has a single element in it: the root.
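
To make the shape concrete, here is a small self-contained sketch (not code from this PR) that counts the layers for a given number of leaf tuples. The chunk widths are hypothetical placeholders, and it assumes for illustration that a leaf-layer chunk holds SELENE_CHUNK_WIDTH scalars:

#include <cstdint>
#include <iostream>

// Hypothetical chunk widths; the real values come from the curve parameters
// in the PR, not from here.
constexpr uint64_t SELENE_CHUNK_WIDTH = 38;
constexpr uint64_t HELIOS_CHUNK_WIDTH = 18;

// Count the layers above the leaf layer, alternating curves per layer:
// even layers hash into Selene points, odd layers into Helios points.
uint64_t n_layers(uint64_t n_leaf_tuples)
{
    if (n_leaf_tuples == 0)
        return 0;
    // Each leaf tuple contributes 3 Selene scalars: O.x, I.x, C.x.
    uint64_t n_children = n_leaf_tuples * 3;
    uint64_t layers = 0;
    while (true)
    {
        const uint64_t width = (layers % 2 == 0) ? SELENE_CHUNK_WIDTH
                                                 : HELIOS_CHUNK_WIDTH;
        n_children = (n_children + width - 1) / width; // ceil: chunks -> parent hashes
        ++layers;
        if (n_children == 1)
            return layers; // a single hash remains: the root
    }
}

int main()
{
    // e.g. 1 million outputs in the tree -> a handful of layers
    std::cout << n_layers(1000000) << '\n';
}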

There are 3 critical steps to growing the tree:

  1. Keep track of the block in which outputs will unlock.
  2. Grow the tree with outputs in the block in which they unlock.
  3. Update the database.

a. Curve trees merkle tree: Preparing locked outputs for insertion to the tree upon unlock

We first need to determine the block in which outputs unlock. We keep track of locked outputs by unlock block in the database so we can grow the tree in the block they unlock.

Take note of the function get_outs_by_unlock_block. Upon adding a block, we iterate over all the block's tx outputs in order, and place the outputs in the container OutputsByUnlockBlock = std::unordered_map<uint64_t, std::vector<OutputContext>>. The uint64_t key is the output's unlock height, calculated using the new get_unlock_block_index function (documented further below). The std::vector<OutputContext> for each unlock height should be sorted in the order outputs appear in the chain.

Upon adding a block, we'll add those outputs to the database here:

// Add the locked outputs from this block to the locked outputs table
for (const auto &unlock_block : outs_by_unlock_block)
{
  MDB_val_set(k_block_id, unlock_block.first);
  for (const auto &locked_output : unlock_block.second)
  {
    MDB_val_set(v_output, locked_output);
    result = mdb_cursor_put(m_cur_locked_outputs, &k_block_id, &v_output, MDB_APPENDDUP);
    if (result != MDB_SUCCESS)
      throw0(DB_ERROR(lmdb_error("Failed to add locked output: ", result).c_str()));
  }
}

LMDB table changes are documented further below in section B.d.

get_unlock_block_index

// Returns the block index in which the provided unlock_time unlocks
uint64_t get_unlock_block_index(uint64_t unlock_time, uint64_t block_included_in_chain);

The idea behind this function is to have a deterministic and efficient method of growing the tree when outputs unlock.

Most outputs in the chain don't include an unlock_time; those outputs unlock 10 blocks after they are included in the chain.

Some outputs include an unlock_time, which should be interpreted either as the height at which the output unlocks, or as the time at which it unlocks. When unlock_time should be interpreted as a height, the response of get_unlock_block_index is trivial. When interpreted as a time, the logic is less straightforward. In this PR, as proposed by @kayabaNerve, I use the prior hard fork's block height and time as an anchor point, and determine the unlock block from that anchor point. By converting a timestamped unlock_time to a deterministic unlock block, we avoid needing to search for outputs that unlock by timestamp.

Note it is possible (likely) for the returned unlock_block_index to be distinct from the unlock block currently enforced by consensus, for timestamp-based locked outputs only. The proposal is for consensus to enforce this new rule for fcmp++s (users won't be able to construct fcmp++s until outputs unlock according to the rules of get_unlock_block_index).

Note: get_unlock_block_index from unlock_time is not in production form as is. The calculation should account for:

  • The period of time where consensus supported 60s block times.
  • The network (mainnet, testnet, stagenet).
  • It should be vetted against the chain to make sure it calculates reasonable values.
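
A minimal sketch of the intended logic, not the PR's implementation (which must also handle the caveats above). It assumes the cryptonote convention that unlock_time values below CRYPTONOTE_MAX_BLOCK_NUMBER are heights and values at or above it are Unix timestamps, a 10-block default lock, a 120-second block time, and hypothetical anchor constants for the prior hard fork:

#include <algorithm>
#include <cstdint>

constexpr uint64_t DEFAULT_TX_SPENDABLE_AGE    = 10;
constexpr uint64_t CRYPTONOTE_MAX_BLOCK_NUMBER = 500000000; // >= this: timestamp
constexpr uint64_t SECONDS_PER_BLOCK           = 120;
// Hypothetical anchor point; the real values come from the prior hard fork
// on the given network (mainnet/testnet/stagenet).
constexpr uint64_t HF_ANCHOR_BLOCK_INDEX = 3000000;
constexpr uint64_t HF_ANCHOR_TIMESTAMP   = 1700000000;

// Returns the block index in which the provided unlock_time unlocks
uint64_t get_unlock_block_index(uint64_t unlock_time, uint64_t block_included_in_chain)
{
    // Outputs with no unlock_time unlock a fixed number of blocks after inclusion.
    const uint64_t default_unlock = block_included_in_chain + DEFAULT_TX_SPENDABLE_AGE;
    if (unlock_time == 0)
        return default_unlock;

    if (unlock_time < CRYPTONOTE_MAX_BLOCK_NUMBER)
        // unlock_time is a block height.
        return std::max(unlock_time, default_unlock);

    // unlock_time is a timestamp: project it onto a block index deterministically,
    // using the prior hard fork as the anchor point.
    const uint64_t projected = unlock_time <= HF_ANCHOR_TIMESTAMP
        ? HF_ANCHOR_BLOCK_INDEX
        : HF_ANCHOR_BLOCK_INDEX + (unlock_time - HF_ANCHOR_TIMESTAMP) / SECONDS_PER_BLOCK;
    return std::max(projected, default_unlock);
}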

b. Curve trees merkle tree: grow_tree

This function takes a set of new outputs and uses them to grow the tree.

It has 3 core steps:

  1. Get the number of leaf tuples currently in the tree, as well as the last hash (i.e. most recently inserted) from every layer.
  2. Use the above two results to construct a tree extension, which we can use to extend the tree.
  3. Use the tree extension to extend the tree.

Steps 1 and 3 are fairly straightforward. Step 2 carries the most weight and is the most complex. It's implemented in the CurveTrees class get_tree_extension function documented further below.

This step-wise approach enables clean separation of the db logic (steps 1 and 3) from the grow logic (step 2). In my view, this separation enables cleaner, more efficient code, and stronger testing. It also enables reusable tree building code for wallet scanning.
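
Sketch of those three steps in code, assuming hypothetical db-side helpers (get_last_hashes, extend_tree) around the PR's get_tree_extension:

// Sketch of BlockchainLMDB::grow_tree's flow (abbreviated; error handling
// and db cursor management omitted; helper names are illustrative).
void grow_tree(std::vector<OutputContext> &&new_outputs)
{
    // 1. Read the current tree state from the db.
    const uint64_t old_n_leaf_tuples = this->get_num_leaf_tuples();
    const auto existing_last_hashes  = this->get_last_hashes(); // last hash per layer

    // 2. Pure in-memory step: compute the new leaves and layer hashes.
    const auto tree_extension = m_curve_trees->get_tree_extension(
        old_n_leaf_tuples, existing_last_hashes, std::move(new_outputs));

    // 3. Write the extension to the db.
    this->extend_tree(tree_extension);
}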

get_tree_extension

// Take in the existing number of leaf tuples and the existing last hash in each layer in the tree, as well as new
// outputs to add to the tree, and return a tree extension struct that can be used to extend a tree
TreeExtension get_tree_extension(const uint64_t old_n_leaf_tuples,
    const LastHashes &existing_last_hashes,
    std::vector<OutputContext> &&new_outputs) const;

get_tree_extension has 2 core steps:

  1. Prepare new leaves for insertion into the tree.

    a. Sort new outputs by the order they appear in the chain (guarantees consistent insertion order in the tree).

    b. Convert valid outputs to leaf tuples (from the form {output_pubkey, commitment} to {O,I,C} to {O.x,I.x,C.x}).

    • Points must be torsion cleared before inserting into the tree.
    • We cannot insert points that are not on ed25519 into the tree, nor points equal to identity after clearing torsion.
    • Thus we ignore any outputs whose output_pubkey or commitment is not on the ed25519 curve, or is equal to identity after clearing torsion.
    • From @kayabaNerve: such outputs should either not be spendable today or are worth 0.
    • See the CurveTrees<Helios, Selene>::LeafTuple CurveTrees<Helios, Selene>::leaf_tuple function for the code.

    c. Place all leaf tuple members in a flat vector ([{output 0 output pubkey and commitment}, {output 1 output pubkey and commitment},...] becomes [O.x,I.x,C.x,O.x,I.x,C.x,...]).

  2. Go layer by layer, hashing chunks of the preceding layer, and place results in the TreeExtension struct.

    a. Get GrowLayerInstructions for the current layer.

    • The logic to get GrowLayerInstructions for the layer after the leaf layer is distinct from the logic for all later layers.
    • Using just the old_total_children, new_total_children, parent_chunk_width, and a bool for whether or not the last_child_will_change, we can determine how exactly we expect a layer to grow.
    • There are edge cases to watch out for. Edge case examples:
      • Example 1: the existing last hash in a layer may change from an old value to a new value.
      • Example 2: to efficiently update a hash from an old value to a new value, we may need to use the old AND new value of the prior last child because that child updated as well.
      • Example 3: if we need to use the old value of the prior last child, we need to adjust the offset we use to hash chunks.
      • Example 4: we may be adding a new layer after the existing root, and may need to use the existing root to get the next layer's first hash.

    b. Get the LayerExtension for the current layer to add to the TreeExtension struct.

    • We use GrowLayerInstructions to determine correct values when hashing the preceding "child" layer.

c. Curve trees merkle tree: trim_tree

This function trims the provided number of leaf tuples from the tree.

The function has 5 core steps:

  1. Get the number of leaf tuples currently in the tree.
  2. Use the number of leaf tuples currently in the tree to get TrimLayerInstructions, which we can use to know how to trim each layer in the tree.
  3. Get all the children we'll need to trim from the new last chunk in each layer, as well as the existing last hash from the new last chunk in each layer.
  4. Use the results from steps 2 and 3 to construct a TreeReduction struct, which we can use to trim the tree.
  5. Use the TreeReduction struct to trim the tree.

Step 1 is straightforward.

Step 2 carries the most weight and is the most complex. It's implemented in the CurveTrees class get_trim_instructions function documented further below.

In step 3, the "new last chunk in each layer" is referring to what will become the new last chunk in a layer after trimming that layer. We need values from those existing chunks in order to correctly and efficiently trim the chunk.

Step 4 is also complex, and is implemented in the CurveTrees class get_tree_reduction function documented further below.

In step 5, we also make sure to re-add any trimmed outputs back to the locked outputs table. We only trim the tree 1 block at a time. Therefore any trimmed outputs must necessarily be re-locked upon removal from the tree.

Like for grow_tree, this step-wise approach enables clean separation of the db logic (steps 1, 3, and 5) from the trim logic (steps 2 and 4).
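
A matching sketch for trim_tree's five steps, again with hypothetical names for the db-side helpers (steps 1, 3, and 5) around the PR's CurveTrees functions (steps 2 and 4):

// Sketch of BlockchainLMDB::trim_tree's flow (abbreviated; illustrative helper names).
void trim_tree(const uint64_t trim_n_leaf_tuples, const uint64_t trim_block_id)
{
    // 1. Read the current tree size from the db.
    const uint64_t old_n_leaf_tuples = this->get_num_leaf_tuples();

    // 2. In-memory: determine how each layer must shrink.
    const auto trim_instructions = m_curve_trees->get_trim_instructions(
        old_n_leaf_tuples, trim_n_leaf_tuples);

    // 3. Read the children to trim and existing last hashes from the db.
    const auto children_to_trim = this->get_last_chunk_children_to_trim(trim_instructions);
    const auto last_hashes      = this->get_last_hashes_to_trim(trim_instructions);

    // 4. In-memory: compute each layer's new size and new last hash.
    const auto tree_reduction = m_curve_trees->get_tree_reduction(
        trim_instructions, children_to_trim, last_hashes);

    // 5. Write the reduction to the db, re-locking trimmed outputs by block id.
    this->reduce_tree(tree_reduction, trim_block_id);
}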

get_trim_instructions

// Get instructions useful for trimming all existing layers in the tree
std::vector<TrimLayerInstructions> get_trim_instructions(
    const uint64_t old_n_leaf_tuples,
    const uint64_t trim_n_leaf_tuples) const;

This function first gets instructions for trimming the leaf layer, then continues getting instructions for each subsequent layer until reaching the root.

The function doing the heavy lifting is:

static TrimLayerInstructions get_trim_layer_instructions(
    const uint64_t old_total_children,
    const uint64_t new_total_children,
    const std::size_t parent_chunk_width,
    const bool last_child_will_change);

Similar to growing a layer, there are edge cases to watch out for when trimming a layer:

  • Example 1: we may not be trimming any elements from a layer, only updating a hash.
  • Example 2: it may be more efficient to trim by effectively re-growing an entire chunk with all elements from that chunk. In this case, we'll need to be sure to get the remaining children from that chunk.
  • Example 3: it may be more efficient to trim by explicitly trimming children from that chunk. In this case, we need to be sure to get those existing children from that chunk.
  • Example 4: we may need to explicitly trim and update the last child in a chunk.

This function captures these edge cases and outputs a struct that tells the caller how exactly to handle them.

get_tree_reduction

// Take in the instructions useful for trimming all existing layers in the tree, all children to be trimmed from
// each last chunk, and the existing last hash in what will become the new last parent of each layer, and return
// a tree reduction struct that can be used to trim a tree
TreeReduction get_tree_reduction(
    const std::vector<TrimLayerInstructions> &trim_instructions,
    const LastChunkChildrenToTrim &children_to_trim,
    const LastHashes &last_hashes) const;

This function iterates over all layers, outputting a LayerReduction struct for each layer, which is a very simple struct we can use to trim a layer in the tree:

// A struct useful to trim a layer and update its last hash if necessary
template<typename C>
struct LayerReduction final
{
    uint64_t          new_total_parents{0};
    bool              update_existing_last_hash;
    typename C::Point new_last_hash;
};

It uses each layer's TrimLayerInstructions from above as a guide, dictating exactly what data to use to calculate a new last hash for each layer.
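
Applying a LayerReduction is then mechanical. A sketch, assuming hypothetical db helpers truncate_layer and write_layer_hash:

// Sketch: apply one LayerReduction to one layer in the db.
template<typename C>
void apply_layer_reduction(const uint64_t layer_idx, const LayerReduction<C> &r)
{
    // Drop every hash in this layer at index >= new_total_parents.
    truncate_layer(layer_idx, r.new_total_parents);

    // If the new last chunk's contents changed, overwrite its hash.
    if (r.update_existing_last_hash)
        write_layer_hash(layer_idx, r.new_total_parents - 1, r.new_last_hash);
}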

d. Curve trees merkle tree: LMDB changes

The following changes to the db are necessary in order to store and update the curve trees merkle tree.

NEW: locked_outputs table

Potential outputs to be inserted into the merkle tree, indexed by the block ID in which the outputs unlock.

Key: `block ID`
Data: `[{output ID, out_pubkey, commitment}...]`
DB Flags: `MDB_INTEGERKEY | MDB_DUPSORT | MDB_DUPFIXED | MDB_CREATE`

We store the output ID to guarantee outputs are inserted into the tree in the order they appear in the chain.

This table stores the output pub key and commitment (64 bytes) instead of {O.x,I.x,C.x}, since {O.x,I.x,C.x} (96 bytes) can be derived from the output pub key and commitment, saving 32 bytes per output. Note that we should theoretically be able to stop storing the output public key and commitment in the output_amounts table at the hard fork, since that table should only be useful to construct and verify pre-fcmp++ txs.

NEW: leaves table

Leaves in the tree.

Key: `leaf_idx`
Data: `{output_id, out_pubkey, commitment}`
DB Flags: `MDB_INTEGERKEY | MDB_DUPSORT | MDB_DUPFIXED | MDB_CREATE`

We store the output ID so that when we trim the tree, we know where to place the output back into the locked outputs table.

Same as above: this table stores the output pub key and commitment (64 bytes) instead of {O.x,I.x,C.x}, since {O.x,I.x,C.x} (96 bytes) can be derived from the output pub key and commitment, saving 32 bytes per output.

Note that we must save the output pub key for outputs in the chain before the fork that includes fcmp++, since we need to derive I from the pre-torsion-cleared points. After the fork, we can store torsion-cleared valid {O,C} pairs instead, if we ban torsioned outputs and commitments at consensus, or if we redefine hash-to-point to use the torsion-cleared O.x as its input.

Note we also use the dummy zerokval key optimization for this table as explained in this comment:

 * Note: where the data items are of uniform size, DUPFIXED tables have
 * been used to save space. In most of these cases, a dummy "zerokval"
 * key is used when accessing the table; the Key listed above will be
 * attached as a prefix on the Data to serve as the DUPSORT key.
 * (DUPFIXED saves 8 bytes per record.)

NEW: layers table

Each record is a 32 byte hash of a chunk of children, as well as that hash's position in the tree.

Key: `layer_idx`
Data: `[{child_chunk_idx, child_chunk_hash}...]`
DB Flags: `MDB_INTEGERKEY | MDB_DUPSORT | MDB_DUPFIXED | MDB_CREATE`

The layer_idx is indexed starting at the layer after the leaf layer (i.e. layer_idx=0 corresponds to the layer after the leaf layer).

Example: {layer_idx=0, child_chunk_idx=4, child_chunk_hash=<31fa...>} means that the child_chunk_hash=<31fa...> is a hash of the 5th chunk of leaves, and is a Selene point. Another example: {layer_idx=1, child_chunk_idx=36, child_chunk_hash=<a2b5...>} means that the child_chunk_hash=<a2b5...> is a hash of the 37th chunk of elements from layer_idx=0, and is a Helios point.

An even layer_idx corresponds to Selene points. An odd layer_idx corresponds to Helios points.

The element with the highest layer_idx is the root (which should also be the last element in the table). There should only be a single element with the highest layer_idx (i.e. only one data item with key == max layer_idx).
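
Since the root is the last record in the table, reading it can be a single cursor op. A sketch in the style of the PR's db code, assuming m_cur_layers is a cursor opened on the layers table:

// Sketch: fetch the tree root, the last record in the layers table.
MDB_val k, v;
int result = mdb_cursor_get(m_cur_layers, &k, &v, MDB_LAST);
if (result != MDB_SUCCESS)
    throw0(DB_ERROR(lmdb_error("Failed to get tree root: ", result).c_str()));
// The record's layer_idx parity determines whether the 32-byte hash is a
// Selene point (even) or a Helios point (odd).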

UPDATED: block_info table

New fields:

uint64_t bi_n_leaf_tuples;
std::array<uint8_t, 32UL> bi_tree_root;

bi_n_leaf_tuples - the number of leaf tuples in the tree at that height.

bi_tree_root - the root hash of the tree at that height. It is a (compressed) Helios point or Selene point, which can be determined from the number of leaf tuples in the tree.

e. Curve trees merkle tree: Growing the tree as the node syncs

At each block, the tree must grow with (valid) outputs that unlock in that block. In the add_block function in db_lmdb.cpp, note the following:

// Grow the tree with outputs that unlock at this block height
auto unlocked_outputs = this->get_outs_at_unlock_block_id(m_height);
this->grow_tree(std::move(unlocked_outputs));

// Now that we've used the unlocked leaves to grow the tree, we can delete them from the locked outputs table
this->del_locked_outs_at_block_id(m_height);

Then when adding the block, we get the number of leaf tuples in the tree and tree root and store them on each block info record:

bi.bi_n_leaf_tuples = this->get_num_leaf_tuples();
bi.bi_tree_root = this->get_tree_root();

Finally, we use the container mentioned above to place the locked outputs from that block in a "staging" locked_outputs table, ready to be used to grow the tree once they unlock.

Comments

  • Converting outputs into leaf tuples and hashing chunks of scalars is multithreaded.
  • The field element inversion op when converting outputs to leaf tuples is batched (see the sketch after this list).
  • There may be further gains from batching tasks into the threadpool. A TODO to investigate this is noted in the code.
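
The batched inversion mentioned above is the standard trick of trading n field inversions for one inversion plus O(n) multiplications (Montgomery's trick). A self-contained sketch over a toy prime field; the PR applies the same idea to the actual curve field elements:

#include <cstdint>
#include <vector>

// Toy prime field for illustration only (2^31 - 1); products fit in 64 bits.
constexpr uint64_t P = 2147483647;

uint64_t mul(uint64_t a, uint64_t b) { return (a * b) % P; }

uint64_t pow_mod(uint64_t b, uint64_t e)
{
    uint64_t r = 1;
    while (e) { if (e & 1) r = mul(r, b); b = mul(b, b); e >>= 1; }
    return r;
}

// Fermat inversion: a^(P-2) mod P. This is the expensive op we batch.
uint64_t inv(uint64_t a) { return pow_mod(a, P - 2); }

// Invert all (nonzero) elements in place with 1 inversion + 3(n-1) mults.
void batch_invert(std::vector<uint64_t> &vals)
{
    const size_t n = vals.size();
    if (n == 0) return;
    // prefix[i] = vals[0] * ... * vals[i]
    std::vector<uint64_t> prefix(n);
    prefix[0] = vals[0];
    for (size_t i = 1; i < n; ++i)
        prefix[i] = mul(prefix[i - 1], vals[i]);
    // One expensive inversion of the product of all elements.
    uint64_t running_inv = inv(prefix[n - 1]);
    // Walk backwards, peeling one element off the product at a time.
    for (size_t i = n; i-- > 1; )
    {
        const uint64_t v = vals[i];
        vals[i] = mul(running_inv, prefix[i - 1]); // = 1 / vals[i]
        running_inv = mul(running_inv, v);         // = 1 / (vals[0]*...*vals[i-1])
    }
    vals[0] = running_inv;
}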

f. Curve trees merkle tree: Migrating cryptonote outputs into the tree

All existing cryptonote outputs need to be migrated into the merkle tree.

  • The migration determines the unlock block for every output and places them in the locked outputs table, and then grows the tree block by block with valid outputs.
  • The migration must run to completion before a node can start normal sync, same as past migrations.
    • If the node is killed while the migration is ongoing, the migration will resume from where it left off upon restarting the node.
  • The migration has 2 steps:
    1. Migrate outputs into the new locked_outputs table.
    2. Grow the tree block by block.
  • The migration can theoretically be made asynchronous (it can run in the background while nodes start immediately, until the hard fork when the migration must run before nodes can continue syncing). Such a change would be a solid lift.

g. Curve trees merkle tree: Key image migration

Removing the sign bit from key images enables an optimization for fcmp++s (refer to the specification paper for further details on the optimization). If an fcmp++ includes a key image with sign bit cleared, while the same key image with sign bit set already exists in the chain via a ring signature, then the fcmp++ would be a double spend attempt, and the daemon must be able to detect and reject it.

In order for the daemon to detect such double spends, upon booting the daemon we clear the sign bit from all key images already in the db. All key images inserted into the db have their sign bit cleared before insertion, and the db prevents duplicates. We also make sure that all key images held in memory by the pool have sign bit cleared (see key_images_container), and that transactions have unique key images with sign bit cleared (see check_tx_inputs_keyimages_diff). Key images with sign bit cleared are a new type: crypto::key_image_y. The sign bit can be cleared via crypto::key_image_to_y. The _y denotes that the encoded point is now the point's y coordinate.
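
In the ed25519 encoding, a point's final byte carries the sign of the x coordinate in its top bit, and the remaining bits encode the y coordinate, so clearing that bit leaves only y. A minimal sketch of the idea (the PR's actual crypto::key_image_to_y signature may differ):

#include <cstdint>
#include <cstring>

struct key_image   { uint8_t data[32]; };
struct key_image_y { uint8_t data[32]; };

// Clear the x-sign bit (bit 255 of the encoded point), leaving the y
// coordinate. Returning whether the bit was set lets callers (e.g. the tx
// pool) reconstruct the original key image later.
bool key_image_to_y(const key_image &ki, key_image_y &ki_y)
{
    std::memcpy(ki_y.data, ki.data, 32);
    const bool sign_was_set = (ki_y.data[31] & 0x80) != 0;
    ki_y.data[31] &= 0x7F;
    return sign_was_set;
}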

This PR aims to avoid a breaking change to the COMMAND_RPC_GET_TRANSACTION_POOL endpoint, which currently serves key images in the pool via the spent_key_image_info::id_hash response field. It does this by making the pool keep track of the sign bit for each crypto::key_image_y it holds. The daemon still prevents duplicate crypto::key_image_y from entering the pool (except in the case of reorgs, as is currently the case), but upon serving the response to COMMAND_RPC_GET_TRANSACTION_POOL, the daemon re-derives the crypto::key_image from the crypto::key_image_y and the sign bit, and serves this original crypto::key_image via spent_key_image_info::id_hash. Note that it is possible for two distinct id_hash of the same key_image_y to exist, where the key_image has sign bit set for one id_hash and cleared for the other (thus 2 distinct id_hashes). This could happen during a grace period that allows both fcmp++s and ring signatures: one alternate chain contains an fcmp++ spending an output, while another alternate chain contains a ring signature spending the same output with the key image's sign bit set.

TODO: tests for this grace period scenario.

h. Curve trees merkle tree: Trim the tree on reorg and on pop blocks

  • On reorg/pop blocks, blocks are removed 1 block at a time via BlockchainLMDB::remove_block().
    • At the end of BlockchainLMDB::remove_block(), after removing the block from the block info table, we call BlockchainLMDB::trim_tree with the number of leaves to trim and the block id which we're trimming.
    • We use the provided trim block id to re-insert any trimmed outputs into the locked outputs table.
      • Note that we need the trimmed outputs' output_id to re-insert the output into the locked outputs table in the correct order.
    • We also remove all leaves and layer elems from the db as necessary, and update any layer elems as necessary.
  • On reorg/pop blocks, after calling BlockchainLMDB::remove_block(), the daemon removes all of the block's transactions from the db via BlockchainLMDB::remove_transaction.
    • Within BlockchainLMDB::remove_transaction is BlockchainLMDB::remove_output, which is called for all of a tx's outputs.
    • In BlockchainLMDB::remove_output we remove the output from the locked outputs table if it's present.
      • Valid spendable outputs should always be present.
      • Invalid un-spendable outputs may not be in the locked outputs table, since they can be removed from the table upon growing the tree, and then won't be re-inserted back into the locked outputs table upon reorg/pop blocks in BlockchainLMDB::trim_tree.

C. Transaction struct changes for fcmp++

cryptonote::transaction::rctSig

rctSigBase

Added a new RCTType enum usable in the type member of rctSigBase:

RCTTypeFcmpPlusPlus = 7

fcmp++ txs are expected to use this RCTType instead of RCTTypeBulletproofPlus (even though fcmp++ txs are still expected to have a bp+ range proof).


Added a new member to rctSigBase:

crypto::hash referenceBlock; // block containing the merkle tree root used for the tx's fcmp++

This member is only expected to be present on txs with rctSigBase.type == RCTTypeFcmpPlusPlus.

rctSigPrunable

Added 2 new members:

uint8_t curve_trees_tree_depth; // for fcmp++
fcmp_pp::FcmpPpProof fcmp_pp;

Note there is a single opaque fcmp++ struct per tx. The FcmpPpProof type is simply a std::vector<uint8_t>. The length of the FcmpPpProof is deterministic from the number of inputs in the tx and curve trees merkle tree depth. Thus, when serializing and de-serializing, we don't need to store the vector length, and can expect a deterministic number of bytes for the FcmpPpProof by calling fcmp_pp::proof_len(inputs, curve_trees_tree_depth).
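
A sketch of what the deterministic length buys during de-serialization: the parser reads exactly proof_len bytes with no stored length prefix. fcmp_pp::proof_len is the PR's (TODO) function; the buffer-based reader is illustrative:

#include <cstdint>
#include <stdexcept>
#include <vector>

namespace fcmp_pp
{
using FcmpPpProof = std::vector<uint8_t>;
// Declared in the PR; implementation pending (see section 6.6 of the paper).
std::size_t proof_len(std::size_t n_inputs, uint8_t curve_trees_tree_depth);
}

// Sketch: read the proof from a raw buffer; its length is implied by the
// tx's input count and the tree depth, so no length field is serialized.
fcmp_pp::FcmpPpProof read_fcmp_pp(const uint8_t *buf, std::size_t buf_len,
    std::size_t n_inputs, uint8_t curve_trees_tree_depth)
{
    const std::size_t len = fcmp_pp::proof_len(n_inputs, curve_trees_tree_depth);
    if (buf_len < len)
        throw std::runtime_error("unexpected end of fcmp++ proof bytes");
    return fcmp_pp::FcmpPpProof(buf, buf + len);
}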

Comments

  • The tx_fcmp_pp serialization test demonstrates what an expected dummy transaction struct looks like with dummy data.
  • TODO: implement the function to get fcmp++ size from number of inputs and tree depth. We will need to decide between the 2 options described in section 6.6 of the specification paper.
  • TODO: tests for JSON serialization/de-serialization.

D. Constructing fcmp++ transactions

TODO

E. Verifying fcmp++ transactions

TODO

F. Consensus changes for fcmp++

TODO

j-berman and others added 30 commits July 29, 2024 03:40
Identified by kayabaNerve, patch suggested by j-berman.
- When retrieving last chunks, set next_start_child_chunk_index
so we can know the correct start index without needing to modify
the offset
- Other smaller cleanup
Comment on lines +229 to +232
// https://github.com/rust-lang/rust/issues/79609
#[cfg(all(target_os = "windows", target_arch = "x86"))]
#[no_mangle]
pub extern "C" fn _Unwind_Resume() {}
A contributor commented:

If mingw-w64 was compiled with dwarf2 exceptions enabled this will cause a multiple definitions error. I do this in #9440, because if you cross-compile the rust standard library with sjlj mingw-w64, you run into the same undefined references to unwind issue. Building std without unwinding should be possible, but I ran into problems. Removing these lines here breaks CI, because distros package sjlj mingw-w64.

I'll try to come up with the least hacky way to work around this. IMO, we could consider dropping the 32-bit Windows target. Windows 11 requires 64-bit and I can't imagine anyone using a >18 year old CPU to run Windows 10.

- trim_tree now re-adds trimmed outputs back to the locked outputs
table. remove_output then deletes from the locked output table.
- Since outputs added to the tree in a specific block may have
originated from distinct younger blocks (thanks to distinct unlock
times), we need to store the 8 byte output_id in the leaves table
as well, so that in the event of a reorg, upon removing outputs
from the tree we can add them back to the locked outputs table
in the correct order.
- Save 8 bytes per leaf by using DUPFIXED table and dummy
"zerokval" key and attaching leaf_idx as prefix to data to serve
as DUPSORT key
- fixes usage of MDB_NEXT and MDB_NEXT_DUP, allowing the db call
to set key and value
- If the output is invalid/unspendable, upon unlock it will be
deleted from the locked outputs table and then won't be used to
grow the tree. Upon reorg/pop blocks, the invalid output won't be
re-added to the locked outputs table upon trimming the tree. Thus,
it's possible for an invalid/unspendable output to not be present
in the locked outputs table upon remove.
- If locked output migration step completes, then program exits
while migration step to grow the tree is in progress, make sure
the migration picks back up where it left off growing the tree.
- Make sure db cursor gets set in all cases when renaming
block info table.
- Removing the sign bit from key images enables an optimization for
fcmp's.
- If an fcmp includes a key image with sign bit cleared, while the
same key image with sign bit set exists in the chain already via a
ring signature, then the fcmp would be a double spend attempt and
the daemon must be able to detect and reject it.
- In order for the daemon to detect such double spends, upon
booting the daemon, we clear the sign bit from all key images
already in the db. We also make sure that all key images held in
memory by the pool have sign bit cleared as well.
- Key images with sign bit cleared are a new type:
`crypto::key_image_y`. The sign bit can be cleared via
`crypto::key_image_to_y`.
- The `_y` denotes that the encoded point is now the point's y
coordinate.
- In order to maintain backwards compatibility with current RPC
consumers, the daemon keeps track of which key images have sign
bit cleared and not, so that upon serving
`spent_key_image_info::id_hash`, the daemon can re-construct the
original key image and serve it to clients.
Speeds up inverting many elems at once 95%+
- Moved functions around in unit_tests/curve_trees.{h,cpp} to ease
using the in-memory Global tree across tests
- Introduced PathV1 struct, which is a path in the tree containing
whole chunks at each layer
- Implemented functions to get_path_at_leaf_idx and get_tree_root
on in-memory Global tree
- Cleanly separate logic to set the hash_offset that we use when
calling hash_trim and hash_grow from the logic used to determine
which old child values we need from the tree
- The core logic error was not properly setting the range of
children needed from the tree when
need_last_chunk_remaining_children is true. The fix makes sure
to use the correct range, and to set hash_offset appropriately
for every case.
- In the case that get_next_layer_reduction doesn't actually
need to do any hashing, only tell the caller to trim to boundary,
the function now short-circuits and doesn't continue with hashing
- batch_start is the simplest function to use to resize db, since
resizing requires no active txns.
- batch_stop makes sure no active txns.
- need to decrement txns before calling migrate() so that do_resize
does not deadlock in wait_no_active_txns