-# Queue
-The Queue is the central component of the Key Server.
-The Queue gives a definitive, consistent ordering to all changes made to the
-key server data structure.
-
-The Queue is a Multi-Writer, Single-Reader data object.
-Previous iterations had the Queue as a Multi-Writer, Multi-Reader object, where
-each node would receive the same data, much like the Raft Consensus Algorithm.
-In the Single-Reader scheme, the queue keeps track of global state, i.e. how
-many items have been processed and added to the database.
-
-The Queue stores mutations that have not yet been processed by the signer and
-written to the Merkle Tree data structures.
-
-## Queue Atomicity Notes
-Mutations may be deleted from the queue once they have been successfully
-processed by the signer and committed to the leaf database. We do not require
-that an epoch advancement occur before queue entries may be deleted.
-
-Cross-Domain Transactions:
-To process an item from the queue, the item is first deleted. If it cannot be
-deleted (due to another process dequeuing it first, or any other error), the
-next item is fetched and processed. Once an item is safely deleted from the
-queue, it is processed and the changes are written to the database. If the
-database commit fails, all changes are rolled back. The item is not placed back
-in the queue and is "lost". However, since clients retry until the data they
-are trying to update appears in the database, this data loss does not violate
-the API contract.
-
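The delete-then-process discipline can be sketched as follows; `process_one` and the in-memory stand-ins are illustrative, not the Key Server's actual types:

```python
import queue

def try_delete(q):
    """Attempt to remove the head of the queue.

    Returns the item, or None if another consumer won the race
    (modeled here as an empty queue)."""
    try:
        return q.get_nowait()
    except queue.Empty:
        return None

def process_one(q, db):
    """Delete-then-process: an item is only applied after it has been
    safely removed from the queue. If the commit fails, the item is
    dropped rather than re-queued; clients retry until their data
    appears in the database."""
    item = try_delete(q)
    if item is None:
        return False
    try:
        db[item["key"]] = item["value"]  # stand-in for a database commit
    except Exception:
        pass  # commit failed: the item is "lost" by design
    return True
```

The key property is that deletion happens first, so two processes can never apply the same item twice.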
-## Queue Epoch Advancement Notes
-When advancing epochs, we can't include the expected epoch number because
-multiple epoch advancement requests could be received by the queue out-of-order.
-
-If we assume a single-writer case, we could add fancy logic to the Signer such
-that no more than one epoch advancement request is ever present in the queue at
-once. This, however, would require the Signer to know what's in the queue when
-it crashes and resumes.
-
-# Signer
-The signer processes mutations out of the queue.
-In the present configuration, the signer writes each node into the leaf table
-with a version number in the future. If the signer crashes, it simply picks up
-processing the queue where it left off. Writing to the same node twice with the
-same data is permitted.
-
-The signer applies the mutations in the queue to the entries contained in
-`current_epoch - 1`. Duplicate mutations processed during the same epoch will
-succeed. Duplicate mutations processed across epochs SHOULD fail. (Each
-mutation should be explicit about the previous version of the data it is
-modifying.)
-
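The cross-epoch failure falls out of the compare-and-swap style version check each mutation carries. A sketch (illustrative names, not the real signer code) showing both duplicate cases:

```python
def apply_mutation(entries, key, prev_version, new_value):
    """Apply a mutation only if it targets the entry's current version.

    Replaying the same mutation within one epoch is an idempotent
    success; replaying it after the version has advanced fails."""
    current_version, current_value = entries.get(key, (0, None))
    if prev_version != current_version:
        if (current_version, current_value) == (prev_version + 1, new_value):
            return True  # same-epoch duplicate: already applied
        return False     # cross-epoch duplicate or conflict: reject
    entries[key] = (current_version + 1, new_value)
    return True
```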
-To advance to the next epoch, the signer inserts an epoch advancement marker
-into the queue and waits to receive it back on the queue before committing all
-the changes received between epoch markers into a version of the sparse Merkle
-tree and signing the root node.
-
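The marker round-trip amounts to draining the queue until the marker comes back; a minimal sketch, with `EPOCH_MARKER` and the root computation as stand-ins:

```python
EPOCH_MARKER = object()  # sentinel the signer enqueues, then waits for

def advance_epoch(items):
    """Collect mutations until the epoch marker returns, then 'commit'
    them as one tree version and compute a root to sign (stubbed)."""
    batch = []
    for item in items:
        if item is EPOCH_MARKER:
            root = hash(tuple(batch))  # stand-in for the Merkle root
            return batch, root
        batch.append(item)
    return batch, None  # marker never returned; epoch not advanced
```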
-The Signer also takes each item received in the queue and sends it to the
-Log of Mutations, so that Monitors can recreate the tree by just reading the
-Log of Mutations.
-
-# Front End Nodes
-Front end nodes submit mutations into the queue.
-
-In previous iterations, the nodes would also receive all mutations and apply
-them to their local copies of the tree. In this revision, we decided the tree
-could be too big to fit on any particular node, so the tree has been moved to
-a distributed database that the Signer updates.
-
-# Log of Mutations
-Stores a signed list of mutations and epoch advancement markers that come out
-of the queue.
-
-# Log of Signed Map Heads
-Stores a signed list of Signed Map Heads (SMHs), one for each epoch.
+# Clients
+Clients make requests to Key Transparency servers over HTTPS/JSON or gRPC.
+
+# Key Transparency Server
+The front end servers reveal the mappings between user identifiers (e.g. email
+addresses) and their anonymized indexes in the Trillian Map using a Verifiable
+Random Function.
+
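A keyed hash can stand in for the Verifiable Random Function to illustrate the mapping; a real VRF uses an asymmetric key and also produces a proof that clients can verify:

```python
import hashlib
import hmac

VRF_PRIVATE_KEY = b"server-secret"  # stand-in; a real VRF key is asymmetric

def anonymized_index(user_id: str) -> str:
    """Map a user identifier (e.g. an email address) to its map index.

    Without the key, an observer of the map cannot recover user
    identifiers from indexes, or even test guesses against them."""
    return hmac.new(VRF_PRIVATE_KEY, user_id.encode(),
                    hashlib.sha256).hexdigest()
```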
+The front ends also provide account data for particular users and the mappings
+between that account data and the public commitments stored in the Trillian
+Map.
+
+# Commitment Table
+The commitment table stores account values and the associated commitment key
+necessary to verify the commitment stored in the Trillian Map.
+
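One way such a commitment can work is a keyed hash over the account value, with the key held back in the commitment table; this is an illustrative scheme, not necessarily the exact one Key Transparency uses:

```python
import hashlib
import hmac
import os

def commit(account_value: bytes):
    """Return (commitment, commitment_key). The commitment is safe to
    publish in the map; the key stays in the commitment table."""
    key = os.urandom(16)
    return hmac.new(key, account_value, hashlib.sha256).digest(), key

def verify(commitment: bytes, key: bytes, account_value: bytes) -> bool:
    """Recompute the commitment from the stored key and value."""
    return hmac.compare_digest(
        commitment, hmac.new(key, account_value, hashlib.sha256).digest())
```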
+# Mutation Table
+When a user wishes to make a change to their account, they create a signed
+change request (also known as a mutation) and send it to a Key Transparency
+frontend.
+
+The frontend then saves the mutation in the mutation table, allowing the
+database to assign the mutation a monotonically increasing sequence number or
+timestamp, establishing an authoritative ordering for new mutations.
+
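The ordering contract can be sketched with an in-memory table that assigns sequence numbers on insert (a stand-in for an auto-increment column or commit timestamp):

```python
import itertools

class MutationTable:
    """In-memory stand-in for the mutation table: the table, not the
    frontend, assigns the authoritative sequence number on insert."""

    def __init__(self):
        self._seq = itertools.count(1)
        self.rows = []

    def insert(self, mutation) -> int:
        seq = next(self._seq)  # monotonically increasing
        self.rows.append((seq, mutation))
        return seq

    def since(self, seq: int):
        """Mutations after a given sequence number, in order --
        what a sequencing run would read as its next batch."""
        return [(s, m) for s, m in self.rows if s > seq]
```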
+This strict ordering requirement could be relaxed in the future for
+performance reasons. Strictly speaking, only given sets (batches) of mutations
+need to be ordered relative to other sets.
+
+# Trillian Map
+The Trillian Map stores the sparse Merkle tree and is designed to scale to
+extremely large trees. The Trillian Map is updated in batches.
+
+# Trillian Log
+The Trillian Log stores a dense Merkle tree in the style of Certificate
+Transparency. The Key Transparency Sequencer adds SignedMapRoots from the
+Trillian Map to the Trillian Log as they are created.
+
+# Key Transparency Sequencer
+The Key Transparency Sequencer runs periodically. It creates a batch of the
+new mutations that have occurred since the last sequencing run. It then
+verifies the mutations and applies them to the currently stored values in the
+map.
+
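A single sequencing run then reduces to: read everything since the last high-water mark, verify each mutation, apply the valid ones. A sketch with illustrative types:

```python
def sequencer_run(pending, map_values, last_seq, verify):
    """One sequencing run over an ordered list of (seq, key, value)
    mutations: batch everything since last_seq, verify each mutation,
    apply the valid ones, and return the new high-water mark."""
    for seq, key, value in pending:
        if seq <= last_seq:
            continue  # already applied by a previous run
        if verify(key, value):
            map_values[key] = value  # stand-in for a Trillian Map write
        last_seq = seq
    return last_seq
```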
+After each new map revision, the sequencer sends the new SignedMapRoot (SMR) to
+the Trillian Log, which must sequence the new SMR before the front ends will
+start using the new map revision.
+
+After each new map revision, the sequencer will also send the new SignedMapRoot,
+SignedLogRoot, mutations, and associated proofs to the front ends over a
+streaming gRPC channel. The frontends will then forward those same notifications
+to active monitors over a streaming gRPC channel.
+
+# Mutation
+Mutations in Key Transparency are defined as signed key-value objects.
+- The Key must be the valid index associated with the user's identifier.
+- The Value is an object that contains:
+  - A cryptographic commitment to the user's data.
+  - The set of public keys that are allowed to update this account.
+- The mutation also contains the hash of the previous mutation. This helps
+  break race conditions, and it forms a hash chain in each account.
+
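The structure above can be sketched as follows; the field names and the JSON-based hashing are illustrative, and a real mutation is additionally signed by one of the account's keys:

```python
import hashlib
import json

def make_mutation(index, commitment, authorized_keys, prev_mutation):
    """Build a mutation: a key (the user's index), a value
    (commitment plus update keys), and the hash of the previous
    mutation, chaining the account's history together."""
    if prev_mutation is None:
        prev_hash = ""
    else:
        prev_hash = hashlib.sha256(
            json.dumps(prev_mutation, sort_keys=True).encode()).hexdigest()
    return {
        "key": index,
        "value": {"commitment": commitment,
                  "authorized_keys": authorized_keys},
        "previous": prev_hash,
    }
```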
+# Monitors
+Monitors process and verify the mutations that make up each new epoch.
+Monitors verify various policy properties of the signed key-values in the
+Trillian Map. In particular, monitors verify that:
+- Back pointers in each leaf do not skip over any values - a check that
+  would be bandwidth intensive for mobile clients to perform themselves.
+- Mutations are properly signed and verified by the public keys declared in
+  the prior epoch.
+
+Monitors also observe the Trillian Log proofs provided by the Key Transparency
+front end to detect any log forks.
+
+Monitors participate in a primitive form of gossip by signing the Trillian Log
+roots that they see and making them available over an API.
+
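The back-pointer check monitors perform can be sketched as a walk down an account's hash chain (illustrative encoding, matching no particular wire format):

```python
import hashlib
import json

def mutation_hash(m):
    """Hash of a mutation under the illustrative JSON encoding."""
    return hashlib.sha256(json.dumps(m, sort_keys=True).encode()).hexdigest()

def chain_is_complete(history):
    """Check that an account's mutations form an unbroken hash chain:
    each entry's back pointer must equal the hash of the entry before
    it, so no values were skipped."""
    prev = ""
    for m in history:
        if m["previous"] != prev:
            return False
        prev = mutation_hash(m)
    return True
```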