This repository was archived by the owner on Oct 11, 2024. It is now read-only.

Commit f881945

Merge remote-tracking branch 'isma-fork/non_verifying_monitor' into non_verifying_monitor

2 parents: 16724f5 + 1544646

5 files changed: +78, -130 lines


cmd/keytransparency-client/cmd/root.go

Lines changed: 2 additions & 1 deletion
@@ -65,13 +65,14 @@ server provides to ensure that account data is accurate.`,
             kt.Vlog = log.New(os.Stdout, "", log.LstdFlags)
         }
     },
+    SilenceUsage: true,
 }
 
 // Execute adds all child commands to the root command sets flags appropriately.
 // This is called by main.main(). It only needs to happen once to the rootCmd.
 func Execute() {
     if err := RootCmd.Execute(); err != nil {
-        log.Fatalf("%v", err)
+        os.Exit(1)
     }
 }
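
For context on this change: assuming the client's root command is a spf13/cobra command (which the SilenceUsage field suggests), setting SilenceUsage to true stops cobra from reprinting the full usage text on every error, and exiting with a non-zero status in Execute leaves the error reporting to cobra itself. A minimal, self-contained sketch of that pattern; the command name, description, and RunE body below are placeholders, not the repository's actual root.go:

package main

import (
    "errors"
    "os"

    "github.com/spf13/cobra"
)

// rootCmd mirrors the pattern in the diff above: SilenceUsage keeps cobra from
// dumping the full usage text every time a command returns an error.
var rootCmd = &cobra.Command{
    Use:          "keytransparency-client",
    Short:        "A Key Transparency client (placeholder description)",
    SilenceUsage: true,
    RunE: func(cmd *cobra.Command, args []string) error {
        return errors.New("no subcommand given")
    },
}

// Execute runs the root command. Cobra already prints the error, so the caller
// only needs to signal failure through the process exit code.
func Execute() {
    if err := rootCmd.Execute(); err != nil {
        os.Exit(1)
    }
}

func main() {
    Execute()
}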

core/testutil/ctutil/ctutil.go

Lines changed: 0 additions & 50 deletions
This file was deleted.

docs/architecture.md

Lines changed: 76 additions & 73 deletions
@@ -2,78 +2,81 @@
 
 ![Architecture](images/architecture.png)
 
-# Queue
-The Queue is the central component of the Key Server.
-The Queue gives a definitive, consistent ordering to all changes made to the
-key server data structure.
-
-The Queue is a Multi-Writer, Single Reader data object.
-Previous iterations had the Queue as a Multi-Writer, Multi-Reader object, where
-each node would receive the same data, much like the Raft Consesnsus Algorithm.
-In the Single Reader scheme, the queue is keeping track of global state, ie. how
-many items have been processed and added to the database.
-
-The Queue stores mutations that have not been processed by the signer and
-written to the Merkle Tree data structures.
-
-## Queue Atomicity Notes
-Mutations in the queue may be deleted from the queue once they have been
-successfully processed by the signer and committed to the leaf database. We do
-not require that an epoch advancement occur before queue entries may be deleted.
-
-Cross-Domain Transactions:
-To process an item from the queue, the item is first deleted. If it cannot be
-deleted (due to another process dequeueing it first or any other error), the next
-item is fetched and processed. Once an item is safely deleted from the queue, it
-is processed and the changes are written to the database. If the database commit
-fails, all changes are rolled back. The item is not placed back in the queue and
-is "lost". However, since clients perform retries until the data they are trying
-to update appears in the database, this data loss does not violate the API
-contract.
-
-## Queue Epoch Advancement Notes
-When advancing epochs, we can't include the expected epoch number because
-multiple epoch advancement requests could be received by the queue out-of-order.
-
-If we assume a single writer case, we could add fancy logic to the Signer such
-that no more than one epoch advancement request is ever present in the queue at
-once. This, however, would require the Signer to know what's in the queue when it
-crashes and resumes.
-
-# Signer
-The signer processes mutations out of the queue.
-In the present configuration, the signer writes each node into the leaf table
-with a version number in the future. If the signer crashes, it simply picks up
-processing the queue where it left off. Writing to the same node twice with the
-same data is permitted.
-
-The signer applies the mutations in the queue to the entries contained in
-`current_epoch - 1`. Duplicate mutations processed during the same epoch will
-succeed. Duplicate mutations processed across epochs SHOULD fail. (Each
-mutation should be explicit about the previous version of data it is modifying.)
-
-To advance to the next epoch, the signer inserts an epoch advancement marker
-into the queue and waits to receive it back on the queue before committing all
-the changes received between epoch markers into a version of the sparse merkle
-tree and signing the root node.
-
-The Signer also takes each item received in the queue and sends it to the
-Log of Mutations, so that Monitors can recreate the tree by just reading the
-Log of Mutations.
-
-# Front End Nodes
-Front end nodes submit mutations into the queue.
-
-In previous iterations, the nodes would also receive all mutations and apply
-them to their local copies of the tree. In this revision, we decided the tree
-could be too big to fit on any particular node, so the tree has been moved to
-a distributed database that the Signer updates.
-
-# Log of Mutations
-Stores a signed list of mutations and epoch advancement markers that come out of
-the queue.
-
-# Log of Signed Map Heads
-Stores a signed list of Signed Map Heads (SMHs), one for each epoch.
+# Clients
+Clients make requests to Key Transparency servers over HTTPS / JSON or gRPC.
+
+# Key Transparency Server
+The front end servers reveal the mappings between user identifiers (e.g. email
+address) and their anonymized index in the Trillian Map using a Verifiable
+Random Function.
+
+The front ends also provide account data for particular users and the mappings
+between that account data and the public commitments stored in the Trillian
+Map.
+
+# Commitment Table
+The commitment table stores account values and the associated commitment key
+necessary to verify the commitment stored in the Trillian Map.
+
+# Mutation Table
+When a user wishes to make a change to their account, they create a signed change
+request (also known as a mutation) and send it to a Key Transparency frontend.
+
+The frontend then saves the mutation in the mutation table, allowing the database
+to assign the mutation a monotonically increasing sequence number or timestamp,
+establishing an authoritative ordering for new mutations.
+
+This strict ordering requirement could be relaxed in the future for performance
+reasons; strictly speaking, only sets (batches) of mutations need to be ordered
+relative to other sets.
+
+# Trillian Map
+The Trillian Map stores the sparse Merkle tree and is designed to scale to
+extremely large trees. The Trillian Map is updated in batches.
+
+# Trillian Log
+The Trillian Log stores a dense Merkle tree in the style of Certificate
+Transparency. The Key Transparency Sequencer adds SignedMapRoots from the
+Trillian Map to the Trillian Log as they are created.
+
+# Key Transparency Sequencer
+The Key Transparency Sequencer runs periodically. It creates a batch of new
+mutations that have occurred since the last sequencing run. It then verifies
+the mutations and applies them to the currently stored values in the map.
+
+After each new map revision, the sequencer sends the new SignedMapRoot (SMR) to
+the Trillian Log, which must sequence the new SMR before the front ends will
+start using the new map revision.
+
+After each new map revision, the sequencer will also send the new SignedMapRoot,
+SignedLogRoot, Mutations, and associated proofs to the front ends over a
+streaming gRPC channel. The frontends will then forward those same notifications
+to active monitors over a streaming gRPC channel.
+
+# Mutation
+Mutations in Key Transparency are defined as a signed key-value object.
+- The Key must be the valid index associated with the user's identifier.
+- The Value is an object that contains:
+  - A cryptographic commitment to the user's data.
+  - The set of public keys that are allowed to update this account.
+- The mutation also contains the hash of the previous mutation. This helps
+break race conditions and forms a hash chain in each account.
+
+# Monitors
+Monitors process and verify the mutations that make up each new epoch.
+Monitors verify various policy properties of the signed key-values in the
+Trillian Map. In particular, monitors verify that:
+- Back pointers in each leaf do not skip over any values, an operation that
+would be bandwidth intensive for mobile clients.
+- Mutations are properly signed and verified by public keys declared in the
+prior epoch.
+
+Monitors also observe the Trillian Log proofs provided by the Key Transparency
+front end to detect any log forks.
+
+Monitors participate in a primitive form of gossip by signing the Trillian Log
+roots that they see and making them available over an API.
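
Editor's sketch for the "# Key Transparency Server" section in the diff above: the front end maps a user identifier to an anonymized Trillian Map index with a Verifiable Random Function. The snippet below uses an HMAC purely as a runnable stand-in for a VRF (a real VRF also produces a publicly verifiable proof); the function and key names are hypothetical, not the repository's API.

package main

import (
    "crypto/hmac"
    "crypto/sha256"
    "encoding/hex"
    "fmt"
)

// indexFor derives the anonymized map index for a user identifier.
// NOTE: HMAC is only a stand-in so this sketch runs; the real design uses a
// VRF so the server can also hand back a proof that the index is correct.
func indexFor(vrfPrivateKey []byte, userID string) [32]byte {
    mac := hmac.New(sha256.New, vrfPrivateKey)
    mac.Write([]byte(userID))
    var index [32]byte
    copy(index[:], mac.Sum(nil))
    return index
}

func main() {
    key := []byte("placeholder-vrf-private-key")
    idx := indexFor(key, "alice@example.com")
    // The front end would look this index up in the Trillian Map and return
    // the leaf, its inclusion proof, and the VRF proof to the client.
    fmt.Println("index:", hex.EncodeToString(idx[:]))
}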
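
Editor's sketch for the "# Commitment Table" section: the table keeps each account value together with the commitment key needed to check the commitment stored in the map. Below is a self-contained HMAC-based commitment, shown as one plausible construction rather than the repository's exact scheme.

package main

import (
    "crypto/hmac"
    "crypto/rand"
    "crypto/sha256"
    "fmt"
)

// commit binds profile data to a random commitment key. The map stores only
// the commitment; the commitment table stores (profile, key) so the front end
// can reveal and verify them on request.
func commit(commitmentKey, profile []byte) []byte {
    mac := hmac.New(sha256.New, commitmentKey)
    mac.Write(profile)
    return mac.Sum(nil)
}

// verify recomputes the commitment and compares in constant time.
func verify(commitment, commitmentKey, profile []byte) bool {
    return hmac.Equal(commitment, commit(commitmentKey, profile))
}

func main() {
    key := make([]byte, 16)
    if _, err := rand.Read(key); err != nil {
        panic(err)
    }
    profile := []byte(`{"pgp": "...public key material..."}`)

    c := commit(key, profile)
    fmt.Println("commitment verifies:", verify(c, key, profile))
}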
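
Editor's sketch for the "# Mutation Table" section: the database assigns each stored mutation a monotonically increasing sequence number, which fixes the order in which the sequencer later processes them. The schema below is hypothetical and uses an in-memory SQLite database for brevity; the real implementation lives under impl/sql/mutations.

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/mattn/go-sqlite3" // driver choice is illustrative only
)

func main() {
    db, err := sql.Open("sqlite3", ":memory:")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Hypothetical schema: the auto-incrementing primary key acts as the
    // monotonically increasing sequence number that orders mutations.
    if _, err := db.Exec(`
        CREATE TABLE mutations (
            sequence INTEGER PRIMARY KEY AUTOINCREMENT,
            mutation BLOB NOT NULL
        );`); err != nil {
        log.Fatal(err)
    }

    // The front end only appends; the sequencer later reads mutations in
    // sequence order, in batches, starting after the last one it processed.
    for _, m := range [][]byte{[]byte("mutation-a"), []byte("mutation-b")} {
        if _, err := db.Exec(`INSERT INTO mutations (mutation) VALUES (?);`, m); err != nil {
            log.Fatal(err)
        }
    }

    rows, err := db.Query(`SELECT sequence, mutation FROM mutations ORDER BY sequence;`)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    for rows.Next() {
        var seq int64
        var m []byte
        if err := rows.Scan(&seq, &m); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%d: %s\n", seq, m)
    }
    if err := rows.Err(); err != nil {
        log.Fatal(err)
    }
}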
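
Editor's sketch for the "# Key Transparency Sequencer" section: each periodic pass reads the batch of mutations recorded since the last revision, verifies and applies them to the map, then hands the new SignedMapRoot to the log and streams it to the front ends. Every type and function below is a hypothetical placeholder meant to show the shape of the loop, not the repository's API.

package main

import (
    "fmt"
    "time"
)

// Hypothetical placeholders for the stores the sequencer talks to.
type mutation struct {
    Sequence int64
    Data     []byte
}

type signedMapRoot struct {
    Revision int64
}

// runOnce is one sequencing pass: read the batch of new mutations, verify and
// apply them to the map, then hand the new SignedMapRoot to the log and to any
// streaming subscribers (front ends, which in turn notify monitors).
func runOnce(lastSeq int64, readBatch func(after int64) []mutation,
    applyToMap func([]mutation) signedMapRoot,
    appendToLog func(signedMapRoot),
    notify func(signedMapRoot)) int64 {

    batch := readBatch(lastSeq)
    if len(batch) == 0 {
        return lastSeq
    }
    smr := applyToMap(batch) // mutation verification happens in here
    appendToLog(smr)         // the log must sequence the SMR before it is served
    notify(smr)              // push SMR + proofs to front ends over gRPC streams
    return batch[len(batch)-1].Sequence
}

func main() {
    pending := []mutation{{Sequence: 1}, {Sequence: 2}}
    readBatch := func(after int64) []mutation {
        var out []mutation
        for _, m := range pending {
            if m.Sequence > after {
                out = append(out, m)
            }
        }
        return out
    }
    rev := int64(0)
    applyToMap := func(ms []mutation) signedMapRoot { rev++; return signedMapRoot{Revision: rev} }
    appendToLog := func(smr signedMapRoot) { fmt.Println("logged revision", smr.Revision) }
    notify := func(smr signedMapRoot) { fmt.Println("streamed revision", smr.Revision) }

    last := int64(0)
    for i := 0; i < 2; i++ { // stand-in for "runs periodically"
        last = runOnce(last, readBatch, applyToMap, appendToLog, notify)
        time.Sleep(10 * time.Millisecond)
    }
}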

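
Editor's sketch for the "# Mutation" section, including the hash-chain check that the "# Monitors" section relies on: a mutation is a signed key-value object whose value commits to the account data, lists the authorized public keys, and hashes the previous mutation. Field names here are invented for illustration and do not match the repository's message definitions.

package main

import (
    "crypto/sha256"
    "encoding/json"
    "fmt"
)

// Mutation is an editor's sketch of the signed key-value object described
// above: the key is the user's VRF index, the value carries a commitment and
// the authorized public keys, and Previous hashes the prior mutation.
type Mutation struct {
    Index      []byte   `json:"index"`       // VRF index for the user identifier
    Commitment []byte   `json:"commitment"`  // commitment to the account data
    PublicKeys [][]byte `json:"public_keys"` // keys allowed to sign future updates
    Previous   []byte   `json:"previous"`    // hash of the previous mutation (hash chain)
    Signature  []byte   `json:"signature"`   // signature by a key from the prior epoch
}

// hash gives the chaining value that the next mutation must reference.
func (m Mutation) hash() []byte {
    b, _ := json.Marshal(m)
    sum := sha256.Sum256(b)
    return sum[:]
}

func main() {
    first := Mutation{Index: []byte{0x01}, Commitment: []byte("c1")}
    second := Mutation{Index: []byte{0x01}, Commitment: []byte("c2"), Previous: first.hash()}

    // A monitor (or the sequencer) can check the chain by recomputing hashes.
    ok := string(second.Previous) == string(first.hash())
    fmt.Println("hash chain intact:", ok)
}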

docs/images/architecture.png

1.01 KB

integration/testutil.go

Lines changed: 0 additions & 6 deletions
@@ -18,7 +18,6 @@ import (
     "database/sql"
     "log"
     "net"
-    "net/http/httptest"
     "testing"
 
     "github.com/google/keytransparency/cmd/keytransparency-client/grpcc"
@@ -29,7 +28,6 @@ import (
     "github.com/google/keytransparency/core/keyserver"
     "github.com/google/keytransparency/core/mutator/entry"
     "github.com/google/keytransparency/core/sequencer"
-    "github.com/google/keytransparency/core/testutil/ctutil"
     "github.com/google/keytransparency/impl/authorization"
     "github.com/google/keytransparency/impl/sql/commitments"
     "github.com/google/keytransparency/impl/sql/mutations"
@@ -88,7 +86,6 @@ type Env struct {
     Factory *transaction.Factory
     VrfPriv vrf.PrivateKey
     Cli     pb.KeyTransparencyServiceClient
-    mapLog  *httptest.Server
 }
 
 func staticVRF() (vrf.PrivateKey, vrf.PublicKey, error) {
@@ -115,7 +112,6 @@ f5JqSoyp0uiL8LeNYyj5vgklK8pLcyDbRqch9Az8jXVAmcBAkvaSrLW8wQ==
 // NewEnv sets up common resources for tests.
 func NewEnv(t *testing.T) *Env {
     ctx := context.Background()
-    hs := ctutil.NewCTServer(t)
     sqldb := NewDB(t)
 
     // Map server
@@ -199,7 +195,6 @@ func NewEnv(t *testing.T) *Env {
     Factory: factory,
     VrfPriv: vrfPriv,
     Cli:     pb.NewKeyTransparencyServiceClient(cc),
-    mapLog:  hs,
     }
 }
 
@@ -209,7 +204,6 @@ func (env *Env) Close(t *testing.T) {
     env.GRPCServer.Stop()
     env.mapEnv.Close()
     env.db.Close()
-    env.mapLog.Close()
 }
 
 // GetNewOutgoingContextWithFakeAuth returns a new context containing FakeAuth information to authenticate userID

0 commit comments
