Add a firestore key migration to Teleport 17 #46472

Merged
merged 10 commits into from
Sep 30, 2024
66 changes: 66 additions & 0 deletions lib/auth/migration/0002_firestore.go
@@ -0,0 +1,66 @@
/*
* Teleport
* Copyright (C) 2024 Gravitational, Inc.
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/

package migration

import (
"context"

"github.com/gravitational/trace"

"github.com/gravitational/teleport/lib/backend"
"github.com/gravitational/teleport/lib/backend/firestore"
)

// migrateFirestoreKeys performs a migration that converts all incorrectly
// typed keys (backend.Key and string) in the Firestore backend to the correct type ([]byte).
// The backend historically stored keys as strings and backend.Key values, and the
// Firestore client mapped each Go type to a different database type, which forced
// ReadRange to issue three separate queries. Converting every key to bytes fixes this.
type migrateFirestoreKeys struct{}

func (d migrateFirestoreKeys) Version() int64 {
return 2
}

func (d migrateFirestoreKeys) Name() string {
return "migrate_firestore_keys"
}

// Up scans the backend for keys that are stored as strings or backend.Key types
// and converts them to the correct type (bytes).
func (d migrateFirestoreKeys) Up(ctx context.Context, b backend.Backend) error {
ctx, span := tracer.Start(ctx, "migrateFirestoreKeys/Up")
defer span.End()

// if the backend is not firestore, skip this migration
if b.GetName() != firestore.GetName() {
return nil
}

// migrate firestore keys
return trace.Wrap(firestore.MigrateIncorrectKeyTypes(ctx, b))
}

// Down is a no-op for this migration.
func (d migrateFirestoreKeys) Down(ctx context.Context, _ backend.Backend) error {
_, span := tracer.Start(ctx, "migrateFirestoreKeys/Down")
defer span.End()
return nil
}
@rosstimothy (Contributor) commented on Sep 12, 2024:
I don't know that a one-shot migration will work. If we assume that a new Firestore cluster was created on v16.2.0, the release that introduced the latest broken key format, and stays on that version until upgrading to v17, then there is a chance the upgrade isn't performed according to our guidelines and Auth instances at v16.2.0 and v17 end up running simultaneously. In that case the migration may complete successfully, yet be immediately undone by the old Auth instance.

If we are to proceed with a one-shot migration I don't know that we can safely do it until v18. I think the safest migration strategy might be to have the firestore backend always spin up a background goroutine that performs MigrateIncorrectKeyTypes. In that case though, I think we need to rate limit the migration in order to prevent having any impact on cluster reads and writes.

The PR author replied:
My thinking is that during a migration, the older version will simply upsert the heartbeats, which will be reset once the old Auth instance restarts. However, there's a possibility that someone might modify static objects while the migration is running.

I also don't believe we can handle everything in a background goroutine, at least not without optimistic locking. Nothing stops a role from being altered between the read and the put, especially since Auth remains fully functional. We also can't enforce this during startup because it could take a long time, and backend creation might fail.

If we use bulk writing, we'd need to restart the conversion loop whenever a conditional-update conflict occurs.

Given this, I think we should postpone the one-shot migration to v18 and treat it as part of that upgrade, but that is painful too, since the tripled ReadRange queries stay slow in the meantime.

What are your thoughts?

The reviewer (Contributor) replied:
The older version of Auth may still be processing and handling traffic from users, which means any tctl create operation could land on the wrong Auth server during a migration.

As you mentioned, the background migration is a perpetual cleanup operation. However, since in v17 all Auth servers can read the correct, legacy, and broken key types, a v16 Auth server undoing a migration is not the end of the world. By the time the migration is removed in v19, there would be no possibility of any new keys being stored in the wrong format.
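The point that v17 can read all three stored key representations can be sketched as a normalization step. This is illustrative only; the real logic lives in `newRecordFromDoc`, and `normalizeKey` merely mirrors the bytes/string/array mapping described in this PR (Firestore's Go client decodes stored integers as int64).

```go
package main

import "fmt"

// normalizeKey converts any of the three representations Firestore may
// hold for a key (bytes, string, or an array of ints) back to []byte,
// so readers tolerate documents written by any Teleport version.
func normalizeKey(raw any) ([]byte, error) {
	switch v := raw.(type) {
	case []byte:
		return v, nil // correct format
	case string:
		return []byte(v), nil // legacy format
	case []any:
		// broken format: one integer per key byte
		key := make([]byte, len(v))
		for i, e := range v {
			n, ok := e.(int64)
			if !ok {
				return nil, fmt.Errorf("unsupported array element %T", e)
			}
			key[i] = byte(n)
		}
		return key, nil
	default:
		return nil, fmt.Errorf("unsupported key type %T", raw)
	}
}

func main() {
	for _, raw := range []any{[]byte("/roles"), "/roles", []any{int64('/'), int64('r')}} {
		k, err := normalizeKey(raw)
		fmt.Println(string(k), err)
	}
}
```

Because every reader normalizes first, a migration pass that is undone by an older writer degrades performance but never correctness.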

The PR author replied:
@rosstimothy c2f93ec applies the changes. I also delay the start of the migration by a few minutes to avoid interfering with cache loads.

1 change: 1 addition & 0 deletions lib/auth/migration/migration.go
@@ -81,6 +81,7 @@ func Apply(ctx context.Context, b backend.Backend, opts ...func(c *applyConfig))
cfg := applyConfig{
migrations: []migration{
createDBAuthority{},
migrateFirestoreKeys{},
},
}

22 changes: 3 additions & 19 deletions lib/backend/firestore/firestorebk.go
@@ -190,6 +190,7 @@ func newRecord(from backend.Item, clock clockwork.Clock) record {
return r
}

// TODO(tigrato|rosstimothy): Simplify this function by removing the brokenRecord and legacyRecord struct
A reviewer (Contributor) asked: Can you add a note indicating in which version it's safe to do this?

The PR author replied: Added in f74a94d

func newRecordFromDoc(doc *firestore.DocumentSnapshot) (*record, error) {
k, err := doc.DataAt(keyDocProperty)
if err != nil {
@@ -478,28 +479,11 @@ func (b *Backend) getRangeDocs(ctx context.Context, startKey, endKey backend.Key
if err != nil {
return nil, trace.Wrap(err)
}
legacyDocs, err := b.svc.Collection(b.CollectionName).
Where(keyDocProperty, ">=", startKey.String()).
Where(keyDocProperty, "<=", endKey.String()).
Limit(limit).
Documents(ctx).GetAll()
if err != nil {
return nil, trace.Wrap(err)
}
brokenDocs, err := b.svc.Collection(b.CollectionName).
Where(keyDocProperty, ">=", startKey).
Where(keyDocProperty, "<=", endKey).
Limit(limit).
Documents(ctx).GetAll()
if err != nil {
return nil, trace.Wrap(err)
}

allDocs := append(append(docs, legacyDocs...), brokenDocs...)
if len(allDocs) >= backend.DefaultRangeLimit {
if len(docs) >= backend.DefaultRangeLimit {
b.Warnf("Range query hit backend limit. (this is a bug!) startKey=%q,limit=%d", startKey, backend.DefaultRangeLimit)
}
return allDocs, nil
return docs, nil
}

// GetRange returns range of elements
101 changes: 101 additions & 0 deletions lib/backend/firestore/migration.go
@@ -0,0 +1,101 @@
/*
* Teleport
* Copyright (C) 2023 Gravitational, Inc.
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/

package firestore

import (
"context"

"github.com/gravitational/trace"

"github.com/gravitational/teleport/lib/backend"
)

// MigrateIncorrectKeyTypes migrates incorrectly typed keys (backend.Key and string)
// to the correct type ([]byte) in the backend. This is necessary because the backend
// historically stored keys as strings and backend.Key values, and the Firestore client
// mapped each Go type to a different database type, which forced ReadRange to issue
// three separate queries. Converting every key to bytes fixes the issue.
// TODO(tigrato|rosstimothy): DELETE in 18.0.0: remove this migration in the next major release.
func MigrateIncorrectKeyTypes(ctx context.Context, b backend.Backend) error {
firestore, ok := b.(*Backend)
if !ok {
return trace.BadParameter("expected firestore backend")
}

// backend.Key is converted to array of ints when sending to the db.
toArray := func(key []byte) []any {
arrKey := make([]any, len(key))
for i, b := range key {
arrKey[i] = int(b)
}
return arrKey
}

if err := migrateKeyType[[]any](ctx, firestore, toArray); err != nil {
return trace.Wrap(err, "failed to migrate backend key")
}

stringKey := func(key []byte) string {
return string(key)
}
if err := migrateKeyType[string](ctx, firestore, stringKey); err != nil {
return trace.Wrap(err, "failed to migrate legacy key")
}
return nil
}

func migrateKeyType[T any](ctx context.Context, b *Backend, newKey func([]byte) T) error {
limit := 500
startKey := newKey([]byte("/"))

for {
docs, err := b.svc.Collection(b.CollectionName).
// Passing the key in this type forces the client to map it to the
// corresponding Firestore type and return only the keys stored with
// that same underlying type:
// backend.Key is mapped to Array in Firestore,
// []byte is mapped to Bytes,
// string is mapped to String.
Where(keyDocProperty, ">", startKey).
Limit(limit).
Documents(ctx).GetAll()
if err != nil {
return trace.Wrap(err)
}

for _, dbDoc := range docs {
newDoc, err := newRecordFromDoc(dbDoc)
if err != nil {
return trace.Wrap(err, "failed to convert document")
}

if _, err := b.svc.Collection(b.CollectionName).
Doc(b.keyToDocumentID(newDoc.Key)).
Set(ctx, newDoc); err != nil {
return trace.Wrap(err, "failed to upsert document")
}

startKey = newKey(newDoc.Key) // update start key
}

if len(docs) < limit {
break
}
}
return nil
}
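As a quick standalone illustration of the `toArray` conversion used by the migration: each byte of the key becomes one integer element of an `[]any`, which is what makes Firestore store the key as an Array value rather than Bytes. This sketch only reproduces the conversion shown in the diff above.

```go
package main

import "fmt"

// toArray mirrors the helper in MigrateIncorrectKeyTypes: each byte of
// the key becomes one element of an []any, matching how backend.Key
// values were (incorrectly) sent to Firestore as Array values.
func toArray(key []byte) []any {
	arrKey := make([]any, len(key))
	for i, b := range key {
		arrKey[i] = int(b)
	}
	return arrKey
}

func main() {
	// '/' is byte 47 and 'a' is byte 97
	fmt.Println(toArray([]byte("/a"))) // [47 97]
}
```

Querying with this array form is what lets `migrateKeyType[[]any]` find every document whose key was stored in the broken Array format.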