
Preserve profile fields on re-login instead of wiping all keys #4597

Open

simonfaltum wants to merge 2 commits into main from simonfaltum/preserve-profile-fields
Conversation

@simonfaltum (Member) commented Feb 25, 2026

Why

SaveToProfile deleted every key in a .databrickscfg profile section before writing, then only wrote back the small subset of fields the caller explicitly set. This meant every databricks auth login or databricks configure silently destroyed fields like cluster_id, warehouse_id, scopes, azure_environment, and any custom keys the user had added to their profile.

This is especially problematic now that the host-agnostic auth work adds more explicit fields to profiles (workspace_id, account_id, azure_environment). Users shouldn't have to re-specify everything every time they re-login.

Changes

Core: SaveToProfile merge semantics (libs/databrickscfg/ops.go)

Before: Delete all keys in the section, then write only non-zero fields from the new config.
After: Existing keys not mentioned in the new config are preserved. Non-zero fields from the new config overwrite existing values. A new clearKeys ...string variadic parameter lets callers explicitly remove specific keys.

auth login (cmd/auth/login.go)

Before: Re-login wiped everything. cluster_id and serverless_compute_id were manually read back from the existing profile in the default case (no --configure-cluster/--configure-serverless), but warehouse_id, azure_environment, custom keys, etc. were always lost.

After:

  • All non-auth fields are preserved automatically via merge semantics (no manual read-back needed).
  • Incompatible auth credentials (PAT token, basic auth, M2M client secrets, Azure/GCP credentials, metadata service URL, OIDC tokens) are explicitly cleared when switching to OAuth.
  • --configure-cluster explicitly clears serverless_compute_id (and vice versa) for mutual exclusion.
  • experimental_is_unified_host is explicitly cleared when false (since false is a zero value and would otherwise be skipped by the merge, leaving a stale true from a previous login).
  • Scopes are preserved: when --scopes is not passed, existing scopes from the profile are read back and used for the OAuth challenge. This means the minted token matches the profile's scope configuration. Previously, scopes were always wiped and the default all-apis was used.

Inline login in auth token (cmd/auth/token.go)

Before: runInlineLogin (the "create new profile" path in the interactive auth token flow) saved a minimal set of fields and wiped everything else. It handled neither scope preservation nor experimental_is_unified_host clearing.

After:

  • Same auth credential clearing as auth login.
  • experimental_is_unified_host explicitly cleared when false.
  • Existing profile scopes are read back and used for the OAuth challenge (same as auth login).

databricks configure (cmd/configure/configure.go)

Before: PAT configure wiped all keys including auth_type, scopes, azure_environment, etc. — which was correct for auth_type/scopes but destroyed useful non-auth fields.

After:

  • Non-auth fields (cluster_id, warehouse_id, azure_environment, account_id, workspace_id, custom keys) are preserved.
  • OAuth metadata (auth_type, scopes, databricks_cli_path) is explicitly cleared — a PAT profile shouldn't keep OAuth artifacts.
  • All non-PAT auth credentials (basic auth, M2M, Azure, GCP, OIDC, metadata service) are explicitly cleared to prevent multi-auth conflicts.
  • experimental_is_unified_host is always cleared (PAT profiles don't use unified hosts).
  • serverless_compute_id is cleared when cluster_id is set — whether via --configure-cluster flag or via DATABRICKS_CLUSTER_ID env var (previously only the flag was checked).

Profile struct (libs/databrickscfg/profile/)

  • Added Scopes field to profile.Profile struct and read it from the INI file in LoadProfiles. This allows auth login and auth token to read existing scopes back from the profile.

Test plan

Unit tests (libs/databrickscfg/ops_test.go):

  • TestSaveToProfile_MergePreservesExistingKeys — token survives when only auth_type is written
  • TestSaveToProfile_ClearKeysRemovesSpecifiedKeys — token and cluster_id cleared, serverless_compute_id added
  • TestSaveToProfile_OverwritesExistingValues — host updated from old to new
  • TestSaveToProfile_ClearKeysOnNonExistentKeyIsNoop — clearing nonexistent keys doesn't error

Unit tests (cmd/configure/configure_test.go):

  • TestConfigureClearsOAuthAuthType — PAT configure clears auth_type and scopes from a previously OAuth-configured profile
  • TestConfigureClearsUnifiedHostMetadata — PAT configure clears experimental_is_unified_host while preserving account_id/workspace_id
  • TestConfigureClearsServerlessWhenClusterFromEnv — serverless_compute_id cleared when DATABRICKS_CLUSTER_ID env var provides cluster_id

Acceptance test (acceptance/cmd/auth/login/preserve-fields/):

  • Profile with cluster_id, warehouse_id, azure_environment, and custom_key → all four survive auth login re-login without --configure-cluster/--configure-serverless

Existing tests:

  • All 7 auth login acceptance tests pass (including configure-serverless which verifies cluster_id is still cleared)
  • All cmd/auth/ unit tests pass
  • All cmd/configure/ unit tests pass
  • make checks clean
  • make lintfull clean (0 issues)

🤖 Generated with Claude Code

SaveToProfile previously deleted all keys in a profile section before
writing, destroying fields like cluster_id, warehouse_id, scopes, and
custom keys on every `auth login` or `configure` invocation.

Switch to merge semantics: existing keys not mentioned in the new config
are preserved. Add a clearKeys variadic parameter so callers can
explicitly remove incompatible keys on auth-type transitions (e.g., OAuth
login clears PAT/M2M/Azure/GCP credentials; PAT configure clears
auth_type and OAuth metadata). Handle boolean zero-value for
experimental_is_unified_host by clearing it explicitly when false.

Scopes are preserved across re-logins in both auth login and inline
login (auth token): when --scopes is not passed, the existing profile's
scopes are read back and used for the OAuth challenge, keeping the
profile truthful and the user's preference intact. Non-auth config
properties like azure_environment are preserved. Clear
serverless_compute_id when cluster_id is set (via flag or env). Clear
experimental_is_unified_host and databricks_cli_path during PAT
configure.
@eng-dev-ecosystem-bot (Collaborator) commented Feb 25, 2026

Commit: 738f27e

Run: 22416294909

| Env | 🟨 KNOWN | 💚 RECOVERED | 🙈 SKIP | ✅ pass | 🙈 skip | Time |
|---|---|---|---|---|---|---|
| 🟨 aws linux | 7 | 1 | 7 | 268 | 768 | 9:51 |
| 🟨 aws windows | 7 | 1 | 7 | 270 | 766 | 8:08 |
| 💚 aws-ucws linux | | 8 | 7 | 364 | 684 | 7:23 |
| 💚 aws-ucws windows | | 8 | 7 | 366 | 682 | 4:56 |
| 💚 azure linux | | 2 | 9 | 271 | 766 | 9:08 |
| 💚 azure windows | | 2 | 9 | 273 | 764 | 6:19 |
| 💚 azure-ucws linux | | 2 | 9 | 369 | 680 | 9:10 |
| 💚 azure-ucws windows | | 2 | 9 | 371 | 678 | 5:06 |
| 💚 gcp linux | | 2 | 9 | 267 | 769 | 9:07 |
| 💚 gcp windows | | 2 | 9 | 269 | 767 | 7:09 |
15 interesting tests: 7 KNOWN, 7 SKIP, 1 RECOVERED
| Test Name | aws linux | aws windows | aws-ucws linux | aws-ucws windows | azure linux | azure windows | azure-ucws linux | azure-ucws windows | gcp linux | gcp windows |
|---|---|---|---|---|---|---|---|---|---|---|
| 🟨 TestAccept | 🟨 K | 🟨 K | 💚 R | 💚 R | 💚 R | 💚 R | 💚 R | 💚 R | 💚 R | 💚 R |
| 🙈 TestAccept/bundle/resources/permissions | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S |
| 🟨 TestAccept/bundle/resources/permissions/jobs/destroy_without_mgmtperms/with_permissions | 🟨 K | 🟨 K | 💚 R | 💚 R | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S |
| 🟨 TestAccept/bundle/resources/permissions/jobs/destroy_without_mgmtperms/with_permissions/DATABRICKS_BUNDLE_ENGINE=direct | 🟨 K | 🟨 K | 💚 R | 💚 R | | | | | | |
| 🟨 TestAccept/bundle/resources/permissions/jobs/destroy_without_mgmtperms/with_permissions/DATABRICKS_BUNDLE_ENGINE=terraform | 🟨 K | 🟨 K | 💚 R | 💚 R | | | | | | |
| 🟨 TestAccept/bundle/resources/permissions/jobs/destroy_without_mgmtperms/without_permissions | 🟨 K | 🟨 K | 💚 R | 💚 R | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S |
| 🟨 TestAccept/bundle/resources/permissions/jobs/destroy_without_mgmtperms/without_permissions/DATABRICKS_BUNDLE_ENGINE=direct | 🟨 K | 🟨 K | 💚 R | 💚 R | | | | | | |
| 🟨 TestAccept/bundle/resources/permissions/jobs/destroy_without_mgmtperms/without_permissions/DATABRICKS_BUNDLE_ENGINE=terraform | 🟨 K | 🟨 K | 💚 R | 💚 R | | | | | | |
| 🙈 TestAccept/bundle/resources/postgres_branches/basic | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S |
| 🙈 TestAccept/bundle/resources/postgres_branches/recreate | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S |
| 🙈 TestAccept/bundle/resources/postgres_branches/update_protected | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S |
| 🙈 TestAccept/bundle/resources/postgres_branches/without_branch_id | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S |
| 🙈 TestAccept/bundle/resources/postgres_endpoints/recreate | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S |
| 🙈 TestAccept/bundle/resources/synced_database_tables/basic | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S | 🙈 S |
| 💚 TestAccept/ssh/connection | 💚 R | 💚 R | 💚 R | 💚 R | 💚 R | 💚 R | 💚 R | 💚 R | 💚 R | 💚 R |
Top 34 slowest tests (at least 2 minutes):
duration env testname
6:31 gcp linux TestAccept/ssh/connection
5:56 aws-ucws linux TestAccept/ssh/connection
5:44 aws linux TestSecretsPutSecretStringValue
5:25 gcp linux TestSecretsPutSecretStringValue
5:02 azure linux TestSecretsPutSecretStringValue
5:01 aws linux TestAccept/ssh/connection
4:52 gcp windows TestAccept/ssh/connection
4:33 aws windows TestSecretsPutSecretStringValue
4:31 azure-ucws linux TestAccept/ssh/connection
4:17 gcp windows TestSecretsPutSecretStringValue
4:09 gcp linux TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=direct
3:54 azure linux TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=terraform
3:45 azure-ucws linux TestSecretsPutSecretStringValue
3:38 gcp windows TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=direct
3:22 aws windows TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=terraform
3:19 aws-ucws windows TestAccept/ssh/connection
3:17 azure-ucws linux TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=direct
3:15 aws windows TestAccept/ssh/connection
3:14 aws windows TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=direct
3:12 gcp linux TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=terraform
3:11 gcp windows TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=terraform
3:10 azure linux TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=direct
3:06 aws linux TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=terraform
2:51 azure windows TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=terraform
2:48 aws linux TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=direct
2:48 aws-ucws windows TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=terraform
2:47 aws-ucws linux TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=direct
2:39 azure-ucws windows TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=direct
2:34 azure windows TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=direct
2:25 azure windows TestSecretsPutSecretStringValue
2:22 aws-ucws linux TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=terraform
2:19 aws-ucws windows TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=direct
2:12 azure-ucws linux TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=terraform
2:08 azure-ucws windows TestAccept/bundle/resources/apps/inline_config/DATABRICKS_BUNDLE_ENGINE=terraform

```go
// auth-metadata fields from other auth methods that are incompatible with
// auth_type=databricks-cli. Identity fields (host, account_id, workspace_id)
// and client settings (http_timeout_seconds, etc.) are NOT cleared.
func oauthLoginClearKeys() []string {
```
Contributor commented:

This list can evolve if we support more auth methods in the future. That was probably the rationale behind nuking everything before and having an allow list. Not a blocker, just wanted to flag this.

```go
// when saving a PAT-based profile via `databricks configure`. This prevents
// stale auth credentials from other methods (OAuth, Azure, GCP, etc.) from
// remaining in the profile and causing multi-auth validation failures.
var patConfigureClearKeys = []string{
```
Contributor commented:

We have auth.ConfigAttrs in SDK that encapsulate all auth fields. Is that something we can leverage here to make this more robust?

For example, here's what we do to clear env vars in the CLI:

```go
// EnvVars returns the list of environment variables that the SDK reads to configure
// authentication.
// This is useful for spawning subprocesses since you can unset all auth environment
// variables to clean up the environment before configuring authentication for the
// child process.
func envVars() []string {
	var out []string

	for _, attr := range config.ConfigAttributes {
		if len(attr.EnvVars) == 0 {
			continue
		}

		out = append(out, attr.EnvVars...)
	}

	return out
}
```

Contributor commented:

Fields that are part of an authentication method like PAT are annotated as such.

```go
os.Stdin = inp

cmd := cmd.New(ctx)
cmd.SetArgs([]string{"configure", "--token", "--host", "https://host"})
```
Contributor commented:

You can just extend the existing acceptance test rather than having these separate tests.

Contributor commented:

I think these tests could be made into table tests. The test data will admittedly be a little big, but it will make the tests easier to read and maintain.

4 participants