---
subcategory: "Workspace"
---
This resource allows you to manage Databricks Repos.
-> Note To create a Repo from a private repository, you need to configure a Git token as described in the documentation. To set this token, you can use the databricks_git_credential resource.
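For illustration, here is a minimal sketch of setting such a token with databricks_git_credential; the username and the variable holding the token are placeholders, not values taken from this page:

```hcl
variable "github_token" {
  type      = string
  sensitive = true
}

resource "databricks_git_credential" "this" {
  git_provider          = "gitHub"
  git_username          = "user"           # placeholder username
  personal_access_token = var.github_token # placeholder variable holding the PAT
}
```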
You can declare a Terraform-managed Repo by specifying the `url` attribute of the Git repository. In addition to that, you may need to specify the `git_provider` attribute if the Git provider can't be detected by host name (it is detected automatically for cloud Git providers such as GitHub, GitLab, ...). If the `path` attribute isn't provided, the repo will be created in the user's repo directory (`/Repos/<username>/...`):
resource "databricks_repo" "nutter_in_home" {
url = "https://github.com/user/demo.git"
}
-> Note A Repo in the Databricks workspace is only changed if the Terraform state changes. This means that any manual changes to the managed repository won't be overwritten by Terraform if there are no local changes to the configuration. If the Repo in the Databricks workspace has been modified, applying configuration changes will fail.
The following arguments are supported:
- `url` - (Required) The URL of the Git repository to clone from. If the value changes, the repo is re-created.
- `git_provider` - (Optional, if it's possible to detect the Git provider by host name) Case-insensitive name of the Git provider. The following values are supported right now (could be subject to change, consult the Repos API documentation): `gitHub`, `gitHubEnterprise`, `bitbucketCloud`, `bitbucketServer`, `azureDevOpsServices`, `gitLab`, `gitLabEnterpriseEdition`, `awsCodeCommit`.
- `path` - (Optional) Path to put the checked out Repo. If not specified, the repo will be created in the user's repo directory (`/Repos/<username>/...`). If the value changes, the repo is re-created.
- `branch` - (Optional) Name of the branch for initial checkout. If not specified, the default branch of the repository will be used. Conflicts with `tag`. If `branch` is removed and `tag` isn't specified, the repository will stay at the previously checked out state.
- `tag` - (Optional) Name of the tag for initial checkout. Conflicts with `branch`.
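As a sketch of how these arguments combine, the following example checks out a specific branch into an explicit path from a self-hosted GitLab instance, so `git_provider` has to be set explicitly; the URL, path, and branch values are placeholders:

```hcl
resource "databricks_repo" "etl_project" {
  url          = "https://gitlab.example.com/group/etl-project.git" # placeholder URL
  git_provider = "gitLabEnterpriseEdition"                          # provider can't be detected from this host name
  path         = "/Repos/Production/etl-project"                    # placeholder path
  branch       = "main"                                             # placeholder branch
}
```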
The optional `sparse_checkout` configuration block contains attributes related to the sparse checkout feature in Databricks Repos. It supports the following attribute:

- `patterns` - array of paths (directories) that will be used for sparse checkout. The list of patterns can be updated in-place.

Adding or removing the `sparse_checkout` configuration block will lead to recreation of the repo.
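A sketch of a Repo with sparse checkout enabled; the repository URL and the directory names in `patterns` are placeholders:

```hcl
resource "databricks_repo" "monorepo" {
  url = "https://github.com/user/monorepo.git" # placeholder URL

  sparse_checkout {
    patterns = ["notebooks", "pipelines/dlt"] # only these directories are checked out
  }
}
```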
In addition to all arguments above, the following attributes are exported:
- `id` - Repo identifier.
- `commit_hash` - Hash of the HEAD commit at the time of the last executed operation. It won't change if you manually perform a pull operation via the UI or API.
- `workspace_path` - path on the Workspace File System (WSFS) in the form of `/Workspace` + `path`.

databricks_permissions can control which groups or individual users can access repos.
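For example, a hedged sketch of granting a group access to the Repo declared above via databricks_permissions; the group name and permission level are placeholders, so consult the databricks_permissions documentation for the levels supported for repos:

```hcl
resource "databricks_permissions" "repo_usage" {
  repo_id = databricks_repo.nutter_in_home.id

  access_control {
    group_name       = "data-engineers" # placeholder group name
    permission_level = "CAN_EDIT"       # assumed level, check the databricks_permissions docs
  }
}
```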
The Repo resource can be imported using the Repo ID (obtained via the UI or the API):
```bash
$ terraform import databricks_repo.this repo_id
```
The following resources are often used in the same context:
- End to end workspace management guide.
- databricks_git_credential to manage Git credentials.
- databricks_directory to manage directories in Databricks Workspace.
- databricks_pipeline to deploy Delta Live Tables.
- databricks_secret to manage secrets in Databricks workspace.
- databricks_secret_acl to manage access to secrets in Databricks workspace.
- databricks_secret_scope to create secret scopes in Databricks workspace.
- databricks_workspace_conf to manage workspace configuration for expert usage.