- Terraform is used to deploy infrastructure, including everything necessary to launch Kubernetes clusters - modules are expected to conclude by producing a kubeconfig file and context
  - `tf` files in `terraform/main/` specify whole testing environments
  - `tf` files in `terraform/modules/` implement components (platform-specific or platform-agnostic)
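As a sketch of the convention above, a cluster module might end with outputs like the following. This is illustrative only - output and resource names here are hypothetical and may not match the repository's actual modules:

```hcl
# Illustrative sketch: a cluster module is expected to conclude by making the
# cluster usable via kubectl - a kubeconfig file on disk plus a context name.
# Names below are hypothetical.
output "kubeconfig" {
  description = "Path to the kubeconfig file for the created cluster"
  value       = local_file.kubeconfig.filename
}

output "context" {
  description = "Context name to select within the kubeconfig file"
  value       = "my-cluster"
}
```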
- the `bin/setup.mjs` Node.js script runs Terraform to create Kubernetes clusters, then Helm/kubectl to deploy and configure the software under test (Rancher and/or any other components). It is designed to be idempotent
- the `bin/run_tests.mjs` Node.js script runs `k6` scripts in `k6/`, generating load. It is designed to be idempotent
- a Mimir-backed Grafana instance in its own cluster displays results and acts as long-term result storage
- create a new `terraform/main` subdirectory, copying over the `tf` files from `aws`
- edit `inputs.tf` to include any platform-specific information
- edit `main.tf` to use platform-specific providers, adding modules as appropriate
  - platform-specific modules are prefixed with the platform name (eg. `terraform/modules/aws_*`)
  - platform-agnostic modules are not prefixed
  - platform-specific wrappers are normally created for platform-agnostic modules (eg. `aws_k3s` wraps `k3s`)
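The wrapper pattern above can be sketched as follows. This is a hedged example - the variable and resource names are hypothetical and only show the general shape of a platform-specific wrapper calling a platform-agnostic module:

```hcl
# Illustrative sketch of terraform/modules/aws_k3s wrapping the
# platform-agnostic k3s module. Input names are hypothetical.
module "k3s" {
  source = "../k3s"

  # platform-agnostic inputs, filled in from AWS-specific resources
  server_names = aws_instance.server_nodes[*].private_dns
  agent_names  = aws_instance.agent_nodes[*].private_dns
  ssh_user     = "ec2-user"
}
```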
- adapt `outputs.tf` - please note that the exact structure is expected by the scripts in `bin/`, so change it with care
It is assumed that all created clusters will be able to reach one another via the same domain names, from the same network. That network might not be the same as the network of the machine running Terraform.
Created clusters may or may not be directly reachable from the machine running Terraform. In the current `aws` implementation, for example, all access goes through an SSH bastion host and tunnels, but that is an implementation detail and may change in the future. For new platforms there is no requirement - clusters might be directly reachable via an Internet-accessible FQDN, or sit behind a bastion host, Tailscale, Boundary or another mechanism. Structures in `outputs.tf` have been designed to accommodate all cases, in particular:
- `local_` variables refer to domain names and ports as used by the machine running Terraform
- `private_` variables refer to domain names and ports as used by the clusters in their network
- values may coincide
- `node_access_commands` are an optional convenience mechanism to allow a user to SSH into a particular node directly
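To illustrate how those pieces might fit together, here is a hedged sketch of the general shape `outputs.tf` could take. All attribute and cluster names below are hypothetical - the exact structure the `bin/` scripts expect is defined by the existing `terraform/main` subdirectories, so consult those before changing anything:

```hcl
# Illustrative sketch only - names are hypothetical, not the repository's actual schema.
output "clusters" {
  value = {
    upstream = {
      # address as seen from the machine running Terraform
      local_kubernetes_api_url = "https://localhost:6443"
      # address as seen from inside the clusters' own network
      private_kubernetes_api_url = "https://upstream.internal:6443"
      # optional convenience commands to SSH into individual nodes
      node_access_commands = {
        server-0 = "ssh -J bastion ec2-user@server-0"
      }
    }
  }
}
```

Note how the `local_` and `private_` values may coincide on platforms where clusters are directly reachable.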
A particular deployment platform can be selected via the `TERRAFORM_WORK_DIR` environment variable, eg.

```sh
export TERRAFORM_WORK_DIR=terraform/main/aws
./bin/teardown.mjs && ./bin/setup.mjs && ./bin/run_tests.mjs
```

See the `terraform/main` subdirectories for the currently available platforms.