The Terraform state produced from this code is kept in S3 for distributed coordination. It is also public, which means it can double as a lightweight read-only API for Glider Labs infrastructure.
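For example, another Terraform configuration could consume the public state through a `terraform_remote_state` data source. This is only a sketch: the bucket name matches the one created during bootstrapping below, but the `key` and `region` values are assumptions.

```hcl
# Read-only view of the public state. The key and region here are
# assumptions -- check the actual backend configuration in this repo.
data "terraform_remote_state" "gl_infra" {
  backend = "s3"
  config = {
    bucket = "gl-infra"
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}

# Outputs would then be available as, e.g.:
# data.terraform_remote_state.gl_infra.outputs.some_output
```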
Name | Account | Description | Modules |
---|---|---|---|
infra.gl | 197859916071 | Top-level infrastructure, payer account | dns |
manifold.infra.gl | 055471703963 | Kubernetes + Sandbox cluster | manifold, sandbox |
dev.infra.gl | 233115379322 | For experiments, has user accounts | |
| | 364456219779 | Unused | |
Only `dev.infra.gl` allows manual experimentation via user accounts. All other accounts must be modified via PRs to `master`. Admins are allowed to run Terraform locally, but at the very least, any change that alters infrastructure state MUST be pushed to a public branch immediately afterward.
Although it should rarely be necessary, bootstrapping this infrastructure from scratch requires a few initial steps:
- Create a bucket called `gl-infra` in the main account for Terraform state
- Create stable DNS zones with `make zones`
- Provision everything else with `make apply`
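The steps above can be sketched as a shell sequence. This assumes AWS credentials for the main account and the repo's `zones`/`apply` make targets; the `DRY_RUN` guard is mine, and it defaults to only printing the commands.

```shell
# Bootstrap sketch. DRY_RUN=1 (the default here) just echoes each
# command; set DRY_RUN=0 to execute them for real.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run aws s3 mb s3://gl-infra   # one-time Terraform state bucket
run make zones                # stable DNS zones first
run make apply                # then provision everything else
```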
Any apps will need to be re-bootstrapped. This process is generally:
- Apply the secrets spec for the app in the appropriate namespace
- Create an `<app>-ci` service account with `manifold/scripts/service-account`
- Base64-encode the script's output and set it as the `KUBE_CONFIG` env var in the app's CI
- Build (or just rebuild) the app on CI
- Update DNS records with the created Kubernetes service endpoint
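The base64/`KUBE_CONFIG` step can be sketched as follows. The `ci-kubeconfig` filename is hypothetical, and demo content stands in for the real service-account output.

```shell
# Stand-in for the kubeconfig emitted by manifold/scripts/service-account
# (the filename ci-kubeconfig is hypothetical).
printf 'apiVersion: v1\nkind: Config\n' > ci-kubeconfig

# Base64-encode it so it fits in a single-line CI env var.
KUBE_CONFIG="$(base64 < ci-kubeconfig | tr -d '\n')"

# The CI job restores it before running kubectl:
echo "$KUBE_CONFIG" | base64 -d > restored-kubeconfig
```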
Manifold uses Kubernetes namespaces. There are currently three:

- `default` - the default namespace is for experiments and should be kept empty
- `gliderlabs` - where we run misc production daemons and services
- `cmd` - production namespace for Cmd.io and its release channels
Any major application that has multiple release channels should live in its own namespace. You should add that namespace to the `namespaces.yaml` spec in `manifold/specs` and then manually apply that spec.
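A new entry in that spec would look roughly like this. This is a sketch: the namespace name is hypothetical, and the actual layout of `namespaces.yaml` may differ.

```yaml
# Hypothetical addition to manifold/specs/namespaces.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myapp   # hypothetical namespace for a multi-channel app
```

It would then be applied manually with `kubectl apply -f manifold/specs/namespaces.yaml`.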
When developing or debugging a test/dev deployment, you'll probably be tearing everything down quite a bit to make sure it all comes up correctly. This is currently a two-step process:
- Tear down the Manifold Kubernetes cluster: `make -C manifold teardown`
- Destroy remaining resources with Terraform: `terraform destroy`