- Overview
- Prerequisites
- Basic Module Usage
- Inputs
- VPC Examples
- Features
- Important Notes
- Access Your Infrastructure
This Terraform module provides a complete infrastructure setup for Hyperspace. It creates everything needed to run Hyperspace in your AWS account.
The module creates a production-ready infrastructure with:
- Amazon EKS cluster with managed and self-managed node groups
- VPC with public and private subnets (Optional: deployment into an existing VPC)
- AWS Load Balancer Controller
- Internal and external ingress controllers
- Monitoring stack (Prometheus, Grafana, Loki)
- Backup solution (Velero)
- GitOps with ArgoCD
After deploying this Terraform module, install the Hyperspace Helm chart through ArgoCD using the Hyperspace Deployment Repository.
- Terraform >= 1.5.0
- AWS CLI configured with admin access
- kubectl installed
- Helm 3.x
- AWS account with admin access
- Domain name (for Route53 setup)
- Create a new Terraform configuration and call the module as follows:
module "hyperspace" {
source = "github.com/hyper-space-io/Hyperspace-terraform-module"
aws_region = "REGION"
domain_name = "DOMAIN.com"
environment = "ENVIRONMENT"
vpc_cidr = "10.50.0.0/16"
aws_account_id = "AWS_ACCOUNT_ID"
hyperspace_account_id = "HYPERSPACE_ACCOUNT_ID"
argocd_config = {
vcs = {
organization = "<org>"
repository = "<repo>"
gitlab/github = {
enabled = true
}
}
}
}
- Initialize and apply the Terraform configuration:
    terraform init
    terraform apply
- After the infrastructure is deployed, you can install the Hyperspace Helm chart through the Hyperspace Deployment Repository
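The steps above assume an AWS provider is already configured in the root module that calls this module; a minimal sketch, where the region value is a placeholder:

```hcl
# Minimal root-module provider configuration (illustrative only).
provider "aws" {
  region = "us-east-1" # should match the aws_region passed to the module
}
```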
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| aws_account_id | The AWS account ID where resources will be created | string | n/a | yes |
| hyperspace_account_id | The Hyperspace account ID (obtained from Hyperspace support) used for accessing Hyperspace resources | string | n/a | yes |
| aws_region | The AWS region where resources will be created | string | "us-east-1" | yes |
| domain_name | The main domain name used for creating subdomains for various services | string | "" | yes |
| environment | The deployment environment (e.g., dev, staging, prod) | string | n/a | yes |
| argocd_config | Configuration for ArgoCD installation and VCS integration | object | See below | yes |
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| terraform_role | IAM role for Terraform to assume | string | null | no |
| project | Name of the project | string | "hyperspace" | no |
| tags | Map of tags to add to all resources | map(string) | {} | no |
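For illustration, these optional inputs can be set alongside the required inputs from Basic Module Usage; the role ARN and tag values below are placeholders:

```hcl
module "hyperspace" {
  # ... required inputs as shown in Basic Module Usage ...

  terraform_role = "arn:aws:iam::123456789012:role/terraform-deployer" # placeholder ARN
  project        = "hyperspace"
  tags = {
    Team       = "platform"    # example tag
    CostCenter = "engineering" # example tag
  }
}
```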
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| vpc_cidr | The CIDR block for the VPC | string | "10.0.0.0/16" | no |
| existing_vpc_id | ID of an existing VPC to use | string | null | no |
| existing_private_subnets | List of existing private subnet IDs | list(string) | null | no |
| availability_zones | List of availability zones to deploy resources | list(string) | [] | no |
| num_zones | Number of availability zones to use for EKS nodes | number | 2 | no |
| enable_nat_gateway | Whether to enable NAT Gateway | bool | true | no |
| single_nat_gateway | Whether to use a single NAT Gateway or one per AZ | bool | false | no |
| create_vpc_flow_logs | Whether to enable VPC flow logs | bool | false | no |
| flow_logs_retention | Number of days to retain VPC flow logs | number | 14 | no |
| flow_log_group_class | CloudWatch log group class for flow logs | string | "STANDARD" | no |
| flow_log_file_format | Format for VPC flow logs | string | "parquet" | no |
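As an illustration, the NAT and flow-log inputs above could be combined like this (the values are examples, not recommendations):

```hcl
module "hyperspace" {
  # ... required inputs ...

  # Use one shared NAT Gateway instead of one per AZ
  single_nat_gateway = true

  # Enable VPC flow logs with a shorter retention window
  create_vpc_flow_logs = true
  flow_logs_retention  = 7
  flow_log_file_format = "parquet"
}
```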
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| create_eks | Whether to create the EKS cluster | bool | true | no |
| cluster_endpoint_public_access | Whether to enable public access to the EKS cluster endpoint | bool | false | no |
| enable_cluster_autoscaler | Whether to enable and install cluster-autoscaler | bool | true | no |
| worker_nodes_max | Maximum number of worker nodes allowed | number | 10 | no |
| worker_instance_type | Instance type(s) for EKS worker nodes | list(string) | ["m5n.xlarge"] | no |
| eks_additional_admin_roles | Additional IAM roles to add as cluster administrators | list(string) | [] | no |
| eks_additional_admin_roles_policy | IAM policy for the EKS additional admin roles | string | "AmazonEKSClusterAdminPolicy" | no |
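For example, the worker-node and admin-access inputs above might be tuned like this (the instance types and role ARN are placeholders):

```hcl
module "hyperspace" {
  # ... required inputs ...

  worker_instance_type = ["m5n.xlarge", "m5n.2xlarge"] # instance type fallbacks
  worker_nodes_max     = 20

  # Grant an additional IAM role cluster-admin access (placeholder ARN)
  eks_additional_admin_roles = [
    "arn:aws:iam::123456789012:role/platform-admins"
  ]
}
```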
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| create_public_zone | Whether to create the public Route 53 zone | bool | false | no |
| existing_public_zone_id | ID of an existing public Route 53 zone | string | null | no |
| existing_private_zone_id | ID of an existing private Route 53 zone | string | null | no |
| domain_validation_zone_id | ID of a public Route 53 zone to use for ACM certificate validation | string | null | no |
| additional_private_zone_vpc_ids | List of additional VPC configurations that should have access to the private hosted zone | list(object) | [] | no |
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| prometheus_endpoint_config | Configuration for Prometheus endpoint service | object | See below | no |
| grafana_privatelink_config | Configuration for Grafana PrivateLink | object | See below | no |
    argocd_config = {
      # Required fields
      vcs = {
        organization = string # Your Git organization/group name
        repository   = string # Your repository name

        # Choose either GitHub or GitLab
        github = {
          enabled = bool # Set to true to use GitHub
        }
        # OR
        gitlab = {
          enabled = bool # Set to true to use GitLab
        }
      }

      # Optional fields with defaults
      enabled = true
      privatelink = {
        enabled                     = false
        endpoint_allowed_principals = []
        additional_aws_regions      = []
      }
      rbac = {
        sso_admin_group        = null
        users_rbac_rules       = []
        users_additional_rules = []
      }
    }
    prometheus_endpoint_config = {
      enabled                 = false
      endpoint_service_name   = ""
      endpoint_service_region = ""
      additional_cidr_blocks  = []
    }
    grafana_privatelink_config = {
      enabled                     = false
      endpoint_allowed_principals = []
      additional_aws_regions      = []
    }
The module supports two deployment options for networking:
By default, the module creates a new VPC with the following configuration:
- CIDR block: `10.0.0.0/16` (configurable via `vpc_cidr`)
- Public and private subnets across 2 availability zones
- NAT Gateway for private subnet internet access
- All required tags and DNS settings automatically configured
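If you need a different layout, these defaults can be overridden in the module call; a minimal sketch, where the CIDR block and availability zone names are placeholders:

```hcl
module "hyperspace" {
  # ... required inputs ...

  vpc_cidr           = "10.50.0.0/16"
  num_zones          = 3
  availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"] # example AZs
  single_nat_gateway = false # one NAT Gateway per AZ
}
```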
To use an existing VPC, simply provide the VPC and subnet IDs:
module "hyperspace" {
source = "github.com/hyper-space-io/Hyperspace-terraform-module"
aws_region = "us-east-1"
domain_name = "example.com"
environment = "dev"
aws_account_id = "123456789012"
hyperspace_account_id = "123456789012"
# Existing VPC Configuration
existing_vpc_id = "vpc-0dc21447e050ee2b9"
existing_private_subnets = [
"subnet-063e309f79a853d4b",
"subnet-036a1e0052df1a89f",
"subnet-0661c345359809e05"
]
argocd_config = {
vcs = {
organization = "your-org"
repository = "your-repo"
github = {
enabled = true
}
}
}
}
Before using an existing VPC, ensure it meets these requirements:
- VPC Settings:
  - DNS hostnames must be enabled
  - DNS resolution must be enabled
- Subnet Tags:
  - Private subnets must have:
    - `kubernetes.io/role/internal-elb = 1`
    - `Type = private`
  - Public subnets must have (only if an external ALB is required, i.e., when `create_public_zone` is true):
    - `kubernetes.io/role/elb = 1`
    - `Type = public`
    - `kubernetes.io/cluster/${cluster-name} = shared`
- Kubernetes Cluster Tags: if the subnets already belong to an existing EKS cluster, they will carry `kubernetes.io/cluster/${cluster-name} = shared/owned`. You must also add the Hyperspace EKS cluster tag: `kubernetes.io/cluster/<HYPERSPACE_EKS_CLUSTER> = shared` (see the tagging sketch after this list)
- Network Requirements:
  - Private subnets must have NAT Gateway access
  - Sufficient IP addresses in each subnet for EKS nodes
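If the subnets are managed outside this configuration, the required tags can be added with `aws_ec2_tag` resources; a minimal sketch, where the subnet ID and cluster name are placeholders:

```hcl
# Tag an existing private subnet for use by the Hyperspace EKS cluster.
# The subnet ID and cluster name are placeholders - substitute your own values.
resource "aws_ec2_tag" "internal_elb" {
  resource_id = "subnet-063e309f79a853d4b"
  key         = "kubernetes.io/role/internal-elb"
  value       = "1"
}

resource "aws_ec2_tag" "cluster_shared" {
  resource_id = "subnet-063e309f79a853d4b"
  key         = "kubernetes.io/cluster/<HYPERSPACE_EKS_CLUSTER>"
  value       = "shared"
}
```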
The module provides flexible options for managing DNS zones and ACM certificate validation. For automatic DNS validation, you must provide either an existing public zone or a domain validation zone.
- Zone Management:
  - Create new private zone (default)
  - Create new public zone (`create_public_zone = true`)
  - Use existing public zone (`existing_public_zone_id`)
  - Use existing private zone (`existing_private_zone_id`)
  - Use domain validation zone (`domain_validation_zone_id`)
- Load Balancer Creation:
  - Internal ALB: always created
  - External ALB: created when either `create_public_zone = true` or `domain_validation_zone_id` is provided
- Certificate Validation:
  - Automatic: when either `existing_public_zone_id` or `domain_validation_zone_id` is provided
  - Manual: when using a new public zone or no public zone
    # Recommended: Automatic validation with existing public zone
    module "hyperspace" {
      existing_public_zone_id = "Z1234567890ABC" # For both DNS and validation
    }

    # Recommended: Automatic validation with separate validation zone
    module "hyperspace" {
      domain_validation_zone_id = "Z1234567890ABC" # For validation only
    }

    # Optional: Use existing private zone with automatic validation
    module "hyperspace" {
      existing_private_zone_id  = "Z0987654321XYZ"
      domain_validation_zone_id = "Z1234567890ABC"
    }

    # Optional: Create new public zone (requires manual validation)
    module "hyperspace" {
      create_public_zone = true
    }
Note: Options can be combined as long as they don't conflict (e.g., you can't provide both `create_public_zone = true` and `existing_public_zone_id`).
- Managed node groups with Bottlerocket OS
- Self-managed node groups for specialized workloads
- Cluster autoscaling
- IRSA (IAM Roles for Service Accounts)
- EBS CSI Driver integration
- EKS Managed Addons
- VPC with public and private subnets
- NAT Gateways
- VPC Endpoints
- Internal and external ALB ingress controllers
- Network policies
- VPC flow logs (optional)
- Connectivity to Auth0
- Network policies
- Security groups
- IAM roles and policies
- OIDC integration
- Prometheus and Grafana
- Loki for log aggregation
- OpenTelemetry for observability
- CloudWatch integration
- Core dump handling
- Velero for cluster backup
- EBS volume snapshots
- ArgoCD installation and SSO integration
- ECR credentials sync to gain access to private Hyperspace ECR repositories
PrivateLink provides a secure, private connection between Hyperspace and your infrastructure for observability and monitoring.
It's disabled by default and can be enabled through the `argocd_config.privatelink` and `grafana_privatelink_config` variables.
When enabled, it creates a secure, private connection that allows Hyperspace to access your deployed services (Hyperspace, ArgoCD, Grafana, Prometheus, and Loki) through AWS PrivateLink, ensuring all traffic stays within the AWS network and never traverses the public internet.
The module automatically creates the required DNS verification records when:
- `create_public_zone = true` (creates a new public zone), OR
- `existing_public_zone_id` is provided, OR
- `domain_validation_zone_id` is provided
If none of these conditions are met, you'll need to manually create the verification records:
- Open the AWS Console and navigate to the VPC service
- Go to Endpoint Services in the left sidebar
- Find your endpoint service named `<your-domain>.<environment> ArgoCD Endpoint Service`
- In the service details, locate:
  - Domain verification name
  - Domain verification value
- In the AWS Console, navigate to Route 53
- Go to Hosted zones
- Select your public hosted zone
- Click Create record and configure:
  - Record type: TXT
  - Record name: paste the domain verification name from the endpoint service details
  - Value: paste the domain verification value from the endpoint service details
  - TTL: 1800 seconds (30 minutes)
- Click Create records
- In the VPC Endpoint Service console, select your endpoint service
- Click Actions -> Verify domain ownership for private DNS name
- The verification process may take up to 30 minutes
- You can monitor the status in the VPC Endpoint Service console
- The status will change to "Available" once verification is complete
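If you prefer to manage this record with Terraform instead of the console, a minimal sketch of the TXT record; the zone ID, record name, and value are placeholders to be copied from the endpoint service details:

```hcl
# Domain-ownership TXT record for the endpoint service's private DNS name.
# All values are placeholders - copy them from the endpoint service details.
resource "aws_route53_record" "privatelink_domain_verification" {
  zone_id = "Z1234567890ABC"                          # your public hosted zone
  name    = "_example-verification-name.example.com"  # domain verification name
  type    = "TXT"
  ttl     = 1800
  records = ["example-verification-value"]            # domain verification value
}
```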
When using ArgoCD or Grafana with privatelink enabled, there are some important considerations:
- PrivateLink Configuration:
  - Both ArgoCD and Grafana can be configured to use AWS PrivateLink for secure access
  - This is controlled by the `argocd_config.privatelink.enabled` and `grafana_privatelink_config.enabled` variables
  - When enabled, the services will be accessible through consumer VPC endpoints in allowed accounts, controlled by `argocd_config.privatelink.endpoint_allowed_principals` and `grafana_privatelink_config.endpoint_allowed_principals`
- Deletion Process:
  - Before changing `argocd_config.privatelink.enabled` or `grafana_privatelink_config.enabled` to false, you must first remove all active VPC endpoint connections
  - To delete the endpoint services:
    - Go to AWS VPC Console > Endpoint Services
    - Find the ArgoCD or Grafana endpoint service
    - Select all endpoint connections
    - Click "Reject" to deny the connections
    - After all connections are rejected, you can delete the endpoint service
  - This is required because AWS prevents deletion of endpoint services that have active connections
    grafana_privatelink_config = {
      enabled                     = true
      endpoint_allowed_principals = ["123456789012"] # AWS account IDs allowed to connect
    }

    prometheus_endpoint_config = {
      enabled                 = true
      endpoint_service_name   = "prometheus-endpoint"
      endpoint_service_region = "us-east-1"
    }
After successful deployment, the following services will be available at these URLs:
- ArgoCD: `https://argocd.internal-<environment>.<your-domain>`
- Grafana: `https://grafana.internal-<environment>.<your-domain>`
By default, the module creates a private hosted zone with a wildcard DNS record for your subdomain. This means:
- Services are only accessible from within the VPC
- No public internet access is allowed
- You must be connected to the VPC (via VPN, Direct Connect, or EC2 instance) to access the services
- Username: the default ArgoCD admin username is `admin`
- Password: retrieve it using:

      kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d