Commit 32d6650

Multiple terraform module enhancements
1 parent 40ff91e commit 32d6650

5 files changed: +54, -29 lines

README.md

Lines changed: 9 additions & 9 deletions
@@ -1,6 +1,6 @@
 # Database Lab Terraform Module
 
-This [Terraform Module](https://www.terraform.io/docs/language/modules/index.html) is responsible for deploying the [Database Lab Engine](https://gitlab.com/postgres-ai/database-lab) to cloud hosting providers.
+This [Terraform Module](https://www.terraform.io/docs/language/modules/index.html) can be used as a template for deploying the [Database Lab Engine](https://gitlab.com/postgres-ai/database-lab) to cloud hosting providers. Please feel free to tailor it to meet your requirements.
 
 Your source PostgreSQL database can be located anywhere, but DLE with other components will be created on an EC2 instance under your AWS account. Currently, only "logical" mode of data retrieval (dump/restore) is supported – the only available method for managed PostgreSQL cloud services such as RDS Postgres, RDS Aurora Postgres, Azure Postgres, or Heroku. "Physical" mode is not yet supported, but it will be in the future. More about various data retrieval options for DLE: https://postgres.ai/docs/how-to-guides/administration/data.
 
@@ -12,7 +12,7 @@ Your source PostgreSQL database can be located anywhere, but DLE with other comp
 - [Terraform Installed](https://learn.hashicorp.com/tutorials/terraform/install-cli) (minimal version: 1.0.0)
 - AWS [Route 53](https://aws.amazon.com/route53/) Hosted Zone (For setting up TLS) for a domain or sub-domain you control
 - You must have AWS Access Keys and a default region in your Terraform environment (See section on required IAM Permissions)
-- The DLE runs on an EC2 instance which can be accessed using a selected set of SSH keys uploaded to EC2. Use the Terraform parameter `aws_keypair` to specify which EC2 Keypair to use
+- The DLE runs on an EC2 instance which can be accessed using a selected set of SSH keys uploaded to EC2.
 - Required IAM Permissions: to successfully run this Terraform module, the IAM User/Role must have the following permissions:
   * Read/Write permissions on EC2
   * Read/Write permissions on Route53
@@ -49,16 +49,15 @@ The following steps were tested on Ubuntu 20.04 but supposed to be valid for oth
    ```
 1. Edit `terraform.tfvars` file. In our example, we will use Heroku demo database as a source:
    ```config
-   dle_version_full = "2.4.1"
+   dle_version_full = "2.5.0"
 
    aws_ami_name = "DBLABserver*"
-   aws_keypair = "YOUR_AWS_KEYPAIR"
 
    aws_deploy_region = "us-east-1"
    aws_deploy_ebs_availability_zone = "us-east-1a"
-   aws_deploy_ec2_instance_type = "t2.large"
+   aws_deploy_ec2_instance_type = "c5.large"
    aws_deploy_ec2_instance_tag_name = "DBLABserver-ec2instance"
-   aws_deploy_ebs_size = "40"
+   aws_deploy_ebs_size = "10"
    aws_deploy_ebs_type = "gp2"
    aws_deploy_allow_ssh_from_cidrs = ["0.0.0.0/0"]
    aws_deploy_dns_api_subdomain = "tf-test" # subdomain in aws.postgres.ai, fqdn will be ${dns_api_subdomain}-engine.aws.postgres
@@ -67,13 +66,14 @@ The following steps were tested on Ubuntu 20.04 but supposed to be valid for oth
    source_postgres_host = "ec2-3-215-57-87.compute-1.amazonaws.com"
    source_postgres_port = "5432"
    source_postgres_dbname = "d3dljqkrnopdvg" # this is an existing DB (Heroku example DB)
-   source_postgres_username = "postgres"
-
+   source_postgres_username = "bfxuriuhcfpftt" # in secret.tfvars, use: source_postgres_password = "dfe01cbd809a71efbaecafec5311a36b439460ace161627e5973e278dfe960b7"
    dle_debug_mode = "true"
    dle_retrieval_refresh_timetable = "0 0 * * 0"
    postgres_config_shared_preload_libraries = "pg_stat_statements,logerrors" # DB Migration Checker requires logerrors extension
 
    platform_project_name = "aws_test_tf"
+
+   ssh_public_keys_files_list = ["~/.ssh/id_rsa.pub"]
    ```
 1. Create `secret.tfvars` containing `source_postgres_password`, `platform_access_token`, and `vcs_github_secret_token`. An example:
    ```config
@@ -106,7 +106,7 @@ The following steps were tested on Ubuntu 20.04 but supposed to be valid for oth
    public_dns_name = "demo-api-engine.aws.postgres.ai" # todo: this should be URL, not hostname – further we'll need URL, with protocol – `https://`
    ```
 
-1. To verify result and check the progress, you might want to connect to the just-created EC2 machine using IP address or hostname from the Terraform output. In our example, it can be done using this one-liner (you can find more about DLE logs and configuration on this page: https://postgres.ai/docs/how-to-guides/administration/engine-manage):
+1. To verify result and check the progress, you might want to connect to the just-created EC2 machine using IP address or hostname from the Terraform output and ssh key from ssh_public_keys_files_list and/or ssh_public_keys_list variables. In our example, it can be done using this one-liner (you can find more about DLE logs and configuration on this page: https://postgres.ai/docs/how-to-guides/administration/engine-manage):
    ```shell
    echo "sudo docker logs dblab_server -f" | ssh ubuntu@18.118.126.25 -i postgres_ext_test.pem
    ```

dle-logical-init.sh.tpl

Lines changed: 24 additions & 10 deletions
@@ -86,17 +86,31 @@ EOF
 sudo systemctl enable envoy
 sudo systemctl start envoy
 
-#create zfs pools
-disks=(${dle_disks})
-for i in $${!disks[@]}; do
+# create zfs pools
+# Get the full list of disks available and then make attempts
+# to create zpool on each. Here we assume that the system disk
+# will be skipped because it already has a filesystem.
+# This is a "brute force" approach that we probably want to
+# rework, but for now we leave it as is because it seems that
+# `/dev/../by-id` doesn't really work for all EC2 types.
+
+disks=$(lsblk -ndp -e7 --output NAME) # TODO: this is not needed, used now for debug only
+
+i=1
+
+sleep 10 # Not elegant at all, we need a better way to wait till the moment when all disks are available
+
+# Show all disks in alphabetic order; "-e7" to exclude loop devices
+for disk in $disks; do
   sudo zpool create -f \
-    -O compression=on \
-    -O atime=off \
-    -O recordsize=128k \
-    -O logbias=throughput \
-    -m /var/lib/dblab/dblab_pool_0$i \
-    dblab_pool_0$i \
-    $${disks[$i]}
+    -O compression=on \
+    -O atime=off \
+    -O recordsize=128k \
+    -O logbias=throughput \
+    -m /var/lib/dblab/dblab_pool_$(printf "%02d" $i) \
+    dblab_pool_$(printf "%02d" $i) \
+    $disk \
+    && ((i=i+1)) # increment if succeeded
 done
 
 # Adjust DLE config
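The pool-creation loop above is hard to read inside a diff, so here is a standalone dry-run sketch of its naming and increment logic. The device names are assumptions for illustration, and the `zpool create` command is only echoed, never executed:

```shell
#!/bin/sh
# Dry-run sketch of the loop above: build zero-padded pool names
# (dblab_pool_01, dblab_pool_02, ...) and print the command the real
# template runs with sudo. /dev/xvdf and /dev/xvdg are assumed devices.
i=1
for disk in /dev/xvdf /dev/xvdg; do
  pool="dblab_pool_$(printf '%02d' "$i")"
  echo "zpool create -f -O compression=on -m /var/lib/dblab/$pool $pool $disk"
  i=$((i + 1))  # the real script increments only when zpool create succeeds
done
```

In the template itself, `&& ((i=i+1))` ties the increment to `zpool create` succeeding, so a disk on which pool creation fails (e.g., the system disk, which already has a filesystem) does not consume a pool number.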

terraform.tfvars

Lines changed: 9 additions & 4 deletions
@@ -1,15 +1,15 @@
-dle_version = "2.5.0"
+dle_version = "2.5.0" # it is also possible to use branch name here (e.g., "master")
 joe_version = "0.10.0"
 
 aws_ami_name = "DBLABserver*"
-aws_keypair = "YOUR_AWS_KEYPAIR"
 
 aws_deploy_region = "us-east-1"
 aws_deploy_ebs_availability_zone = "us-east-1a"
-aws_deploy_ec2_instance_type = "t2.large"
+aws_deploy_ec2_instance_type = "c5.large"
 aws_deploy_ec2_instance_tag_name = "DBLABserver-ec2instance"
-aws_deploy_ebs_size = "40"
+aws_deploy_ebs_size = "10"
 aws_deploy_ebs_type = "gp2"
+aws_deploy_ec2_volumes_names = ["/dev/xvdf", "/dev/xvdg",]
 aws_deploy_allow_ssh_from_cidrs = ["0.0.0.0/0"]
 aws_deploy_dns_api_subdomain = "tf-test" # subdomain in aws.postgres.ai, fqdn will be ${dns_api_subdomain}.aws.postgres.ai
 
@@ -24,3 +24,8 @@ dle_retrieval_refresh_timetable = "0 0 * * 0"
 postgres_config_shared_preload_libraries = "pg_stat_statements,logerrors" # DB Migration Checker requires logerrors extension
 
 platform_project_name = "aws_test_tf"
+
+# Edit this list to include all public keys that should be
+# placed to authorized_keys. Instead of ssh_public_keys_files_list,
+# it is possible to use ssh_public_keys_list containing public keys as text values.
+ssh_public_keys_files_list = ["~/.ssh/id_rsa.pub"]
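As the comment in the diff above notes, file paths are not the only way to provide SSH keys. A sketch of the inline alternative follows; the key string is a placeholder, not real key material:

```config
# Instead of file paths, pass public keys as text values:
ssh_public_keys_list = [
  "ssh-ed25519 AAAAC3Nza... user@workstation",  # placeholder key
]
```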

variables.tf

Lines changed: 11 additions & 6 deletions
@@ -27,11 +27,6 @@ variable "aws_deploy_ec2_instance_type" {
   default = "t2.micro"
 }
 
-variable "aws_keypair" {
-  description = "Key pair to access the EC2 instance"
-  default = "default"
-}
-
 variable "aws_deploy_allow_ssh_from_cidrs" {
   description = "List of CIDRs allowed to connect to SSH"
   default = ["0.0.0.0/0"]
@@ -67,6 +62,11 @@ variable "aws_deploy_ebs_availability_zone" {
   default = "us-east-1a"
 }
 
+variable "aws_deploy_ebs_encrypted" {
+  description = "If EBS volumes used by DLE are encrypted"
+  default = "true"
+}
+
 variable "aws_deploy_ebs_size" {
   description = "The size (GiB) for data volumes used by DLE"
   default = "1"
@@ -77,12 +77,17 @@ variable "aws_deploy_ebs_type" {
   default = "gp2"
 }
 
+# If we need to have more data disks, this array has to be extended.
+# TODO: change logic – user sets the number of disks only, not thinking about names
 variable "aws_deploy_ec2_volumes_names" {
   description = "List of paths for EBS volumes mounts"
+  # This list is for "non-nitro" instances. For "nitro" ones,
+  # the real disk names will be different and in fact these names
+  # will be ignored. However, we still need to pass something here
+  # to proceed with the disk attachment.
   default = [
     "/dev/xvdf",
     "/dev/xvdg",
-    "/dev/xvdh",
   ]
 }
 
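Taken together, the new and changed variables in this commit can be overridden from `terraform.tfvars`; a sketch (values here are illustrative, not the module defaults):

```config
aws_deploy_ebs_encrypted     = "true"                      # new in this commit
aws_deploy_ebs_size          = "10"                        # GiB per data volume
aws_deploy_ebs_type          = "gp2"
aws_deploy_ec2_volumes_names = ["/dev/xvdf", "/dev/xvdg"]  # one ZFS pool per attached disk
```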
volumes.tf

Lines changed: 1 addition & 0 deletions
@@ -9,6 +9,7 @@ resource "aws_volume_attachment" "ebs_att" {
 resource "aws_ebs_volume" "DLEVolume" {
   count = "${length(tolist(var.aws_deploy_ec2_volumes_names))}"
   availability_zone = "${var.aws_deploy_ebs_availability_zone}"
+  encrypted = "${var.aws_deploy_ebs_encrypted}"
   size = "${var.aws_deploy_ebs_size}"
   type = "${var.aws_deploy_ebs_type}"
   tags = {
