
Add cleanup options #127

Merged: 11 commits, May 25, 2023
4 changes: 3 additions & 1 deletion README.md
Original file line number Diff line number Diff line change
@@ -222,8 +222,10 @@ The following inputs can be used as `step.with` keys
#### **Application Inputs**
| Name | Type | Description |
|------------------|---------|------------------------------------|
| `app_port` | String | Port to be exposed for the container. Default is `3000` |
| `docker_full_cleanup` | Boolean | Set to `true` to run `docker-compose down` followed by `docker system prune --all --force --volumes`. Runs before `docker_install`. WARNING: Docker volumes will be destroyed. |
| `app_directory` | String | Relative path for the directory of the app (i.e. where the `docker-compose.yaml` file is located). This is the directory that is copied into the EC2 instance. Default is `/`, the root of the repository. |
| `app_directory_cleanup` | Boolean | Will generate a timestamped compressed file (in the home directory of the instance) and delete the app repo directory. Runs before `docker_install` and after `docker_full_cleanup`. |
| `app_port` | String | Port to be exposed for the container. Default is `3000` |
<hr/>
<br/>

8 changes: 8 additions & 0 deletions action.yaml
@@ -54,8 +54,14 @@ inputs:
default: ''

# Application
docker_full_cleanup:
description: 'Set to true to run docker-compose down followed by docker system prune --all --force --volumes.'
required: false
app_directory:
description: 'Relative path for the directory of the app (i.e. where `Dockerfile` and `docker-compose.yaml` files are located). This is the directory that is copied to the EC2 instance. Default is the root of the repo.'
app_directory_cleanup:
description: 'Will generate a timestamped compressed file and delete the app repo directory.'
required: false
app_port:
description: 'Port to expose for the app'
required: false
@@ -225,7 +231,9 @@ runs:
CREATE_SUB_CERT: ${{ inputs.create_sub_cert }}
NO_CERT: ${{ inputs.no_cert }}
BITOPS_FAST_FAIL: true
DOCKER_FULL_CLEANUP: ${{ inputs.docker_full_cleanup }}
APP_DIRECTORY: ${{ inputs.app_directory }}
APP_DIRECTORY_CLEANUP: ${{ inputs.app_directory_cleanup }}
CREATE_KEYPAIR_SM_ENTRY: ${{ inputs.create_keypair_sm_entry }}
ADDITIONAL_TAGS: ${{ inputs.additional_tags }}
AWS_ENABLE_POSTGRES: ${{ inputs.aws_enable_postgres }}
3 changes: 3 additions & 0 deletions operations/_scripts/deploy/deploy.sh
@@ -29,6 +29,9 @@ export LB_LOGS_BUCKET="$(/bin/bash $GITHUB_ACTION_PATH/operations/_scripts/gener
# Generate bitops config
/bin/bash $GITHUB_ACTION_PATH/operations/_scripts/generate/generate_bitops_config.sh

# Generate Ansible playbook
/bin/bash $GITHUB_ACTION_PATH/operations/_scripts/generate/generate_ansible_playbook.sh

# List terraform folder
echo "ls -al $GITHUB_ACTION_PATH/operations/deployment/terraform/"
ls -al $GITHUB_ACTION_PATH/operations/deployment/terraform/
57 changes: 57 additions & 0 deletions operations/_scripts/generate/generate_ansible_playbook.sh
@@ -0,0 +1,57 @@
#!/bin/bash

set -e

echo "In generate_ansible_playbook.sh"

# Helper to normalize toggle values (e.g. "True", "TRUE") to lowercase
# alphabetic form before comparison. This script runs in its own bash
# process, so it needs its own copy of the helper used further below;
# it mirrors the one defined in generate_tf_vars.sh.
alpha_only() {
  echo "$1" | tr -cd '[:alpha:]' | tr '[:upper:]' '[:lower:]'
}

echo -en "- name: Ensure hosts are up and running
hosts: bitops_servers
gather_facts: no
tasks:
- name: Wait for hosts to come up
wait_for_connection:
timeout: 300

- name: Ansible tasks
hosts: bitops_servers
become: true
tasks:
" > $GITHUB_ACTION_PATH/operations/deployment/ansible/playbook.yml

# Adding docker cleanup task to playbook
if [[ $DOCKER_FULL_CLEANUP = true ]]; then
echo -en "
- name: Docker Cleanup
include_tasks: tasks/docker_cleanup.yml
" >> $GITHUB_ACTION_PATH/operations/deployment/ansible/playbook.yml
fi

# Adding app directory cleanup task to playbook
if [[ $APP_DIRECTORY_CLEANUP = true ]]; then
echo -en "
- name: EC2 Cleanup
include_tasks: tasks/ec2_cleanup.yml
" >> $GITHUB_ACTION_PATH/operations/deployment/ansible/playbook.yml
fi

# Continue adding the defaults
echo -en "
- name: Include install
include_tasks: tasks/install.yml
- name: Include fetch
include_tasks: tasks/fetch.yml
# Notes on why unmounting is required can be found in umount.yml
- name: Unmount efs
include_tasks: tasks/umount.yml
" >> $GITHUB_ACTION_PATH/operations/deployment/ansible/playbook.yml
if [[ $(alpha_only "$AWS_EFS_CREATE") == true ]] || [[ $(alpha_only "$AWS_EFS_CREATE_HA") == true ]] || [[ $AWS_EFS_MOUNT_ID != "" ]]; then
echo -en "
- name: Mount efs
include_tasks: tasks/mount.yml
when: mount_efs
" >> $GITHUB_ACTION_PATH/operations/deployment/ansible/playbook.yml
fi
echo -en "
- name: Include start
include_tasks: tasks/start.yml
" >> $GITHUB_ACTION_PATH/operations/deployment/ansible/playbook.yml
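The script above assembles the playbook by starting from a fixed preamble and appending optional task includes only when their feature toggles are set. A miniature, self-contained sketch of that same pattern (the toggle value and temp-file path here are illustrative, not the action's real wiring):

```shell
#!/bin/bash
DOCKER_FULL_CLEANUP=true   # example toggle; the action reads this from its inputs

PLAYBOOK=$(mktemp)

# Fixed preamble, always written first.
echo -en "- name: Ansible tasks
  hosts: bitops_servers
  become: true
  tasks:
" > "$PLAYBOOK"

# Optional include, appended only when its toggle is set.
if [[ $DOCKER_FULL_CLEANUP = true ]]; then
  echo -en "    - name: Docker Cleanup
      include_tasks: tasks/docker_cleanup.yml
" >> "$PLAYBOOK"
fi

cat "$PLAYBOOK"
```

Because each optional section is an independent append, the order of the `if` blocks in the script is also the execution order of the generated tasks.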
16 changes: 12 additions & 4 deletions operations/_scripts/generate/generate_tf_vars.sh
@@ -46,9 +46,17 @@ echo "GITHUB_IDENTIFIER SS: [$GITHUB_IDENTIFIER_SS]"
# Function to generate the variable content based on the fact that it could be empty.
# This way, we only pass terraform variables that are defined, hence not overwriting terraform defaults.

generate_var () {
function alpha_only() {
echo "$1" | tr -cd '[:alpha:]' | tr '[:upper:]' '[:lower:]'
}

function generate_var () {
if [[ -n "$2" ]];then
echo "$1 = \"$2\""
if [[ $(alpha_only "$2") == "true" ]] || [[ $(alpha_only "$2") == "false" ]]; then
echo "$1 = $(alpha_only $2)"
else
echo "$1 = \"$2\""
fi
fi
}
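The new `alpha_only`/`generate_var` pair emits bare (unquoted) booleans for terraform while leaving every other non-empty value as a quoted string, and emits nothing for empty values so terraform defaults are not overwritten. A self-contained sketch of the two helpers as they appear in this diff (sample inputs are illustrative):

```shell
# Normalize a value to lowercase alphabetic characters only,
# so "True", "TRUE " and "true" all compare equal.
alpha_only() {
  echo "$1" | tr -cd '[:alpha:]' | tr '[:upper:]' '[:lower:]'
}

# Emit `name = value` for terraform: bare booleans, quoted strings,
# and nothing at all when the value is empty.
generate_var() {
  if [[ -n "$2" ]]; then
    if [[ $(alpha_only "$2") == "true" ]] || [[ $(alpha_only "$2") == "false" ]]; then
      echo "$1 = $(alpha_only "$2")"
    else
      echo "$1 = \"$2\""
    fi
  fi
}

generate_var app_port 3000         # -> app_port = "3000"
generate_var aws_create_efs True   # -> aws_create_efs = true
generate_var no_cert ""            # -> (no output; terraform default wins)
```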

@@ -118,7 +126,7 @@ create_root_cert=$(generate_var create_root_cert $CREATE_ROOT_CERT)
create_sub_cert=$(generate_var create_sub_cert $CREATE_SUB_CERT)
no_cert=$(generate_var no_cert $NO_CERT)
#-- EFS --#
if [[ $AWS_CREATE_EFS = true ]]; then
if [[ $(alpha_only "$AWS_EFS_CREATE") == true ]] || [[ $(alpha_only "$AWS_EFS_CREATE_HA") == true ]] || [[ $AWS_EFS_MOUNT_ID != "" ]]; then
aws_create_efs=$(generate_var aws_create_efs $AWS_CREATE_EFS)
aws_create_ha_efs=$(generate_var aws_create_ha_efs $AWS_CREATE_HA_EFS)
aws_create_efs_replica=$(generate_var aws_create_efs_replica $AWS_CREATE_EFS_REPLICA)
@@ -130,7 +138,7 @@ if [[ $AWS_CREATE_EFS = true ]]; then
aws_mount_efs_security_group_id=$(generate_var aws_mount_efs_security_group_id $AWS_MOUNT_EFS_SECURITY_GROUP_ID)
fi
#-- RDS --#
if [[ $AWS_ENABLE_POSTGRES = true ]]; then
if [[ $(alpha_only "$AWS_ENABLE_POSTGRES") == true ]]; then
# aws_security_group_name_pg=$(generate_var aws_security_group_name_pg $AWS_SECURITY_GROUP_NAME_PG) - Fixed
aws_enable_postgres=$(generate_var aws_enable_postgres $AWS_ENABLE_POSTGRES)
aws_postgres_engine=$(generate_var aws_postgres_engine $AWS_POSTGRES_ENGINE)
1 change: 1 addition & 0 deletions operations/deployment/ansible/ansible.cfg
@@ -8,6 +8,7 @@ callbacks_enabled = ansible.posix.profile_tasks

[callback_profile_tasks]
sort_order = none
output_limit = 50

[ssh_connection]

25 changes: 0 additions & 25 deletions operations/deployment/ansible/playbook.yml

This file was deleted.

19 changes: 19 additions & 0 deletions operations/deployment/ansible/tasks/docker_cleanup.yml
@@ -0,0 +1,19 @@
- name: Check Docker exists
ansible.builtin.command:
cmd: "docker --version"
register: docker_check
ignore_errors: true

- name: Stop and cleanup Docker
docker_compose:
project_src: "{{ app_install_root }}/{{ app_repo_name }}"
state: absent
remove_orphans: true
remove_images: all
remove_volumes: true
register: output
when: docker_check.rc == 0

- name: Prune Docker system
command: docker system prune --all --force --volumes
when: docker_check.rc == 0
49 changes: 49 additions & 0 deletions operations/deployment/ansible/tasks/ec2_cleanup.yml
@@ -0,0 +1,49 @@
- name: Generate timestamp
set_fact:
timestamp: "{{ ansible_date_time.date | regex_replace('[^0-9]','') }}-{{ ansible_date_time.hour }}{{ ansible_date_time.minute }}"

- name: Check if folder exists
stat:
path: "{{ app_install_root }}/{{ app_repo_name }}"
register: folder_stat

- name: Stop Docker
docker_compose:
project_src: "{{ app_install_root }}/{{ app_repo_name }}"
state: present
stopped: true
when: folder_stat.stat.exists

- name: Find the NFS volume in fstab
shell: "grep 'nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=612,retrans=2,noresvport' /etc/fstab | awk '{print $2}'"
register: nfs_mount_path
changed_when: false
failed_when: false
when: folder_stat.stat.exists

- name: Check if mounted
shell: "mount | grep {{ nfs_mount_path.stdout }}"
register: volume_mounted
changed_when: false
failed_when: false
when: folder_stat.stat.exists and nfs_mount_path.stdout != ""

- name: Unmount the NFS volume
shell: "timeout 5 umount {{ nfs_mount_path.stdout }} || timeout 5 umount -f {{ nfs_mount_path.stdout }} || timeout 5 umount -fl {{ nfs_mount_path.stdout }}"
ignore_errors: true
when: folder_stat.stat.exists and nfs_mount_path.stdout != "" and volume_mounted.stdout != ""

- name: Delete EFS mount directory
file:
path: "{{ nfs_mount_path.stdout }}"
state: absent
when: folder_stat.stat.exists

- name: Compress folder without mounted EFS
archive:
path: "{{ app_install_root }}/{{ app_repo_name }}"
dest: "{{ app_install_root }}/{{ app_repo_name }}-{{ timestamp }}.tar.gz"
format: gz
force_archive: true
remove: true
when: folder_stat.stat.exists
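Both cleanup task files locate the EFS mount by grepping `/etc/fstab` for the exact NFS option string written by `mount.yml` and taking the second column (the mount point). That pipeline can be exercised against a sample line; the fstab entry below is hypothetical, standing in for the one the mount task writes:

```shell
# Hypothetical fstab entry of the shape mount.yml produces:
# <efs_url>:/<target> <mount_point> nfs4 <options> 0 0
sample_fstab='fs-0abc.efs.us-east-1.amazonaws.com:/ /home/ubuntu/app/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=612,retrans=2,noresvport 0 0'

# Same grep + awk used by the tasks: match on the full option string,
# print field 2 (the mount point).
mount_path=$(echo "$sample_fstab" \
  | grep 'nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=612,retrans=2,noresvport' \
  | awk '{print $2}')
echo "$mount_path"   # -> /home/ubuntu/app/efs
```

Note that matching on the full option string is what ties these tasks to `mount.yml`: if the options there change (as the `timeo=600` to `timeo=612` edit in this PR does), the grep here must change in lockstep.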
2 changes: 1 addition & 1 deletion operations/deployment/ansible/tasks/mount.yml
@@ -57,7 +57,7 @@
ansible.posix.mount:
src: "{{ efs_url }}:/{{ efs_mount_target }}"
path: "{{ app_install_root }}/{{ app_repo_name }}/{{ application_mount_target }}"
opts: "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"
opts: "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=612,retrans=2,noresvport"
fstype: nfs4
state: mounted
boot: false
53 changes: 34 additions & 19 deletions operations/deployment/ansible/tasks/umount.yml
@@ -1,24 +1,39 @@
# Unmount EFS
- name: Check if HOST variable is defined
shell: "grep '^HOST_DIR=' {{ app_install_root }}/{{ app_repo_name }}/.env"
register: host_variable
changed_when: false
failed_when: false

# Reason for usage
# There is no reliable way to know when an unmount is necessary.
# If a deployment created an EC2 instance and an EFS, you mounted the EFS, and you later want to delete the EFS, how would you tell Ansible that an unmount is needed?
# Terraform is unaware of such potential state changes, and therefore there is no reliable way to know whether an unmount is necessary from a passed toggle.
#
# Unmounting every time ensures that if an EFS is destroyed, the mount is removed with it.
- name: Find the NFS volume in fstab
shell: "grep 'nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=612,retrans=2,noresvport' /etc/fstab | awk '{print $2}'"
register: nfs_mount_path
changed_when: false
failed_when: false
when: host_variable.stdout == ""

- name: Check if efs mount directory is present
stat:
path: "{{ app_install_root }}/{{ app_repo_name }}/{{ application_mount_target }}/"
register: check_efs_mount
- name: Check if mounted
shell: "mount | grep {{ nfs_mount_path.stdout }}"
register: volume_mounted
changed_when: false
failed_when: false
when: host_variable.stdout == "" and nfs_mount_path.stdout != ""

- name: Stat test
debug:
msg: "The file or directory exists"
when: check_efs_mount.stat.exists
- name: Unmount the NFS volume
shell: "timeout 5 umount {{ nfs_mount_path.stdout }} || timeout 5 umount -f {{ nfs_mount_path.stdout }} || timeout 5 umount -fl {{ nfs_mount_path.stdout }}"
ignore_errors: true
when: host_variable.stdout == "" and nfs_mount_path.stdout != "" and volume_mounted.stdout != ""

- name: Unmount efs volume
ansible.posix.mount:
path: "{{ app_install_root }}/{{ app_repo_name }}/{{ application_mount_target }}"
state: unmounted
when: check_efs_mount.stat.exists
- name: Delete EFS mount directory
file:
path: "{{ nfs_mount_path.stdout }}"
state: absent
when: host_variable.stdout == ""

- name: Remove entry from /etc/fstab
lineinfile:
path: /etc/fstab
search_string: 'nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=612,retrans=2,noresvport'
state: absent
become: true
when: host_variable.stdout == "" and nfs_mount_path.stdout != ""
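The same three-step unmount fallback appears in both `umount.yml` and `ec2_cleanup.yml`: a clean `umount` first, then force (`-f`), then lazy force (`-fl`), each capped at 5 seconds so a dead NFS server cannot hang the play indefinitely. A minimal shell sketch of that escalation (the function name is my own, not part of the repo):

```shell
# Escalating unmount: clean -> forced -> lazy forced, each attempt
# bounded by `timeout 5` so a hung NFS server cannot stall the caller.
unmount_escalating() {
  timeout 5 umount "$1" \
    || timeout 5 umount -f "$1" \
    || timeout 5 umount -fl "$1"
}
```

The function returns nonzero only when all three attempts fail, which is why the Ansible task that wraps this chain also sets `ignore_errors: true`: a path that is already unmounted should not fail the play.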