Demo setup for my presentation at BSidesNYC 2023 on volatile memory analysis in Google Kubernetes Engine (GKE).
This repository sets up a GKE cluster, a forensic GCE instance, a Cloud Storage bucket, and the associated IAM roles and permissions in a default GCP project. You can then build an AVML container with the tools to conduct a memory dump, as well as an attacker container with a few demo scripts.
Grant the following roles to your gcloud principal in your GCP project (example commands follow this list).
Project IAM Admin
Service Account Admin
Service Account Token Creator
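For example, a sketch using placeholder values (substitute your own account email and project name; the role IDs below are the standard equivalents of the role names above):
$ gcloud projects add-iam-policy-binding YOUR_GCP_PROJECT_NAME --member="user:YOUR_EMAIL" --role="roles/resourcemanager.projectIamAdmin"
$ gcloud projects add-iam-policy-binding YOUR_GCP_PROJECT_NAME --member="user:YOUR_EMAIL" --role="roles/iam.serviceAccountAdmin"
$ gcloud projects add-iam-policy-binding YOUR_GCP_PROJECT_NAME --member="user:YOUR_EMAIL" --role="roles/iam.serviceAccountTokenCreator"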
Check your project's CPU quotas and limits against the resources you want to create: https://cloud.google.com/billing/docs/how-to/modify-project
Install the gcloud CLI: https://cloud.google.com/sdk/docs/install
$ gcloud auth application-default login
Set up your container development environment (https://cloud.google.com/migrate/containers/docs/config-dev-env) and review which APIs are enabled in the API dashboard (https://console.cloud.google.com/apis/dashboard?project=bsidesnyc2023, adjusting the project parameter for your own project). Then enable the required services:
$ gcloud services enable servicemanagement.googleapis.com servicecontrol.googleapis.com cloudresourcemanager.googleapis.com compute.googleapis.com container.googleapis.com containerregistry.googleapis.com cloudbuild.googleapis.com
$ ssh-add ~/.ssh/google_compute_engine
$ export gcp_project=YOUR_GCP_PROJECT_NAME
$ export zone=ZONE
$ docker build -t gcr.io/$gcp_project/avml_image:latest -f image_files/avml/Dockerfile .
$ docker build -t gcr.io/$gcp_project/attacker_image:latest -f image_files/attacker/Dockerfile .
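If the pushes below fail with an authentication error, you may need to configure Docker to authenticate to gcr.io with your gcloud credentials first:
$ gcloud auth configure-docker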
$ docker push gcr.io/$gcp_project/avml_image:latest
$ docker push gcr.io/$gcp_project/attacker_image:latest
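Optionally, confirm that both images are visible in the registry:
$ gcloud container images list --project $gcp_project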
This section explains how to set up your Terraform environment to create the required resources.
https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/getting_started
$ cd ./bsidesnyc2023/terraform_bsides
$ terraform init
Modify main.tf in ./bsidesnyc2023/terraform_bsides to reflect your GCP environment. Set the following variables (one way to supply them is sketched after this list):
var.pid
var.installation_user
var.installation_path
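One option, assuming these are declared as input variables in the Terraform configuration, is a terraform.tfvars file; the values below are placeholders:
$ cat > terraform.tfvars <<EOF
pid               = "YOUR_GCP_PROJECT_NAME"
installation_user = "YOUR_LOCAL_USERNAME"
installation_path = "/path/to/bsidesnyc2023"
EOF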
$ terraform apply -lock=true -auto-approve
$ terraform output >> terraform_resources.conf
$ gcloud container clusters get-credentials bsides-gke-cluster --zone $zone --project $gcp_project
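Before opening a shell in the attacker pod, you can verify that the demo pods are running:
$ kubectl get pods --namespace default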
$ kubectl exec --stdin --tty pod-node-affinity-bsides-attacker-pod --namespace default -- /bin/bash
Run:
$ ./actions.sh
Run any other commands you like, then exit the shell. When you log back in, you should see the updated bash history by typing:
$ history
After exiting the pod, set up a Python virtual environment for the collection script:
$ python3 -m venv .venv
$ source .venv/bin/activate
(.venv)$ pip3 install -r requirements.txt
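The collection script takes a GKE node name; you can list the cluster's node names with:
$ kubectl get nodes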
Execute the Python script, passing the GKE node name. For example:
$ python3 memory_collection_bsides.py --gke_node_name gke-bsides-gke-clust-bsides-gke-node--f72013e9-jm9c
Once the script finishes, you can access the AVML instance over SSH:
$ gcloud compute ssh "avml-instance" --tunnel-through-iap
This project is licensed under the terms of the GNU General Public License v3.0