This repository was archived by the owner on Mar 27, 2023. It is now read-only.
- AWS
- and last but not least, **AWS Cloud Development Kit (CDK)**
This README will start by describing some features of the application I'm building and how the different technologies are used together. I also share my experience in adopting CDK for managing cloud infrastructure on AWS. Finally, I discuss my solution to a specific question I have been trying to answer: what's the best way to scale Celery workers to zero to reduce Total Cost of Ownership?
## Development Philosophies and Best Practices
This image is then referenced by multiple `NestedStack`s that define Fargate services.
I'm not setting up an ECR image repository for my application, but I believe there is a way to do this. One question that I have about using `ecs.AssetImage` is about image lifecycle management. I know that you can implement rules about how many images you want to keep in an ECR image repository, but **I'm not sure how this works with CDK Image Assets**.
### Quick tour of `ApplicationStack`
Here's a very quick look at the structure of my CDK code, focusing on the `ApplicationStack`: the "master stack" or "skeleton stack" that contains all of the other stacks and constructs.
#### `hosted_zone`
We get the hosted zone using the `DOMAIN_NAME` and `HOSTED_ZONE_ID`. This is not a nested stack.
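A minimal sketch of this lookup, assuming the construct ID and the use of environment variables for `DOMAIN_NAME` and `HOSTED_ZONE_ID` (this fragment lives inside `ApplicationStack.__init__`):

```python
import os

from aws_cdk import aws_route53 as route53

# Look up an existing hosted zone by ID and name instead of creating one;
# this produces a reference, not a CloudFormation resource.
hosted_zone = route53.HostedZone.from_hosted_zone_attributes(
    self,
    "HostedZone",
    hosted_zone_id=os.environ["HOSTED_ZONE_ID"],
    zone_name=os.environ["DOMAIN_NAME"],
)
```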
#### `site_certificate`
The ACM Certificate that will be used for the given environment. This references the `full_domain_name` (environment + application).
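One way to express this, assuming the `full_domain_name` and `hosted_zone` variables from above (a sketch, not the project's exact code):

```python
from aws_cdk import aws_certificatemanager as acm

# DnsValidatedCertificate creates the DNS validation records
# in the hosted zone automatically.
site_certificate = acm.DnsValidatedCertificate(
    self,
    "SiteCertificate",
    domain_name=full_domain_name,
    hosted_zone=hosted_zone,
)
```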
#### `vpc_stack`
A `NestedStack` for defining VPC resources. This construct generates lots of CloudFormation resources. I currently have `nat_gateways` set to zero, and I have `PUBLIC` and `PRIVATE` subnets spread over 2 AZs. As I mentioned earlier, running zero NAT gateways is primarily a cost consideration; it is a best practice to use the tiered security model and run our Fargate tasks in private subnets instead of public subnets. I think I need to add NACL resources in this `NestedStack`.
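The VPC described above can be sketched roughly like this (subnet names are assumptions):

```python
from aws_cdk import aws_ec2 as ec2

# 2 AZs, no NAT gateways, one public and one private subnet per AZ.
# Note: depending on the CDK version, PRIVATE subnets with zero NAT
# gateways may need to be declared ISOLATED instead.
vpc = ec2.Vpc(
    self,
    "Vpc",
    max_azs=2,
    nat_gateways=0,
    subnet_configuration=[
        ec2.SubnetConfiguration(name="Public", subnet_type=ec2.SubnetType.PUBLIC),
        ec2.SubnetConfiguration(name="Private", subnet_type=ec2.SubnetType.PRIVATE),
    ],
)
```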
#### `alb_stack`
This defines the load balancer and the listeners that will send traffic to our Fargate services (such as our Django API). I was a little bit unclear about needing both a `listener` and an `https_listener`. I might be able to get away with removing the `listener` and only using the `https_listener`.
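A sketch of the HTTPS-only variant, assuming the `vpc` and `site_certificate` from earlier (construct IDs are assumptions):

```python
from aws_cdk import aws_elasticloadbalancingv2 as elbv2

alb = elbv2.ApplicationLoadBalancer(self, "ALB", vpc=vpc, internet_facing=True)

# Single HTTPS listener using the ACM certificate; service stacks register
# their targets against this listener later via add_targets().
https_listener = alb.add_listener(
    "HttpsListener",
    port=443,
    certificates=[elbv2.ListenerCertificate.from_certificate_manager(site_certificate)],
)
```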
#### `static_site_stack`
This stack defines the S3 bucket and policies that will be used for hosting our static site (Quasar PWA).
#### `backend_assets`
This stack defines the S3 bucket and policies for managing the static and media assets for Django.
#### `cloudfront`
This defines the CloudFront distribution that ties together several different parts of the application. It is the "front desk" of the application, and acts as a CDN and proxy. There is a separate CloudFront distribution for each environment (dev, staging, production). This stack also defines the Route53 `ARecord` that will be used to send traffic to a specific subdomain to the correct CloudFront distribution.
There are three `origin_configs` for each distribution:
1. `CustomOriginConfig` for the ALB
1. `CustomOriginConfig` for the S3 bucket website
1. `S3OriginConfig` for the Django static assets
Note that these `origin_configs` each have different `behaviors`, and that a list comprehension is used to keep this code DRY.
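The three origins and the `ARecord` can be sketched like this; the path patterns, bucket variables, and construct IDs are assumptions, not the project's actual values:

```python
from aws_cdk import aws_cloudfront as cloudfront
from aws_cdk import aws_route53 as route53
from aws_cdk import aws_route53_targets as route53_targets

api_paths = ["/api/*", "/admin/*", "/flower/*"]

distribution = cloudfront.CloudFrontWebDistribution(
    self,
    "Distribution",
    origin_configs=[
        # 1. CustomOriginConfig for the ALB
        cloudfront.SourceConfiguration(
            custom_origin_source=cloudfront.CustomOriginConfig(
                domain_name=alb.load_balancer_dns_name,
                origin_protocol_policy=cloudfront.OriginProtocolPolicy.HTTPS_ONLY,
            ),
            # the list comprehension keeps the per-path behaviors DRY
            behaviors=[
                cloudfront.Behavior(
                    path_pattern=path,
                    allowed_methods=cloudfront.CloudFrontAllowedMethods.ALL,
                )
                for path in api_paths
            ],
        ),
        # 2. CustomOriginConfig for the S3 bucket website
        cloudfront.SourceConfiguration(
            custom_origin_source=cloudfront.CustomOriginConfig(
                domain_name=static_site_bucket.bucket_website_domain_name,
                origin_protocol_policy=cloudfront.OriginProtocolPolicy.HTTP_ONLY,
            ),
            behaviors=[cloudfront.Behavior(is_default_behavior=True)],
        ),
        # 3. S3OriginConfig for the Django static assets
        cloudfront.SourceConfiguration(
            s3_origin_source=cloudfront.S3OriginConfig(s3_bucket_source=assets_bucket),
            behaviors=[cloudfront.Behavior(path_pattern="/static/*")],
        ),
    ],
)

# Route traffic for this environment's subdomain to the distribution
route53.ARecord(
    self,
    "AliasRecord",
    zone=hosted_zone,
    record_name=full_domain_name,
    target=route53.RecordTarget.from_alias(route53_targets.CloudFrontTarget(distribution)),
)
```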
#### `BucketDeployment`
This will deploy our static site assets to the S3 bucket defined in `static_site_stack` if the static site assets are present at the time of deployment. If they are not present, this means that there were no changes made to the frontend site.
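A sketch of this, with the built-assets path as an assumption; the existence check mirrors the "only deploy if the assets are present" behavior described above, since `Source.asset` fails on a missing path:

```python
import os

from aws_cdk import aws_s3_deployment as s3_deployment

assets_path = "./quasar-app/dist/pwa"  # hypothetical frontend build output

if os.path.isdir(assets_path):
    s3_deployment.BucketDeployment(
        self,
        "StaticSiteDeployment",
        sources=[s3_deployment.Source.asset(assets_path)],
        destination_bucket=static_site_bucket,
        distribution=distribution,
        distribution_paths=["/*"],  # invalidate cached copies on deploy
    )
```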
#### `ecs`
Defines the ECS Cluster.
#### `rds`
There is no L2 construct for `DBCluster`, so I used `CfnDBCluster` in order to use the Aurora Postgres `engine` and the `serverless` `engine_mode`.
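Roughly, the L1 resource looks like this; the database name, capacity numbers, and the `db_password` value (which should come from Secrets Manager) are assumptions:

```python
from aws_cdk import aws_rds as rds

db_subnet_group = rds.CfnDBSubnetGroup(
    self,
    "DBSubnetGroup",
    db_subnet_group_description="Subnet group for Aurora Serverless",
    subnet_ids=[subnet.subnet_id for subnet in vpc.private_subnets],
)

db_cluster = rds.CfnDBCluster(
    self,
    "DBCluster",
    engine="aurora-postgresql",
    engine_mode="serverless",
    database_name="app",
    master_username="postgres",
    master_user_password=db_password,  # assumed to be resolved from Secrets Manager
    db_subnet_group_name=db_cluster_subnet_group := db_subnet_group.ref,
    vpc_security_group_ids=[vpc.vpc_default_security_group],
    scaling_configuration=rds.CfnDBCluster.ScalingConfigurationProperty(
        auto_pause=True,
        min_capacity=2,
        max_capacity=4,
        seconds_until_auto_pause=300,
    ),
)
```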
#### `elasticache`
I also had to use L1 constructs for ElastiCache, but this one is pretty straightforward.
For both RDS and ElastiCache I used the `vpc_default_security_group` as the `source_security_group`. It might be a better idea to define another security group altogether, but this approach works.
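The ElastiCache resources can be sketched as follows, using the `vpc_default_security_group` as described above (node type and construct IDs are assumptions):

```python
from aws_cdk import aws_elasticache as elasticache

redis_subnet_group = elasticache.CfnSubnetGroup(
    self,
    "RedisSubnetGroup",
    description="Subnet group for Redis",
    subnet_ids=[subnet.subnet_id for subnet in vpc.private_subnets],
)

# A single small Redis node is enough for a Celery broker in this PoC
redis = elasticache.CfnCacheCluster(
    self,
    "RedisCluster",
    engine="redis",
    cache_node_type="cache.t2.micro",
    num_cache_nodes=1,
    cache_subnet_group_name=redis_subnet_group.ref,
    vpc_security_group_ids=[vpc.vpc_default_security_group],
)
```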
#### `AssetImage`
The Docker image built from the Django application code in the `backend` directory. This image is referenced in Fargate services and tasks.
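This is a one-liner; the image is built once at synth time and the same asset is shared by every service and task that references it (a sketch, assuming `backend` is relative to the CDK app):

```python
from aws_cdk import aws_ecs as ecs

# Build the backend image from the local Dockerfile in ./backend
image = ecs.AssetImage("./backend")
```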
#### `variables`
This section defines and organizes all of the environment variables and secrets for my application.
#### `backend_service`
It might be a better idea to replace this with `NetworkLoadBalancedFargateService`, but instead I implemented this with lower-level constructs just to be clear about what I'm doing. To add a load balanced service, here is what I did:
1. Define the Fargate task
1. Add the container to this task with other information (secrets, logging, `command`, etc.)
1. Give the task role the permissions it needs, such as access to secrets and S3. (It might be a good idea to refactor this into a function that can be called on `task_role`, but for now I am explicitly granting all permissions)
1. Create and add a port mapping
1. Define an ECS Fargate Service that references the previously defined Fargate task, and configure its security group
1. Add the service as a target to the `https_listener` defined previously in `alb_stack`.
1. Optionally configure autoscaling for the Fargate service
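The steps above can be sketched as follows; the container command, port, health check path, and scaling numbers are assumptions, not the project's actual values:

```python
from aws_cdk import aws_ecs as ecs
from aws_cdk import aws_elasticloadbalancingv2 as elbv2

# 1. Define the Fargate task
task = ecs.FargateTaskDefinition(self, "BackendTask", cpu=256, memory_limit_mib=512)

# 2. Add the container (logging, command, environment, secrets, etc.)
container = task.add_container(
    "backend",
    image=image,
    logging=ecs.LogDrivers.aws_logs(stream_prefix="backend"),
    command=["gunicorn", "backend.wsgi", "-b", "0.0.0.0:8000"],
)

# 3. Grant the task role the permissions it needs (S3 shown here)
assets_bucket.grant_read_write(task.task_role)

# 4. Create and add a port mapping
container.add_port_mappings(ecs.PortMapping(container_port=8000))

# 5. Define the Fargate service referencing the task
service = ecs.FargateService(
    self, "BackendService", cluster=cluster, task_definition=task
)

# 6. Register the service with the HTTPS listener from `alb_stack`
https_listener.add_targets(
    "Backend",
    port=80,
    targets=[service],
    health_check=elbv2.HealthCheck(path="/healthcheck/"),
)

# 7. Optionally configure autoscaling
scaling = service.auto_scale_task_count(min_capacity=1, max_capacity=4)
scaling.scale_on_cpu_utilization("CpuScaling", target_utilization_percent=75)
```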
#### `flower_service`
Flower is a monitoring utility for Celery. I had trouble getting this to work correctly, but I managed to make it work by adding a simple nginx container that passes traffic to the Flower container running in the same task: https://flower.readthedocs.io/en/latest/reverse-proxy.html
#### `celery_default_service`
This stack defines the worker service for the default Celery queue. This is discussed later in more detail, but the basic idea is to:
1. Define the Fargate task
1. Add the container
1. Define the Fargate service
1. Grant permissions
1. Configure autoscaling
#### `celery_autoscaling`
This stack defines the Lambda function and the schedule on which it is invoked. This stack is discussed in more detail later on.
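A sketch of the wiring; the handler path, runtime, and one-minute rate are assumptions:

```python
from aws_cdk import core
from aws_cdk import aws_events as events
from aws_cdk import aws_events_targets as events_targets
from aws_cdk import aws_lambda

# Lambda that decides how many Celery workers should be running
scale_fn = aws_lambda.Function(
    self,
    "CeleryScalingFunction",
    runtime=aws_lambda.Runtime.PYTHON_3_8,
    handler="index.handler",
    code=aws_lambda.Code.from_asset("awslambda"),  # hypothetical directory
)

# Invoke it on a fixed schedule
events.Rule(
    self,
    "CeleryScalingSchedule",
    schedule=events.Schedule.rate(core.Duration.minutes(1)),
    targets=[events_targets.LambdaFunction(scale_fn)],
)
```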
#### `backend_tasks`
These are administrative tasks that are executed by running manual GitLab CI jobs such as `migrate`, `collectstatic` and `createsuperuser`.
## Why `X`? Why not `Y`?
This section will compare some of the technology choices I have made in this project to other popular alternatives.