chapter_18_second_deploy.asciidoc


Deploying Our New Code

It’s time to deploy our brilliant new validation code to our live servers.

This will be a chance to see our automated deploy scripts in action for the second time. Let’s take the opportunity to make a little deployment checklist.

Note
At this point I always want to say a huge thanks to Andrew Godwin and the whole Django team. In the first edition, I had a whole long section devoted entirely to migrations. Since Django 1.7, migrations "just work", so I was able to drop it altogether. I mean, yes, this all happened nearly ten years ago, but still—open source software is a gift. We get such amazing things, entirely for free. It’s worth taking a moment to be grateful, now and again.
🚧 Warning, Under construction

This chapter has only just been rewritten as part of the third edition. Please send feedback!

You can refer back to [chapter_11_server_prep] for reminders on Ansible commands.

The Deployment Checklist

Let’s make a little checklist of pre-deployment tasks:

  1. We run all our unit and functional tests in the regular way. Just in case!

  2. We rebuild our Docker image, and run our tests against Docker, on our local machine.

  3. We deploy to staging, and run our FTs against staging.

  4. Now we can deploy to prod.

Tip
A deployment checklist like this should be a temporary measure. Once you’ve worked through it manually a few times, you should be looking to take the next step in automation: continuous deployment, using a CI/CD pipeline. We’ll touch on this in [chapter_25_CI].
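Until then, a checklist like this lends itself to scripting. Here’s a sketch in Python — the command strings mirror the ones we use in this chapter, but treat the wrapper itself as illustrative, not a substitute for a real CI/CD pipeline:

```python
import subprocess

# The pre-deployment checklist as data. These command strings mirror the
# ones used later in this chapter; this wrapper is a sketch, not a
# production deploy tool.
CHECKLIST = [
    "cd src && python manage.py test",
    "docker build -t superlists .",
    "TEST_SERVER=localhost:8888 python src/manage.py test functional_tests",
    "ansible-playbook --user=elspeth -i staging.ottg.co.uk, infra/deploy-playbook.yaml",
    "TEST_SERVER=staging.ottg.co.uk python src/manage.py test functional_tests",
    "ansible-playbook --user=elspeth -i www.ottg.co.uk, infra/deploy-playbook.yaml",
]


def run_checklist(commands, dry_run=False):
    """Run each checklist step in order, stopping at the first failure."""
    executed = []
    for cmd in commands:
        executed.append(cmd)
        if not dry_run:
            subprocess.run(cmd, shell=True, check=True)  # raises on failure
    return executed
```

The point of `check=True` is the same as the checklist’s ordering: never deploy to prod if an earlier step failed.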

A Full Test Run Locally

Of course, under the watchful eye of the Testing Goat, we’re running the tests all the time! But, just in case:

$ cd src && python manage.py test
[...]

Ran 39 tests in 15.222s

OK

Quick Test Run Against Docker

The next step closer to prod is running things in Docker. Being able to reproduce the production environment as faithfully as possible on our own machine was one of the main reasons we went to the trouble of containerising our app.

So let’s rebuild our Docker image and spin up a local Docker container:

$ docker build -t superlists . && docker run \
    -p 8888:8888 \
    --mount type=bind,source="$PWD/src/db.sqlite3",target=/src/db.sqlite3 \
    -e DJANGO_SECRET_KEY=sekrit \
    -e DJANGO_ALLOWED_HOST=localhost \
    -it superlists
 => [internal] load build definition from Dockerfile                  0.0s
 => => transferring dockerfile: 371B                                  0.0s
 => [internal] load metadata for docker.io/library/python:3.13-slim   1.4s
 [...]
 => => naming to docker.io/library/superlists                         0.0s
+ docker run -p 8888:8888 --mount
type=bind,source="$PWD/src/db.sqlite3",target=/src/db.sqlite3 -e
DJANGO_SECRET_KEY=sekrit -e DJANGO_ALLOWED_HOST=localhost -e EMAIL_PASSWORD -it
superlists
[2025-01-27 21:29:37 +0000] [7] [INFO] Starting gunicorn 22.0.0
[2025-01-27 21:29:37 +0000] [7] [INFO] Listening at: http://0.0.0.0:8888 (7)
[2025-01-27 21:29:37 +0000] [7] [INFO] Using worker: sync
[2025-01-27 21:29:37 +0000] [8] [INFO] Booting worker with pid: 8
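As a reminder, the `-e` flags inject environment variables into the container; on the Django side, `settings.py` consumes them along these lines (a simplified sketch of the idea, not our exact settings file):

```python
import os

# Simplified sketch: how env vars passed with `docker run -e` might be
# consumed in settings.py. Not our exact settings file.
SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY", "insecure-key-for-dev-only")
ALLOWED_HOSTS = (
    [os.environ["DJANGO_ALLOWED_HOST"]]
    if "DJANGO_ALLOWED_HOST" in os.environ
    else ["localhost", "127.0.0.1"]
)
```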

And now, in a separate terminal, we can run our FT suite against the Docker container:

$ TEST_SERVER=localhost:8888 python src/manage.py test functional_tests
[...]
......
 ---------------------------------------------------------------------
Ran 6 tests in 17.047s

OK
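The FTs know which server to target via the `TEST_SERVER` environment variable. The mechanism, simplified from our `FunctionalTest` base class (a sketch — the real `setUp` also starts Selenium and so on), is roughly:

```python
import os


def target_server_url(default_live_server_url):
    # If TEST_SERVER is set (eg "localhost:8888" for the Docker container,
    # or "staging.ottg.co.uk"), point the FTs at that host instead of the
    # temporary server that Django's LiveServerTestCase spins up for us.
    test_server = os.environ.get("TEST_SERVER")
    if test_server:
        return "http://" + test_server
    return default_live_server_url
```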

Looking good! Let’s move on to staging.

Staging Deploy and Test Run

Here’s our ansible-playbook command to deploy to staging:

$ ansible-playbook --user=elspeth -i staging.ottg.co.uk, infra/deploy-playbook.yaml -vv
[...]

PLAYBOOK: deploy-playbook.yaml ***********************************************
1 plays in infra/deploy-playbook.yaml

PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
[...]
ok: [staging.ottg.co.uk]

TASK [Install docker] **********************************************************
[...]
ok: [staging.ottg.co.uk] => {"cache_update_time": [...]

TASK [Build container image locally] *******************************************
[...]
ok: [staging.ottg.co.uk -> 127.0.0.1] => {"actions": ["Built image superlists:latest [...]

TASK [Export container image locally] ******************************************
ok: [staging.ottg.co.uk -> 127.0.0.1] => {"actions": [], "changed": false, "image": [...]

TASK [Upload image to server] **************************************************
ok: [staging.ottg.co.uk] => {"changed": false, "checksum": [...]

TASK [Import container image on server] ****************************************
ok: [staging.ottg.co.uk] => {"actions": ["Loaded image superlists:latest [...]

TASK [Ensure .env file exists] *************************************************
ok: [staging.ottg.co.uk] => {"changed": false, "dest": "/home/elspeth/superlists.env", [...]

TASK [Ensure db.sqlite3 file exists outside container] *************************
changed: [staging.ottg.co.uk] => {"changed": true, "dest": "/home/elspeth/db.sqlite3", [...]

TASK [Run container] ***********************************************************
changed: [staging.ottg.co.uk] => {"changed": true, "container": [...]

TASK [Run migration inside container] ******************************************
changed: [staging.ottg.co.uk] => {"changed": true, "rc": 0, "stderr": "", [...]

PLAY RECAP *********************************************************************
staging.ottg.co.uk         : ok=10   changed=3    unreachable=0    failed=0
skipped=0    rescued=0    ignored=0
[...]
Disconnecting from staging.ottg.co.uk... done.

And now we run the FTs against staging:

$ TEST_SERVER=staging.ottg.co.uk python src/manage.py test functional_tests
OK

Hooray!

Production Deploy

Since all is looking well we can deploy to prod!

$ ansible-playbook --user=elspeth -i www.ottg.co.uk, infra/deploy-playbook.yaml -vv

What to Do If You See a Database Error

Because our migration introduces a new uniqueness constraint, you may find that it fails to apply because some existing data already violates that constraint:

sqlite3.IntegrityError: columns list_id, text are not unique

At this point you have two choices:

  1. Delete the database on the server and try again. After all, it’s only a toy project!

  2. Learn about data migrations. See [data-migrations-appendix].
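You can reproduce the failure mode in miniature with Python’s stdlib sqlite3 module. This is just an illustration of why the migration chokes (the table and column names are made up to match the error message), not the actual migration:

```python
import sqlite3

# An in-memory table with the same shape as our list items, containing
# the kind of duplicate data that pre-dates the new constraint.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE lists_item (id INTEGER PRIMARY KEY, list_id INTEGER, text TEXT)"
)
db.execute("INSERT INTO lists_item (list_id, text) VALUES (1, 'buy milk')")
db.execute("INSERT INTO lists_item (list_id, text) VALUES (1, 'buy milk')")  # duplicate!

# Applying the unique constraint, as the migration would, now fails:
try:
    db.execute("CREATE UNIQUE INDEX unique_item ON lists_item (list_id, text)")
except sqlite3.IntegrityError as e:
    print("migration would fail:", e)
```

The migration can’t succeed until the duplicate rows are gone — which is exactly what the two options above are about.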

How to Delete the Database on the Staging Server

Here’s how you might do option (1):

$ ssh elspeth@staging.ottg.co.uk rm db.sqlite3

The ssh command takes an arbitrary shell command to run as its last argument, so we pass in rm db.sqlite3. We don’t need a full path because we keep the sqlite database in elspeth’s home folder.

Tip
Don’t do this in prod!

Wrap-Up: git tag the New Release

The last thing to do is to tag the release in our VCS—it’s important that we’re always able to keep track of what’s live:

$ git tag -f LIVE  # needs the -f because we are replacing the old tag
$ export TAG=$(date +DEPLOYED-%F/%H%M)
$ git tag $TAG
$ git push -f origin LIVE $TAG
Note
Some people don’t like to use push -f and update an existing tag, and will instead use some kind of version number to tag their releases. Use whatever works for you.
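The tag-naming scheme from the `date` invocation above can be expressed as a small helper — a sketch, using the long form of the same format string (`%F` is shorthand for `%Y-%m-%d`):

```python
from datetime import datetime


def deployed_tag_name(now=None):
    # Same format as `date +DEPLOYED-%F/%H%M`. Slashes are legal in git
    # tag names, so eg "DEPLOYED-2025-01-27/2129" is a valid tag.
    now = now or datetime.now()
    return now.strftime("DEPLOYED-%Y-%m-%d/%H%M")
```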

And on that note, we can wrap up [part2], and move on to the more exciting topics that comprise [part3]. Can’t wait!

Deployment Procedure Review

We’ve done a couple of deploys now, so this is a good time for a little recap:

  • Deploy to staging first.

  • Run our FTs against staging.

  • Deploy to live.

  • Tag the release.

Deployment procedures evolve and get more complex as projects grow, and it’s an area that can grow hard to maintain, full of manual checks and procedures, if you’re not careful to keep things automated. There’s lots more to learn about this, but it’s out of scope for this book. Look up "continuous delivery" for some background reading.