This is the administration interface for the SEC Directory system. It is built on Django and uses Docker to manage the build process and dependencies.
This application's primary purpose is to push directory data into a search index on Algolia. The main source for that data is the existing SEAS directory feed used for the www.seas.harvard.edu website; additional people and places can be entered manually through the Django admin interface. Those three data sources (the feed, manually entered people, and manually entered places) are then compiled into a single "Rollup" data model, which is added to the Algolia index.
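Conceptually, the push to Algolia (handled by the load_feed_people function shown further below) looks something like this sketch using the Algolia Python client. The record fields are illustrative placeholders rather than the real Rollup model, and the client calls shown follow the v2/v3 API, which may differ from the version pinned in app/requirements.txt:

import os
from algoliasearch.search_client import SearchClient

# Connect with the credentials from .env
client = SearchClient.create(os.environ["ALGOLIA_APP_ID"], os.environ["ALGOLIA_API_KEY"])
index = client.init_index(os.environ["ALGOLIA_INDEX"])

# In the real app these records come from the compiled Rollup model;
# the fields here are placeholders for illustration only
records = [
    {"objectID": "feed-1234", "name": "Jane Doe", "type": "person"},
    {"objectID": "place-5678", "name": "Example Hall", "type": "place"},
]

# Replace the index contents with the new rollup records
index.clear_objects()
index.save_objects(records, {"autoGenerateObjectIDIfNotExist": True})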
A separate sec-directory-client project hosts the application's public-facing code, which connects directly to Algolia to search and display results from the index.
For development, there is a docker-compose.yml file in the root of the project. You'll need to copy the template.env file to .env and fill in the appropriate values for the database connection and the Algolia application and index.
For Algolia, you'll need to provide the ALGOLIA_APP_ID that owns the index defined in ALGOLIA_INDEX, and provide an ALGOLIA_API_KEY that has permission to addObject, deleteObject and deleteIndex for that same index. For security reasons, it's best to create a "Secured API key" scoped to the index, and we recommend using different indices for production, development, and testing.
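As a rough guide, a filled-in .env might look something like this (placeholder values only; template.env is the authoritative list of variables, and the database values shown are assumptions based on the variables used elsewhere in this README):

# Algolia application, key, and index
ALGOLIA_APP_ID=YOURAPPID
ALGOLIA_API_KEY=your-secured-api-key
ALGOLIA_INDEX=sec-directory-dev

# Initial Django admin credentials
DJANGO_SUPERUSER_USERNAME=admin
DJANGO_SUPERUSER_PASSWORD=change-me

# Database connection
DATABASE=postgres
SQL_HOST=db
SQL_PORT=5432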
With the .env file created, run:
$ docker-compose up

This should bring up the app and database containers. From there, you can access the Django admin interface in the browser at http://localhost:8000/admin and log in with the credentials defined in the DJANGO_SUPERUSER_USERNAME and DJANGO_SUPERUSER_PASSWORD variables.
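For orientation, the development compose file wires things up roughly like this (an abbreviated sketch, not the actual file; the db service name and Postgres image are assumptions, though the Django service is named web):

services:
  web:
    build:
      context: .
      target: development  # the development stage of the multi-stage Dockerfile (see below)
    env_file: .env
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres
    env_file: .env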
Our Dockerfile uses a multi-stage build, and the docker-compose file targets the development stage, which has some additional system dependencies installed and runs the built-in Django web server. To install additional Python dependencies, add them to app/requirements.txt and rebuild the image with:
$ docker-compose build

To access Django's CLI tool, you can run:
$ docker-compose exec web python manage.py

That will list all of the available commands. For development, the most important ones will be:
# Run tests
$ docker-compose exec web python manage.py test
# Run database migrations
$ docker-compose exec web python manage.py migrate
# Run the load_feed_people function to populate the Algolia index defined in ALGOLIA_INDEX
$ docker-compose exec web python manage.py shell --command "from feedperson.utils import load_feed_people; load_feed_people()"

The docker image is built by a GitHub Action and published through the GitHub Container Registry. To run the latest version of the app:
# Pull the latest copy of the image
$ docker pull ghcr.io/seas-computing/sec-directory-server:stable
# Run the image, passing through the necessary environment variables from our .env file
$ docker run -it --rm --env-file .env ghcr.io/seas-computing/sec-directory-server:stable

When running in production, the DJANGO_SETTINGS_MODULE environment variable should be set to app.settings.production. By default, the production image will run a gunicorn process that listens on port 8000.
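For example, one way to run the published image locally with the production settings and expose gunicorn to the host (the explicit -e override and the port mapping here are just illustrations):

$ docker run -it --rm \
    --env-file .env \
    -e DJANGO_SETTINGS_MODULE=app.settings.production \
    -p 8000:8000 \
    ghcr.io/seas-computing/sec-directory-server:stable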
There is also a docker-compose.prod.yml file that runs the container in production mode behind an nginx proxy. This is primarily useful for testing the production settings; our real production deployment will be using AWS Elastic Container Service, Relational Database Service, and Elastic Load Balancer.
To run in production mode, run:
$ docker-compose --file docker-compose.prod.yml up --build

Then visit http://localhost:1337/admin in the browser.
In addition to serving the administration interface, the Django app can also run a function to import the directory feed and push that data to Algolia. In production, this is done through a separate task using the same container.
$ docker run -it --rm --env-file .env ghcr.io/seas-computing/sec-directory-server:stable python manage.py shell --command "from feedperson.utils import load_feed_people; load_feed_people()"

When running the container with an additional shell command like this, the app/entrypoint.sh script will not run the gunicorn or development server processes; it will run the specified command within the /app directory in the container. If the DATABASE environment variable is set to postgres, it will wait for the database defined by SQL_HOST and SQL_PORT to become available before proceeding.
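The database wait is roughly equivalent to the following (an illustrative sketch only; the actual check in app/entrypoint.sh may be implemented differently, and the netcat-based probe is an assumption):

# Sketch: block until the database named in .env accepts connections
if [ "$DATABASE" = "postgres" ]; then
    while ! nc -z "$SQL_HOST" "$SQL_PORT"; do
        sleep 0.5
    done
fi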
You can also force the container to run in production or development mode by passing --production or --development as the sole argument, or you can run the re-indexing process noted above with the --reindex flag. For example:
# For Production mode
$ docker run -it --rm --env-file .env ghcr.io/seas-computing/sec-directory-server:stable --production
# For Development mode
$ docker run -it --rm --env-file .env ghcr.io/seas-computing/sec-directory-server:stable --development
# Shortcut to run the re-indexing command
$ docker run -it --rm --env-file .env ghcr.io/seas-computing/sec-directory-server:stable --reindex

With no arguments, the image will default to running in production mode.
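The argument handling described above amounts to something like the following (again an illustrative sketch of app/entrypoint.sh rather than the real script; the exact gunicorn and runserver invocations are assumptions):

# Sketch: dispatch on the first argument passed to the container
case "$1" in
    --development)
        exec python manage.py runserver 0.0.0.0:8000
        ;;
    --reindex)
        exec python manage.py shell --command "from feedperson.utils import load_feed_people; load_feed_people()"
        ;;
    --production|"")
        # Default: serve the admin interface with gunicorn on port 8000
        exec gunicorn app.wsgi:application --bind 0.0.0.0:8000
        ;;
    *)
        # Any other arguments are run as-is (e.g. python manage.py ...)
        exec "$@"
        ;;
esac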