The codebase is being roughed out, but finer details are likely to change.
- Runtime environment - Node.js
- Programming language - TypeScript
- Database - PostgreSQL
- Authentication - Keycloak
- Backend API server - NestJS
  - Express
  - TypeORM
  - Swagger
- Frontend React framework - Next.js
  - Formik
  - Tailwind CSS
  - class-validator
  - Cypress
- Deployment
  - GitHub Actions
  - Terraform
  - AWS CloudFront/S3/Lambda/RDS
| Workspace or Package | Description | README |
|---|---|---|
| `apps/api` | Backend NestJS API server | README |
| `apps/web` | Frontend Next.js React app | README |
| `packages/common` | Shared library | README |
| `packages/accessibility` | Accessibility tests | README |
When you create a pull request, be aware that a GitHub Actions workflow runs for each affected project to validate it:
- `pr-check-api` - format, lint, unit and integration tests
- `pr-check-web` - format, lint, and test
- `pr-check-common` - format, lint, unit tests, and build
- `pr-check-e2e` - run Cypress e2e and accessibility tests
- `pr-check-terraform` - show terraform plan
- Install Node.js 16+ as a runtime environment (e.g. via nvm)

- Install yarn as a package manager

- Install and run Docker Desktop

- Check out the repository

  ```
  $ git clone https://github.com/bcgov/internationally-educated-nurses ien
  $ cd ien
  ```

- Install dependencies

  ```
  $ yarn
  ```

- Define environment variables in `.env`

  Copy `.config/.env.example` to `.env`:

  ```
  $ cp .config/.env.example .env
  ```

  Define variables for the database connection:

  ```
  PROJECT=ien
  RUNTIME_ENV=local
  POSTGRES_HOST=db
  POSTGRES_USERNAME=
  POSTGRES_PASSWORD=
  POSTGRES_DATABASE=
  ```
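For reference, a filled-in env file might look like the sketch below. The credential values here are placeholders of my own choosing, not project defaults; pick your own. The sketch writes to `.env.demo` so it cannot clobber a real `.env`.

```shell
# Write a sample env file; every credential value below is a placeholder,
# not a project default.
cat > .env.demo <<'EOF'
PROJECT=ien
RUNTIME_ENV=local
POSTGRES_HOST=db
POSTGRES_USERNAME=ien
POSTGRES_PASSWORD=changeme
POSTGRES_DATABASE=ien
EOF

# Load it into the current shell and confirm one value
set -a
. ./.env.demo
set +a
echo "host=$POSTGRES_HOST database=$POSTGRES_DATABASE"
```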
### Database Initialization

The local `.pgdata` folder is mapped to a volume in the `db` container and is initialized on the first launch. If you change the env variables used to authenticate the database connection, delete `.pgdata` so that the database can be reinitialized.

### Teams Integration

```
TEAMS_ALERTS_WEBHOOK_URL=
```

If `TEAMS_ALERTS_WEBHOOK_URL` is defined and an exception occurs, the error message is sent to the Teams channel.
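As a sketch of what such an alert looks like on the wire: Teams incoming webhooks accept a JSON payload with a `text` field. The URL and message below are placeholders, and the actual `curl` call is left commented out:

```shell
# Placeholder URL -- a real incoming-webhook URL comes from the Teams
# channel's connector configuration.
TEAMS_ALERTS_WEBHOOK_URL="https://example.webhook.office.com/webhookb2/placeholder"

# Minimal payload shape accepted by Teams incoming webhooks
payload='{"text":"[ien-api] Unhandled exception: connection refused"}'

# Uncomment to actually post (requires a valid webhook URL):
# curl -sS -H 'Content-Type: application/json' -d "$payload" "$TEAMS_ALERTS_WEBHOOK_URL"

echo "$payload"
```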
The Make command `docker-run`, defined in the Makefile, builds and launches the containers.

- Create containers:

  ```
  $ make docker-run
  ```

- Stop containers:

  ```
  $ docker-compose stop
  ```

- Start containers:

  ```
  $ docker-compose start
  ```

- Destroy containers:

  ```
  $ make docker-down
  ```
Containers:
- ien_db
- ien_common
- ien_web
- ien_api
Containers are configured by `Dockerfile` and `docker-compose.yml`.
If you get a `DockerException`, make sure Docker Desktop is running:

```
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', ConnectionRefusedError(61, 'Connection refused'))
[80774] Failed to execute script docker-compose
```
It is recommended to run the database as a container in any case. On the other hand, you can run `common`, `api`, and `web` as Node.js processes:

```
$ make start-local
```

or run in `watch` mode:

```
$ make watch
```

### Database Hostname Resolution

The `POSTGRES_HOST` env variable is defined as `db`, which is used as a service name in `docker-compose.yml`. Since `api` uses it to connect to the database, and a service name resolves to an address only inside the Docker environment, you need to redefine it so it resolves on your local machine. You can set it to `localhost` if you persistently run the app this way. Otherwise, add `127.0.0.1 db` to `/etc/hosts`.
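The hosts-file option can be checked and applied with something like the sketch below. Appending to `/etc/hosts` needs sudo, so the script only reports what to run:

```shell
# Check whether the docker-compose service name `db` already resolves
# via the hosts file; if not, print the command to add it.
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"
ENTRY="127.0.0.1 db"

if grep -qE '(^|[[:space:]])db([[:space:]]|$)' "$HOSTS_FILE" 2>/dev/null; then
  echo "'db' already resolves via $HOSTS_FILE"
else
  echo "run: echo '$ENTRY' | sudo tee -a $HOSTS_FILE"
fi
```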
### API Calls

```
NEXT_PUBLIC_API_URL=http://localhost:4000/api/v1
```

To make successful requests from `web` to `api`, you need to set the `NEXT_PUBLIC_API_URL` environment variable. It is set by default when using Docker or the `make` commands, but if you run the application with the `next start` command in the `apps/web` folder, you should supply this value by creating a file named `.env.local` in `apps/web`.
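Creating that file is a one-liner. The sketch below writes it under a scratch directory so it won't touch a real checkout; drop the `DEMO_ROOT` prefix to do it for real:

```shell
# Use a scratch dir so this example doesn't touch a real checkout
DEMO_ROOT="$(mktemp -d)"
mkdir -p "$DEMO_ROOT/apps/web"

# The value matches the default shown above
printf 'NEXT_PUBLIC_API_URL=http://localhost:4000/api/v1\n' \
  > "$DEMO_ROOT/apps/web/.env.local"

cat "$DEMO_ROOT/apps/web/.env.local"
```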
To make breakpoints work in `watch` mode, set `sourceMap` to `true` in `tsconfig.json` and restart the apps.
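The relevant `tsconfig.json` fragment (all other compiler options omitted):

```json
{
  "compilerOptions": {
    "sourceMap": true
  }
}
```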
Unit and integration tests run against the API in the CI pipeline on pull request.
Requests to all endpoints are defined in FreshWorks's Postman IEN workspace. Except for the `version` endpoint, all of them require authentication. The IEN collection's pre-request script authenticates and saves `token` as an environment variable before each call.

Note that this only works for the `local` and `dev` environments, because the environments use different Keycloak servers (see the deployments section). To query `test` and `prod`, unset the `username` and `password` environment variables and set `token` to the one retrieved from the login request's response in the browser.
Run API and web unit tests with `make api-unit-test` and `make web-unit-test`.

The `api` and `web` integration tests start a test database with clean data before running the tests and destroy it afterward:

```
@make start-test-db
@yarn build
@NODE_ENV=test yarn test:e2e
@make stop-test-db
```

The test database container has no mapped volume, so all data is deleted when the container is removed by the `make stop-test-db` command.
Run API integration tests with `make api-integration-test`.

Run Cypress integration tests with `make test-e2e` or `make test-web`. `test-web` runs pa11y if the Cypress tests succeed.

If you want to open the Cypress UI while developing new test cases, run `make run-test-apps` to prepare the applications and then run `make open:cypress`.
### Seed data

The login test case should be run first to seed a test account and applicants, before any other cases that require logging in.

### Cypress session

Authentication with Keycloak is somewhat expensive and time-consuming. To reduce interaction with it, call `cy.login()` before each test case. It creates and stores a session, and subsequent calls restore it, saving the time of logging in again. When logging in as a user with a different role, pass that user's id as a parameter to create a new, isolated session:

```
cy.login('ien_hmbc')
```

All test users should have the same password.
See the accessibility README.
We have four environments where we run the application: local, development, test, and production.
`local` is normally each developer's laptop or workstation; the "How to run the app" section is meant for it. `dev`, `test`, and `prod` are on the OCIO Cloud Platform - AWS LZ2, under the project code `uux0vy`. They are provisioned by the same IaC, with slightly different variables.
The standard deployment process goes through the following steps:

- Run and test the app in the local environment while implementing a new feature.
- Once the task is done, create, review, and merge a pull request.
- Deploy to `dev`. Developers verify the app.
- Deploy to `test`. The QA team verifies the app; clients might use `test` to confirm that the app is ready to be released.
- Deploy to `prod`, with approval.
To trigger a deployment, run `make tag-{env}`, e.g. `make tag-dev`.

`dev`, `test`, and `prod` deployments to AWS are managed through Terraform configurations and GitHub Actions. They do not require access to LZ2. However, in order to access LZ2 for updating parameters, troubleshooting, or diagnosing the app, your IDIR would have to be onboarded onto LZ2 for the project code `uux0vy` - IEN.
`local` and `dev` use FreshWorks's Keycloak server at https://keycloak.freshworks.club. `test` and `prod` use the Ministry of Health's Keycloak servers at https://common-logon-test.hlth.gov.bc.ca and https://common-logon.hlth.gov.bc.ca respectively.

The notable difference is that MoH Keycloak doesn't allow `direct access grants`. Therefore, you can't use the pre-request script to authenticate in Postman.
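For reference, a direct access grant is the OAuth2 password grant: a single POST to Keycloak's token endpoint. The realm and client id below are assumptions, not the project's actual values, and older Keycloak versions serve under an `/auth` path prefix, so the actual call is left commented out:

```shell
KEYCLOAK_URL="https://keycloak.freshworks.club"
REALM="ien"        # assumption -- check the actual realm name
CLIENT_ID="ien"    # assumption -- check the actual client id

# Older Keycloak versions use the /auth prefix; newer ones drop it
TOKEN_ENDPOINT="$KEYCLOAK_URL/auth/realms/$REALM/protocol/openid-connect/token"

# Uncomment to request a token via a direct access (password) grant:
# curl -sS \
#   -d "grant_type=password" \
#   -d "client_id=$CLIENT_ID" \
#   -d "username=$USERNAME" \
#   -d "password=$PASSWORD" \
#   "$TOKEN_ENDPOINT"

echo "$TOKEN_ENDPOINT"
```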
The AWS infrastructure is created and updated using Terraform and Terraform Cloud as the backend.
The TFC keys required to run terraform can be found in SSM store in AWS.
Make commands for initializing, planning, and deploying resources are listed under `terraform commands` in the Makefile.
Service accounts are created with IAM permissions to deploy cloud resources: S3 static file uploads, Lambda function updates, CloudFront invalidations, etc.
All changes in the `main` branch are released to production by tagging with `make tag-prod`, along with the version number of the release. This creates a release tag and a production tag, which deploys to production once approved by the Leads / DevOps team members.
As a part of the production release approval:
- Validate the latest ZAP scan results to ensure no new vulnerabilities are introduced.
- Review the latest code quality analysis results in SonarCloud to ensure no new vulnerabilities are introduced.
Database backups occur on every deployment and also during the scheduled backup window.

To restore the database from a backup, perform the following steps in the specified order:

- Find the snapshot to restore in the AWS console. Snapshots created during a build are tagged with the commit SHA.
- Uncomment everything in the file `terraform/db_backup.tf`.
- Comment out everything in the file `terraform/db.tf`. This deletes the existing RDS cluster; if any debugging needs to be done on the bad RDS cluster, do not do this step.
- Update the local var `snapshot_name` to the snapshot name from the console.
- Uncomment the line `POSTGRES_HOST = aws_rds_cluster.pgsql_backup.endpoint` in `terraform/api.tf`.
- Comment out the line `POSTGRES_HOST = aws_rds_cluster.pgsql.endpoint` in `terraform/api.tf`.
- Run `ENV_NAME=prod make plan` and `ENV_NAME=prod make apply` (change `ENV_NAME` as needed). This should create a new RDS cluster from the snapshot provided and update `api` to point to the new backup cluster.
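The `terraform/api.tf` edit amounts to swapping one line; a sketch (the surrounding resource context is omitted and assumed):

```
# terraform/api.tf -- sketch only
# POSTGRES_HOST = aws_rds_cluster.pgsql.endpoint        # comment this out
POSTGRES_HOST = aws_rds_cluster.pgsql_backup.endpoint   # uncomment this
```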
All BC gov projects must pass the STRA (Security Threat and Risk Assessment Standard) and maintain the approved SoAR.

More details on the STRA here.

Regular reviews of ZAP scan and SonarQube results must be performed, especially before a release to production.

Current STRA and SoAR here.

The portal should be served over SSL; for the process for certificate renewal - Refer