# docs/app/README.md
You can switch which way many of these components are run by setting the `PY_RUN_APPROACH` environment variable:

* `export PY_RUN_APPROACH=local` will run these components natively
* `export PY_RUN_APPROACH=docker` will run these within Docker
Note that even with the native mode, many components like the DB and API will only ever run in Docker, and you should always make sure that any implementations work within Docker.

Running in the native/local approach may require additional packages to be installed on your machine.
* Run `poetry install --all-extras --with dev` to keep your Poetry packages up to date
* Load environment variables from the local.env file; see below for one option.

One option for loading all of your local.env variables is to install `direnv`: https://direnv.net/

You can configure `direnv` to then load the local.env file by creating an `.envrc` file in the /app directory.
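A minimal sketch of that file, using the `dotenv` helper from direnv's standard library (the repository's actual snippet may differ):

```sh
# .envrc — have direnv load the variable assignments from local.env
# into the environment. `dotenv` is part of direnv's stdlib.
dotenv local.env
```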
Most configuration options are managed by environment variables.
Environment variables for local development are stored in the [local.env](/backend/local.env) file. This file is automatically loaded when running. If running within Docker, this file is specified as an `env_file` in the [docker-compose](/backend/docker-compose.yml) file, and loaded [by a script](/backend/src/util/local.py) automatically when running unit tests (see running natively above for other cases).
Any environment variables specified directly in the [docker-compose](/backend/docker-compose.yml) file will take precedence over those specified in the [local.env](/backend/local.env) file.
# docs/app/database/database-testing.md

This document describes how the database is managed in the test suite.
## Test Schema
The test suite creates a new PostgreSQL database schema separate from the `public` schema that is used by the application outside of testing. This schema persists throughout the testing session and is dropped at the end of the test run. The schema is created by the `db` fixture in [conftest.py](../../../app/tests/conftest.py). The fixture also creates and returns an initialized instance of the [db.DBClient](../../../app/src/db/__init__.py) that can be used to connect to the created schema.
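The pattern is roughly the following (a minimal sketch assuming SQLAlchemy; the fixture name matches the doc, but everything else is illustrative rather than the project's actual code):

```python
import uuid

import pytest
import sqlalchemy


@pytest.fixture(scope="session")
def db():
    # Use a uniquely named schema so concurrent test runs cannot collide.
    schema_name = f"test_schema_{uuid.uuid4().hex}"
    # Connection URL is illustrative; the real fixture reads it from config.
    engine = sqlalchemy.create_engine("postgresql://localhost:5432/app")
    with engine.begin() as conn:
        conn.execute(sqlalchemy.text(f'CREATE SCHEMA "{schema_name}"'))
    try:
        # The real fixture yields a db.DBClient bound to the new schema;
        # yielding the engine keeps this sketch self-contained.
        yield engine
    finally:
        with engine.begin() as conn:
            conn.execute(sqlalchemy.text(f'DROP SCHEMA "{schema_name}" CASCADE'))
        engine.dispose()
```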
Note that [PostgreSQL schemas](https://www.postgresql.org/docs/current/ddl-schemas.html) are entirely different concepts from [Schema objects in the OpenAPI specification](https://swagger.io/docs/specification/data-models/).
3. If you are using an M1 Mac, you will need to install Postgres as well: `brew install postgresql` (the `psycopg2-binary` package is built from source on M1 Macs, which requires the Postgres executable to be present)
4. You'll also need [Docker Desktop](https://www.docker.com/products/docker-desktop/)
# docs/app/monitoring-and-observability/logging-configuration.md

This document describes how logging is configured in the application.
We have two separate ways of formatting the logs, which are controlled by the `LOG_FORMAT` environment variable.
`json` (default) -> Produces JSON formatted logs, which are machine-readable. A representative record looks something like the following (abridged and illustrative; the field names mirror Python's standard `logging` record attributes):
```json
{
  "name": "src.app",
  "levelname": "INFO",
  "funcName": "create_app",
  "created": "1663261542.0465896",
  "thread": "8424907264",
  "threadName": "MainThread",
  "process": "87108",
  "message": "initializing app"
}
```
`human-readable` (set by default in `local.env`) -> Produces color-coded logs for local development or troubleshooting.
## PII Masking
The [src.logging.pii](../../../app/src/logging/pii.py) module defines a filter that applies to all logs and automatically masks data fields that look like social security numbers.
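A minimal sketch of how such a filter can work (the regex, mask, and handler wiring below are assumptions; see the linked module for the real implementation):

```python
import logging
import re

SSN_PATTERN = re.compile(r"\b\d{3}-?\d{2}-?\d{4}\b")


class PiiMaskingFilter(logging.Filter):
    """Mask anything in the log message that looks like an SSN."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SSN_PATTERN.sub("*********", str(record.msg))
        return True  # always keep the record; we only rewrite it


# Attach the filter to the handler so it runs for every emitted record.
handler = logging.StreamHandler()
handler.addFilter(PiiMaskingFilter())
logging.basicConfig(handlers=[handler], level=logging.INFO)

logging.getLogger(__name__).info("ssn received: 123-45-6789")  # masked on output
```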
## Audit Logging
* The [src.logging.audit](../../../app/src/logging/audit.py) module defines a low-level audit hook that logs events that may be of interest from a security point of view, such as dynamic code execution and network requests.
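For illustration, a hook of this kind can be registered with Python's `sys.addaudithook`; the event list and log call below are assumptions, not the module's actual behavior:

```python
import logging
import sys

logger = logging.getLogger("audit")

# Event names come from CPython's documented audit events table.
EVENTS_OF_INTEREST = {"exec", "compile", "socket.connect", "subprocess.Popen"}


def audit_hook(event: str, args: tuple) -> None:
    if event in EVENTS_OF_INTEREST:
        logger.info("audit event", extra={"audit.event_name": event})


sys.addaudithook(audit_hook)
```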
# docs/app/monitoring-and-observability/logging-conventions.md

Logging is a valuable tool for engineering teams to support products in production.
### Make code observability a primary tool for debugging and reasoning about production code
When a user runs into an issue in production, logs offer one of the primary ways of understanding what happened. This is especially important for situations where we can’t or don’t know how to reproduce the issue. In general, it is not feasible to attach a debugger to production systems, or to set breakpoints and inspect the state of the application in production, so logs offer a way to debug through “print statements”.
### Make it easy for on-call engineers to search for logs in the codebase
Log querying systems are often limited in their querying abilities.
### Log event type
- **INFO** – Use `INFO` events to log something informational. This can be information that's useful for investigations, debugging, or tracking metrics. Note that events such as a user or client error (such as validation errors or 4XX bad request errors) should use `INFO`, since those are expected to occur as part of normal operation and do not necessarily indicate anything wrong with the system. Do not use `ERROR` or `WARNING` for user or client errors to avoid cluttering error logs.
- **ERROR** – Use `ERROR` events if the system fails to complete some business operation. This can happen if there is an unexpected exception or failed assertion. Error logs can be used to trigger an alert to on-call engineers to look into a potential issue.
- **WARNING** – Use `WARNING` to indicate that there *may* be something wrong with the system but that we have not yet detected any immediate impact on the system's ability to successfully complete the business operation. For example, you can warn on failed soft assumptions and soft constraints. Warning logs can be used to trigger notifications that engineers need to look into during business hours.
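For example (an illustrative sketch; `save_account` and `update_account` are hypothetical): a client validation failure is logged at `INFO`, while an unexpected failure of the business operation is logged at `ERROR`:

```python
import logging

logger = logging.getLogger(__name__)


def save_account(account_id: str, payload: dict) -> None:
    ...  # hypothetical persistence helper


def update_account(account_id: str, payload: dict) -> None:
    if "email" not in payload:
        # Client error (surfaces as a 4XX): expected during normal
        # operation, so log at INFO rather than WARNING or ERROR.
        logger.info("account update validation failed",
                    extra={"account.account_id": account_id})
        return
    try:
        save_account(account_id, payload)
    except Exception:
        # The system failed to complete the business operation.
        logger.exception("account update failed",  # logs at ERROR with traceback
                         extra={"account.account_id": account_id})
        raise
```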
### Log messages
- **Standardized log messages** – Consistently formatted and worded log messages are easier to read when viewing many logs at a time, which reduces the chance of human error when interpreting logs. It also makes it easier to write queries by enabling engineers to guess queries and allowing New Relic autocomplete to show available log message options to filter by.
- **Statically defined log messages** – Avoid putting dynamic data in log messages. Static messages are easier to search for in the codebase and easier to query for without resorting to RLIKE queries with regular expressions or LIKE queries, as shown in the sketch below.
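A quick illustration of the difference (`user_id` here is an illustrative value):

```python
import logging

logger = logging.getLogger(__name__)
user_id = "6de26d9c"  # illustrative value

# Statically defined message; the dynamic data goes in attributes.
logger.info("user logged in", extra={"user.user_id": user_id})

# Avoid: interpolated data makes the message harder to find in the
# codebase and harder to query without LIKE/RLIKE patterns.
logger.info(f"user {user_id} logged in")
```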
### Attributes
- **Log primitives not objects** – Explicitly list which attributes you are logging to avoid unintentionally logging PII. This also makes it easier for engineers to know what attributes are available for querying, or to search for the parts of the codebase that log these attributes.
- **Structured metadata in custom attributes** – Put metadata in custom attributes (not in the log message) so that it can be used in queries more easily. This is especially helpful when the attributes are used in "group by" clauses to avoid needing to use more complicated queries.
- **System identifiers** – Log all relevant system identifiers (UUIDs, foreign keys).
- **Correlation IDs** – Log IDs that can be shared between front-end events and back-end logs, and ideally even sent to external services.
- **Discrete or discretized attributes** – Log all useful non-PII discrete attributes (enums, flags) and discretized versions of continuous attributes (e.g. comment → has_comment, household → is_married, has_dependents).
- **Denormalized data** – Include relevant metadata from related entities. Including denormalized (i.e. redundant) data makes queries easier and faster, and removes the need to join or self-join between datasets, which is not always feasible.
- **Fully-qualified, globally consistent attribute names** – Use consistent attribute names everywhere. Use fully qualified attribute names (e.g. `application.application_id` instead of `application_id`) to avoid naming conflicts. A sketch combining these conventions follows below.
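Pulling these conventions together, a sketch of a log call that follows them (`Application` and `log_submission` are hypothetical, not from the codebase):

```python
import logging
from dataclasses import dataclass, field

logger = logging.getLogger(__name__)


@dataclass
class Application:
    """Hypothetical domain object used only for this illustration."""

    application_id: str
    user_id: str
    dependents: list = field(default_factory=list)


def log_submission(application: Application, request_id: str) -> None:
    logger.info(
        "application submitted",  # static message
        extra={
            "application.application_id": application.application_id,  # system identifier
            "request.request_id": request_id,  # correlation id
            "application.has_dependents": len(application.dependents) > 0,  # discretized
            "user.user_id": application.user_id,  # denormalized from the related user
        },
    )
```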