[Connexion](https://github.com/zalando/connexion) is used to map the API specification to its implementation in Python.
OpenAPI tooling such as the Swagger Editor can be used to review and edit the API specification. When the API is live, Connexion also serves the spec from the running API.
* [Contributing](#contributing)
## Getting Started
In this section, you'll configure and deploy a local API server and your own suite of cloud services to run a
development version of the DSS.
Note that privileged access to cloud accounts (AWS, GCP, etc.) is required to deploy the data-store. If your deployment fails
due to access restrictions, please consult your local systems administrators.
### Install Dependencies
The DSS requires Python 3.6 to run.
Clone the repo and install dependencies:
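
A minimal sketch of the usual flow (the repository URL and requirements file name here are assumptions, not taken from this README):

```
git clone https://github.com/HumanCellAtlas/data-store.git  # assumed repository URL
cd data-store
pip install -r requirements-dev.txt  # assumed requirements file name
```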
Also install [terraform from Hashicorp](https://www.terraform.io/) from your favorite source.
### Configuration
The DSS is configured via environment variables. The required environment variables and their default values
are defined in the file [`environment`](environment). To customize the values of these environment variables:
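
A sketch of the usual override-file pattern, assuming an `environment.local` file that `environment` sources (the file name and example variable are assumptions, not taken from this README):

```
cp environment.local.example environment.local                # hypothetical override file
echo 'export DSS_DEPLOYMENT_STAGE=dev' >> environment.local   # example override
source environment                                            # load defaults plus any local overrides
```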
When deploying for the first time, a Google Cloud Platform service account must be created.
### Setting admin emails
Set admin account emails within AWS Secrets Manager:
```
echo ' ' | ./scripts/dss-ops.py secrets set --secret-name $ADMIN_USER_EMAILS_SECRETS_NAME
```
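
The string piped into the command is stored as the secret value; replace the quoted placeholder with the actual admin email addresses.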
Resources that do not follow the usual plan/review Terraform workflow, and should therefore be lightweight in nature, are
added to `$DSS_HOME/infra` instead.
##### Resources
Cloud resources have the potential for naming collision in both [AWS](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html)
and [GCP](https://cloud.google.com/storage/docs/naming); ensure that you rename resources as needed.
#### Buckets
Buckets within AWS and GCP need to be available for use by the DSS. Use Terraform to set up these resources:
```
make -C infra COMPONENT=buckets plan
make -C infra COMPONENT=buckets apply
```
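
As with any Terraform workflow, review the output of `plan` before running `apply`.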
#### ElasticSearch
The AWS Elasticsearch Service is used for metadata indexing. Currently the DSS uses version 5.5 of ElasticSearch. For typical development deployments the
`t2.small.elasticsearch` instance type is sufficient. Use the [`DSS_ES_`](./docs/environment/README.md) variables to adjust the cluster as needed.
Add allowed IPs for ElasticSearch to AWS Secrets Manager, using comma-separated IPs:
```
echo ' ' | ./scripts/dss-ops.py secrets set --secret-name $ES_ALLOWED_SOURCE_IP_SECRETS_NAME
```
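
For example, with placeholder addresses:

```
echo '203.0.113.10,203.0.113.11' | ./scripts/dss-ops.py secrets set --secret-name $ES_ALLOWED_SOURCE_IP_SECRETS_NAME
```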
Use Terraform to deploy the ES resource:
```
make -C infra COMPONENT=elasticsearch plan
make -C infra COMPONENT=elasticsearch apply
```
#### Certificates
A certificate matching your domain must be registered with
[AWS Certificate Manager](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html). Set `ACM_CERTIFICATE_IDENTIFIER`
to the identifier of the certificate, which can be found on the AWS console.
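
The identifier is the UUID portion of the certificate ARN; it can also be listed from the CLI, e.g.:

```
aws acm list-certificates --query 'CertificateSummaryList[].CertificateArn'
```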
An AWS Route 53 zone must be available for your domain name and configured in `environment`.
#### Deploying
Now deploy using make:
```
make plan-infra
```
And you should be able to list bundles like this:
```
curl -X GET "https://<domain_name>/v1/bundles" -H "accept: application/json"
```
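
If the deployment is healthy, this returns a JSON listing of bundles.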
#### Monitoring
Please see the [data-store-monitor](https://www.github.com/humancellatlas/data-store-monitor) repo for additional
monitoring tools.
### CI/CD with Travis CI and GitLab
## Running Tests
1. Check that software packages required to test and deploy are available, and install them if necessary:
   `make --dry-run`
1. Populate test fixture buckets with test fixture data _**(this command will completely empty the given buckets before populating them with test fixture data; please ensure the buckets contain no data you need)**_

---

`daemons/dss-dlq-reaper/README.md`:

This daemon is a part of the DLQ-based framework for reprocessing failed Lambda invocations.
#### Enabling DLQ-based retries for DSS daemon Lambdas
In order to enable DLQ-based reprocessing for DSS daemons, each daemon needs to be configured individually.
- Locate the `config.json` file in the daemon's `.chalice` directory
- Add the following entry to the `config.json` file: `"dead_letter_queue_target_arn": "",` (sketched below)
- The entry needs to be created at the top level of the JSON attribute hierarchy. During deployment the value will be replaced with the appropriate SQS queue name.
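
A sketch of the resulting `.chalice/config.json`; the surrounding keys here are illustrative only:

```
{
  "app_name": "dss-example-daemon",
  "dead_letter_queue_target_arn": "",
  "stages": {}
}
```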