As a TheyDo customer, you can set up an S3 integration to automatically ingest data from any source application, as long as the data adheres to the specified JSON schemas.
This repository serves as documentation of:
- available schemas
- examples and example use cases
- AWS CLI commands for the necessary configuration
- a CLI tool to test authentication, validate files, and upload them
You will need the following configuration values:

- AWS Account - your AWS account; share your account ID with us to get started. If AWS is not yet part of your infrastructure, reach out and we will provide a solution.
- AWS Region - target region (typically `eu-west-1`) - shared by TheyDo
- Role ARN - the Amazon Resource Name of the role to assume (format: `arn:aws:iam::<account>:role/<name>`) - shared by TheyDo. This role allows an external account to access the bucket name/bucket prefix below.
- External ID - required by the role's trust policy for additional security - shared by TheyDo
- Bucket Name - target S3 bucket name (e.g., `theydo-ext-dev-eu-west-1`) - shared by TheyDo
- Bucket Prefix - key prefix/folder path within the bucket where files will be uploaded - shared by TheyDo
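For context, the External ID works because the trust policy on TheyDo's side only permits the assume-role call when the caller supplies the matching ID. An illustrative (not authoritative) trust policy would contain a condition of roughly this shape:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<account>:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "<external_id>" }
      }
    }
  ]
}
```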
Role permissions are strictly limited to what is needed to upload files under the bucket prefix. This means that tools such as S3 Browser will not be able to connect to the bucket, as they require a broader permission scope.
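As an illustration of that limited scope, the role's permissions would look roughly like the sketch below: uploads to the prefix and listing within it, nothing else. The actual policy is managed by TheyDo and may differ:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<bucket_name>/<bucket_prefix>/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<bucket_name>",
      "Condition": { "StringLike": { "s3:prefix": "<bucket_prefix>/*" } }
    }
  ]
}
```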
You need to know your `<aws_access_key_id>` and `<aws_secret_access_key>` to provide in the first step.
```shell
# 1. Store base credentials and region under the source profile
aws configure --profile <source_profile>
aws configure set region eu-west-1 --profile <source_profile>

# 2. Create a role profile that assumes the role shared by TheyDo
aws configure set source_profile <source_profile> --profile <role_profile>
aws configure set region eu-west-1 --profile <role_profile>
aws configure set role_arn <role_arn> --profile <role_profile>
aws configure set external_id <external_id> --profile <role_profile>

# 3. Verify that the role can be assumed
aws sts get-caller-identity --profile <role_profile>

# 4. List and upload files under the prefix
aws s3 ls s3://<bucket_name>/<bucket_prefix> --summarize --profile <role_profile>
aws s3 cp <local_file_name> s3://<bucket_name>/<bucket_prefix>/<remote_file_name> --profile <role_profile>
```
A lightweight command‑line helper for:
- validating JSON files against a JSON Schema
- testing that an AssumeRole configuration works
- (optionally) validating again and uploading to Amazon S3
```shell
make          # boots an isolated Python 3.12 env and installs the CLI
s3tcli --help
```

The commands below require the configuration of a profile as described above.
Arguments
- `--format PATH` - Path to the JSON Schema file.
- `--file PATH` - Path to the JSON document to validate.
Example
```shell
s3tcli test-format \
  --format schema/SolutionsFile.schema.json \
  --file examples/solutions.json
```
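`test-format` validates the document against a JSON Schema. As a rough illustration of what such validation involves, here is a minimal stdlib-only sketch covering just the `type` and `required` keywords; the real CLI presumably uses a full JSON Schema validator:

```python
def validate(doc, schema):
    """Check a document against a tiny subset of JSON Schema.

    Supports only the 'type', 'required', and 'properties' keywords;
    returns a list of error messages (empty means valid).
    """
    errors = []
    type_map = {"object": dict, "array": list, "string": str,
                "number": (int, float), "boolean": bool}
    expected = type_map.get(schema.get("type"))
    if expected and not isinstance(doc, expected):
        errors.append(f"expected {schema['type']}, got {type(doc).__name__}")
    if isinstance(doc, dict):
        for key in schema.get("required", []):
            if key not in doc:
                errors.append(f"missing required property '{key}'")
        for key, sub in schema.get("properties", {}).items():
            if key in doc:
                errors.extend(validate(doc[key], sub))
    return errors
```

For example, validating `{"solutions": []}` against a schema requiring a `solutions` property returns no errors, while an empty object reports the missing property.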
Arguments

- `--role ARN` - Role to assume (e.g., `arn:aws:iam::<account>:role/<name>`).
- `--external-id STRING` - External ID required by the role's trust policy.
- `--profile NAME` - Local AWS credentials profile to use.
- `--region CODE` - AWS region (e.g., `eu-west-1`).
Example
```shell
s3tcli test-role \
  --role arn:aws:iam::830965594115:role/.N2Y1M2siZ2QtOYU3MS05YzUzLWI2OGYtODVkZmU9ZmVlY2Yy. \
  --external-id 1f377dc0-a39a-493a-ae61-a32e9b64d4d7 \
  --profile kristjan-s3-test \
  --region eu-west-1
```

On success, the CLI prints the caller identity for the assumed role.
Arguments
- `--bucket NAME` - Target S3 bucket (e.g., `theydo-ext-dev-eu-west-1`).
- `--prefix KEYPREFIX` - Key prefix/folder under which to upload.
- `--file PATH` - Path to the JSON document to upload.
- `--format PATH` - Path to the JSON Schema used for validation.
- `--role ARN` - Role to assume.
- `--external-id STRING` - External ID for the role.
- `--profile NAME` - AWS credentials profile.
- `--region CODE` - AWS region.
Example
```shell
s3tcli test-upload \
  --bucket theydo-ext-dev-eu-west-1 \
  --prefix .N2Y1M2siZ2QtOYU3MS05YzUzLWI2OGYtODVkZmU9ZmVlY2Yy. \
  --file examples/solutions.json \
  --format schemas/SolutionsFile.schema.json \
  --role arn:aws:iam::830965594115:role/.N2Y1M2siZ2QtOYU3MS05YzUzLWI2OGYtODVkZmU9ZmVlY2Yy. \
  --external-id 1f377dc0-a39a-493a-ae61-a32e9b64d4d7 \
  --profile kristjan-s3-test \
  --region eu-west-1
```

The CLI composes a key like:
`.N2Y1M2siZ2QtOYU3MS05YzUzLWI2OGYtODVkZmU9ZmVlY2Yy./1691425012-solutions.json`
…and prints “Upload successful” on completion.
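The key scheme above (prefix, a Unix timestamp, then the local file name) can be sketched as follows; the helper name is illustrative, not part of the CLI:

```python
import os
import time


def compose_s3_key(prefix: str, local_path: str) -> str:
    """Build an object key of the form <prefix>/<unix_ts>-<basename>."""
    timestamp = int(time.time())
    return f"{prefix}/{timestamp}-{os.path.basename(local_path)}"
```

Calling `compose_s3_key("myprefix", "examples/solutions.json")` yields something like `myprefix/1691425012-solutions.json`, with the timestamp depending on the current time.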
The following rules describe how the owner property is handled during import:

1. If the property is not present in the import JSON: a) if the entity exists, the owner is left as is; b) if the entity does not exist, it is created without an owner.
2. If the property is `null` in the import JSON: a) if the entity exists, the owner is removed; b) if the entity does not exist, it is created without an owner.
3. If the property is set in the import JSON: a) the owner is updated to the user it maps to, if such a user exists in the workspace; b) if no such user exists, it is treated as not present in the JSON (see rule 1).
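The three owner rules above can be sketched in Python. The names (`resolve_owner`, `_ABSENT`) are illustrative, not TheyDo's actual implementation; the point is the three-way distinction between an absent key, an explicit null, and a set value:

```python
from typing import Optional

# Sentinel distinguishing "key missing from the import" from "key set to null".
_ABSENT = object()


def resolve_owner(imported: dict, existing: Optional[dict],
                  workspace_users: set) -> Optional[str]:
    """Return the owner the entity should have after import."""
    current = existing.get("owner") if existing else None
    owner = imported.get("owner", _ABSENT)
    if owner is _ABSENT:
        # Rule 1: property absent -> keep the existing owner (none on create).
        return current
    if owner is None:
        # Rule 2: explicit null -> remove the owner (none on create).
        return None
    if owner in workspace_users:
        # Rule 3a: maps to a known workspace user -> update the owner.
        return owner
    # Rule 3b: unknown user -> treated as if the property were absent.
    return current
```

For instance, importing `{"owner": null}` over an existing entity clears its owner, while importing an unknown user leaves the existing owner untouched.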