All CDK CLI commands are defined in `lib/config.ts`. This file is translated into a valid yargs configuration by `bin/cli-args-gen`, which is generated by `@aws-cdk/cli-args-gen`.

The yargs configuration is generated into the function `parseCommandLineArguments()` in `lib/parse-command-line-arguments.ts`, and is checked into git for readability and inspectability. Do not edit this file by hand, as every subsequent `yarn build` will overwrite any manual edits. If you need to leverage a yargs feature not used by the CLI, you must add support for it to `@aws-cdk/cli-args-gen`.
Note that `bin/cli-args-gen` is executed by `ts-node`, which allows `config.ts` to reference functions and other identifiers defined in the CLI before the CLI is built.
Some values, such as the user's platform, cannot be computed at build time. Some commands depend on these values, so `cli-args-gen` must generate code that computes them at runtime instead. The only way to do this today is to reference a parameter with `DynamicValue.fromParameter`. The caller of `parseCommandLineArguments()` must pass the parameter.
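The pattern described above can be sketched as follows. Note that `DynamicValue`, its `fromParameter` factory, and the config shape here are illustrative reconstructions from the text, not the actual `@aws-cdk/cli-args-gen` API:

```typescript
// Hypothetical sketch of the DynamicValue pattern. A dynamic value is
// resolved from a named runtime parameter rather than being baked into
// the generated yargs configuration at build time.
class DynamicValue {
  private constructor(public readonly parameterName: string) {}
  public static fromParameter(name: string): DynamicValue {
    return new DynamicValue(name);
  }
}

// Illustrative config entry: a default that depends on the user's platform,
// which is only knowable at runtime.
const exampleOption = {
  type: 'string',
  default: DynamicValue.fromParameter('currentPlatform'),
};

// The generated parseCommandLineArguments() would then resolve the value
// from parameters supplied by its caller.
function resolveDefault(
  option: { default: DynamicValue | string },
  parameters: Record<string, string>,
): string {
  return option.default instanceof DynamicValue
    ? parameters[option.default.parameterName]
    : option.default;
}

console.log(resolveDefault(exampleOption, { currentPlatform: process.platform }));
```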
Unit tests are automatically run as part of the regular build. Integration tests aren't run automatically since they have nontrivial requirements to run.
The CDK CLI integration tests live in @aws-cdk-testing. See here for more information on the tests and how to run them.
We are reusing the same set of integration tests in 3 ways. In each of those cases, we get the code under test from a different source.
- Run them as part of development. In this case, we get the CLI and the framework libraries from the source repository.
- Run them as integration tests in the pipeline. In this case, we get a specific version of the CLI and the framework libraries from a set of candidate NPM packages.
- Run them continuously, as a canary. In this case, we get the latest CLI and the framework libraries directly from the package managers, same as an end user would do.
To hide the differences between these different ways of running the tests, there are 3 scripts. They all take as a command-line argument the ACTUAL test script to run, and prepare the environment in such a way that the tests will use the `cdk` command and the libraries from the selected distribution.

To run the CLI integ tests in each configuration:

```shell
$ test/integ/run-against-repo test/integ/cli/test.sh
$ test/integ/run-against-dist test/integ/cli/test.sh
$ test/integ/run-against-release test/integ/cli/test.sh
```
To run a single integ test in the source tree:

```shell
$ test/integ/run-against-repo test/integ/cli/test.sh -t 'SUBSTRING OF THE TEST NAME'
```

To run regression tests in the source tree:

```shell
$ test/integ/test-cli-regression-against-current-code.sh [-t '...']
```
Integ tests can run in parallel across multiple regions. Set the `AWS_REGIONS` environment variable to a comma-separated list of regions:

```shell
$ env AWS_REGIONS=us-west-2,us-west-1,eu-central-1,eu-west-2,eu-west-3 test/integ/run-against-repo test/integ/cli/test.sh
```
Elements from the list of regions will be exclusively allocated to one test at a time. The tests will run in parallel up to the concurrency limit imposed by jest (default of 5, controllable via `--maxConcurrency`) and the number of available regions. Regions may be repeated in the list, in which case more than one test will run at a time in that region.

If `AWS_REGIONS` is not set, all tests will run sequentially in the single region set in `AWS_REGION`.
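The exclusive-allocation scheme described above can be pictured as a simple region pool. This is an illustrative sketch of the semantics, not the actual integ-test runner code:

```typescript
// Illustrative sketch: each test exclusively holds one region from the
// pool for its duration; repeated regions allow that many tests to run
// there concurrently. Not the actual test-runner implementation.
class RegionPool {
  private readonly available: string[];
  private readonly waiters: Array<(region: string) => void> = [];

  constructor(regions: string[]) {
    this.available = [...regions];
  }

  /** Take a region out of the pool, waiting until one frees up. */
  public async acquire(): Promise<string> {
    const region = this.available.pop();
    if (region !== undefined) { return region; }
    return new Promise((resolve) => this.waiters.push(resolve));
  }

  /** Return a region to the pool, waking one waiting test if any. */
  public release(region: string): void {
    const waiter = this.waiters.shift();
    if (waiter) { waiter(region); } else { this.available.push(region); }
  }
}

// Usage: a test acquires a region, runs, and releases it when done.
async function runTest(pool: RegionPool, name: string): Promise<void> {
  const region = await pool.acquire();
  try {
    console.log(`${name} runs in ${region}`);
  } finally {
    pool.release(region);
  }
}

const pool = new RegionPool((process.env.AWS_REGIONS ?? 'us-east-1').split(','));
void runTest(pool, 'example-test');
```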
Run with `env INTEG_NO_CLEAN=1` to forego cleaning up the temporary directory, in order to be able to debug `cdk synth` output.
CLI tests will exercise a number of common CLI scenarios, and deploy actual stacks to your AWS account.
REQUIREMENTS
- All packages have been compiled.
- Shell has been preloaded with AWS credentials.
Run:

```shell
yarn integ-cli
```
This command runs two types of tests:
These tests simply run the local integration tests located in `test/integ/cli`. They test the proper deployment of stacks and, in general, the correctness of the actions performed by the CLI.

You can also run just these tests by executing:

```shell
yarn integ-cli-no-regression
```
Since the tests take a long time to run, we run them in parallel in order to minimize running time. Jest does not have good support for parallelism; the only thing that exists is `test.concurrent()`, and it has a couple of limitations:

- It's not possible to run only a subset of tests: all tests will execute. (The reason is that Jest starts all tests in parallel, but only `await`s the subset selected with the `-t TESTNAME` option. All tests are still running, and Node will not exit until they have all finished.)
- It's not possible to use `beforeEach()` and `afterEach()`.

Because of the first limitation, concurrency is only enabled on the build server (via the `JEST_TEST_CONCURRENT` environment variable), not locally. Note: tests using `beforeEach()` will appear to work locally, but will fail on the build server! Don't use it!
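The first limitation comes down to how concurrent tests are scheduled: every test body starts the moment it is registered, regardless of which subset a `-t` filter selects. This minimal stand-in (not Jest itself) demonstrates the effect:

```typescript
// Minimal stand-in for test.concurrent() scheduling; not Jest itself.
// Every registered test body starts running at registration time.
const started: string[] = [];
const pending: Promise<void>[] = [];

function testConcurrent(name: string, body: () => Promise<void>): void {
  started.push(name);   // the body starts now, whether selected or not
  pending.push(body());
}

testConcurrent('deploys a stack', async () => { /* ... */ });
testConcurrent('destroys a stack', async () => { /* ... */ });

// Even if a `-t 'deploys'` filter only awaits the first test, both bodies
// have already started, and Node will not exit until all of `pending`
// settles.
console.log(started); // both names, regardless of any filter
```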
Validate that previously tested functionality still works in light of recent changes to the CLI. This is done by fetching the functional tests of the previously published release, and running them against the new CLI code.

These tests run in two variations:

- Against local framework code: use your local framework code. This is important to make sure the new CLI version will work properly with the new framework version. (See a concrete failure example.)
- Against previously released code: fetch the framework code from the previous release. This is important to make sure the new CLI version does not rely on new framework features to provide the same functionality. (See a concrete failure example.)

You can also run just these tests by executing:

```shell
yarn integ-cli-regression
```
Note that these tests can only be executed using the `run-against-dist` wrapper. Why? Well, it doesn't really make sense to `run-against-repo` when testing the previously released code, since we obviously cannot use the repo. Granted, running against local framework code could somewhat work, but it would have required a few too many hacks in the current codebase to seem worthwhile.
The implementation of the regression suites is not trivial to reason about and follow. Even though the code includes inline comments, we break down the exact details to better serve us in maintaining it and regaining context.
Before diving into it, we establish a few key concepts:
- `CANDIDATE_VERSION` - This is the version of the code that is being built in the pipeline; its value is stored in the `build.json` file of the packaged artifact of the repo.
- `PREVIOUS_VERSION` - This is the version previous to the `CANDIDATE_VERSION`.
- `CLI_VERSION` - This is the version of the CLI we are testing. It is always the same as the `CANDIDATE_VERSION`, since we want to test the latest CLI code.
- `FRAMEWORK_VERSION` - This is the version of the framework we are testing. It varies between the two variations of the regression suites. Its value can either be that of `CANDIDATE_VERSION` (for testing against the latest framework code) or `PREVIOUS_VERSION` (for testing against the previously published version of the framework code).
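The version indirection above can be summarized in a small helper. This is a hypothetical illustration; the real regression scripts are written in shell:

```typescript
// Hypothetical helper mirroring the version indirection described above;
// the actual regression suites are shell scripts.
type VersionIndirection = 'CANDIDATE_VERSION' | 'PREVIOUS_VERSION';

interface RegressionVersions {
  cliVersion: string;       // always the candidate: we test the latest CLI
  frameworkVersion: string; // candidate or previous, depending on the suite
}

function resolveVersions(
  candidateVersion: string,
  previousVersion: string,
  frameworkIndirection: VersionIndirection,
): RegressionVersions {
  return {
    cliVersion: candidateVersion,
    frameworkVersion: frameworkIndirection === 'CANDIDATE_VERSION'
      ? candidateVersion
      : previousVersion,
  };
}

console.log(resolveVersions('1.67.0-rc.0', '1.66.0', 'PREVIOUS_VERSION'));
```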
Following are the steps involved in running these tests:

- Run `./bump-candidate.sh` to differentiate between the local version and the published version. For example, if the version in `lerna.json` is `1.67.0`, this script will result in a version `1.67.0-rc.0`. This is needed so that we can launch a verdaccio instance serving local tarballs without worrying about conflicts with the public npm uplink. This will help us avoid version quirks that might happen during the post-release-pre-merge-back time window.
- Run `./align-version.sh` to configure the above version in all our packages.
- Build and pack the repository. The produced tarballs will be versioned with the above version.
- Run `test/integ/run-against-dist test/integ/test-cli-regression-against-latest-release.sh` (or `test/integ/test-cli-regression-against-latest-code.sh`).
- First, the `run-against-dist` wrapper will run and:
  - Read the `CANDIDATE_VERSION` from `build.json` and export it.
  - Launch verdaccio to serve all local tarballs (now serving the `CANDIDATE_VERSION`).
  - Install the CLI using the `CANDIDATE_VERSION` version.
  - Execute the given script.
- Both CLI regression test scripts run the same `run_regression_against_framework_version` function. This function accepts the framework version the regression should run against, which can be either `CANDIDATE_VERSION` or `PREVIOUS_VERSION`. Note that the argument is not the actual value of the version, but an indirection identifier. The function will:
  - Calculate the actual value of the previous version based on the candidate version (fetched from GitHub).
  - Download the previous version tarball from npm and extract the integration tests.
  - Export a `FRAMEWORK_VERSION` env variable based on the caller, and execute the integration tests of the previous version.
- Our integration tests now run and have knowledge of which framework version they should install.

That's "basically" it, hope it makes sense...
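The candidate bump performed in the first step can be sketched as follows (hypothetical; the real `bump-candidate.sh` is a shell script):

```typescript
// Hypothetical sketch of the transformation performed by bump-candidate.sh;
// the real script is shell, this just illustrates the version rewrite.
function bumpToCandidate(version: string, rc = 0): string {
  // 1.67.0 -> 1.67.0-rc.0: a prerelease tag that cannot collide with
  // anything published on the public npm uplink.
  return `${version}-rc.${rc}`;
}

console.log(bumpToCandidate('1.67.0')); // 1.67.0-rc.0
```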
Init template tests will initialize and compile the init templates that the CLI ships with.
REQUIREMENTS
- Running on a machine that has all language tools available (JDK, .NET Core, Python installed).
- All packages have been compiled.
- All packages have been packaged to their respective languages (`pack.sh`).

Run:

```shell
npm run integ-init
```
These two sets of integration tests have 3 running modes:

- Developer mode, when called through `npm run`. Will use the source tree.
- Integration test, when called from a directory with the build artifacts (the `dist` directory).
- Canaries, when called with `IS_CANARY=true`. Will use the build artifacts up on the respective package managers.
The integration test and canary modes are used in the CDK publishing pipeline and the CDK canaries, respectively. You wouldn't normally need to run them directly that way.
Our CLI package is built and packaged using the node-bundle tool. This has two effects one should be aware of:

All runtime dependencies are converted to `devDependencies`, as they are bundled inside the package and don't require installation by consumers. This process happens on demand during packaging, which is why our source code still contains those dependencies but the npm package does not.
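The dependency rewrite can be pictured as a transformation over `package.json`. This is an illustration of the effect, not the node-bundle implementation:

```typescript
// Illustration of the effect node-bundle has on package.json at packaging
// time; not the actual node-bundle implementation.
interface PackageJson {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
}

function moveDepsToDevDeps(pkg: PackageJson): PackageJson {
  // Bundled runtime dependencies need no installation by consumers,
  // so they are demoted to devDependencies in the published manifest.
  return {
    ...pkg,
    dependencies: {},
    devDependencies: { ...pkg.devDependencies, ...pkg.dependencies },
  };
}

const published = moveDepsToDevDeps({
  dependencies: { yargs: '^17.0.0' },
  devDependencies: { typescript: '^5.0.0' },
});
console.log(published.devDependencies); // both yargs and typescript
```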
The bundler also creates an attributions document that lists out license information for the entire dependency closure. This document is stored in the THIRD_PARTY_LICENSES file. Our build process validates that the file committed to source matches the expected auto-generated one. We do this so that our source code always contains the up to date attributions document, and so that we can backtrack/review changes to it using normal code review processes.
Whenever a dependency changes (be it direct or transitive, new package or new version), the attributions document will change, and needs to be regenerated. For you, this means that:
- When you manually upgrade a dependency, you must also regenerate the document by running `yarn pkglint` inside the CLI package.
- When you build the CLI locally, you must ensure your dependencies are up to date by running `yarn install` inside the CLI package. Otherwise, you might get an error like: `aws-cdk: - [bundle/outdated-attributions] THIRD_PARTY_LICENSES is outdated (fixable)`.
The source map handling is not entirely intuitive, so it bears some description here.
There are 2 steps to producing a CLI build:
- First we compile TypeScript to JavaScript. This step is configured to produce inline sourcemaps.
- Then we bundle JavaScript -> bundled JavaScript. This removes the inline sourcemaps, and also is configured to not emit a fresh sourcemap file.
The upshot is that we don't vend a 30+MB sourcemap to customers that they have no use for, and we don't slow down Node by loading those sourcemaps, while during local development and testing the sourcemaps are still present and can be used.
During the CLI initialization, we always enable source map support: if we are developing then source maps are present and can be used, while in a production build there will be no source maps so there's nothing to load anyway.
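The always-enable pattern can be sketched with Node's built-in API. This is an illustration of the idea only; the actual CLI initialization may use a dedicated library such as `source-map-support` instead:

```typescript
// Illustration of unconditionally enabling source map support at startup;
// the actual CLI initialization may differ (e.g. via source-map-support).
// If the production bundle contains no source maps, enabling support is
// harmless: there is simply nothing to load.
if (typeof process.setSourceMapsEnabled === 'function') {
  process.setSourceMapsEnabled(true); // built into Node.js >= 16.6
}
```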