Parameterized Generators and Pipelines (#2132)
* Parameterised build pipeline generator

Users can now run this script from their own repositories to generate their own pipelines wherever they want them.
Signed-off-by: Morgan Davies <morgandavies2020@gmail.com>

* Implement file path param in pipelines
* Implement repository checkout in downstream generators
* Implement file path params in downstream generators
* Roll out previous changes across all version files

Signed-off-by: Morgan Davies <morgandavies2020@gmail.com>

* Pass in new params to pr tester

Signed-off-by: Morgan Davies <morgandavies2020@gmail.com>

* configureBuild is already declared, add test stub

* Better error handling for regen

* Remove inline if's

* Use elvis operators

* Multiple Regen updates
* Added timestamps to build generator
* Correctly parsed ENABLE_PIPELINE_SCHEDULE
* Added checkout credentials to build generator
* Upgraded dot map logic to put logic
* Ensured we only checkout once on build generator
* Removed trigger schedule from config map unless we are asking for one
* General logic improvements for pipeline generators

* Clean up whitespaces

* Create initial defaults json

* Don't generate triggers if no schedule

* Parameterise as much as possible in defaults.json
* defaults.json changed to be more expressive
* defaults.json is read in generators and passed down as a param, even when it's not read from
* Whitespace removals

* Try using jenkins readJson

* Try as empty stub

* Try with empty func

* Try as list

* Try replicating jenkins src impl

* Basic def function declaration

* Try as string method (since java.langString is classname in error)

* Ensure to specify map when pulling in JSON

* Try declaring as a func

* Change implementation back to file since readJSON cannot be done with our test suite

* Specify param name in impl

* It's parseText for strings *facepalm*

* Use readFile outside of json slurp

* Use defs, specify param name

* Move all read files into master node
* Moves PR Tester jenkins script into GitHub (doc updated)

* Ensure the build testers have access to platform configs

* Revert addition of timestamps

* Download the defaults.json file instead of trying to open it
* Usually doesn't work out due to jenkins pathing

* Try parse

* Create URL class instance separately

* Remove unnecessary json convert

* Update param to current version

* Remove unnecessary workspace paths

* see if adding scm checkout back in fixes not found lib

* Fix bug in version pipeline, now will always use sanitised value

* Init default config paths after java version update parsing

* Spelling

* Remove as Map

* Instantiate as a map

* Try not parsing at all?

* Pretty print JSON to prevent parsing error

* Remove extra parsing in downstream pipeline

* Add error handling for missing JSON file

* defaultJson -> defaultsJson

* Pass in json defaults to pr tester

* Fix bug in triggerSchedule setting

* Pretty print out config for top level generator

* Add missing params to config regen script

* Update other regen files to match working jdk8

* Update main pipeline files to match jdk8

* Remove unimportant pipelineSchedule

* Move checkout further up

* Parameterise the defaults json

* Move defaults json further up

* Trial loading lib after checkout in downstream regen

* Pretty print out in regen

* Parameterise lib path, strong defaultsJson description

* Parameterise regen script path

* Parameterise lib and downstream basefile paths, intelligently pulling in new values

* More credential changes
* Fixed bug where credentials could be read from anywhere on the build
* Credentials will now also be generated and passed down to the pipelines
* Added checkout credentials to generators

* Renamed var and switch back user/pass

* Try with username/colon

* More credential changes

* Credentials will now also be generated and passed down to the pipelines
* Added checkout credentials to generators

* double quotes

* single quotes

* access var directly

* double quotes var

* single quotes var (normal)

* Strongify defaults param

* Renames jobTemplateDirectories to templateDirectories

* Trial checkout system in build generator

The generator will check whether a file or script exists at a location in the user's repository and, if it does not, will check out Adopt's instead using its helper class, ConfigHandler

* Manually create methods for pipeline generator since there's no lib to load

* Minor updates

* Remove WORKSPACE from params

* Ensure we use the right template path and checkout

* Always clean up

* Clean up boolean logic

* Try parsing the boolean

* Try using a if else block

* Try using a noSchedule config

* Remove potentially useless try-catch

* Remove attempts at changing the template logic

I can't find a way to make it work within a reasonable timeframe and our existing logic is OK for now

* Add Repo Switching to Downstream job generator
* Renames ConfigHandler class to RepoHandler

* Remove RepoHandler from top level files
* We cannot import it so we use closures instead and use the repo handler further down the chain
* Also condenses exception messages into the actual exceptions

* Declare closure outside of scope

* Finish setting up generators, passing down everything to the jobs

* Replace empty pipelineSchedule in config

* Conform remaining files
* Passes down remote configs and adopt defaults to downstream jobs
* make script will now run in adopt's workspace and exit it after build
* Removed duplicated code in top generator
* Made platform config dir default a url so it can be downloaded by our bash scripts

* Try update version after normal to be consistent

* Make generators less inclusive

* Declare var outside of try catch

* f

* Enter blank checkout creds if they don't exist

* Make lib load in pipeline a global exception

* Mapify user remote configs

* Do map parsing further up

* Try constructor map

* Try declaring "as Map"

* Try using json slurper to parse map to be consistent

* Use JsonSlurpers across downstream generation

* Try using dot logic for assigning json values

* Add use adopt bash scripts param
* Also check if dir exists in bash

* Introduce constructor for build_base_file

* Remove hashmap references in class instantiation

* Remove hashmap values

* Clean up Platform config parsing

* Ensure file is created and ran

* Try creating files first

* Try specify true

* try checking if it is a true string

* stringify everything

* Booleanify useAdoptBashScripts

* try normal logic

* See what useAdoptBashScripts is

* debug

* Move debug

* Rejig order of params

* debug

* Try using file redirect

* Adjust how branch and URL are determined for platform config pulling

* debug

* Additional https

* rm debug

* Remove extra unneeded parsing code

* Allow generation of useAdoptBashScripts
* Also removes unnecessary if for pipeline schedule

* Add weekly template

* Amend failing tests

* Update testers

* Try again with tests

* Cast and set splitAdoptUrl as object

* triggerSchedule -> pipelineSchedule

* Update tester

* Change around scripts to add common stuff into bash

* Flesh out testing suite for RepoHandler

* Remove duped JDK14 Aix key

* First doc updates

* More brevity on build generator output

* Comment and compatibility upgrades

* Add in an additional check for platforms on a failed direct link

* Debug out better

* Amend config location for test builds

* Specify no OS on linux pr build

* Reduce duration of build generator by querying adopt api

* Functionise download script

* Clean up echos, make bash method a function

* Use regex when checking error code

* Remove TODOs and update other pipeline files

* Minor comment and styling updates

* Align map entries, replace semi-colon with colon

* Remove compile testing on pr tester

* Change name of jenkins credentials

* USE_ADOPT_BASH_SCRIPTS -> USE_ADOPT_SHELL_SCRIPTS

* Made necessary changes
M-Davies authored Feb 4, 2021
1 parent ecb4ce1 commit 485a977
Showing 49 changed files with 3,794 additions and 974 deletions.
3 changes: 3 additions & 0 deletions .github/workflows/build.yml
@@ -40,6 +40,9 @@ jobs:
VARIANT: ${{ matrix.vm }}
TARGET_OS: ${{ matrix.os }}
FILENAME: OpenJDK.tar.gz
# Don't set the OS as we use both linux and alpine-linux
PLATFORM_CONFIG_LOCATION: AdoptOpenJDK/openjdk-build/master/build-farm/platform-specific-configurations


- uses: actions/upload-artifact@v2

1 change: 1 addition & 0 deletions .gitignore
@@ -5,3 +5,4 @@ workspace
pipelines/.gradle
pipelines/gradle-cache
pipelines/target
**/.DS_Store
5 changes: 4 additions & 1 deletion FAQ.md
@@ -121,6 +121,10 @@ In order to test whether your changes work use the [test-build-script-pull-reque
Pass it your fork name (e.g. https://github.com/sxa555/openjdk-build) and the name of the branch and it will run a build using your updated scripts.
For more information, see the [PR testing documentation](./pipelines/build/prTester/README.md).

## I want to use my own configuration files or scripts on my own Jenkins instance. How do I do it?

Check out [Adopt's guide](docs/UsingOurScripts.md) to setting up your own scripts and configurations (while not having to keep up with Adopt's changes)!

## Which OS levels do we build on?

The operating systems/distributions which we build on are documented in the
@@ -131,4 +135,3 @@ Runtime platforms are in our [supported platforms page](https://adoptopenjdk.net

The following PR: https://github.com/AdoptOpenJDK/openjdk-build/pull/2416
demonstrates changes required to add a new build pipeline param, and also associated version/platform job configurations for setting the value when needed.

4 changes: 2 additions & 2 deletions build-farm/make-adopt-build-farm.sh
@@ -46,12 +46,12 @@ then
# Use Adopt API to get the JDK Head number
echo "This appears to be JDK Head. Querying the Adopt API to get the JDK HEAD Number (https://api.adoptopenjdk.net/v3/info/available_releases)..."
JAVA_FEATURE_VERSION=$(curl -q https://api.adoptopenjdk.net/v3/info/available_releases | awk '/tip_version/{print$2}')

# Checks the api request was successful and the return value is a number
if [ -z "${JAVA_FEATURE_VERSION}" ] || ! [[ "${JAVA_FEATURE_VERSION}" -gt 0 ]]
then
echo "RETRYWARNING: Query ${retryCount} failed. Retrying in 30 seconds (max retries = ${retryMax})..."
retryCount=$((retryCount+1))
sleep 30s
else
echo "JAVA_FEATURE_VERSION FOUND: ${JAVA_FEATURE_VERSION}" && break
59 changes: 57 additions & 2 deletions build-farm/set-platform-specific-configurations.sh
@@ -25,5 +25,60 @@ fi

export VARIANT_ARG="--build-variant ${VARIANT}"

# shellcheck disable=SC1091,SC1090
source "$SCRIPT_DIR/platform-specific-configurations/${OPERATING_SYSTEM}.sh"
# Create placeholders for curl --output
if [ ! -d "$SCRIPT_DIR/platform-specific-configurations" ]
then
mkdir "$SCRIPT_DIR/platform-specific-configurations"
fi
if [ ! -f "$SCRIPT_DIR/platform-specific-configurations/platformConfigFile.sh" ]
then
touch "$SCRIPT_DIR/platform-specific-configurations/platformConfigFile.sh"
fi
PLATFORM_CONFIG_FILEPATH="$SCRIPT_DIR/platform-specific-configurations/platformConfigFile.sh"

# Setup for platform config download
rawGithubSource="https://raw.githubusercontent.com"
ret=0
fileContents=""

# Uses curl to download the platform config file
# param 1: LOCATION - Repo path to where the file is located
# param 2: SUFFIX - Operating system to append
function downloadPlatformConfigFile () {
echo "Attempting to download platform configuration file from ${rawGithubSource}/$1/$2"
# make-adopt-build-farm.sh has 'set -e'. We need to disable that for the fallback mechanism, as downloading might fail
set +e
curl "${rawGithubSource}/$1/$2" > "${PLATFORM_CONFIG_FILEPATH}"
ret=$?
# A download will succeed if location is a directory, so we also check the contents are valid
fileContents=$(cat "${PLATFORM_CONFIG_FILEPATH}")
set -e
}

# Attempt to download and source the user's custom platform config
downloadPlatformConfigFile "${PLATFORM_CONFIG_LOCATION}" ""
# A valid config file starts with a bash shebang; a GitHub error page (e.g. "404: Not Found") will not match this regex
contentsErrorRegex="#!/bin/bash"

if [ $ret -ne 0 ] || [[ ! $fileContents =~ $contentsErrorRegex ]]
then
# Check to make sure that an OS file doesn't exist if we can't find a config file from the direct link
echo "[WARNING] Failed to find a user configuration file, ${rawGithubSource}/${PLATFORM_CONFIG_LOCATION} is likely a directory so we will try and search for a ${OPERATING_SYSTEM}.sh file."
downloadPlatformConfigFile "${PLATFORM_CONFIG_LOCATION}" "${OPERATING_SYSTEM}.sh"

if [ $ret -ne 0 ] || [[ ! $fileContents =~ $contentsErrorRegex ]]
then
# If there is no user platform config, use adopt's as a default instead
echo "[WARNING] Failed to download a user platform configuration file. Downloading Adopt's ${OPERATING_SYSTEM}.sh configuration file instead."
downloadPlatformConfigFile "${ADOPT_PLATFORM_CONFIG_LOCATION}" "${OPERATING_SYSTEM}.sh"

if [ $ret -ne 0 ] || [[ ! $fileContents =~ $contentsErrorRegex ]]
then
echo "[ERROR] Failed to download a platform configuration file from User and Adopt's repositories"
exit 2
fi
fi
fi

echo "[SUCCESS] Config file downloaded successfully to ${PLATFORM_CONFIG_FILEPATH}"
source "${PLATFORM_CONFIG_FILEPATH}"
121 changes: 121 additions & 0 deletions docs/UsingOurScripts.md
@@ -0,0 +1,121 @@
# Custom environment setting up guide

Adopt has set up its build scripts so that you can plug in configuration files and scripts you have changed, without having to duplicate and maintain Adopt's entire codebase separately. This may seem complicated at first, but it's pretty simple once you get the hang of the process.

## defaults.json

This file contains the default constants and paths used in the build scripts for whichever repository it is located in. As an example, Adopt's `defaults.json` file is located [here](https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/defaults.json). If you're unsure of any of the fields, see Adopt's example map below:

```json
{
// Git repository details
"repository" : {
// Git Url of the current repository.
"url" : "https://github.com/AdoptOpenJDK/openjdk-build.git",
// Git branch you wish to use when running the scripts
"branch" : "master"
},
// Jenkins server details
"jenkinsDetails" : {
// The base URL of the server, usually this is where you would end up if you opened your server from a webpage
"rootUrl" : "https://ci.adoptopenjdk.net",
// Jenkins directory where jobs will be generated and run
"rootDirectory" : "build-scripts"
},
// Jenkins job dsl template paths (relative to this repository root)
"templateDirectories" : {
// Downstream job template (e.g. jdk8u-linux-x64-hotspot)
"downstream" : "pipelines/build/common/create_job_from_template.groovy",
// Upstream job template (e.g. openjdk8-pipeline)
"upstream" : "pipelines/jobs/pipeline_job_template.groovy",
// Weekly job template (e.g. weekly-openjdk8-pipeline)
"weekly" : "pipelines/jobs/weekly_release_pipeline_job_template.groovy"
},
// Job configuration file paths (relative to this repository root)
"configDirectories" : {
// Build configs directory containing node details, os, arch, testlists, etc
"build" : "pipelines/jobs/configurations",
// Nightly configs directory containing execution frequency, weekly tags, platforms to build.
"nightly" : "pipelines/jobs/configurations",
// Bash platform script directory containing jdk downloading and toolchain setups.
"platform" : "build-farm/platform-specific-configurations"
},
// Job script paths (relative to this repository root)
"scriptDirectories" : {
// Upstream scripts directory containing the 1st files that are executed by the openjdkx-pipeline jobs.
"upstream" : "pipelines/build",
// Upstream script file containing the 1st script that is executed by the weekly-openjdk8-pipeline jobs.
"weekly" : "pipelines/build/common/weekly_release_pipeline.groovy",
// Downstream script file containing the 1st script that is executed by the jdkx-platform-arch-variant jobs.
"downstream" : "pipelines/build/common/kick_off_build.groovy",
// Base script file containing the 2nd script that is executed by the pipeline_jobs_generator_jdkxx jobs
"regeneration" : "pipelines/build/common/config_regeneration.groovy",
// Base PR tester file script file containing the 2nd script that is executed by the pipeline_jobs_generator_jdkxx jobs
"tester" : "pipelines/build/prTester/pr_test_pipeline.groovy"
},
// Job base file (the main file which is called after the 1st setup script file) paths (relative to this repository root)
"baseFileDirectories": {
// Upstream pipeline file script containing the 2nd script that is executed by the openjdkx-pipeline jobs
"upstream" : "pipelines/build/common/build_base_file.groovy",
// Upstream pipeline file script containing the 2nd script that is executed by the jdkx-platform-arch-variant jobs
"downstream" : "pipelines/build/common/openjdk_build_pipeline.groovy"
},
// Script to import the adopt groovy class library (relative to this repository root)
"importLibraryScript" : "pipelines/build/common/import_lib.groovy"
}
```
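Note that the comments in the example above are annotations for this guide only; real JSON (and the real `defaults.json`) cannot contain comments. A minimal sketch of reading such a file, in Python for illustration (the actual scripts use Groovy's JsonSlurper; the keys below follow the schema documented above):

```python
import json

# Hypothetical, cut-down defaults file following the documented schema.
defaults_text = """
{
    "repository": {
        "url": "https://github.com/AdoptOpenJDK/openjdk-build.git",
        "branch": "master"
    },
    "scriptDirectories": {
        "upstream": "pipelines/build"
    }
}
"""

defaults = json.loads(defaults_text)
print(defaults["repository"]["branch"])           # master
print(defaults["scriptDirectories"]["upstream"])  # pipelines/build
```

A comment in the real file would make `json.loads` (and Jenkins' `readJSON`) raise a parse error, which is why the annotated block above is documentation rather than a copy-paste template.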

### How do I know which parameter the jenkins job will use?

The scripts have been designed with a set hierarchy in mind when choosing which parameter to use:

```md
1. JENKINS PARAMETERS (highest priority, args entered here will be what the build scripts use over everything else)
2. USER JSON (medium priority, args entered here will be used when a jenkins parameter isn't entered)
3. ADOPT JSON (final priority, when jenkins parameters AND a user json arg can't be validated, the script will checkout to this repository and use Adopt's defaults json (linked above))
```

The `ADOPT JSON` level is only used for files and directories. Other parameters (`JOB_ROOT`, `JENKINS_BUILD_ROOT`, etc) only use the first two levels.
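The precedence above amounts to a first-non-empty lookup. A sketch in Python for illustration (the real implementation is in the Groovy generator scripts; the function name is hypothetical):

```python
def resolve_param(jenkins_param, user_json_value, adopt_json_value=None):
    """Return the first non-empty value, mirroring the documented precedence:
    Jenkins parameter > user defaults.json > Adopt defaults.json.
    The Adopt level only applies to file/directory defaults, so it is optional here."""
    for candidate in (jenkins_param, user_json_value, adopt_json_value):
        if candidate:  # an empty string falls through to the next level
            return candidate
    return None

# An empty Jenkins parameter falls back to the user's defaults.json value:
print(resolve_param("", "pipelines/build", "pipelines/build"))  # pipelines/build
```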

As an example, take a look at the [build-pipeline-generator](https://ci.adoptopenjdk.net/job/build-scripts/job/utils/job/build-pipeline-generator/) `SCRIPT_FOLDER_PATH` parameter:

![Image of the SCRIPT_FOLDER_PATH parameter in jenkins](images/scriptFolderParam.png)
The script will use whatever has been entered into the parameter field unless it has been left empty, in which case it will use whatever is in the user's `defaults.json['scriptDirectories']['upstream']` attribute.

It will then evaluate the existence of that directory in the user's repository and, if it fails to find one, will check out AdoptOpenJDK/openjdk-build and use Adopt's `defaults.json` (the console log will warn the user when this occurs):

```
00:13:31 [WARNING] pipelines/build/common/weekly_release_pipeline.groovy does not exist in your chosen repository. Updating it to use Adopt's instead
```

NOTE: For the defaults that are paths to directories, the scripts will search for files with the same names as Adopt's. Custom-named files are not currently supported (so for `defaults.json['configDirectories']['platform']`, all of the filenames in the specified folder need to be the same as [Adopt's](https://github.com/AdoptOpenJDK/openjdk-build/tree/master/build-farm/platform-specific-configurations) or the script will fail to pick up the user's configs and will use Adopt's instead).

### This is great, but how do I add new defaults?

Create an openjdk-build PR that adds the new defaults for what they would be for Adopt. Don't forget to update Adopt's [RepoHandlerTest.groovy](https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/src/test/groovy/RepoHandlerTest.groovy) and [fakeDefaults.json](https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/src/test/groovy/fakeDefaults.json), as well as any Jenkins jobs if need be (if you don't have configuration access, ask in Slack#build for assistance). Then update any scripts that will need to handle the new default; you will likely need to do a bit of searching through the objects mentioned in Adopt's `defaults.json` to find where Adopt's scripts will need changing.

Once it has been approved and merged, update your scripts and/or jenkins jobs to handle the new default and you're done!

## Starting from scratch

1. Create a (preferably) public repository with whatever scripts/configs you have altered. You don't need to place them in the same location as Adopt's, but they should have the same names. Currently, the supported files you can modify are (replace `x` with the JDK version number you want to alter; `(u)` is optional):

- [pipelines/build/regeneration/build_pipeline_generator.groovy](https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/build/regeneration/build_pipeline_generator.groovy) - Main upstream generator files. This is what the [build-pipeline-generator jenkins job](https://ci.adoptopenjdk.net/job/build-scripts/job/utils/job/build-pipeline-generator/) executes on build, generating the [upstream jobs](https://ci.adoptopenjdk.net/job/build-scripts/).
- [pipelines/jobs/pipeline_job_template.groovy](https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/jobs/pipeline_job_template.groovy) - Upstream jobs dsl. This is the dsl job framework of the [openjdkxx-pipeline downstream jobs](https://ci.adoptopenjdk.net/job/build-scripts).
- [pipelines/jobs/weekly_release_pipeline_job_template.groovy](https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/jobs/weekly_release_pipeline_job_template.groovy) - Upstream jobs dsl. This is the dsl job framework of the [weekly-openjdkxx-pipeline downstream jobs](https://ci.adoptopenjdk.net/job/build-scripts).
- [pipelines/build/openjdkx_pipeline.groovy](https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/build/openjdk8_pipeline.groovy) - Main upstream script files. These are what the [openjdkx-pipeline jenkins jobs](https://ci.adoptopenjdk.net/job/build-scripts/job/openjdk8-pipeline/) execute on build.
- [pipelines/build/common/import_lib.groovy](https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/build/common/import_lib.groovy) - Class library import script. This imports [Adopt's classes](https://github.com/AdoptOpenJDK/openjdk-build/tree/master/pipelines/library/src) used in the groovy scripts.
- [pipelines/build/common/build_base_file.groovy](https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/build/common/build_base_file.groovy) - Base upstream script file that's called from `pipelines/build/openjdkx_pipeline.groovy`, setting up the [downstream build JSON](https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/library/src/common/IndividualBuildConfig.groovy) for each downstream job and executing them.
- [pipelines/jobs/configurations/jdkx(u).groovy](https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/jobs/configurations/jdk8u.groovy) - Upstream nightly config files. These define the job schedules, what platforms are instantiated on a nightly build and what tags are used on the weekend releases.
- [pipelines/jobs/configurations/jdkx(u)_pipeline_config.groovy](https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/jobs/configurations/jdk8u_pipeline_config.groovy) - Downstream build config files, docs for this are [in progress](https://github.com/AdoptOpenJDK/openjdk-build/issues/2129).
- [pipelines/build/common/kick_off_build.groovy](https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/build/common/kick_off_build.groovy) - Main downstream scripts file. These are what the [jdkx(u)-os-arch-variant jenkins jobs](https://ci.adoptopenjdk.net/job/build-scripts/job/jobs/job/jdk8u/) execute on build.
- [pipelines/build/common/openjdk_build_pipeline.groovy](https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/build/common/openjdk_build_pipeline.groovy) - Base downstream script file. This contains most of the functionality for Adopt's downstream jobs (tests, installers, bash scripts, etc).
- [pipelines/build/regeneration/jdkx_regeneration_pipeline.groovy](https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/build/regeneration/jdk8_regeneration_pipeline.groovy) - Main downstream generator files. These are what the [pipeline_jobs_generator_jdk8u jenkins jobs](https://ci.adoptopenjdk.net/job/build-scripts/job/utils/job/pipeline_jobs_generator_jdk8u/) execute on build, generating the [downstream jobs](https://ci.adoptopenjdk.net/job/build-scripts/job/jobs/) via `pipelines/build/common/config_regeneration.groovy` (see below).
- [pipelines/build/common/config_regeneration.groovy](https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/build/common/config_regeneration.groovy) - Base downstream script file. These are what the [pipeline_jobs_generator_jdk8u jenkins jobs](https://ci.adoptopenjdk.net/job/build-scripts/job/utils/job/pipeline_jobs_generator_jdk8u/) execute after `jdkx_regeneration_pipeline.groovy`, calling the dsl template `pipelines/build/common/create_job_from_template.groovy`.
- [pipelines/build/common/create_job_from_template.groovy](https://github.com/AdoptOpenJDK/openjdk-build/blob/master/pipelines/build/common/create_job_from_template.groovy) - Downstream jobs dsl. This is the dsl job framework of the [downstream jobs](https://ci.adoptopenjdk.net/job/build-scripts/job/jobs/).
2. Create a User JSON file containing your default constants that the build scripts will use (see [defaults.json](#defaultsjson))
3. Copy the [build-pipeline-generator](https://ci.adoptopenjdk.net/job/build-scripts/job/utils/job/build-pipeline-generator/) and [pipeline_jobs_generator_jdk8u](https://ci.adoptopenjdk.net/job/build-scripts/job/utils/job/pipeline_jobs_generator_jdk8u/) jobs to your Jenkins instance (replace `jdk8u` with whichever version you intend to build, there should be one job for each jdk version).
4. Execute the copied `build-pipeline-generator`. Make sure you have filled in the parameters that are not covered by your `defaults.json` (e.g. `DEFAULTS_URL`, `CHECKOUT_CREDENTIALS`). You should now see that the nightly and weekly pipeline jobs have been successfully created in whatever folder was entered into `JOB_ROOT`.
5. Execute the copied `pipeline_jobs_generator_jdkxx` jobs. Again, make sure you have filled in the parameters that are not covered by your `defaults.json`. You should now see that the `jobs/jdkxx-platform-arch-variant` jobs have been successfully created in whatever folder was entered into `JOB_ROOT`.

Congratulations! You should now be able to run Adopt's scripts inside your own Jenkins instance.
Binary file added docs/images/scriptFolderParam.png
