ansible-concourse


An easy way to deploy and manage Concourse CI with a cluster of workers using Ansible.

Requirements

Supported Concourse versions:

  • v4.x
  • v5.x

Supported platforms:

  • Ubuntu 16.04 and 18.04
  • macOS (early support, accepting PRs)
  • Windows (not supported yet, accepting PRs)

TLS termination is optionally supported.

Overview

I am a big fan of Concourse. This role installs and manages Concourse using Ansible. A more robust solution is to use BOSH.

Examples

Single node

---
- name: Create single node host
  hosts: ci.example.com
  become: True
  vars:
    # Set your own password and save it securely in vault
    concourse_local_users:
      - { user: "user1", pass: "pass1" }
    concourse_web_options:
      CONCOURSE_POSTGRES_DATABASE: "concourse"
      CONCOURSE_POSTGRES_HOST: "127.0.0.1"
      CONCOURSE_POSTGRES_PASSWORD: "conpass"
      CONCOURSE_POSTGRES_SSLMODE: "disable"
      CONCOURSE_POSTGRES_USER: "concourseci"
    # ********************* Example keys (YOU MUST OVERRIDE THEM) **********************
    # These are demo keys. Generate your own and store them safely, e.g. in ansible-vault.
    # Check the Keys section for how to auto-generate keys.
    # **********************************************************************************
    concourseci_key_session_public: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC6tKH.....
    concourseci_key_session_private: |
      -----BEGIN RSA PRIVATE KEY-----
      MIIEowIBAAKCAQEAurSh5kbUadGuUgHqm1ct6SUrqFkH5kyJNdOjHdWxoxCzw5I9
      ................................
      N1EQdIhtxo4mgHXjF/8L32SqinAJb5ErNXQQwT5k9G22mZkHZY7Y
      -----END RSA PRIVATE KEY-----
    concourseci_key_tsa_public: ssh-rsa AAAAB3NzaC1yc2EAAAADAQ......
    concourseci_key_tsa_private: |
      -----BEGIN RSA PRIVATE KEY-----
      MIIEogIBAAKCAQEAo3XY74qhdwY1Z8a5XnTbCjNMJu28CcEYJ1KJi1a8B143wKxM
      .........
      uPTcE+vQzvMV3lJo0CHTlNMo1JgHOO5UsFZ1cBxO7MZXCzChGE8=
      -----END RSA PRIVATE KEY-----
    concourseci_worker_keys:
      - public: ssh-rsa AAAAB3N.....
        private: |
          -----BEGIN RSA PRIVATE KEY-----
          MIIEpQIBAAKCAQEAylt9UCFnAkdhofItX6HQzx6r4kFeXgFu2b9+x87NUiiEr2Hi
          .......
          ZNJ69MjK2HDIBIpqFJ7jnp32Dp8wviHXQ5e1PJQxoaXNyubfOs1Cpa0=
          -----END RSA PRIVATE KEY-----
  roles:
    - { name: "postgresql",        tags: "postgresql" }
    - { name: "ansible-concourse", tags: "concourse" }
And a matching inventory:

[concourse-web]
ci.example.com

[concourse-worker]
ci.example.com

Breaking changes as of version v4.0.0

As of version 4.x of this role, user management has changed to reflect the new team authentication introduced in Concourse 4.x: https://concourse-ci.org/authentication.html.

I would recommend reading the new authentication documentation before proceeding. A new top-level list, concourse_local_users, can be used to add local users. Example:

concourse_local_users:
   - user: "user1"
     pass: "pass1"
   - user: "user2"
     pass: "pass2"

Clustered nodes 2x web & 4x worker

To build a cluster of servers, simply add the hosts to the inventory groups:

[concourse-web]
ci-web01.example.com
ci-web02.example.com

[concourse-worker]
ci-worker01.example.com
ci-worker02.example.com
ci-worker03.example.com
ci-worker04.example.com

You will also need to generate keys for the workers; see the Keys section. A sketch of the resulting worker variables is shown below.
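
Following the layout of the single-node example above, the workers' group vars could carry one key pair per worker; this is a sketch (key material elided, not literal output of the key script):

concourseci_worker_keys:
  - public: ssh-rsa AAAAB3N.....
    private: |
      -----BEGIN RSA PRIVATE KEY-----
      .....
      -----END RSA PRIVATE KEY-----
  - public: ssh-rsa AAAAB3N.....
    private: |
      -----BEGIN RSA PRIVATE KEY-----
      .....
      -----END RSA PRIVATE KEY-----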

Configuration

As of ansible-concourse version 4.x, all command line options are supported for both web and worker as dictionaries. Note: if you are upgrading from a version prior to 3.0.0 you will need to accommodate these changes.

The configuration is split between two dictionaries, concourse_web_options and concourse_worker_options; all key/values defined there are exported as environment variables to the Concourse process. There are some ansible-concourse flags that can be defined outside concourse_web_options and concourse_worker_options; for more info check defaults/main.yml.

concourse_local_users:
  - { user: "user1", pass: "pass1" }
  - { user: "user2", pass: "pass2" }

concourse_web_options:
  CONCOURSE_POSTGRES_DATABASE: "concourse"
  CONCOURSE_POSTGRES_HOST: "127.0.0.1"
  CONCOURSE_POSTGRES_PASSWORD: "NO_PLAIN_TEXT_USE_VAULT"
  CONCOURSE_POSTGRES_SSLMODE: "disable"
  CONCOURSE_POSTGRES_USER: "concourseci"

concourse_worker_options:
  CONCOURSE_GARDEN_NETWORK_POOL: "10.254.0.0/22"
  CONCOURSE_GARDEN_MAX_CONTAINERS: 150

To view all environment options, please check the web options and worker options.

ansible-concourse has some sane defaults defined in concourse_web_options_default and concourse_worker_options_default in defaults/main.yml. Those defaults are merged with concourse_web_options and concourse_worker_options, with the latter taking higher precedence.
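
As an illustration of the merge (the default shown here is only an assumption; check defaults/main.yml for the real values):

# Hypothetical shipped default:
# concourse_web_options_default:
#   CONCOURSE_BIND_PORT: 8080
# Your override in group vars; after the merge the web node
# is started with CONCOURSE_BIND_PORT=9090:
concourse_web_options:
  CONCOURSE_BIND_PORT: 9090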

Concourse versions

This role supports installing both release candidates and final releases. Simply override concourseci_version with the desired version.

  • For a release candidate: concourseci_version: "vx.x.x-rc.xx"
  • For a final release: concourseci_version: "vx.x.x"

By default this role will try to install the latest stable release; see defaults/main.yml.
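
For example, to pin the role to a specific final release (the version number is illustrative):

concourseci_version: "v5.4.1"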

Default variables

Check defaults/main.yml for all bells and whistles.

Keys

Warning: the role comes with default keys. These keys are for demo purposes only; you should generate your own and store them safely, e.g. in ansible-vault.

You will need to generate two keys for web and one key for each worker node (or reuse the same key for all workers). An easy way to generate your keys is the script keys/key.sh.

The bash script will ask you for the number of workers you require. It will then generate Ansible-compatible YAML files in keys/vars. You can then copy their content into your group vars, or use any other method you prefer.
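
As an alternative to copying, you could load the generated files directly with vars_files; a minimal sketch, assuming the script wrote keys/vars/web.yml and keys/vars/worker01.yml (the file names are illustrative, not guaranteed by the script):

- name: Create Concourse host
  hosts: ci.example.com
  become: True
  vars_files:
    - keys/vars/web.yml        # illustrative file name
    - keys/vars/worker01.yml   # illustrative file name
  roles:
    - { name: "ansible-concourse", tags: "concourse" }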

Managing teams

This role supports managing teams:

NOTE: if you use team management, DO NOT USE THE DEFAULT PASSWORD. Set your own password and save it securely in vault, or look it up from the web options.

concourseci_manage_teams: True
## The user must be added first via concourse_local_users
concourseci_manage_credential_user: "api"
concourseci_manage_credential_password: "apiPassword"

concourseci_teams:
  - name: "team_1"
    state: "present"
    flags:
      local-user: user1
  - name: "team_2"
    state: "absent"
  - name: "team_3"
    state: "present"
    flags:
      # See web options (web_arguments.txt) for how to integrate Concourse Web with GitHub for auth
      github-organization: ORG
      github-team: ORG:TEAM
      github-user: LOGIN
  - name: "team_4"
    state: "present"
    flags:
      no-really-i-dont-want-any-auth: ""
  - name: "x5"
    state: "absent"
    flags:
      local-user: user5

The role supports all arguments passed to fly; for more info run fly set-team --help. Please note that deleting a team removes all the pipelines in that team.

Auto scaling

  • Scaling out: simply add a new instance :)
  • Scaling in: you need to drain the worker first by running service concourse-worker stop; a sketch of doing this with Ansible is shown below.
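
A minimal sketch of draining a worker with Ansible itself, using the service module and the concourse-worker service name this role installs (the host name is illustrative):

- name: Drain a worker before scaling in
  hosts: ci-worker04.example.com   # illustrative host
  become: True
  tasks:
    - name: Stop the Concourse worker service
      service:
        name: concourse-worker
        state: stopped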

Vagrant demo

You can use Vagrant to spin up a test machine.

# Install postgresql role in test/helper_roles
./test/setup_roles.sh
vagrant up

The Vagrant machine will have the IP 192.168.50.150. You can access the web UI and API at http://192.168.50.150:8080 with username myuser and password mypass.

Once you're done:

vagrant destroy

Contribution

Pull requests on GitHub are welcome on any issue.

Thanks to all the contributors.

TODO

  • Support pipeline upload
  • Full macOS support
  • Add distributed cluster tests
  • Windows support

License

MIT
