OpenStack + StackLight Reclass models for Mk-based clouds, intended for training and development.
A new, simpler way to model multiple deployments in one reclass repository is now possible through the new top-level class cluster. This is a major change to the model after 1-2 years of using the current service and system separation, and it was created to:
- clearly support multiple parallel deployments,
- mitigate the need for pregenerated data [cookiecutter],
- unite large production and small lab models in a common format.
This approach replaces the current system/openstack/... system classes. It covers everything from the current labs and StackLight labs up to the mk22-full-scale production-ready deployments.
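For illustration, a node definition can then reference a single cluster-level class instead of a long list of system classes. The sketch below is a hypothetical example; the file name, class name and parameter values are placeholders, not files from this repository.

```yaml
# nodes/ctl01.<deployment_name>.local.yml (hypothetical example)
# One cluster-level class replaces the long list of system.* classes that
# a node definition would otherwise carry; values are placeholders only.
classes:
- cluster.<deployment_name>.openstack-control
parameters:
  _param:
    linux_system_codename: xenial
    single_address: 172.16.10.101
```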
Each deployment is defined in its own cluster directory, classes/cluster/<deployment_name>. A short overview of the deployment directory content:
- init.yml - shared location parameters, all hosts
- openstack.yml - shared OpenStack parameters
- openstack-control/compute/database/etc.yml - defined service clusters
- mon.yml - shared monitoring parameters
- monitoring-server/proxy/etc.yml - defined monitoring clusters/servers
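As a hypothetical illustration of such a shared file, openstack.yml can hold the deployment-wide parameters that the other cluster classes reuse through _param interpolation. All names and addresses below are made-up examples, not values from this repository.

```yaml
# classes/cluster/<deployment_name>/openstack.yml (hypothetical sketch)
# Deployment-wide OpenStack parameters, referenced elsewhere via
# ${_param:...} interpolation; all values are illustrative placeholders.
parameters:
  _param:
    openstack_version: mitaka
    cluster_domain: <deployment_name>.local
    openstack_control_address: 172.16.10.100
    openstack_control_node01_address: 172.16.10.101
    openstack_control_node02_address: 172.16.10.102
    openstack_control_node03_address: 172.16.10.103
```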
The openstack-config is a new cluster role for the Salt master and is used to define all nodes and services.
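A hedged sketch of that role follows, assuming the reclass storage pillar layout provided by the reclass Salt formula; the node names, classes and addresses are illustrative only, and only one node is shown.

```yaml
# classes/cluster/<deployment_name>/openstack-config.yml (hypothetical sketch)
# The config role enumerates the other nodes of the deployment so the Salt
# master can generate their definitions; values below are placeholders.
parameters:
  reclass:
    storage:
      node:
        openstack_control_node01:
          name: ctl01
          domain: ${_param:cluster_domain}
          classes:
          - cluster.${_param:cluster_name}.openstack-control
          params:
            salt_master_host: 172.16.10.90
            single_address: 172.16.10.101
```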
All other systems [ceph/stacklight/mcp] can be set up the same way. With this setup the system level contains only generic system fragments [not to be changed too much, better amended per case/pattern], while the cluster level gives you full power to override or separate services.
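For example, a cluster-level class might compose generic system fragments and override their parameters locally. The system.* class names and parameters below illustrate the pattern and are assumptions, not an exact list from this repository.

```yaml
# classes/cluster/<deployment_name>/openstack-control.yml (hypothetical sketch)
# Generic, reusable fragments come from the system level; anything
# deployment-specific is overridden here at the cluster level.
classes:
- system.keystone.server.cluster
- system.glance.control.cluster
- cluster.<deployment_name>.openstack
parameters:
  _param:
    # cluster-level overrides of values the system fragments expect
    keystone_service_token: <generated-secret>
    cluster_vip_address: ${_param:openstack_control_address}
```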
This approach largely removes the need for cookiecutter, as the shared cluster files contain essentially the content you would otherwise input into cookiecutter. The current content of mk-lab-salt-model is basically a demo of parallel deployment models, with multiple separate salt-masters defined that reuse a single model. The example deployments currently modelled are:
Neutron deployed with OpenContrail SDN (smaller lab).
- 1 config node
- 3 control nodes
- 1 compute node
- 1 monitoring node
Neutron deployed with OpenContrail SDN (larger lab).
- 1 config node
- 3 control nodes
- 2 compute nodes
- 3 monitoring nodes
Neutron deployed with Open vSwitch (DVR flavor).
- 1 config node
- 3 control nodes
- 2 compute nodes
- 3 monitoring nodes
Kubernetes deployed with Calico.
- 1 config node
- 3 master nodes
- 2 pool nodes
- 1 monitoring node