Description
I'm trying to create a kOps cluster in a single AZ in AWS (to avoid inter-AZ data transfer costs). A single AZ is enough to meet our availability requirements.
However, I still need more than one master node, so I set `minSize: 3` and `maxSize: 3` on the existing master instanceGroup spec. However, kOps appears to have a built-in limitation that allows only one instance per instanceGroup for master nodes:
kops/pkg/apis/kops/validation/instancegroup.go, lines 52 to 56 at 61e7ac2
I could create multiple instanceGroups in the same AZ, but that would mean duplicating the same config, and it just doesn't feel like the right approach.
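For reference, the workaround I'm describing would look roughly like this; the `masters-a`/`masters-b`/`masters-c` names are illustrative, and each IG would be a near-identical copy pinned to one instance in the same subnet:

```yaml
# Hypothetical workaround sketch: one of three near-identical master
# instanceGroups, all in the same AZ (names are illustrative).
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: {{.kops.clusterName}}.{{.kops.dnsZone}}
  name: masters-a
spec:
  machineType: {{ .kops.masters.machineType }}
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - {{.kops.awsRegion}}a
# masters-b and masters-c would differ only in metadata.name, and each
# etcdMember would then reference its own instanceGroup instead of a
# shared "masters" group.
```

This triples the amount of config to maintain for what is logically one group of identical machines, which is why it doesn't seem like the right answer.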
Any advice/workarounds welcome. Happy to make a pull request to fix it if required.
My configuration:
```yaml
# master instance group snippet
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: {{.kops.clusterName}}.{{.kops.dnsZone}}
  name: masters
spec:
  image: {{ ChannelRecommendedImage .kops.cloudProvider .kops.kubernetesVersion .kops.architecture }}
  kubernetesVersion: {{.kops.kubernetesVersion}}
  machineType: {{ .kops.masters.machineType }}
{{ if .kops.masters.spot }}
  maxPrice: {{ .kops.masters.maxPrice | quote }}
{{ end }}
  maxSize: 3
  minSize: 3
  role: Master
  rootVolumeSize: 80
  subnets:
  - {{.kops.awsRegion}}a
```

```yaml
# etcd config snippet
etcdClusters:
- cpuRequest: 200m
  etcdMembers:
  - encryptedVolume: true
    instanceGroup: masters
    name: a
  - encryptedVolume: true
    instanceGroup: masters
    name: b
  - encryptedVolume: true
    instanceGroup: masters
    name: c
```
Also, is this a bug?