Commit

fix(README): resolving a merge conflict
Signed-off-by: Cryptophobia <aouzounov@gmail.com>
Cryptophobia committed Oct 1, 2020
2 parents 5930851 + 523ef79 commit 5613bc8
Showing 11 changed files with 93 additions and 86 deletions.
13 changes: 6 additions & 7 deletions README.md
@@ -4,11 +4,10 @@
[![Build Status](https://travis-ci.org/teamhephy/controller.svg?branch=master)](https://travis-ci.org/teamhephy/controller)
[![codecov.io](https://codecov.io/github/deis/controller/coverage.svg?branch=master)](https://codecov.io/github/deis/controller?branch=master)
[![Docker Repository on Quay](https://quay.io/repository/deisci/controller/status "Docker Repository on Quay")](https://quay.io/repository/deisci/controller)
[![Dependency Status](https://www.versioneye.com/user/projects/5863f1de6f4bf900128fa95a/badge.svg?style=flat)](https://www.versioneye.com/user/projects/5863f1de6f4bf900128fa95a)

Deis (pronounced DAY-iss) Workflow is an open source Platform as a Service (PaaS) that adds a developer-friendly layer to any [Kubernetes](http://kubernetes.io) cluster, making it easy to deploy and manage applications on your own servers.

For more information about the Deis Workflow, please visit the main project page at https://github.com/deisthree/workflow.
For more information about the Deis Workflow, please visit the main project page at https://github.com/teamhephy/workflow.

We welcome your input! If you have feedback, please [submit an issue][issues]. If you'd like to participate in development, please read the "Development" section below and [submit a pull request][prs].

@@ -47,7 +46,7 @@ You'll want to test your code changes interactively in a working Kubernetes clus

### Workflow Installation

After you have a working Kubernetes cluster, you're ready to [install Workflow](https://deis.com/docs/workflow/installing-workflow/).
After you have a working Kubernetes cluster, you're ready to [install Workflow](https://docs.teamhephy.com/installing-workflow/).

## Testing Your Code

@@ -77,8 +76,8 @@ kubectl get pod --namespace=deis -w | grep deis-controller
```

[install-k8s]: https://kubernetes.io/docs/setup/pick-right-solution
[issues]: https://github.com/deisthree/controller/issues
[prs]: https://github.com/deisthree/controller/pulls
[workflow]: https://github.com/deisthree/workflow
[issues]: https://github.com/teamhephy/controller/issues
[prs]: https://github.com/teamhephy/controller/pulls
[workflow]: https://github.com/teamhephy/workflow
[Docker]: https://www.docker.com/
[v2.18]: https://github.com/deisthree/workflow/releases/tag/v2.18.0
[v2.18]: https://github.com/teamhephy/workflow/releases/tag/v2.21.4
4 changes: 4 additions & 0 deletions charts/controller/templates/controller-deployment.yaml
@@ -134,6 +134,10 @@ spec:
secretKeyRef:
name: database-creds
key: password
{{- if (.Values.deis_ignore_scheduling_failure) }}
- name: DEIS_IGNORE_SCHEDULING_FAILURE
value: "{{ .Values.deis_ignore_scheduling_failure }}"
{{- end }}
- name: RESERVED_NAMES
value: "deis, deis-builder, deis-workflow-manager, grafana"
- name: WORKFLOW_NAMESPACE
3 changes: 2 additions & 1 deletion rootfs/Dockerfile
@@ -1,4 +1,4 @@
FROM quay.io/deis/base:v0.3.6
FROM hephy/base:v0.4.1

RUN adduser --system \
--shell /bin/bash \
@@ -17,6 +17,7 @@ RUN buildDeps='gcc libffi-dev libpq-dev libldap2-dev libsasl2-dev python3-dev py
libpq5 \
libldap-2.4 \
python3-minimal \
python3-distutils \
# cryptography package needs pkg_resources
python3-pkg-resources && \
ln -s /usr/bin/python3 /usr/bin/python && \
4 changes: 2 additions & 2 deletions rootfs/Dockerfile.test
@@ -1,4 +1,4 @@
FROM quay.io/deis/base:v0.3.6
FROM hephy/base:v0.4.1

RUN adduser --system \
--shell /bin/bash \
@@ -49,7 +49,7 @@ RUN buildDeps='gcc libffi-dev libpq-dev libldap2-dev libsasl2-dev python3-dev py
WORKDIR /app

# test-unit additions to the main Dockerfile
ENV PGBIN=/usr/lib/postgresql/9.5/bin PGDATA=/var/lib/postgresql/data
ENV PGBIN=/usr/lib/postgresql/10/bin PGDATA=/var/lib/postgresql/data
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
git \
6 changes: 6 additions & 0 deletions rootfs/api/settings/production.py
@@ -218,6 +218,12 @@
'filters': ['require_debug_true'],
'propagate': True,
},
'django_auth_ldap': {
'handlers': ['console'],
'level': 'DEBUG',
'filters': ['require_debug_true'],
'propagate': False,
},
'api': {
'handlers': ['console'],
'propagate': True,
2 changes: 1 addition & 1 deletion rootfs/requirements.txt
@@ -1,6 +1,6 @@
# Deis controller requirements
backoff==1.4.3
django==1.11.23
django==1.11.29
django-auth-ldap==1.2.15
django-cors-middleware==1.3.1
django-guardian==1.4.9
57 changes: 38 additions & 19 deletions rootfs/scheduler/resources/deployment.py
@@ -139,6 +139,11 @@ def create(self, namespace, name, image, entrypoint, command, spec_annotations,
return response

def update(self, namespace, name, image, entrypoint, command, spec_annotations, **kwargs):
# Set the replicas value to the current replicas of the deployment.
# This avoids resetting the replicas which causes disruptions during the deployment.
deployment = self.deployment.get(namespace, name).json()
current_replicas = int(deployment['spec']['replicas'])
kwargs['replicas'] = current_replicas
manifest = self.manifest(namespace, name, image,
entrypoint, command, spec_annotations, **kwargs)
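
The new comment states the intent; to spell it out: rebuilding a manifest with a stale or default replica count and applying it would override whatever replica count the cluster is currently running (set, for instance, by an HPA or a manual scale), disrupting pods mid-rollout. A hypothetical sketch of the failure mode, with invented values:

```python
# Hypothetical illustration: the live Deployment runs 5 replicas, but the
# controller rebuilds its manifest from app config with a stale default of 1.
live_deployment = {"spec": {"replicas": 5}}    # what the cluster runs now
rebuilt_manifest = {"spec": {"replicas": 1}}   # regenerated, stale count

# Without the fix, applying rebuilt_manifest would scale 5 -> 1 mid-deploy.
# Pinning replicas from the live object first preserves the current scale:
rebuilt_manifest["spec"]["replicas"] = int(live_deployment["spec"]["replicas"])
assert rebuilt_manifest["spec"]["replicas"] == 5
```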

@@ -277,9 +282,9 @@ def are_replicas_ready(self, namespace, name):

if (
'unavailableReplicas' in status or
('replicas' not in status or status['replicas'] is not desired) or
('updatedReplicas' not in status or status['updatedReplicas'] is not desired) or
('availableReplicas' not in status or status['availableReplicas'] is not desired)
('replicas' not in status or status['replicas'] != desired) or
('updatedReplicas' not in status or status['updatedReplicas'] != desired) or
('availableReplicas' not in status or status['availableReplicas'] != desired)
):
return False, pods
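
Why switching `is not` to `!=` is a real bug fix rather than a style change: `is` compares object identity, not value. CPython happens to cache small integers (roughly -5 through 256), so an identity check against a replica count passes for small deployments and silently fails once counts leave the cached range. A minimal sketch of the pitfall, with invented values:

```python
# CPython interns small ints (roughly -5..256), so identity comparisons on
# counts can *appear* correct for small values and break for larger ones.
desired = 300
observed = int("300")  # e.g. a count parsed out of an API response at runtime

print(observed == desired)  # True: value equality, the intended check
print(observed is desired)  # Typically False on CPython: two distinct objects
```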

Expand Down Expand Up @@ -380,22 +385,36 @@ def _check_for_failed_events(self, namespace, labels):
Request for new ReplicaSet of Deployment and search for failed events involved by that RS
Raises: KubeException when RS have events with FailedCreate reason
"""
response = self.rs.get(namespace, labels=labels)
data = response.json()
fields = {
'involvedObject.kind': 'ReplicaSet',
'involvedObject.name': data['items'][0]['metadata']['name'],
'involvedObject.namespace': namespace,
'involvedObject.uid': data['items'][0]['metadata']['uid'],
}
events_list = self.ns.events(namespace, fields=fields).json()
events = events_list.get('items', [])
if events is not None and len(events) != 0:
for event in events:
if event['reason'] == 'FailedCreate':
log = self._get_formatted_messages(events)
self.log(namespace, log)
raise KubeException(log)
max_retries = 3
retry_sleep_sec = 3.0
for try_ in range(max_retries):
response = self.rs.get(namespace, labels=labels)
data = response.json()
try:
fields = {
'involvedObject.kind': 'ReplicaSet',
'involvedObject.name': data['items'][0]['metadata']['name'],
'involvedObject.namespace': namespace,
'involvedObject.uid': data['items'][0]['metadata']['uid'],
}
except Exception as e:
if try_ + 1 < max_retries:
self.log(namespace,
"Got an empty ReplicaSet list. Trying one more time. {}".format(
json.dumps(labels)))
time.sleep(retry_sleep_sec)
continue
self.log(namespace, "Did not find the ReplicaSet for {}".format(
json.dumps(labels)), "WARN")
raise e
events_list = self.ns.events(namespace, fields=fields).json()
events = events_list.get('items', [])
if events is not None and len(events) != 0:
for event in events:
if event['reason'] == 'FailedCreate':
log = self._get_formatted_messages(events)
self.log(namespace, log)
raise KubeException(log)
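
Two details of the retry block are worth noting: when the ReplicaSet list is still empty, `data['items'][0]` raises IndexError, which the broad `except Exception` converts into a sleep-and-retry; and the block assumes `json` and `time` are imported at module scope, which this hunk does not show. A condensed, hypothetical sketch of the same pattern (helper names invented):

```python
import time

def fields_for_first_rs(get_rs, namespace, labels, retries=3, sleep_sec=3.0):
    """Hypothetical condensation of the retry in _check_for_failed_events."""
    for attempt in range(retries):
        items = get_rs(namespace, labels=labels).json().get("items", [])
        try:
            rs = items[0]  # IndexError while the ReplicaSet does not exist yet
            return {
                "involvedObject.kind": "ReplicaSet",
                "involvedObject.name": rs["metadata"]["name"],
                "involvedObject.namespace": namespace,
                "involvedObject.uid": rs["metadata"]["uid"],
            }
        except (IndexError, KeyError):
            if attempt + 1 < retries:
                time.sleep(sleep_sec)  # give the ReplicaSet time to appear
                continue
            raise
```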

@staticmethod
def _get_formatted_messages(events):
8 changes: 5 additions & 3 deletions rootfs/scheduler/resources/pod.py
@@ -552,7 +552,7 @@ def events(self, pod):
if not events:
events = []
# make sure that events are sorted
events.sort(key=lambda x: x['lastTimestamp'])
events.sort(key=lambda x: x['lastTimestamp'] or '')
return events
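
Context for this one-line fix: Python 3 refuses ordering comparisons between `None` and `str`, so a single event missing `lastTimestamp` would crash the sort with a TypeError; coalescing to `''` keeps the sort key orderable and places timestamp-less events first. A quick illustration:

```python
events = [
    {"lastTimestamp": "2020-10-01T00:00:01Z"},
    {"lastTimestamp": None},  # e.g. an event the API server never stamped
]

try:
    events.sort(key=lambda x: x["lastTimestamp"])
except TypeError as exc:
    print(exc)  # '<' not supported between instances of 'NoneType' and 'str'

# Coalescing None to "" restores a total order; unstamped events sort first.
events.sort(key=lambda x: x["lastTimestamp"] or "")
```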

def _handle_pod_errors(self, pod, reason, message):
@@ -577,9 +577,11 @@ def _handle_pod_errors(self, pod, reason, message):
"ErrImageNeverPull": "ErrImageNeverPullPolicy",
# Not including this one for now as the message is not useful
# "BackOff": "BackOffPullImage",
# FailedScheduling relates limits
"FailedScheduling": "FailedScheduling",
}
# We want to be able to ignore pod scheduling errors as they might be temporary
if not os.environ.get("DEIS_IGNORE_SCHEDULING_FAILURE", False):
# FailedScheduling relates limits
event_errors["FailedScheduling"] = "FailedScheduling"

# Nicer error than from the event
# Often this gets to ImageBullBackOff before we can introspect tho
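
A subtlety in the new environment gate: `os.environ.get` returns a string, and any non-empty string (including "false") is truthy in Python, so merely setting the variable enables the ignore behavior. The chart template's `{{- if }}` guard, which omits the variable entirely unless the value is truthy, is what preserves the default. A small sketch:

```python
import os

# Any non-empty value flips the gate, even the string "false":
os.environ["DEIS_IGNORE_SCHEDULING_FAILURE"] = "false"
print(bool(os.environ.get("DEIS_IGNORE_SCHEDULING_FAILURE", False)))  # True

# Only an unset (or empty) variable keeps FailedScheduling treated as fatal:
del os.environ["DEIS_IGNORE_SCHEDULING_FAILURE"]
print(bool(os.environ.get("DEIS_IGNORE_SCHEDULING_FAILURE", False)))  # False
```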
28 changes: 15 additions & 13 deletions rootfs/scheduler/tests/test_deployments.py
@@ -102,12 +102,12 @@ def test_deployment_api_version_1_9_and_up(self):
deployment.version = mock.MagicMock(return_value=parse(canonical))
actual = deployment.api_version
self.assertEqual(
expected,
actual,
"{} breaks - expected {}, got {}".format(
canonical,
expected,
actual,
"{} breaks - expected {}, got {}".format(
canonical,
expected,
actual))
actual))

def test_deployment_api_version_1_8_and_lower(self):
cases = ['1.8', '1.7', '1.6', '1.5', '1.4', '1.3', '1.2']
@@ -120,12 +120,12 @@ def test_deployment_api_version_1_8_and_lower(self):
deployment.version = mock.MagicMock(return_value=parse(canonical))
actual = deployment.api_version
self.assertEqual(
expected,
actual,
"{} breaks - expected {}, got {}".format(
canonical,
expected,
actual,
"{} breaks - expected {}, got {}".format(
canonical,
expected,
actual))
actual))

def test_create_failure(self):
with self.assertRaises(
@@ -158,11 +158,13 @@ def test_update(self):
deployment = self.scheduler.deployment.get(self.namespace, name).json()
self.assertEqual(deployment['spec']['replicas'], 4, deployment)

# emulate scale without calling scale
self.update(self.namespace, name, replicas=2)
# update the version
new_version = 'v1024'
self.update(self.namespace, name, version=new_version)

deployment = self.scheduler.deployment.get(self.namespace, name).json()
self.assertEqual(deployment['spec']['replicas'], 2, deployment)
self.assertEqual(deployment['spec']['template']['metadata']['labels']['version'],
new_version, deployment)

def test_delete_failure(self):
# test failure
27 changes: 7 additions & 20 deletions rootfs/scheduler/tests/test_horizontalpodautoscaler.py
@@ -62,27 +62,14 @@ def update(self, namespace=None, name=generate_random_name(), **kwargs):
self.assertEqual(horizontalpodautoscaler.status_code, 200, horizontalpodautoscaler.json()) # noqa
return name

def update_deployment(self, namespace=None, name=generate_random_name(), **kwargs):
def scale_deployment(self, namespace=None, name=generate_random_name(), replicas=1):
"""
Helper function to update and verify a deployment on the namespace
Helper function to scale the replicas of a deployment
"""
namespace = self.namespace if namespace is None else namespace
# these are all required even if it is kwargs...
d_kwargs = {
'app_type': kwargs.get('app_type', 'web'),
'version': kwargs.get('version', 'v99'),
'replicas': kwargs.get('replicas', 4),
'pod_termination_grace_period_seconds': 2,
'image': 'quay.io/fake/image',
'entrypoint': 'sh',
'command': 'start',
'spec_annotations': kwargs.get('spec_annotations', {}),
}

deployment = self.scheduler.deployment.update(namespace, name, **d_kwargs)
data = deployment.json()
self.assertEqual(deployment.status_code, 200, data)
return name
self.scheduler.deployment.scale(namespace, name, image=None,
entrypoint=None, command=None,
replicas=replicas)

def test_create_failure(self):
with self.assertRaises(
@@ -147,7 +134,7 @@ def test_update(self):
self.assertEqual(deployment['status']['availableReplicas'], 3)

# scale deployment to 1 (should go back to 3)
self.update_deployment(self.namespace, name, replicas=1)
self.scale_deployment(self.namespace, name, replicas=1)

# check the deployment object
deployment = self.scheduler.deployment.get(self.namespace, name).json()
Expand All @@ -158,7 +145,7 @@ def test_update(self):
self.assertEqual(deployment['status']['availableReplicas'], 3)

# scale deployment to 6 (should go back to 4)
self.update_deployment(self.namespace, name, replicas=6)
self.scale_deployment(self.namespace, name, replicas=6)

# check the deployment object
deployment = self.scheduler.deployment.get(self.namespace, name).json()
27 changes: 7 additions & 20 deletions rootfs/scheduler/tests/test_horizontalpodautoscaler_12_lower.py
@@ -66,27 +66,14 @@ def update(self, namespace=None, name=generate_random_name(), **kwargs):
self.assertEqual(horizontalpodautoscaler.status_code, 200, horizontalpodautoscaler.json()) # noqa
return name

def update_deployment(self, namespace=None, name=generate_random_name(), **kwargs):
def scale_deployment(self, namespace=None, name=generate_random_name(), replicas=1):
"""
Helper function to update and verify a deployment on the namespace
Helper function to scale the replicas of a deployment
"""
namespace = self.namespace if namespace is None else namespace
# these are all required even if it is kwargs...
kwargs = {
'app_type': kwargs.get('app_type', 'web'),
'version': kwargs.get('version', 'v99'),
'replicas': kwargs.get('replicas', 4),
'pod_termination_grace_period_seconds': 2,
'image': 'quay.io/fake/image',
'entrypoint': 'sh',
'command': 'start',
'spec_annotations': kwargs.get('spec_annotations', {}),
}

deployment = self.scheduler.deployment.update(namespace, name, **kwargs)
data = deployment.json()
self.assertEqual(deployment.status_code, 200, data)
return name
self.scheduler.deployment.scale(namespace, name, image=None,
entrypoint=None, command=None,
replicas=replicas)

def test_create_failure(self):
with self.assertRaises(
@@ -151,7 +138,7 @@ def test_update(self):
self.assertEqual(deployment['status']['availableReplicas'], 3)

# scale deployment to 1 (should go back to 3)
self.update_deployment(self.namespace, name, replicas=1)
self.scale_deployment(self.namespace, name, replicas=1)

# check the deployment object
deployment = self.scheduler.deployment.get(self.namespace, name).json()
Expand All @@ -162,7 +149,7 @@ def test_update(self):
self.assertEqual(deployment['status']['availableReplicas'], 3)

# scale deployment to 6 (should go back to 4)
self.update_deployment(self.namespace, name, replicas=6)
self.scale_deployment(self.namespace, name, replicas=6)

# check the deployment object
deployment = self.scheduler.deployment.get(self.namespace, name).json()
