This repository has been archived by the owner on Sep 16, 2024. It is now read-only.

Use minishift in CI #214

Closed
wants to merge 5 commits into from

Conversation

machacekondra
Contributor

What this PR does / why we need it:

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

Special notes for your reviewer:

Release note:



@vatsalparekh left a comment


This PR is needed to run changes of #212 related to openshift templates

tests/playbooks/e2e.yaml Outdated
@mmazur
Contributor

mmazur commented Apr 25, 2019

I've bumped the kubevirt_vm call that times out to have a timeout value of 5 minutes instead of 2 minutes and it still fails. Looks like something else is wrong. Let's see…
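For context, a bump like that would presumably look something like the following sketch, assuming the timeout in question is the module's wait_timeout parameter; the VM name and namespace are placeholders, not from this PR:

```yaml
# Sketch only: 'testvm' and 'default' are illustrative placeholders.
- kubevirt_vm:
    state: running
    name: testvm
    namespace: default
    wait: yes
    wait_timeout: 300  # 5 minutes instead of the 2-minute default
```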

@mmazur
Contributor

mmazur commented Apr 26, 2019

I'm pretty sure I managed to get the vms running by messing a bit with how kubevirt is getting deployed, but now I've hit an interesting problem. When an ephemeral vm exists and is running (I'm guessing this isn't just limited to ephemeral vms), the task below should return success with changed==false.

kubevirt_vm:
  state: present
  name: xyz
  ephemeral: yes

And it does on kubevirt 1.6.0 running on vanilla k8s 1.13.3. But it doesn't on the same kubevirt running on the latest minishift (or a not-so-latest minishift). And this is the exact issue currently reported by travis. I suspect some weird behavior in the openshift lib layer.
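A minimal idempotence assertion along those lines (a sketch reusing the task above, mirroring the failed_when pattern used elsewhere in these tests) would be:

```yaml
# Sketch: re-running against an already-running ephemeral VM should
# report success with changed == false.
- kubevirt_vm:
    state: present
    name: xyz
    ephemeral: yes
  register: result
  failed_when: result.changed
```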

@mmazur
Contributor

mmazur commented Apr 26, 2019

Blocked by: ansible/ansible#55817

@mmazur
Contributor

mmazur commented Apr 29, 2019

Ugh, so openshift 3.11 ships kubernetes 1.11 and there's a bug in that k8s that only got fixed in 1.12 (kubernetes/kubernetes#67562), which is what we're hitting here.

I'll see if I can get the ansible/k8s people to add a workaround for this (ideally before 2.8 ships).

@mmazur mmazur force-pushed the use_minishift branch 6 times, most recently from 1e83603 to 6f6c35b on April 29, 2019 14:07
@mmazur
Contributor

mmazur commented Apr 29, 2019

😠 !#@$!^#$ 😡

Works… kind of. I'm pretty sure this will be even more flaky than the minikube environment, so expect random stuff to time out for no reason. Might be a problem for us, might not. If it is, we can investigate something other than travis at some later time.

For now, @machacekondra @pkliczewski @vatsalparekh please review and merge.

.gitmodules
@@ -1,3 +1,4 @@
 [submodule "ci"]
 path = ci
-url = https://github.com/fabiand/traviskube
+url = https://github.com/mmazur/traviskube


I understand there are bugs. Please paste a reference or describe why we are using our fork.

Contributor


My changes are very trivial, just updates here and there. The only reason is the RH Summit. Fabian won't be looking at low-prio PRs for some time and I didn't want to get stuck on this. As soon as Fabian is back to his usual responsive self, I'm planning on getting my changes merged and reverting this back to his repo.


Maybe we can move it under kubevirt org

.travis.yml Outdated
- name: "minishift"
env:
- CPLATFORM=minishift
- KUBEVIRT=v0.16.0


v0.16.2 is the latest release.

Contributor


0.16.1 and 0.16.2 are tagged as pre-release, which is why I didn't use them. Dunno if those tags can be ignored or if they actually mean something.


There is v0.17.0 available, please change.

 with_items:
 - { name: 'eph1', state: 'present' }
 - { name: 'eph1', state: 'running' }
 - { name: 'eph2', state: 'present' }
 - { name: 'eph2', state: 'running' }
 register: result
-failed_when: result.changed != false or result.kubevirt_vm.status.phase != 'Running'
+failed_when: result.kubevirt_vm.status.phase != 'Running'
+# failed_when: result.changed != false or result.kubevirt_vm.status.phase != 'Running'


why do we need this commented line?

Contributor


There's a 4 line explanation of what this is for at the start of the playbook.

I did it this way on purpose. If I'm making a temporary workaround which I want to get removed as soon as possible, then I want to very clearly mark all the places in the code where the workaround applies. The only chance of someone updating this in a few months so that the full tests are used again is to make it an eyesore that's hard not to notice should anyone look at this file.

You know how this stuff works. If I hide these lines in git, then no one will ever bother to try to fish out the originals and figure out what to do with them, so the "workaround" will just become the permanent tests until the heat death of the universe. But if the originals are still there in plain sight, there's a chance…

 with_items:
 - { name: 'eph1', state: 'stopped' }
 - { name: 'eph1', state: 'absent' }
 - { name: 'eph2', state: 'stopped' }
 - { name: 'eph2', state: 'absent' }
 register: result
-failed_when: result.changed != false or 'spec' in result.kubevirt_vm or 'status' in result.kubevirt_vm
+failed_when: ('spec' in result.kubevirt_vm or 'status' in result.kubevirt_vm)
+# failed_when: result.changed != false or 'spec' in result.kubevirt_vm or 'status' in result.kubevirt_vm


Same here

 with_items:
 - { name: 'vm1', state: 'stopped' }
 - { name: 'vm1', state: 'present' }
 - { name: 'vm2', state: 'stopped' }
 - { name: 'vm2', state: 'present' }
 register: result
-failed_when: result.changed != false or 'status' in result.kubevirt_vm
+failed_when: ('status' in result.kubevirt_vm)
+# failed_when: result.changed != false or 'status' in result.kubevirt_vm


I see there are more. We use version control precisely so we don't have to keep things like this.

@machacekondra
Contributor Author

I would prefer stable tests without template testing over an unstable environment with template testing.

@mmazur
Contributor

mmazur commented Apr 30, 2019

It's my guess that the reliability issues we've had with the test env were due to k8s/kubevirt, and non-accelerated vms in particular, being rather heavy, so whenever there was a cpu shortage we got random timeouts.

And since k8s is lighter than openshift, I suspect that might be even more of a problem here.

But I don't know for sure and this patch is already done. If it turns out to be a problem, we can revert this easily or consider moving away from travis. Or maybe it'll be enough to increase a few timeout values. I'll keep an eye on it going forward.

echo "$(minikube ip) minikube" | sudo tee -a /etc/hosts

## No HW virt
kubectl create configmap -n kubevirt kubevirt-config --from-literal debug.useEmulation=true
Contributor Author


We don't need this anymore?

Contributor Author


Ah, ok, you moved most of this to your repo.

@mmazur
Contributor

mmazur commented May 21, 2019

I'll open a separate PR that adds jenkins support.

@mmazur mmazur closed this May 21, 2019