Support for testing ovirt on OpenShift in OST
Description
Activity

Eyal Edri June 10, 2018 at 3:35 PM
The ovirt-containers initiative was dropped, so it no longer makes sense to develop a suite to run oVirt on OpenShift in OST.
There are other initiatives like running OpenShift on oVirt which are tracked in other tickets.

Eyal Edri July 18, 2017 at 8:50 AM
The best way to find the right Ansible playbook is to look in the origin docs:
https://github.com/openshift/origin

Eyal Edri July 4, 2017 at 2:55 PM
Can we close this ticket following our last conversation, or should we update it with new requirements?

Eyal Edri June 20, 2017 at 7:53 AM
This will probably require installing OpenShift via Ansible, and since we already support Ansible in Lago, we need to choose the right playbook from Galaxy.
Proposed role: https://github.com/openshift/openshift-ansible/tree/master/playbooks/byo
We should run config.yml.
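As a rough sketch of what driving the BYO playbook might look like — the hostnames and variables below are placeholders for illustration, not taken from an actual OST setup:

```ini
; Minimal BYO inventory sketch (hostnames and variable values are
; placeholders, not a real OST environment).
[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_user=root
openshift_deployment_type=origin

[masters]
master1.example.com

[nodes]
master1.example.com
node1.example.com
```

With an inventory like this in place, the run would presumably be something along the lines of `ansible-playbook -i inventory playbooks/byo/config.yml` from an openshift-ansible checkout.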

Eyal Edri March 12, 2017 at 9:16 AM
Still trying to move it to blocked status; please ignore for now.

Yaniv Kaul January 29, 2017 at 9:55 PM
0. I think it's time to decide whether we want to use Lago for KubeVirt or not. I prefer Vagrant - it's more common and works out of the box on Ubuntu and possibly other operating systems (does it have nested virt on Mac or Windows?). I'd like to believe that moving to Vagrant is actually a good opportunity to start afresh. Lago is not killed by it and will remain relevant for OST.
1. Ansible is supported today in Vagrant. I believe a fair start of support exists in Lago as well - but as Barak pointed out, we probably need a bit of cloud-init work too - not sure.
Engine is not containerized for real yet. Once it is, we'll probably deploy it via standard pod deployment options (saw ansible-containers today, for example - sounds neat).

Eyal Edri January 29, 2017 at 6:32 PM
This is the tool we're using for tracking issues for OST, so it's the right place; if needed we can have a Trello card pointing to this ticket.
0. Lago. I don't think the fact that Vagrant was used for the KubeVirt POC is, on its own, a major advantage that justifies redoing all the work we've done to make OST run with Lago.
Moving to Vagrant would mean investing a lot of effort and resources (which we don't have, looking at the Lago+OST headcount ATM) into rebuilding something we already have working, for little value IMO, and it would significantly delay the work to support KubeVirt.
We can consider it once we have a full-time maintainer for Lago/OST.
1. Once we support Ansible as deploy scripts in OST, which shouldn't be hard after the recent merges to Lago, we can just use the existing Kubernetes playbooks; I'm not sure how using Vagrant makes it simpler.
Ansible for the engine - not sure how/what we'll need, but given the pod definitions we were provided, I'm guessing that once you have Kubernetes it's just a matter of running/deploying the engine container? (So per the request we should have engine & vdsm available as containers; otherwise I didn't understand the request.)

Yaniv Kaul January 29, 2017 at 5:19 PM
- too many items there, I think. We should probably document it in KubeVirt's Trello and not here.
0. Do we use Lago as is, or vagrant? Unclear to me, and I do see a nice advantage for the community using vagrant.
1. Ansible installing K8S - makes sense to me. This is quite integrated into Vagrant and is easy to do in Lago (though it requires Ansible on host L0). Do we have Ansible to install Engine? Quite possibly from QCI - we should utilize that - but it's not within a container - Engine is not yet containerized (production-ready, at least).
3. Good point about Add Host. Dunno yet...
4. ACK
5. ACK.

Eyal Edri January 29, 2017 at 3:56 PM
Just to emphasize the scope and requirements, let me reiterate what we agreed on in the email:
A new OST suite will be created which will run the same tests as today's basic suite; the differences will be:
1. Replace deploy scripts for engine + storage with Ansible playbook which will install Kubernetes
2. The Ansible playbook will also need to deploy engine + vdsm as containers in pods once Kubernetes is running
3. Not sure how 'add host' should work - will VDSM come ready in a container and already connected to the engine, or will we still need to run an 'add host' command (or some alternative) to add it?
4. Kubernetes will be running on VMs created by Lago
5. We will consider using Atomic as the OS for the VMs, but CentOS should work as a starting point as well.
Please confirm I'm not talking nonsense and that we can continue moving towards this goal.
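Points 1-2 above could be sketched as an Ansible playbook along these lines. The host group, package list, and pod definition file names are assumptions for illustration — they are not the real ovirt-container-* artifacts:

```yaml
# Sketch only: install Kubernetes, then deploy the engine + vdsm pods
# (points 1-2 of the plan). Group and file names are placeholders.
- hosts: lago_vms          # hypothetical inventory group of Lago-created VMs
  become: yes
  tasks:
    - name: Install Kubernetes packages
      yum:
        name:
          - kubeadm
          - kubelet
          - kubectl
        state: present

    - name: Deploy engine and vdsm pods once the cluster is up
      command: kubectl create -f {{ item }}
      with_items:
        - engine-pod.yaml   # placeholder for the ovirt-container-engine definition
        - node-pod.yaml     # placeholder for the ovirt-container-node definition
```

In practice the Kubernetes install would more likely reuse an existing Galaxy role rather than raw yum tasks, as discussed in the later comments.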

Eyal Edri January 29, 2017 at 8:52 AM
We need support for Ansible Playbooks before we start implementing this suite.

Yaniv Kaul January 28, 2017 at 4:06 PM
- I strongly agree. I think Lago has just gained good support for providing the Ansible hosts list (via https://github.com/lago-project/lago/pull/428 ).
This provides the basic infrastructure needed from the host (L0) to do things via Ansible.
I'd argue, though, that we need to use Ansible from L1, not from L0. So I'd rather we copy the relevant Ansible playbooks, scripts, and whatnot, and execute them from L1.
I'm not sure I'd use Atomic right away, though I agree it's a goal.
Details
Assignee: Gal Ben Haim (Deactivated)
Reporter: Fabian Deutsch (Deactivated)
Priority: Highest
Hey,
Yaniv Bronheim is building containers for vdsm and engine.
Lago should become capable of running OST against this setup.
The basic flow is:
1. Normal CentOS
2. Install Kubernetes
3. Deploy engine and vdsm pods
The pod definitions are here:
https://gerrit.ovirt.org/gitweb?p=ovirt-container-engine.git;a=tree
https://gerrit.ovirt.org/gitweb?p=ovirt-container-node.git;a=tree
A similar script can be found here:
https://github.com/kubevirt/demo/blob/master/data/bootstrap-kubevirt.sh
But this script deploys KubeVirt instead of the engine + vdsm containers.
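The three-step flow above can be sketched as a shell script. This is only a shape sketch: the pod definition written here is a placeholder standing in for the real ones in the ovirt-container-engine / ovirt-container-node repos, the image name is hypothetical, and the actual Kubernetes install is left as a comment:

```shell
#!/bin/bash
# Sketch of the bootstrap flow (steps match the list above).
set -e

# 1. Start from a normal CentOS VM (nothing to do in this sketch).

# 2. Install Kubernetes. On CentOS this would be roughly:
#      yum install -y kubeadm kubelet kubectl
#      kubeadm init
#    (kept as a comment so the sketch stays side-effect free)

# 3. Deploy the engine and vdsm pods. Below is a placeholder pod
#    definition, standing in for the real ovirt-container-engine one:
cat > engine-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ovirt-engine
spec:
  containers:
  - name: engine
    image: ovirt/engine:latest   # hypothetical image name
EOF

# With a running cluster, this would then be:
#   kubectl create -f engine-pod.yaml
echo "wrote engine-pod.yaml"
```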