Guest name 'TestNode-node' is already in use.

Description

http://jenkins.ovirt.org/job/ovirt-node-ng_master_build-artifacts-el7-x86_64/138/

======================================================================
ERROR: test suite for <class 'testSanity.TestNode'>
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/nose/suite.py", line 208, in run
self.setUp()
File "/usr/lib/python2.7/site-packages/nose/suite.py", line 291, in setUp
self.setupContext(ancestor)
File "/usr/lib/python2.7/site-packages/nose/suite.py", line 314, in
setupContext
try_run(context, names)
File "/usr/lib/python2.7/site-packages/nose/util.py", line 469, in try_run
return func()
File
"/home/jenkins/workspace/ovirt-node-ng_master_build-artifacts-el7-x86_64/ovirt-node-ng/tests/testVirt.py",
line 150, in setUpClass
77)
File
"/home/jenkins/workspace/ovirt-node-ng_master_build-artifacts-el7-x86_64/ovirt-node-ng/tests/testVirt.py",
line 88, in _start_vm
dom = VM.create(name, img, ssh_port=ssh_port, memory_gb=memory_gb)
File
"/home/jenkins/workspace/ovirt-node-ng_master_build-artifacts-el7-x86_64/ovirt-node-ng/tests/virt.py",
line 217, in create
dom = sh.virt_install(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/sh.py", line 1021, in _call_
return RunningCommand(cmd, call_args, stdin, stdout, stderr)
File "/usr/lib/python2.7/site-packages/sh.py", line 486, in _init_
self.wait()
File "/usr/lib/python2.7/site-packages/sh.py", line 500, in wait
self.handle_command_exit_code(exit_code)
File "/usr/lib/python2.7/site-packages/sh.py", line 516, in
handle_command_exit_code
raise exc(self.ran, self.process.stdout, self.process.stderr)
ErrorReturnCode_1:

RAN: '/bin/virt-install --import --print-xml --network=user,model=virtio
--noautoconsole --memory=2048 --rng=/dev/random --memballoon=virtio
--cpu=host --vcpus=4 --graphics=vnc --watchdog=default,action=poweroff
--serial=pty
--disk=path=/var/tmp/TestNode-node.qcow2,bus=virtio,format=qcow2,driver_type=qcow2,discard=unmap,cache=unsafe
--check=all=off --channel=unix,target_type=virtio,name=local.test.0
--name=TestNode-node'

STDOUT:

STDERR:
ERROR Guest name 'TestNode-node' is already in use.

This seems to be a run in an unclean environment. Not sure what caused it.


Sandro Bonazzola

Activity

Former user June 19, 2018 at 2:40 PM

Haven't seen the issue in a while, closing the idle ticket.

Former user January 10, 2017 at 5:17 PM

Looking at failures in another related job, I saw leftover VMs even though the post-run cleanup step is there:
http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.0_build-artifacts-el7-x86_64/210/consoleFull

16:58:31 # Drop all left over libvirt domains
16:58:31 for UUID in $(virsh list --all --uuid); do
16:58:31 virsh destroy $UUID || :
16:58:31 sleep 2
16:58:31 virsh undefine --remove-all-storage --storage vda --snapshots-metadata $UUID || :
16:58:31 done
...
16:58:32 ++ virsh list --all --uuid
16:58:32 + false
16:58:32 POST BUILD TASK : SUCCESS

Running the same command manually does show the VM:

[root@vm0085 ~]# virsh list --all --uuid
c167b682-6684-4548-8ef4-daf5d8a32d46

Also, when running the other cleanup commands, I see an error:

[root@vm0085 ~]# virsh destroy c167b682-6684-4548-8ef4-daf5d8a32d46
Domain c167b682-6684-4548-8ef4-daf5d8a32d46 destroyed

[root@vm0085 ~]# virsh undefine --remove-all-storage --storage vda --snapshots-metadata c167b682-6684-4548-8ef4-daf5d8a32d46
error: Specified both --storage and --remove-all-storage
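
That error comes from passing --storage vda together with --remove-all-storage, which virsh rejects as conflicting options. A possible fix for the cleanup loop above (a sketch, untested) is to drop the --storage flag, since --remove-all-storage already covers vda:

virsh undefine --remove-all-storage --snapshots-metadata $UUID || :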

Former user January 10, 2017 at 2:22 PM

Adding snapshot removal seems like a great idea.

Former user January 10, 2017 at 12:59 PM

I logged into the builder to check - the VM does indeed have a snapshot:

[root@vm0085 ~]# virsh snapshot-list TestNode-node
 Name                 Creation Time                 State
--------------------------------------------------------------
 1479982346           2016-11-24 10:12:26 +0000     running

According to [1], I deleted it and could then remove the VM itself:

[root@vm0085 ~]# virsh snapshot-delete TestNode-node 1479982346
Domain snapshot 1479982346 deleted

[root@vm0085 ~]# virsh undefine --remove-all-storage TestNode-node
Storage volume 'vda'(/var/tmp/TestNode-node.qcow2) is not managed by libvirt. Remove it manually.
Domain TestNode-node has been undefined

Can we add this to post- and pre-run cleanups?

[1] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/sect-Managing_guest_virtual_machines_with_virsh-Managing_snapshots.html
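
A possible shape for that cleanup (a sketch only, untested; it assumes snapshots can always be deleted before undefining and ignores individual failures, like the existing loop does):

# Drop all leftover libvirt domains, removing their snapshots first
for UUID in $(virsh list --all --uuid); do
    virsh destroy $UUID || :
    for SNAP in $(virsh snapshot-list --name $UUID); do
        virsh snapshot-delete $UUID $SNAP || :
    done
    virsh undefine --remove-all-storage --snapshots-metadata $UUID || :
done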

Former user January 10, 2017 at 12:41 PM

Looks like we still fail to remove some VMs. In the case of the job in question, there's a cleanup at the start, which also fails:

11:33:21 ++ virsh list --name
11:33:21 ++ xargs -rn1 virsh destroy
11:33:21 ++ virsh list --all --name
11:33:21 ++ xargs -rn1 virsh undefine --remove-all-storage
11:33:21 Storage volume 'hda'(/home/jenkins/workspace/ovirt-node-ng_ovirt-4.0_build-artifacts-el7-x86_64/ovirt-node-ng/build/diskPAVs0d.img) is not managed by libvirt. Remove it manually.
11:33:21 Storage volume 'hdb'(/home/jenkins/workspace/ovirt-node-ng_ovirt-4.0_build-artifacts-el7-x86_64/ovirt-node-ng/boot.iso) is not managed by libvirt. Remove it manually.
11:33:21 Domain LiveOS-8e80dfb0-6269-4691-8804-ecbbe9e2e582 has been undefined
11:33:21
11:33:21 Storage volume 'vda'(/var/tmp/TestNode-node.qcow2) is not managed by libvirt. Remove it manually.
11:33:21 error: Failed to undefine domain TestNode-node
11:33:21 error: Requested operation is not valid: cannot delete inactive domain with 1 snapshots

So a previous job failed to clean up, and the same thing happened again in this job. Is there any way to remove VMs with unmanaged volumes? Also, why would they be reported as unmanaged in the first place?
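
For the unmanaged volumes, one workaround (a sketch, untested) might be to undefine with --snapshots-metadata so the snapshot no longer blocks removal, and then delete the leftover disk file by hand, as libvirt itself suggests:

virsh undefine --snapshots-metadata TestNode-node || :
rm -f /var/tmp/TestNode-node.qcow2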

Details

Resolution: Fixed
Created: October 7, 2016 at 7:27 AM
Updated: June 19, 2018 at 2:40 PM
Resolved: October 30, 2016 at 4:07 PM