From an e-mail I sent to dev@ on January 8, 2016:
Has anyone noticed the following issue?
I create a zone where I use local storage to house my system VMs.
After the system VMs are all started, I sometimes (but rarely) notice that the local SR that was used to retrieve the system template remains (as opposed to going away once the system VMs have been kicked off).
It doesn't seem to be a big deal until I happen to restart my management server.
At that point, CloudStack sees this local storage, which should have gone away but didn't, as new local primary storage and adds it as such.
If I go to Infrastructure and Primary Storage, I can see I now have a second local primary storage for one of my XenServer hosts.
Has anyone seen this issue before?
What I've done at this point is simply remove the new primary storage from the DB, but I can't (easily) seem to get rid of the extraneous SR.
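For the SR side of the cleanup, the usual approach on XenServer is to unplug the SR's PBD and then forget the SR. This is a minimal sketch, not something I have verified against this exact failure; `sr_forget_cmds` is a hypothetical helper that just prints the `xe` commands (so you can review them before running them in dom0), and the PBD UUID is assumed to come from `xe pbd-list sr-uuid=<SR>` first:

```shell
#!/bin/sh
# Hypothetical helper: print the xe commands that remove a leftover SR.
# Forgetting an SR removes its record without wiping the underlying
# storage; `xe sr-destroy` would wipe it as well. Both require the PBD
# to be unplugged first.
sr_forget_cmds() {
    sr_uuid="$1"
    pbd_uuid="$2"
    echo "xe pbd-unplug uuid=$pbd_uuid"
    echo "xe sr-forget uuid=$sr_uuid"
}

# Usage (UUIDs are placeholders):
#   sr_forget_cmds <SR-UUID> <PBD-UUID>
```

Note this only removes the SR record on the XenServer side; the row CloudStack created for the phantom primary storage still has to be dealt with separately, as above.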
Update (on April 18, 2016):
Adrian Sender <email@example.com>
Sat 4/16/2016 4:43 AM
You replied on 4/16/2016 9:58 AM.
Hi, I have observed this behavior mostly on CCP 4.3.x with XenServer 6.5, and less so in 4.5.1. I use Fibre Channel LVMoHBA as the primary storage.
It seems like the same issue.
Disk Attached to Dom0 after snapshot or copy from secondary to primary:
In this example we have a disk attached to dom0; we cannot delete the disk until we detach it.
In XenCenter the disk appears as: admin.rc.precise 0, "Created by template provisioner", 42 GB, attached to the control domain on the host.
[root@cpms1-1 ~]# xe vdi-list name-label="admin.rc.precise 0"
uuid ( RO) : 3d79722b-294d-4358-bc57-af92b9e9dda7
name-label ( RW): admin.rc.precise 0
name-description ( RW): Created by template provisioner
sr-uuid ( RO): dce1ec02-cce0-347d-0679-f39c9ea64da1
virtual-size ( RO): 45097156608
sharable ( RO): false
read-only ( RO): false
You will want to list the VBD (the connector object between the VM and the VDI) based
on the VDI UUID. Here is an example:
[root@cpms1-1 ~]# xe vbd-list vdi-uuid=3d79722b-294d-4358-bc57-af92b9e9dda7
uuid ( RO) : d9e2d89e-a82f-9e6e-c97a-afe0af47468e
vm-uuid ( RO): 0f4cb186-0167-47d6-afb5-89b00102250b
vm-name-label ( RO): Control domain on host: cpms1-1.nsp.nectar.org.au
vdi-uuid ( RO): 3d79722b-294d-4358-bc57-af92b9e9dda7
empty ( RO): false
device ( RO):
Once you have the VBD UUID, first try to make the VBD inactive (it may already be
inactive, in which case you will see "The device is not currently attached"):
xe vbd-unplug uuid=d9e2d89e-a82f-9e6e-c97a-afe0af47468e
Once done, you can then break the connection:
xe vbd-destroy uuid=<UUID of VBD>
Now you can delete the disk from XenCenter.
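The unplug/destroy sequence above can be sketched as a small helper. This is a sketch, not a tested tool; `vdi_cleanup_cmds` is a hypothetical name, and it prints the `xe` commands rather than running them so you can review them first. The VBD UUID is assumed to come from `xe vbd-list vdi-uuid=<VDI>` as in the example above, and the final `vdi-destroy` is the CLI alternative to deleting the disk from XenCenter:

```shell
#!/bin/sh
# Hypothetical helper: print the xe commands to detach and delete a VDI
# that is stuck attached to dom0.
vdi_cleanup_cmds() {
    vdi_uuid="$1"
    vbd_uuid="$2"
    echo "xe vbd-unplug uuid=$vbd_uuid"    # detach (may already be detached)
    echo "xe vbd-destroy uuid=$vbd_uuid"   # break the VM-to-VDI connection
    echo "xe vdi-destroy uuid=$vdi_uuid"   # delete the disk itself
}

# Usage (UUIDs are placeholders):
#   vdi_cleanup_cmds <VDI-UUID> <VBD-UUID>
```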
My original message (Wed 4/13/2016 5:10 PM):
Has anyone recently observed the following behavior:
As you can see in the image, I have three 6.5 XenServer hosts in a resource pool.
I just used them when creating a basic zone, and the system VMs were deployed just fine. However, there are still SRs pointing to secondary storage on my XenServer-6.5-1 and XenServer-6.5-3 hosts (there used to be one on my XenServer-6.5-2 host, but it went away once the system VMs started up on that host).