This is a text-only version of the following page on https://raymii.org:

---
Title  : Fix inconsistent Openstack volumes and instances from Cinder and Nova via the database
Author : Remy van Elst
Date   : 22-12-2014
URL    : https://raymii.org/s/articles/Fix_inconsistent_Openstack_volumes_and_instances_from_Cinder_and_Nova_via_the_database.html
Format : Markdown/HTML
---

When running Openstack, the state of a volume or an instance can sometimes become inconsistent across the cluster. Nova might see a volume as attached while Cinder says it is detached, or vice versa. Sometimes a volume deletion hangs, or a detach does not work. Once you've found and fixed the underlying issue (lvm, iscsi, ceph, nfs etc.), you need to bring the database up to date with the new, consistent state. Most of the time a reset-state works; sometimes you need to edit the database manually to correct the state. These snippets show you how.
_Please note that it is important to find and fix the underlying issue._ If you, for example, have a volume that hangs on detaching, resetting the database is a quick hack and not a real fix. Make sure you find and fix the underlying cause before you update the database. These examples were tested with all components on Juno and on Icehouse, with MySQL as the backing database. _Please be extremely careful with these examples._

### Delete an instance

Your NFS backing storage might have crashed halfway through a VM delete. You've manually deleted all the related files (disk, config etc.) and removed the VM domain from the backing hypervisor (virsh, esxi etc.). However, `nova show` still sees the VM as active (or in error). A `nova reset-state --active` doesn't fix the delete part. The following query sets an instance as deleted:

    $ mysql nova_db
    > update instances set deleted='1', vm_state='deleted', deleted_at=now() where uuid='$vm_uuid' and project_id='$project_uuid';

Normally a `nova delete $uuid` is the correct way to delete a VM.
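If you script this kind of fix, it is safer to pass the UUIDs as query parameters than to paste them into the SQL by hand. A minimal sketch, assuming a MySQL driver with `%s`-style placeholders (such as PyMySQL); `mark_deleted_query` is my own helper name, not part of any OpenStack tooling:

```python
# Sketch: build the "mark instance deleted" UPDATE as a parameterized
# query. The table and column names mirror the query above; the helper
# name is an assumption of this example.

def mark_deleted_query(vm_uuid, project_uuid):
    """Return (sql, params) for marking a Nova instance as deleted."""
    sql = (
        "UPDATE instances "
        "SET deleted = 1, vm_state = 'deleted', deleted_at = NOW() "
        "WHERE uuid = %s AND project_id = %s"
    )
    return sql, (vm_uuid, project_uuid)

# Feed the result to cursor.execute(sql, params) on a nova_db connection.
sql, params = mark_deleted_query("vm-uuid-here", "project-uuid-here")
print(params)  # -> ('vm-uuid-here', 'project-uuid-here')
```

This way a mistyped quote cannot silently corrupt the statement, and the same helper works for any instance.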
If you want to actually delete a VM from the database instead of marking it as deleted, the following queries should do that:

    $ mysql nova_db
    > delete from instance_faults where instance_faults.instance_uuid = '$vm_uuid';
    > delete from instance_id_mappings where instance_id_mappings.uuid = '$vm_uuid';
    > delete from instance_info_caches where instance_info_caches.instance_uuid = '$vm_uuid';
    > delete from instance_system_metadata where instance_system_metadata.instance_uuid = '$vm_uuid';
    > delete from security_group_instance_association where security_group_instance_association.instance_uuid = '$vm_uuid';
    > delete from block_device_mapping where block_device_mapping.instance_uuid = '$vm_uuid';
    > delete from fixed_ips where fixed_ips.instance_uuid = '$vm_uuid';
    > delete from instance_actions_events where instance_actions_events.action_id in (select id from instance_actions where instance_actions.instance_uuid = '$vm_uuid');
    > delete from instance_actions where instance_actions.instance_uuid = '$vm_uuid';
    > delete from virtual_interfaces where virtual_interfaces.instance_uuid = '$vm_uuid';
    > delete from instances where instances.uuid = '$vm_uuid';

### Change the compute host of a VM

A `nova migrate` or `nova resize` might have failed. The disk could already be migrated, or still be on your shared storage, but Nova is confused. Make sure the VM domain exists on only one compute node (preferably the one it came from; use `nova migration-list` to find that out) and that the backing disk/config files are also only on one hypervisor node (`lsof` and `tgtadm` are your friends here). The following query changes the VM's hypervisor host for Nova:

    $ mysql nova_db
    > update instances set host='compute-hostname.domain', node='compute-hostname.domain' where uuid='$vm_uuid' and project_id='$project_uuid';

Normally a `nova migrate $vm_uuid` or a `nova resize $vm_uuid $flavor` should be enough.
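The eleven delete queries from the previous subsection can also be generated from a short script, so none of them is skipped or run out of order (child tables first, `instances` last). A minimal sketch with parameterized statements; `instance_delete_queries` is my own helper name:

```python
# Sketch: emit the per-table DELETE statements for one instance, in the
# same order as the queries above. Run them against nova_db with
# cursor.execute(sql, params); this script only builds the statements.

def instance_delete_queries(vm_uuid):
    """Return (sql, params) pairs in dependency order, instances last."""
    simple = [
        ("instance_faults", "instance_uuid"),
        ("instance_id_mappings", "uuid"),
        ("instance_info_caches", "instance_uuid"),
        ("instance_system_metadata", "instance_uuid"),
        ("security_group_instance_association", "instance_uuid"),
        ("block_device_mapping", "instance_uuid"),
        ("fixed_ips", "instance_uuid"),
    ]
    queries = [
        ("DELETE FROM %s WHERE %s = %%s" % (table, column), (vm_uuid,))
        for table, column in simple
    ]
    # instance_actions_events hangs off instance_actions, so it goes first.
    queries.append((
        "DELETE FROM instance_actions_events WHERE action_id IN "
        "(SELECT id FROM instance_actions WHERE instance_uuid = %s)",
        (vm_uuid,),
    ))
    queries.append(
        ("DELETE FROM instance_actions WHERE instance_uuid = %s", (vm_uuid,)))
    queries.append(
        ("DELETE FROM virtual_interfaces WHERE instance_uuid = %s", (vm_uuid,)))
    queries.append(("DELETE FROM instances WHERE uuid = %s", (vm_uuid,)))
    return queries
```

Keeping `instances` as the last statement means a half-finished run leaves the instance row (and thus the inconsistency) visible instead of orphaning child rows silently.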
### Set a volume as detached in Cinder

Your backing Cinder storage might have issues or bugs that cause `nova volume-detach $vm_uuid $volume_uuid` to fail sometimes. The volume might be detached in Nova but still have the state `Detaching` in Cinder. Make sure the VM domain has the actual disk removed. Also check your backing storage (ceph, lvm, iscsi etc.) to make sure the volume is actually detached and not in use anymore. Try a `cinder reset-state --state available $volume_uuid` first. If that fails, the following mysql query sets the Cinder state to available:

    $ mysql cinder_db
    > update cinder.volumes set attach_status='detached', status='available' where id='$volume_uuid';

Absolutely make sure that no data is being written to or read from the volume; it might cause data loss otherwise. Do note that the Cinder Python API (`import cinderclient.v2`) also has the `cinder.volumes.detach(volume_id)` call. You do need to write some tooling around that.
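If you do write tooling around this, it helps to encode the "is this volume actually stuck?" check from the paragraph above before forcing anything. A minimal sketch; `safe_to_force_detach` and its two inputs are my own names, and the rule only reflects the situation described here (Nova already let go, Cinder stuck in `Detaching`):

```python
# Sketch: pre-flight check before forcing a Cinder volume back to
# "available". This does not talk to Cinder or Nova itself; it only
# encodes the safety rule from the text above.

def safe_to_force_detach(cinder_status, nova_sees_attached):
    """Only force-detach when Cinder is stuck but Nova already detached."""
    return cinder_status.lower() == "detaching" and not nova_sees_attached

print(safe_to_force_detach("Detaching", False))  # stuck volume -> True
print(safe_to_force_detach("in-use", True))      # still attached -> False
```

Anything still attached according to Nova, or in any state other than `Detaching`, should be investigated by hand rather than force-reset.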