Unable to access a file since it is locked

How to unlock locked files on your ESX host

I recently used VMware Converter to do a P2V of a physical host. After Converter was at 100% and everything seemed fine, I removed the Converter CD-ROM from the physical server. I then got a blue screen on the physical server, probably because I removed the CD too soon (I hadn't closed Converter first). But since the machine was now virtual, I simply shut the physical host down.

When I then tried to start the VM on the ESX host, I got an error: “Unable to access a file since it is locked”. After some investigation on the VMware community forums, I found two solutions which I combined. I wanted to post them on my blog for my own future reference, but also for anyone else who might find them helpful.
First, credits to Rob Bohmann (http://communities.vmware.com/message/964286#964286) and ctfoster (http://communities.vmware.com/message/909582#909582) for their solutions.

  1. Open an SSH session to the ESX host where the VM was last known to be running.
  2. On the command line, run: vmkfstools -D /vmfs/volumes/path/to/file to dump information on the file into /var/log/vmkernel.
  3. Next, run: tail /var/log/vmkernel -n 50. You will see output like the example below:
  • Nov 29 15:49:17 vm22 vmkernel: 2:00:15:18.435 cpu6:1038)FS3: 130: <START vmware-16.log>
  • Nov 29 15:49:17 vm22 vmkernel: 2:00:15:18.435 cpu6:1038)Lock [type 10c00001 offset 30439424 v 21, hb offset 4154368
  • Nov 29 15:49:17 vm22 vmkernel: gen 66493, mode 1, owner 46c60a7c-94813bcf-4273-0017a44c7727 mtime 8781867]
  • Nov 29 15:49:17 vm22 vmkernel: 2:00:15:18.435 cpu6:1038)Addr <4, 588, 7>, gen 20, links 1, type reg, flags 0x0, uid 0, gid 0, mode 64
  • Nov 29 15:49:17 vm22 vmkernel: 2:00:15:18.435 cpu6:1038)len 23973, nb 1 tbz 0, zla 2, bs 6553
  • Nov 29 15:49:17 vm22 vmkernel: 2:00:15:18.435 cpu6:1038)FS3: 132: <END vmware-16.log>

4. The owner of the lock is shown on the third line of that output, in the owner field. The last part of the owner ID is all you need, in this case 0017a44c7727.

5. Now run: esxcfg-info | grep -i '0017a44c7727' | awk -F '-' '{print $NF}' to display the last part of the system UUID of the ESX server. You need to run this esxcfg-info command on each ESX server in the cluster to discover which one owns the lock.
6. When you find the ESX server whose UUID matches the lock owner, log on to that ESX server and run the command:
ps -elf | grep vmname where vmname is the name of the problem VM. Example output below:

  • 4 S root 7570 1 0 65 -10 – 435 schedu Nov27 ? 00:00:02 /usr/lib/vmware/bin/vmkload_app /usr/lib/vmware/bin/vmware-vmx -ssched.group=host/user/pool2 -@ pipe=/tmp/vmhsdaemon-0/vmxf7fb85ef5d8b3522;vm=f7fb85ef5d8b3522 /vmfs/volumes/470e25b6-37016b37-a2b3-001b78bedd4c/iu-lsps-vstest/iu-lsps-vstest.vmx0

7. Since there is still a process running (PID 7570 in the example), you need to kill it by running: kill -9 7570
8. Once the kill is complete, the files should be released. These steps are pulled together in the sketch below.
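
A rough consolidation of steps 2 through 7 as one service-console sequence. The file path, VM name and owner suffix are placeholders taken from the example above, not real values; substitute your own.

    # Run in the service console of the ESX host(s)
    LOCKED_FILE=/vmfs/volumes/datastore/vmdir/vm.vmx   # placeholder: path to the locked file
    VMNAME=myvm                                        # placeholder: name of the problem VM
    OWNER_TAIL=0017a44c7727                            # placeholder: last part of the lock owner from step 4

    # Step 2: dump lock information for the file into /var/log/vmkernel
    vmkfstools -D "$LOCKED_FILE"

    # Steps 3 and 4: read the last vmkernel lines and note the owner field,
    # e.g. "owner 46c60a7c-94813bcf-4273-0017a44c7727"; the last block identifies the host
    tail -n 50 /var/log/vmkernel

    # Step 5: run this on each ESX host in the cluster; the host whose system UUID
    # contains the owner suffix is the one holding the lock
    esxcfg-info | grep -i "$OWNER_TAIL" | awk -F '-' '{print $NF}'

    # Steps 6 and 7: on the matching host, find the VM's running process and kill it
    ps -elf | grep "$VMNAME"
    # kill -9 <PID>    # fill in the PID reported by ps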

In my case there was a problem getting the correct UUID: at first I got an owner of 0000000000000 when, in step 2, I entered the path to the VMX file. Using the command lsof | grep 'VM name' I found that it was not my VMX that was locked, but my -flat.vmdk. Normally the output would look like this:

lsof |grep LINWWD2

bash 22743 root cwd DIR 8,2 4096 652801 /home/vmware/LINWWD2
lsof 23989 root cwd DIR 8,2 4096 652801 /home/vmware/LINWWD2
grep 23990 root cwd DIR 8,2 4096 652801 /home/vmware/LINWWD2
lsof 23991 root cwd DIR 8,2 4096 652801 /home/vmware/LINWWD2

But in my situation there was an extra process, vmware-ho, with its IDs and the name of my VMDK behind it. I decided to kill it using kill -9, then switched to Virtual Center and tried to start the VM again. I now received the error “Unable trying to communicate with the host”. Oops… I quickly checked what was happening and saw that all VMs on the host showed as disconnected. Fortunately, before I had taken any steps to investigate this, everything showed as connected again. Within a minute my host was running fine, there had been no disruption for the other VMs, and I was able to start the VM that had the locked file.
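
A minimal sketch of that lsof check (LINWWD2 is just the VM name from the example output above; substitute your own, and treat kill -9 as a last resort):

    # See which processes are holding files of the VM open
    lsof | grep -i 'LINWWD2'

    # If an unexpected vmware process is holding the -flat.vmdk, note its PID
    # (second column of the lsof output) and, as a last resort, kill it:
    # kill -9 <PID>

    # If the host then shows as disconnected in Virtual Center, restarting the
    # management agent usually brings it back:
    # service mgmt-vmware restart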

Also have a look at this link: http://conshell.net/wiki/index.php/Recovery_of_Locked_VMDK

16 thoughts on “Unable to access a file since it is locked”

  1. We had the same situation as you did after step 8. Instead of killing the process, we decided to restart the Virtual Center agent on that ESX host with the command below.

    service mgmt-vmware restart

    We were then able to start the VM successfully. What clued me in to do this was the fact that you said all your VMs on that host showed as disconnected, which made me believe you had killed that agent.

  2. I had this same problem, and all I had to do was migrate the VM to another host. At first it still wouldn’t boot up but I came back to it a few minutes later and it was up. Not sure why but it worked.

    Glad I didn’t have to follow that tortuous procedure detailed in the post!

  3. If you know how it happened, often just a restart of the VC agent will do the trick.
    I normally see this happen with VCB backups and/or VC Converter.

  4. It worked for me; I also had the VMDK file locked. It happened after I tried to activate Fault Tolerance on one of my VMs, and it started switching between the ESX hosts with messages like “needs secondary”.

  5. Hi Gabe,

    Thanks for sharing this. It just helped me out with a P2V.
    I used Converter Standalone, and after cloning the disks the migration task failed and the physical source server PSOD'd on me!

    -Arnim

  6. I found the easiest way in my case was to do a touch * in the VM dir, then run fuser /path/to/locked/file and
    check the process it reports. Then run
    fuser -k /path/to/locked/file
    and check vCenter to see if the host is disconnected. If so, run
    service mgmt-vmware restart
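
    Put together as one rough sequence (the path is a placeholder; use the actual locked file):

    # Touch every file in the VM's directory; the locked file will typically
    # error out with "Device or resource busy", telling you which one is held
    touch *

    # Show the process that has the locked file open
    fuser /path/to/locked/file

    # Kill that process (last resort)
    fuser -k /path/to/locked/file

    # If the host then shows as disconnected in vCenter, restart the management agent
    service mgmt-vmware restart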

  7. We faced the same issue and resolved it by removing the VM from the inventory and then adding it back.

  8. Move the VM you converted back to the original host where you began converting it.

    The reason is that the destination ESXi host may not have enough resources, and DRS may have moved the VM just as the Converter task finished.
