VMware vSphere: Safely remove network controller

Well, another day, another fight. As we started migrating our VMs from the old VMware ESX farms to the new environment and upgraded the virtual hardware, the network devices suddenly became hot-pluggable and thus showed up in the “Safely Remove” dialog.

I myself don’t have any trouble with that. The trouble I do have is with the people working with those VMs and their possibly hazardous “uuuh, what’s this? I don’t need this! <click-click, network device unplugged>”.

So I went googling (why isn’t that a dictionary term by now?) and found something. The simple solution is to disable the hot plugging of hardware in the VM’s settings.
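If I remember correctly, the knob for that is the devices.hotplug configuration parameter (Edit Settings → Options → Advanced → General → Configuration Parameters in the vSphere Client), or added directly to the .vmx file while the VM is powered off:

    devices.hotplug = "false"

After the next power-cycle the NIC (along with the other PCI devices) should no longer show up in the guest’s removal dialog.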

ESX: Query CDP information from the command line

I’m just tracing some trouble I’m having with a backup server and two (independent) network adapter ports (as in two ports on two different dual-port NICs). If I enable a port and set it to auto-negotiation, it’ll come up at 100MBit/Half-Duplex, but the port group becomes unavailable.

In order to get the connection back, I need to log on to the console (thank god even the backup server got an iLO2) and manually (as in esxcfg-nics -s 1000 -d full vmnic1) configure the adapter to 1GBit/s and full-duplex.
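For reference, the whole dance on the service console looks roughly like this (vmnic1 being the affected port here):

    esxcfg-nics -l                      # list NICs with their current link, speed and duplex
    esxcfg-nics -s 1000 -d full vmnic1  # force vmnic1 to 1GBit/s, full-duplex
    esxcfg-nics -a vmnic1               # or put it back to auto-negotiation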

Since I didn’t want to go downstairs and trace the damn cables, I figured I could use the CDP features included in ESX. Bring the NICs up at 100MBit/Half-Duplex and run:
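From memory, the call that dumps the CDP data on the classic ESX service console is the network hint query; the grep fields are just the ones I recall being interesting:

    # dump the CDP/network hints for all physical NICs (field names from memory)
    vmware-vim-cmd hostsvc/net/query_networkhint | grep -Ei 'devId|portId|address'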

That’s all the networking guys should need: the switch’s IP address and the switch port the card is connected to. *Tada*.

Update: with ESXi 5.1, it’s just vim-cmd …
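My guess for that one, purely for illustration (the same network hint query, just via the ESXi shell’s vim-cmd):

    vim-cmd hostsvc/net/query_networkhint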

VMware vSphere and templates

I just converted one of my (old) templates to a virtual machine, as I wanted to refresh the updates and the virus scanner. After converting, I was asked about the UUID (no clue why) and expected to be done with it. But after looking at the console, I got the following, completely cryptic message:

Unable to connect to MKS

After digging a bit deeper (that is, looking at the virtual machine’s vmware.log, since the message in the GUI is *real* cryptic), I’m a bit wiser.

After softly shutting the VM down and powering it back up, everything is back in working order.
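For the record, that power cycle works from the console as well (vim-cmd on ESXi, vmware-vim-cmd on classic ESX); something along these lines, with the VM ID taken from the inventory listing:

    vim-cmd vmsvc/getallvms              # look up the VM's ID (Vmid column)
    vim-cmd vmsvc/power.shutdown <vmid>  # clean guest shutdown (needs VMware Tools)
    vim-cmd vmsvc/power.on <vmid>        # and power it back on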

Updating a Linux VM from Virtual Infrastructure to vSphere

Well, if you’re gonna update a SLES10 (or even SLES11) VM you created with Virtual Infrastructure, you’re gonna run into a snag (like I did). GRUB (or rather the kernel itself) is gonna barf.

Now, I searched for a while and didn’t find anything specific on the net, so I’m gonna write it down. Up till ESX 3.5U4, the maximum resolution you’d be able to enter within a virtual machine was vga=0x32d (at least for my 19″ TFTs at work). But now, after the upgrade to vSphere, that isn’t working anymore.

So I popped in a SLES10 install CD, selected the maximum resolution from the boot menu, and switched to a terminal shortly after it entered the graphical installer. A short cat /proc/cmdline revealed this: vga=0x334.

After switching this parameter in GRUB’s menu.lst, everything is back in working order and the VM is no longer waiting 30 seconds on boot …
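For completeness, the kernel line in /boot/grub/menu.lst ends up looking something like this (kernel image, root device and the other options are just placeholders from a typical SLES10 layout; only the vga= value matters here):

    title SUSE Linux Enterprise Server 10
        root (hd0,0)
        kernel /boot/vmlinuz root=/dev/sda2 vga=0x334 splash=silent showopts
        initrd /boot/initrd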