So, for this project, you need a couple of things. First, you need a server running VMware ESXi 6.x with at least two network ports, which lets us dedicate one port to the WAN and one to the LAN. For this, I'll be using my Dell PowerEdge R710, which has 4x1GbE ports on the back and runs VMware ESXi 6.5.0.

To get started, log into the ESXi web interface and click on Storage in the left-hand menu. You should now see a list of the current datastores on the ESXi server along with information about each datastore: capacity, free space, drive type, etc. Click on the datastore you want to expand to select it.

Two features of Photon OS are worth noting here:
- Optimized for the VMware hypervisor: the Linux kernel is tuned for performance when Photon OS runs on VMware ESXi.
- Support for containers: Photon OS includes the Docker daemon and works with container orchestration frameworks such as Mesos and Kubernetes.
HPE Smart Array Controller
The HPE Smart Array Controller provides enterprise-class storage performance, increased internal scalability with SAS Expander Card, and data protection for HPE ProLiant rack and tower servers.
There is a built-in tool for managing the Smart Array Controller in HPE ProLiant servers. We can access it during server boot to configure the disk arrays. However, if we already have ESXi installed on the server and we want to monitor or update the array configuration directly from ESXi's shell, we can use the official software from HPE.
Install Driver & Utilities
The driver and utilities can be downloaded from HPE's official website. The following links can be used.
- Driver: HPE ProLiant Smart Array Controller Driver for VMware vSphere 6.7
- Utilities: HPE ESXi Utilities Offline Bundle for VMware vSphere 6.7
These packages must be installed from ESXi's shell, so the SSH service must be enabled. If you haven't enabled it yet, log into the ESXi Web UI -> Host -> Manage -> Services -> TSM-SSH -> Start to start the SSH service.
In this example, I will use the esxcli command to install the driver and the ssacli utility to manage the HPE Smart Array. Use WinSCP or the scp command to copy the .vib files to ESXi's /tmp directory and install them from there. Make sure you copy them to /tmp, otherwise the install will not succeed.
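For example, a sketch of the copy-and-install steps; the filenames and the esxi1 hostname below are placeholders for whatever versions and host you actually have:

```
# from your workstation: copy the downloaded driver/utility files to the ESXi host's /tmp
scp hpe-smart-array-driver.vib hpe-esxi-utils-offline-bundle.zip root@esxi1:/tmp/

# from the ESXi shell: install a single .vib (an absolute path is required)
esxcli software vib install -v /tmp/hpe-smart-array-driver.vib

# offline bundles (.zip) are installed with -d instead of -v
esxcli software vib install -d /tmp/hpe-esxi-utils-offline-bundle.zip
```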
We will need to reboot the server for the changes to be effective. Let’s reboot it…
Once the server is up, log into ESXi's shell again and verify the software is installed.
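For example (the package names to filter on are assumptions):

```
# list installed VIBs and filter for the HPE packages
esxcli software vib list | grep -iE "hpe|ssacli|nhpsa"
```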
Using the ssacli command
With the ssacli command we can get the HPE Smart Array Controller's configuration, disk status, temperature, and more, which is helpful for monitoring. It also lets us configure the arrays directly from ESXi's shell, without rebooting the server into the HPE Smart Array Configuration Tool.
The following is an example showing the HPE Smart Array configuration.
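A sketch, assuming the utility lands in its usual location under /opt/smartstorageadmin:

```
# show the full controller, array, and logical drive configuration
/opt/smartstorageadmin/ssacli/bin/ssacli ctrl all show config
```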
There are several other helpful commands that can be used:
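A few examples; the controller slot number is an assumption, so adjust it to match the output of the config command above:

```
# controller status, including cache and battery health
/opt/smartstorageadmin/ssacli/bin/ssacli ctrl all show status

# physical disk status for the controller in slot 0
/opt/smartstorageadmin/ssacli/bin/ssacli ctrl slot=0 pd all show status

# logical drive status
/opt/smartstorageadmin/ssacli/bin/ssacli ctrl slot=0 ld all show status

# detailed controller info, including temperatures
/opt/smartstorageadmin/ssacli/bin/ssacli ctrl all show detail
```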
If you are running KVM on an Ubuntu server, you already have an excellent Type 1 virtualization engine. Luckily, if you need to test something specific to VMware you can always run ESXi nested inside a KVM virtual machine.
In this article, I'll be using a bare-metal server running Ubuntu with KVM as a type 1 hypervisor.
I will then provide instructions for creating a KVM virtual machine that runs VMware ESXi, and smoke-test the result by deploying a guest OS on top of ESXi.
Install KVM
I am assuming you are running Ubuntu and have already installed and smoke tested KVM as described in my previous article.
Enable VT-x
You need to make sure your CPU is capable of VT-x (hardware virtualization acceleration), and that VT-x is enabled in the BIOS. Many computers have it disabled by default in the BIOS.
The easiest way to check is kvm-ok.
And you can also check using virt-host-validate.
And lastly, you should see a number greater than 0 coming back from /proc/cpuinfo.
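All three checks, as a sketch:

```
# kvm-ok comes from the cpu-checker package
sudo apt-get install -y cpu-checker
sudo kvm-ok

# libvirt's own validation of the host
virt-host-validate

# count the VT-x (vmx) or AMD-V (svm) flags in cpuinfo; should be > 0
egrep -c '(vmx|svm)' /proc/cpuinfo
```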
If you do not have this enabled, reboot your machine and press the special function key (F1|F2|F10|DEL|ESC|alt-S) that takes you into the BIOS. Each BIOS is different, but look for "Virtualization Technology" or "VT-x".
We cannot create a nested virtualization solution without this support.
Configure VT-x in OS
In addition to enabling VT-x at the BIOS level, you will also need to configure it at the Ubuntu OS level, in the file "/etc/modprobe.d/qemu-system-x86.conf". Then, to work around an issue with Ubuntu, also force the value of ignore_msrs.
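For an Intel CPU, the usual settings look like this (use the kvm_amd module instead on AMD hardware):

```
# /etc/modprobe.d/qemu-system-x86.conf
# expose nested VT-x to guests
options kvm_intel nested=1

# work around unhandled MSR accesses that can crash nested hypervisors
options kvm ignore_msrs=1
```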
Reboot the host, and then run the following commands.
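Something like the following verifies both settings took effect:

```
cat /sys/module/kvm_intel/parameters/nested      # expect Y (or 1)
cat /sys/module/kvm/parameters/ignore_msrs       # expect Y (or 1)
```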
Download vSphere Hypervisor
The VMware vSphere Hypervisor (ESXi) is a commercial product, but when you create an account you can download a 60-day trial.
Go to this page, scroll down to the "Standard" version, click on "VMware vSphere Hypervisor (ESXi) 6.7", and then download the "VMware vSphere Hypervisor (ESXi ISO) Image (Includes VMware Tools)". It is about 330 MB.
Download Ubuntu Network installer
Since we are on an Ubuntu host, let's download the ISO for the Ubuntu network installer. This file is only 75 MB, so it is perfect for boot testing. When complete, you should have a local file named "~/Downloads/mini.iso".
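For example, assuming Ubuntu 18.04 (bionic); the mirror path is an assumption:

```
# fetch the Ubuntu netboot mini installer
wget -O ~/Downloads/mini.iso \
  http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/current/images/netboot/mini.iso
```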
Create KVM Virtual Machine
Now we are ready to create a KVM virtual machine capable of running ESXi. Run the virt-install command below, tailoring it to your specific CPU/disk/RAM requirements.
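A sketch of the invocation; the VM name, sizes, and ISO filename are my placeholders, but the e1000 NIC, IDE raw disk, and qxl video match the notes below:

```
virt-install \
  --virt-type=kvm \
  --name=esxi67 \
  --cpu host-passthrough \
  --ram=8192 --vcpus=4 \
  --disk path=/var/lib/libvirt/images/esxi67.img,size=80,format=raw,bus=ide \
  --network network=default,model=e1000 \
  --video qxl \
  --graphics vnc \
  --cdrom ~/Downloads/VMware-VMvisor-Installer-6.7.0.iso
```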
Then validate this virtualized guest will inherit the host cpu virtualization enablement.
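One way to check is to dump the domain XML and confirm the cpu mode:

```
# expect a <cpu mode='host-passthrough' ...> element
virsh dumpxml esxi67 | grep -A3 "<cpu"
```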
You need to have the CDROM connected after power-up. Depending on the domain XML, you may have to attach it as device "hdb" or "hdc", so look at the <target dev="hd?"> label of the cdrom device.
With this connection to the CDROM, go ahead and reset the machine so it can use it for booting.
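A sketch using virsh; the esxi67 name, ISO filename, and hdb target are assumptions, so use whatever your dumpxml shows:

```
# find the cdrom target device (hdb, hdc, ...) in the domain XML
virsh dumpxml esxi67 | grep -B2 -A3 cdrom

# reinsert the installer ISO if it was disconnected
virsh change-media esxi67 hdb ~/Downloads/VMware-VMvisor-Installer-6.7.0.iso --insert --live --config

# reset the VM so it boots from the cdrom
virsh reset esxi67
```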
NOTE: The parameters I used in the virt-install command above are intentional; I tried to steer you around several issues that you could run into. The video type of qxl is intentional (cirrus made the initial setup screen loop), the disk is purposely IDE to avoid errors during the install, and the network uses a generic e1000 type which can be detected by the installer.
The supported hardware list of ESXi is narrow, so if you can get installation to work properly using other driver types, that is fine. Just know that it took me a while to get the combination above to work.
Installing ESXi
The initial ESXi boot menu should now come up.
And then it proceeds just like a normal ESXi install, as described in many places on the web already [1,2,3,4].
- At the welcome screen, press <ENTER>
- <F11> to accept the license
- <ENTER> to select the single disk as the default install drive
- <ENTER> on the default US keyboard
- Input the initial password for root
- <ENTER> if you receive the CPU future support warning
- <F11> to validate the install on disk
- …installing…
- A message to disconnect the CDROM before rebooting
- <ENTER> to reboot
After rebooting, you should see the screen below giving you the 192.168.122.x address of the ESXi server (192.168.122.0/24 is the default NAT network used by KVM).
Press <F2> and then enter in the root password you typed during the install. You will want to go to “Troubleshooting Options”, and enable the ESXi shell and SSH before pressing <ESC> twice to go back to the main screen.
ESXi Embedded Host Client
Using a web browser, go to the URL specified in the ESXi startup screen (https://<server>/ui/).
You should get the login page; use your "root" credentials to log in and you should see a screen similar to the one below, which is the Embedded Host Client.
This HTML/Javascript application is a lightweight management interface served directly from the ESXi host. Note that it cannot be used to manage vCenter (like /vsphere-client).
Creating a Virtual Machine
Remember at this point, that we are multiple layers down in virtualization engines. ESXi is running nested inside KVM. So now let’s create a guest VM inside ESXi to ensure this works.
First press “Create/Register VM”.
Then choose to “Create a new virtual machine” and press Next.
Type a name of “esxchild1” for the guest OS and select a Linux host, Ubuntu Linux (64-bit).
We only created a single 80 GB disk for the ESXi host, so allow the default datastore1 and press Next.
Then the customize settings come up, and you can leave the default 1 vCPU and 1024 MB memory.
But the one thing you should change is the CD/DVD drive. We want to use the Ubuntu network installer ISO downloaded earlier to "~/Downloads/mini.iso". Select "Datastore ISO file" from the pulldown instead of "Host device" and a datastore browser will appear.
Press “Upload” and select the mini.iso file from your local drive. Then click on the file in the middle column, and press “Select”. This will take you back to the main settings window and just make sure “Connect” is checked beside the CDROM.
You are now at the finish screen, review your choices and press Finish.
Now navigate using the tree on the left to “esxchild1” and notice the power is still off. Go ahead and press the power button.
You should see the preview screen change and the Ubuntu logo will be visible. Press “Console” and then “Open browser console” to get an interactive screen.
We will stop this exercise here, but this OS could be installed and the host used exactly like a guest OS created under KVM.
Stability Issues
If you find that your ESXi host often experiences the VMware “Purple Screen of Death”, remember the supported hardware list for VMware is narrow, and you are probably experiencing incompatibility with host hardware or drivers.
I saw a great deal of improvement by having KVM use the newest OVMF UEFI firmware for the virtual machine instead of using the SeaBIOS distributed with KVM (which is very old).
The firmware is easily set for a KVM guest by adding a <loader> element to the <os> section of the domain XML. But this must be done before ESXi is installed, so it takes a rebuild if you want to switch.
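On Ubuntu the ovmf package installs the firmware under /usr/share/OVMF; a sketch of the <os> section follows, where the machine type and nvram path are assumptions that will vary per host:

```
<os>
  <type arch='x86_64' machine='pc-i440fx-bionic'>hvm</type>
  <!-- use OVMF UEFI firmware instead of the bundled SeaBIOS -->
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
  <nvram>/var/lib/libvirt/qemu/nvram/esxi67_VARS.fd</nvram>
</os>
```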
Here is my article on building the latest OVMF image. Here is my article on building the latest SeaBIOS image if you want to stick with the legacy firmware standard. And here is my article on how to set the <loader> element so that your KVM virtual machine uses the updated firmware.
Further ESXi configuration steps
Here are a few more optional steps that can help in managing this ESXi host. If you are going to install vCenter next, consider the static IP and hostname changes mandatory: full DNS and reverse IP resolution is required during the vCenter installation.
Static IP [1,2,3]
Setting a static IP makes things easier, especially when manually managing DNS entries. This can be done from an SSH session or the web GUI as well, but let's do it directly from the ESXi host console.
Choose “Configure Management Network>IPv4 Configuration” and set the IP, subnet, and default gateway as shown below.
Hostname and DNS [1,2,3]
The default name is “localhost”, so that is something you need to change if you are going to address this server by DNS or from vCenter. Choose “Configure Management Network>DNS Configuration” and set the full hostname and primary DNS server.
In the above screenshot I have set the hostname to "esxi1". If you want the certificate to have a fully qualified domain name, then provide the full value (e.g. "esxi1.home.lab").
If you are using a fully qualified hostname, choose "Configure Management Network>Custom DNS Suffixes" and enter the domain name that should be appended. By setting the value to "home.lab" as shown below, the fully qualified name of this host becomes "esxi1.home.lab".
Certificate Regeneration [1,2,3,4]
The initial certificate delivered with ESXi has CN=localhost.localdomain. To change this to a self-signed cert that reflects the true name, use the generate-certificates utility from an SSH session to the ESXi host.
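A sketch of the session; the agent restarts make the new certificate take effect:

```
# regenerate the self-signed certificate using the hostname/FQDN set earlier
/sbin/generate-certificates

# restart the management agents so the new certificate is served
/etc/init.d/hostd restart
/etc/init.d/vpxa restart
```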
If you now refresh your browser on the ESXi Embedded Host Client and look at the certificate, it will show your hostname for the CN. Note that it is still a self-signed certificate (and therefore untrusted by default).
SSH/SCP access
Enabling ssh access earlier at the console did enable ssh when ESXi is the target host, but ssh/scp initiated from the ESXi host to another host is still not enabled. All you will see is “FIPS mode initialized” and a timeout.
To make this work, you need to enable the outgoing SSH client ruleset in the ESXi firewall. Run the following commands to check the ruleset, then enable it.
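For example:

```
# check the current state of the outgoing ssh client ruleset
esxcli network firewall ruleset list --ruleset-id sshClient

# allow ssh/scp initiated from the ESXi host
esxcli network firewall ruleset set --ruleset-id sshClient --enabled true
```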
At that point, scp initiated from the ESXi host will work.
MOB enablement
Although it is disabled by default for security reasons, on a lab or non-production system you may need to use the MOB (Managed Object Browser), for example to view the VT-x and EPT features passed through to your nested ESXi server using a URL like the one below:
https://esxi1.home.lab/mob/?moid=ha-host&doPath=capability
But first it must be enabled, and this can be done via vCenter or from the console of the ESX server using ‘vim-cmd’:
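From an SSH session or the ESXi shell:

```
# enable the Managed Object Browser
vim-cmd hostsvc/advopt/update Config.HostAgent.plugins.solo.enableMob bool true

# confirm the setting
vim-cmd hostsvc/advopt/view Config.HostAgent.plugins.solo.enableMob
```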
REFERENCES
- https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-esxi-vcenter-server-67-release-notes.html#compatibility (HW support list for 6.7)
- https://www.pluralsight.com/blog/it-ops/top-vmware-compatibility-issues (top compatibility issues)
- https://www.thehumblelab.com/vsphere-67-homelabs-unsupported-cpu/ (working around incompatibility)
- https://tinkertry.com/patch-esxi-6-manually (upgrading while keeping VIBs)
- https://xenserver.org/blog/entry/vga-over-cirrus-in-xenserver-6-2.html (using cirrus)
- https://www.redhat.com/archives/libvirt-users/2015-April/msg00067.html (use a raw disk, not qcow2)
- https://haveyoutriedreinstalling.com/2017/07/17/vsphere-6-x-certificates-just-because-you-can-doesnt-mean-you-should/ (list of certs in vSphere 6)
NOTES
If ESXi complains about not supporting virtualization [1], edit /etc/vmware/config on the ESXi host and add the two keys described in [1].
esxcfg-info reports whether HV Support is enabled (a value of 3 means full support), along with the number of cores/threads.
Get the CPU and core results:
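A sketch, run on the ESXi host:

```
# HV Support level: a value of 3 means full nested hardware virtualization support
esxcfg-info | grep "HV Support"

# CPU package, core, and thread counts
esxcli hardware cpu global get
```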
dmesg | grep -E "vmx|iommu"
Checking host VT-d virtualization for IO, IOMMU (different from cpu VT-x)
dmesg | grep -iE "dmar|iommu|aspm"
cat /var/log/kern.log | grep IOMMU
Add to /etc/default/grub (for VT-d; bug in 18.04):
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
$ grub-install --version
$ sudo update-grub (equiv to grub2-mkconfig)
Keep dnsmasq from forwarding local addresses [1]
List of ESXi software components [1]
Direct ESXi host modes
Maintenance mode [1]
Mount CDROM from inside ESXi [1,2]
ESXi software repository and installs
Get list of valid cpu flags for qemu
List cpu model feature flag capabilities
View qemu details for vm guest startup
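A few commands covering these notes; the binary names assume an Ubuntu host:

```
# list the cpu models and feature flags known to this qemu binary
qemu-system-x86_64 -cpu help

# list the cpu models libvirt can use for guests
virsh cpu-models x86_64

# view the full qemu command line of a running guest
ps -ef | grep qemu-system-x86_64
```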