April 30, 2023

Intel Optane NVMe Drives – Sample Hardware – VMware vSAN OSA vs. ESA Infrastructure Preparation

By H. Cemre Günay

As you know, I am one of the lucky participants in the Intel Optane NVMe Sample Hardware Program, powered by VMware vExpert in cooperation with Intel.

My first idea was to write one big blog post for the whole project, but I have decided to split it into two parts. In this part we will talk about the infrastructure and requirements, and in the second part we will look at the performance differences between vSAN OSA and ESA. Let's start with a quick overview of the requirements for both vSAN architectures.

VMware vSAN Original Storage Architecture Requirements

  • One SAS or SATA solid-state disk (SSD) or PCIe flash Cache device.
  • Hybrid disk group configuration must have at least one SAS or NL-SAS magnetic disk.
  • All-flash disk group configuration must have at least one SAS, or SATA solid-state disk (SSD), or PCIe flash device.
  • One SAS or SATA host bus adapter (HBA), or a RAID controller that is in passthrough mode or RAID 0 mode.
  • The memory requirements for vSAN Original Storage Architecture depend on the number of disk groups and devices that the ESXi hypervisor must manage.
  • All-flash configurations need 10 GbE network bandwidth; hybrid configurations can start with 1 GbE.
  • Available since VMware vSphere 5.5 (2014)

VMware vSAN Express Storage Architecture Requirements

  • VMware vSAN ReadyNodes only
  • Two-socket CPUs with 32 cores each
  • Each storage pool must have at least four NVMe TLC devices with a minimum capacity of 1.6 TB each.
  • Requires at least 512 GB host memory. The memory needed for your environment depends on the number of devices in the host’s storage pool.
  • One 25 GbE NIC minimum, 100 GbE recommended
  • Only available with VMware vSphere 8 and newer
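
To get a feeling for where my hosts stand against these ESA minimums, a small pyVmomi sketch like the one below can print cores, memory and NVMe device count per host. The vCenter name and credentials are placeholders, and the "nvme" match on the device's canonical name is an assumption about how the drives report themselves – treat it as a starting point, not an official readiness check.

```python
# Rough per-host check against the vSAN ESA minimums listed above.
# vCenter name/credentials are placeholders; the "nvme" name match is an assumption.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    hw = host.summary.hardware
    cores_per_socket = hw.numCpuCores // hw.numCpuPkgs
    mem_gb = hw.memorySize / (1024 ** 3)
    # Count disks whose canonical name suggests an NVMe device
    nvme = [d for d in host.config.storageDevice.scsiLun
            if isinstance(d, vim.host.ScsiDisk) and "nvme" in d.canonicalName.lower()]
    ok = (hw.numCpuPkgs >= 2 and cores_per_socket >= 32
          and mem_gb >= 512 and len(nvme) >= 4)
    print(f"{host.name}: {hw.numCpuPkgs} sockets x {cores_per_socket} cores, "
          f"{mem_gb:.0f} GB RAM, {len(nvme)} NVMe device(s) -> ESA minimums met: {ok}")

view.Destroy()
Disconnect(si)
```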

Cluster Requirements for VMware vSAN

All capacity devices, drivers, and firmware versions in your vSAN configuration must be certified and listed in the vSAN section of the VMware Compatibility Guide.
A standard vSAN cluster must contain a minimum of three hosts that contribute capacity to the cluster. A two-host vSAN cluster (like in my case) consists of two data hosts and an external witness host.

irgNET Infrastructure

As you know from my monthly HomeLab updates, I have two Dell R730 servers, each with 2x Intel Xeon E5-2680 v3 CPUs, 256 GB of memory and 10 GbE connectivity: two ports for management and VM network traffic, and two crossover ports for VMware vSAN and vMotion. My main storage is based on vSAN OSA, with 1x 800 GB Intel DC P3700 PCIe device and 5x 1.92 TB Dell SAS SSDs per host.

Thanks to the VMware vExpert program, I can build a VMware vSAN ESA cluster with 4x 280 GB Intel Optane drives per host. Since my servers are a few years old, they do not support NVMe drives via the front bays, so I needed another solution. No sooner said than done: thanks to my mentor Marc Huppert, we found the following adapters: https://www.ebay.de/itm/394416128557

The adapters arrived relatively quickly (from good old China) and were fitted with the Intel Optane drives in the Ironforge – for those who play World of Warcraft 😉

One of the biggest challenges was making both Intel Optane drives on their respective adapters visible in my VMware vCenter so I could create the storage pools. After installing the drives and booting the servers, only one Intel Optane drive per adapter was visible.

To solve this problem, I had to look into the BIOS settings of my Dell R730. I found the solution under

Slot Bifurcation

This setting lets you control the bifurcation of the specified slot. The configuration for a x16 slot is the default x16, x8x8 or x4x4x4x4; the configuration for a x8 slot is the default x8 or x4x4. After setting the slot bifurcation correctly – Slot 4 (a x16 slot) to x4x4x4x4 and Slot 5 (a x8 slot) to x4x4 – both Intel Optane drives per adapter became visible in my VMware vCenter.
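
If you want to double-check what the host actually sees after changing the bifurcation, a minimal pyVmomi sketch along these lines can list the relevant PCI devices per host. The vCenter name and credentials are placeholders, and the "optane"/"non-volatile memory" name filter is an assumption about how the controllers identify themselves – adjust it to whatever your hardware reports.

```python
# List PCI devices per host and filter for what looks like the Optane controllers.
# vCenter name/credentials are placeholders; the name filter is an assumption.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    print(host.name)
    for dev in host.hardware.pciDevice:
        name = f"{dev.vendorName} {dev.deviceName}"
        if "optane" in name.lower() or "non-volatile memory" in name.lower():
            print(f"  {dev.id}  {name}")  # one line per visible NVMe controller

view.Destroy()
Disconnect(si)
```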

Additionally, each node has an NVIDIA Tesla P4 GPU for my VMware Horizon environment.

My third Dell R730 (not shown in the picture) runs the Witness Appliances for both architectures.

Regarding the vSAN ESA architecture, I am not fulfilling all of the hardware requirements, but I do meet the most critical one: the NVMe devices. It is therefore possible to build an (unsupported) 2-node vSAN ESA cluster with my setup. I ran the following HCIBench performance tests against my vSAN OSA cluster, with 10 VMs each:

  • 4k Block Size – 0% read – 100% random
  • 4k Block Size – 70% read – 100% random
  • 4k Block Size – 100% read – 0% random (performance)
  • 4k Block Size – 100% read – 100% random
  • 8k Block Size – 50% read – 100% random
  • 256k Block Size – 100% read – 100% random
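
For reference, here is a rough standalone approximation of those six profiles as fio command lines. HCIBench drives fio/vdbench with its own parameter files, so treat this only as a sketch for re-running a profile by hand; the device path, runtime, queue depth and job count are placeholders.

```python
# Approximate the six HCIBench workload profiles above as fio command lines.
# Target device, runtime, iodepth and numjobs are placeholders, not HCIBench defaults.
profiles = [
    {"bs": "4k",   "read": 0,   "random": True},   # 0% read, 100% random
    {"bs": "4k",   "read": 70,  "random": True},   # 70% read, 100% random
    {"bs": "4k",   "read": 100, "random": False},  # 100% read, sequential ("performance")
    {"bs": "4k",   "read": 100, "random": True},   # 100% read, 100% random
    {"bs": "8k",   "read": 50,  "random": True},   # 50% read, 100% random
    {"bs": "256k", "read": 100, "random": True},   # 100% read, 100% random, large block
]

for p in profiles:
    if p["random"]:
        rw = "randrw" if 0 < p["read"] < 100 else ("randread" if p["read"] == 100 else "randwrite")
    else:
        rw = "read" if p["read"] == 100 else "write"
    cmd = (f"fio --name=vsan-test --filename=/dev/sdX --direct=1 --ioengine=libaio "
           f"--bs={p['bs']} --rw={rw} --iodepth=32 --numjobs=4 --runtime=600 --time_based")
    if rw == "randrw":
        cmd += f" --rwmixread={p['read']}"
    print(cmd)
```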

So, a couple of quite informative tests. I will run the same tests on the rebuilt vSAN ESA cluster so we have a 1:1 comparison. Before tearing down my vSAN OSA cluster, I took some screenshots of the proper disk group deletion, which is also part of the preparation for the vSAN ESA cluster. Starting with the overview of my disk groups:
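
For those who prefer the API to screenshots, a minimal pyVmomi sketch like the following prints the same per-host disk group overview; the vCenter name and credentials are placeholders.

```python
# Print a simple disk group overview for every vSAN-enabled host.
# vCenter name/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    vsan_cfg = host.config.vsanHostConfig
    if not (vsan_cfg and vsan_cfg.enabled):
        continue
    print(host.name)
    for i, mapping in enumerate(vsan_cfg.storageInfo.diskMapping or [], start=1):
        print(f"  Disk group {i}: cache {mapping.ssd.displayName}, "
              f"{len(mapping.nonSsd)} capacity device(s)")

view.Destroy()
Disconnect(si)
```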

and the overview of my cluster:

All the way to migrating my VMs to my third Dell R730.
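
The same evacuation can also be scripted. Here is a minimal pyVmomi sketch under the assumption that the third host has a datastore the VMs can be relocated to; the hostnames, credentials and the "first datastore" choice are all placeholders.

```python
# Relocate all VMs that are not yet on the third R730 over to it.
# Hostnames/credentials are placeholders; datastore[0] is a placeholder target datastore.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

target = content.searchIndex.FindByDnsName(None, "esxi03.lab.local", False)  # the third R730

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.config and vm.config.template:
        continue  # skip templates
    if vm.runtime.host == target:
        continue  # already on the target host
    spec = vim.vm.RelocateSpec(host=target,
                               pool=target.parent.resourcePool,
                               datastore=target.datastore[0])
    print(f"Migrating {vm.name} ...")
    WaitForTask(vm.RelocateVM_Task(spec))

view.Destroy()
Disconnect(si)
```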

And after successfully putting both nodes into Maintenance Mode, deleting the respective Disk Groups.
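
Scripted, that step could look roughly like the pyVmomi sketch below: enter maintenance mode with a vSAN decommission mode and then remove the host's disk mappings. Hostnames and credentials are placeholders, and in a 2-node cluster "ensureObjectAccessibility" is usually the only realistic data-handling option.

```python
# Put a node into maintenance mode (with a vSAN decommission mode) and delete its disk groups.
# Hostnames/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()
host = content.searchIndex.FindByDnsName(None, "esxi01.lab.local", False)

# Enter maintenance mode, telling vSAN how to handle the data on this node
maint_spec = vim.host.MaintenanceSpec(
    vsanMode=vim.vsan.host.DecommissionMode(objectAction="ensureObjectAccessibility"))
WaitForTask(host.EnterMaintenanceMode_Task(timeout=0, evacuatePoweredOffVms=False,
                                           maintenanceSpec=maint_spec))

# Remove every disk group (disk mapping) on the host
vsan_system = host.configManager.vsanSystem
for mapping in host.config.vsanHostConfig.storageInfo.diskMapping or []:
    WaitForTask(vsan_system.RemoveDiskMapping_Task(mapping=[mapping]))

Disconnect(si)
```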

Up to disabling VMware HA

and VMware DRS
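
For completeness, both switches can also be flipped through the API. A minimal pyVmomi sketch might look like this, where the datacenter/cluster inventory path and the credentials are placeholders for my environment.

```python
# Disable vSphere HA and DRS on the cluster before the rebuild.
# Inventory path and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()
cluster = content.searchIndex.FindByInventoryPath("irgNET/host/vSAN-Cluster")  # placeholder path

spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(enabled=False),   # vSphere HA off
    drsConfig=vim.cluster.DrsConfigInfo(enabled=False))   # DRS off
WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))

Disconnect(si)
```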

The final preparation is the deployment of the vSAN ESA Witness appliance.

Then it is time to start configuring the VMware vSAN ESA cluster, and that is where I conclude this blog post.

Look forward to the second part, where we will finally look at the direct comparison between the two architectures. 🙂