This topic covers the virtual machine requirements, including memory and data storage, for installation of the Delphix Engine on a VMware virtualization platform.

Ideally, the Delphix Engine Virtual Machine should be placed on a server where it will not contend with other VMs for network, storage or compute resources. The Delphix Engine is an I/O intensive application, and deploying it in an environment where it must share resources with other virtual machines, especially in configurations that involve sharing I/O channels, disk spindles, and network connections, can significantly reduce virtual database performance.

Virtualization Platform
  • VMware ESX/ESXi 6.0 through 6.5 (recommended)
  • VMware ESX/ESXi 5.1 (supported)
  • VMware ESX/ESXi 5.5 or later is required for VMDK sizes greater than 2TB.
Virtual CPUs
  • 8 vCPUs
  • CPU resource shortfalls can occur under high I/O throughput conditions. CPU reservation is strongly recommended for the Delphix VM, so that Delphix is guaranteed the full complement of vCPUs even when resources are overcommitted.

  • Use a single core per virtual socket when configuring the Delphix Engine VM on ESX, unless specific hypervisor settings require otherwise.

Do Not Allocate All CPUs to Virtual Machine Guests

Never allocate all available physical CPUs to virtual machines. CPU capacity for the ESX Server to perform hypervisor activities must be set aside before assigning vCPUs to Delphix and other VMs. We recommend reserving a minimum of 2 CPUs for hypervisor operation.
Memory
  • 128 GB vRAM (recommended)
  • 64 GB vRAM (minimum)
  • The Delphix Engine uses its memory to cache database blocks. More memory will provide better read performance.
  • Memory reservation is required for the Delphix VM. Performance of the Delphix Engine will be significantly impacted by overcommitment of memory resources in the ESX Server. Reservation ensures that the Delphix Engine will not stall while waiting for its memory to be paged in by the ESX Server.

Do Not Allocate All Memory to Virtual Machine Guests

Never allocate all available physical memory to virtual machines. Memory for the ESX Server to perform hypervisor activities must be set aside before assigning memory to Delphix and other VMs. The default ESX minimum free memory requirement is 6% of total RAM. When free memory falls below 6%, ESX starts swapping out the Delphix guest OS. We recommend leaving about 8-10% free to avoid swapping.

For example, when running on an ESX Host with 512GB of physical memory, no more than 470GB (92%) should be allocated to the Delphix VM (and all other VMs on that host).
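The headroom arithmetic above can be sketched as a small helper. This is an illustrative calculation only; the function name and the default 8% headroom figure are assumptions taken from the 8-10% guidance above, not part of any Delphix or VMware tooling.

```python
def max_vm_allocation_gb(host_ram_gb, headroom_pct=8):
    """Return the memory that can safely be allocated across all VMs on an
    ESX host, leaving headroom_pct percent free for hypervisor operation
    (the guidance above recommends keeping about 8-10% free)."""
    return host_ram_gb * (100 - headroom_pct) / 100

# A 512 GB host with ~8% headroom leaves roughly 471 GB for all guests,
# consistent with the ~470 GB (92%) figure above.
print(round(max_vm_allocation_gb(512)))
```

With 10% headroom instead, the same host should carry no more than about 460 GB of guest memory.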

Network
  1. The .ova is pre-configured to use one virtual ethernet adapter of type VMXNET 3. If additional virtual network adapters are desired, they should also be of type VMXNET 3.
  2. A 10GbE NIC in the ESX Server is recommended.
  3. If the network load in the ESX Server hosting the Delphix Engine VM is high, dedicate one or more physical NICs to the Delphix Engine.
  • Jumbo frames are highly recommended to reduce CPU utilization, decrease latency, and increase network throughput (typically a 10-20% throughput improvement).
  • For environments having only gigabit networks, it is possible to aggregate several physical 1GbE NICs together to increase network bandwidth (but not necessarily to reduce latency). Refer to the VMware Knowledge Base article NIC Teaming in ESXi and ESX. Do not aggregate NICs in the Delphix Engine VM.
  • See General Network and Connectivity Requirements for information about specific port configurations, and Network Performance Configuration Options for information about network performance tuning.
SCSI Controller
  • LSI Logic Parallel

When adding virtual disks, distribute the load evenly across the maximum of 4 virtual SCSI controllers. Spreading the disks evenly across the available SCSI controllers ensures optimal I/O performance. For example, a VM with 4 SCSI controllers and 8 virtual disks should distribute the disks across the controllers as follows:

disk0 = SCSI(0:0) - System Disk on Controller 0 Port 0 (ignore for purposes of load balancing)
disk1 = SCSI(0:1) - Data Disk on Controller 0 Port 1
disk2 = SCSI(1:1) - Data Disk on Controller 1 Port 1
disk3 = SCSI(2:1) - Data Disk on Controller 2 Port 1
disk4 = SCSI(3:1) - Data Disk on Controller 3 Port 1
disk5 = SCSI(0:2) - Data Disk on Controller 0 Port 2
disk6 = SCSI(1:2) - Data Disk on Controller 1 Port 2
disk7 = SCSI(2:2) - Data Disk on Controller 2 Port 2
disk8 = SCSI(3:2) - Data Disk on Controller 3 Port 2

Note: For load purposes, we generally focus on the DB storage and ignore the controller placement of the system disk.
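The round-robin placement above can be sketched as follows. The helper name is illustrative, and the sketch deliberately ignores details such as SCSI ID 7, which a real virtual SCSI controller reserves for itself; it only reproduces the even-distribution pattern shown in the example.

```python
def distribute_data_disks(n_disks, n_controllers=4):
    """Map data disks round-robin across virtual SCSI controllers.
    SCSI(0:0) is assumed to hold the system disk, so data disks begin
    at port 1 on each controller."""
    layout = {"disk0": "SCSI(0:0)"}  # system disk, ignored for load balancing
    for i in range(n_disks):
        controller = i % n_controllers   # cycle across controllers first
        port = 1 + i // n_controllers    # then advance the port number
        layout[f"disk{i + 1}"] = f"SCSI({controller}:{port})"
    return layout

for name, addr in distribute_data_disks(8).items():
    print(name, addr)
```

For 8 data disks on 4 controllers this yields the same layout as the example: ports 1 and 2 on each controller each carry one data disk.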

General Storage Configuration

Storage used for Delphix must be provisioned from storage that provides data protection, e.g. by using RAID levels with data protection features, or equivalent technology. The Delphix Engine product does not protect against data loss originating at the hypervisor or SAN layers.

See Optimal Storage Configuration Parameters for the Delphix Engine

Delphix VM Configuration Storage
  1. The Delphix VM configuration should be stored in a VMFS volume (often called a "datastore").
  2. The VMFS volume should have enough available space to hold all ESX configuration and log files associated with the Delphix Engine.
  • If a memory reservation is not enabled for the Delphix Engine (in violation of memory requirements stated above), then space for a paging area equal to the Delphix Engine's VM memory must be added to the VMFS volume containing the Delphix VM configuration data.
Delphix Engine System Disk Storage
  1. The Delphix Engine system disk should be stored in a VMDK.
  2. The Delphix .ova file is configured for a 300GB system drive. The VMFS volume where the .ova is deployed should, therefore, have at least 300GB of free space prior to deploying the .ova.
  3. The VMFS volume must be located on shared storage in order to use vMotion and HA features.
  • The VMDK for the Delphix Engine System Disk Storage is often created in the same VMFS volume as the Delphix VM definition. In that case, the datastore must have sufficient space to hold the Delphix VM Configuration, the VMDK for the system disk, and a paging area if a memory reservation was not enabled for the Delphix Engine.
Database Storage
  1. VMDKs or RDMs operating in virtual compatibility mode can be used for database storage.
  2. A minimum of 4 VMDKs or RDMs should be allocated for database storage.
  3. If using VMDKs:
    • Each VMDK should be in a different VMFS volume
    • Each VMDK should be the only VMDK in its VMFS volume
    • The VMFS volumes should be assigned to dedicated physical LUNs on redundant storage. The VMFS volumes should not be shared with the ESX Server Console or any other Virtual Machines.

    • On vSphere 5.x, the VMDKs should be created with the Thick Provision Lazy Zeroed option.
    • On vSphere 4.x, the VMDKs should be created with the Thick option (Thin provisioning not selected).
  4. The quantity and size of VMDKs or RDMs assigned must be identical across all 4 controllers.
  5. The physical LUNs used for VMFS volumes and RDMs should be of the same type in terms of performance characteristics such as latency, RPMs, and RAID level. In addition, the total number of disk drives that comprise the set of physical LUNs should be capable of providing the desired aggregate I/O throughput (MB/sec) and IOPS (Input/Output Operations per Second) for all virtual databases that will be hosted by the Delphix Engine.
  6. The physical LUNs used for VMFS volumes can be thin-provisioned in the storage array.
  7. For best performance, the LUNs used for RDMs should not be thin-provisioned in the storage array, but should be thick-provisioned with a size equal to the amount of storage that will be initially allocated to the Delphix Engine. The RDM can be expanded in the future when more storage is needed.
  8. Shared storage is required in order to use vMotion and HA features.
  • Allocating a minimum of 4 VMDKs or RDMs for database storage enables the Delphix File System (DxFS) to make sure that its file systems are always consistent on disk without additional serialization. This also enables the Delphix Engine to achieve higher I/O rates by queueing more I/O operations to its storage.
  • Provisioning VMDKs from isolated VMFS volumes on dedicated physical LUNs:
    • Reduces contention for the underlying physical LUNs
    • Eliminates contention for locks on the VMFS volumes from other VMs and/or the ESX Server Console
    • Enables higher availability of the Delphix VM by allowing vSphere to vMotion the VM to a different ESX host in the event of a failure of the Delphix ESX host
  • If the underlying storage array allocates physical LUNs by carving them from RAID groups, the LUNs should be allocated from different RAID groups. This eliminates contention for the underlying disks in the RAID groups as the Delphix Engine distributes IO across its storage devices.
  • If the storage array allocates physical LUNs from storage pools comprising dozens of disk drives, the LUNs should be distributed evenly across the available pools.
  • Using thin-provisioned LUNs in the storage array for VMFS volumes can be useful if you anticipate adding storage to the Delphix Engine in the future. In this case, the LUNs should be thin-provisioned with a size larger than the amount of storage that will be initially allocated to the Delphix Engine. When you want to add more storage to the Delphix Engine, use vSphere to expand the size of the VMDKs. Be sure to specify that the additional storage is also thick-provisioned and eager-zeroed.
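The aggregate throughput and IOPS sizing described in point 5 above can be estimated with a back-of-the-envelope calculation like the one below. The per-drive figures in the example are illustrative assumptions, and the sketch ignores RAID write penalties, array cache effects, and controller limits, so treat the result as a rough floor rather than a guarantee.

```python
def aggregate_capability(n_drives, iops_per_drive, mb_s_per_drive):
    """Rough aggregate IOPS and throughput (MB/sec) for the set of
    physical drives backing the Delphix LUNs. Ignores RAID write
    penalty, cache effects, and array controller limits."""
    return {
        "iops": n_drives * iops_per_drive,
        "throughput_mb_s": n_drives * mb_s_per_drive,
    }

# e.g. 16 drives at an assumed ~175 IOPS and ~100 MB/s each:
print(aggregate_capability(16, 175, 100))
```

Comparing such an estimate against the expected combined load of all virtual databases gives a first check on whether the chosen LUN layout can sustain the required aggregate I/O.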

In addition to making sure the latest VMware patches have been applied, check with your hardware vendor for updates specific to your hardware configuration.

Additional VMware Features

  • Running Delphix inside of vSphere is supported.
    • Using vMotion on a Delphix VM is supported.
  • Device passthrough is not supported.

Known Issues

According to the following HP advisory, "On HP ProLiant servers configured with any of the HP Smart Array Controllers listed in the Scope section (below) and running VMware ESXi 5.0, 5.1, or 5.5, or Red Hat Enterprise Linux 6 or 7, an out-of-memory condition may lead to a server halt and purple screen after upgrading to HP Smart Array Controller Driver (hpsa) Version 5.x.0.58-1 (ESXi 5.0 and ESXi 5.1), Version (ESXi 5.5), or Version 3.4.4-125 (Red Hat Enterprise Linux)."