Multiple device removal in the Delphix Engine introduces a breaking kernel module change that requires a reboot to load the new module. As a result, a deferred reboot engine upgrade operation will be unable to remove devices until a reboot is performed.

Getting Started

Delphix storage migration allows you to remove storage devices from your Delphix Engine, provided there is sufficient unused space, thus allowing you to repurpose the removed storage device to a different server. You can also use this feature to migrate your Delphix Engine to different storage by adding the new storage devices and then removing the old storage devices.

Feature Compatibility

This feature is only compatible with Delphix Engine releases 5.0.4 and later.

Possible Migration Methods

Delphix Storage Migration

  • Good for migrating storage that was accidentally added to the engine or added to the engine improperly (wrong size).

  • May reduce fragmentation if new storage is added to replace old disks.

  • Good for migrating a small amount of storage (e.g., < 10 TB).

  • With Delphix versions prior to 5.3, this mapping table can consume 2-3 GB of RAM for every 1 TB of allocated data that is migrated, if the disk being removed has a high level of fragmentation. From version 5.3, DxFS will migrate the data in larger blocks, comprising both allocated and unallocated space. This allows for significantly fewer mapping entries, with memory usage typically reduced to 50-100 MB per TB of allocated data that is migrated.

  • May increase fragmentation on remaining disks if no new disks are added.

  • Depending on the size of the disk and storage performance, this method could be slower than other methods.

  • Each device removed could take longer than the previous one as data is remapped across the remaining disks.

  • A maximum of 20 devices can be removed in releases prior to Delphix version 5.1.


Storage vMotion

  • Fast

  • If there is high fragmentation on the existing disks, this is copied to the new disks.

Delphix Replication

  • Data is completely rewritten from one Delphix Engine to another which significantly reduces fragmentation on the new Delphix Engine.

  • Replication can be configured to limit the impact on the network (compression and bandwidth).

  • Replication is resumable after network disruptions, or in the event of replication source or target stack/host restarts. It is currently NOT possible to manually suspend/resume a replication job.

  • Depending on the number of objects to replicate as well as network and storage performance, this method could be considered slow.

  • This does require an outage to "migrate" the objects from the replication source Delphix Engine to the replication target Delphix Engine - outage time depends on several factors like the number of objects, incremental replication time, time to enable/disable objects, etc.

  • Only migrates storage objects like VDBs/dSources, and dependent environment information. Other items like users/policies/events/job history/config templates are NOT replicated.

Understanding Delphix Storage Migration

Delphix storage migration is a multi-step process:

  1. Identify a storage device for removal. The device you choose will depend on your use case.  
    1. To remove extra storage that is unused, you can select any device for removal. For best performance, select the device with the least allocated space; typically, this is the device that you added most recently. The allocated space is defined by the usedSize property of the storage device:

      areece-test1.dcenter 'Disk10:2'> ls
          type: ConfiguredStorageDevice
          name: Disk10:2
          bootDevice: false
          configured: true
          expandableSize: 0B
          model: Virtual disk
          reference: STORAGE_DEVICE-6000c293733774b7bb0e4aea83513b36
          serial: 6000c293733774b7bb0e4aea83513b36
          size: 8GB
          usedSize: 7.56MB
          vendor: VMware
    2. To migrate the Delphix Engine to new storage, add storage devices backed by the new storage to the Delphix Engine. Then remove all the devices on the old storage.
  2. Use the Delphix command-line interface (CLI) to initiate the removal of your selected device. 
  3. Data will be migrated from the selected storage device to the other configured storage devices. This process will take longer the more data there is to move; for very large disks, it could potentially take hours. You can cancel this step if necessary. 
  4. The status of the device changes from configured to unconfigured and an alert is generated to inform you that you can safely detach the storage device from the Delphix Engine. After this point, it is not possible to undo the removal, although it is possible to add the storage device back to the Delphix Engine.
  5. Use the hypervisor to detach the storage device from the Delphix Engine. After this point, the Delphix Engine is no longer using the storage device, and you can safely re-use or destroy it.
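The device-selection guidance in step 1 can be sketched as a small script. This is an illustrative aid only; the device names and usedSize values below are made-up examples mirroring the CLI output shown above, not output from a real Delphix API:

```python
# Hedged sketch: choose the removal candidate with the least allocated
# space (usedSize), per step 1a. Data is illustrative, not a real API call.
devices = [
    {"name": "Disk10:0", "usedSizeMB": 1024.0},
    {"name": "Disk10:2", "usedSizeMB": 7.56},
    {"name": "Disk10:3", "usedSizeMB": 512.0},
]

# The best candidate is typically the most recently added, least-used device.
candidate = min(devices, key=lambda d: d["usedSizeMB"])
print(candidate["name"])  # Disk10:2
```

In practice you would read the usedSize property of each ConfiguredStorageDevice from the CLI and pick the smallest by hand; the script just makes the selection rule explicit.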

Limitations of Delphix Storage Migration

After removal, the Delphix Engine uses memory to track the removed data. In the worst-case scenario, this could be as much as 2-3 GB of memory per TB of used storage. Note that this is used storage; the overhead of removing a 1TB device with only 500MB of data on it will be much lower than the overhead of removing a 10GB device with 5GB of data on it.
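As a rough planning aid, the per-TB figures quoted above can be turned into a back-of-the-envelope estimate. This is a sketch derived only from the numbers in this document (2-3 GB/TB worst case before 5.3, 50-100 MB/TB from 5.3 onward), not a Delphix-provided formula:

```python
# Rough estimate of remapping-table memory for a device removal, using the
# per-TB figures quoted in this document. Planning sketch only, not an
# official Delphix calculation.
def mapping_memory_gb(allocated_tb, engine_version):
    """Return a (low, high) estimate of mapping memory in GB."""
    if engine_version < (5, 3):
        per_tb = (2.0, 3.0)      # GB per TB; highly fragmented worst case
    else:
        per_tb = (0.05, 0.10)    # 50-100 MB per TB
    return (allocated_tb * per_tb[0], allocated_tb * per_tb[1])

# Example: removing 4 TB of allocated data.
print(mapping_memory_gb(4, (5, 2)))  # pre-5.3 worst case
print(mapping_memory_gb(4, (6, 0)))  # 5.3 and later
```

Note that the estimate scales with allocated (used) data, not raw device size, which is why removing a nearly empty large disk is cheap.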

User Interface

Delphix storage migration is currently available exclusively via the CLI. There are three entry points:

  • storage/remove – Shows the status of the current or most recent removal, including the total memory used by all removals up to this point
  • storage/device "$device"/removeVerify – Returns the predicted effect of removing the selected device, or an error if the device cannot be removed
  • storage/device "$device"/remove – Begins the evacuation and removal of the selected device

Device Removal for Storage Migration


Do not remove a configured storage device or reduce its capacity. Removing or reducing a configured storage device will cause a fault with the Delphix Engine, and will require the assistance of Delphix Support for recovery.

  1. Identify which device you want to remove.

    1. If you are using a VMware RDM disk, note the UUID of the device by looking at its name in the vSphere GUI. For more information, see Getting the UUID of an RDM Disk from VMware via the vSphere GUI below.

    2. If you are using a VMware virtual disk, note the UUID of the device via the vSphere API. See the relevant VMware KB article, or the sections below, for how to get the UUID of your virtual disk.

    3. In EC2, note the attachment point – for example,  /dev/sdf.

    4. In KVM, note the UUID.

  2. Log in to the Delphix CLI as a sysadmin user.

  3. Type cd storage/device.

  4. Select your device:

    areece-test1.dcenter storage device> ls
    Disk10:2  true        8GB   0B
    Disk10:0  true        24GB  0B
    Disk10:1  true        8GB   0B
    Disk10:3  true        8GB   0B
    areece-test1.dcenter storage device> select Disk10:2
  5. (VMware only) Confirm that your disk selection is correct by validating that the serial matches your UUID:

    areece-test1.dcenter storage device 'Disk10:2'> ls
       type: ConfiguredStorageDevice
       name: Disk10:2
       bootDevice: false
       configured: true
       expandableSize: 0B
       model: Virtual disk
       reference: STORAGE_DEVICE-6000c2909ccd9d3e4b5d62d733c5112f
       serial: 6000c2909ccd9d3e4b5d62d733c5112f
       size: 8GB
       usedSize: 8.02MB
       vendor: VMware
  6. Execute removeVerify to confirm that removal will succeed. Validate the amount of memory/storage used by the removal:

    areece-test1 storage device 'Disk10:2'> removeVerify
    areece-test1 storage device 'Disk10:2' removeVerify *> commit
       type: StorageDeviceRemovalVerifyResult
       newFreeBytes: 15.85GB
       newMappingMemory: 3.14KB
       oldFreeBytes: 23.79GB
       oldMappingMemory: 0B
  7. Execute remove to start the device evacuation:

    areece-test1 storage device 'Disk10:2'> remove
    areece-test1 storage device 'Disk10:2' remove *> commit
       Dispatched job JOB-1
       STORAGE_DEVICE_START_REMOVAL job started for "Disk10:2".
       STORAGE_DEVICE_START_REMOVAL job for "Disk10:2" completed successfully.

    This does not signify that the device migration has been completed.
    A STORAGE_DEVICE_REMOVAL job will start, which handles the data migration from the disk.

  8. Wait for device evacuation to complete. Alternatively, you can cancel the evacuation. 
    Do not detach the device from the Delphix Engine in your hypervisor until the data evacuation is completed.

    You can monitor the progress of the STORAGE_DEVICE_REMOVAL job in the Management GUI under System > Jobs.

  9. Once the device evacuation has completed, the job will finish and a fault will be generated. Detach the disk from your hypervisor, and the fault will clear on its own.
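Before committing to a removal, the removeVerify figures from step 6 are worth a quick sanity check. The helper below is a hypothetical convenience using the field names from the StorageDeviceRemovalVerifyResult output; the interpretation is an assumption based on this document, not a Delphix recommendation:

```python
# Hedged sketch: interpret removeVerify results before running remove.
# Field names mirror StorageDeviceRemovalVerifyResult; the logic is an
# illustrative assumption, not part of the Delphix CLI.
def summarize_remove_verify(old_free_gb, new_free_gb, new_mapping_memory_mb):
    """Summarize what the removal will cost in capacity and tracking memory."""
    freed_capacity_gb = old_free_gb - new_free_gb   # roughly the device size
    return {
        "capacity_lost_gb": round(freed_capacity_gb, 2),
        "extra_mapping_memory_mb": new_mapping_memory_mb,
    }

# Values from the step 6 transcript (newMappingMemory 3.14KB ~= 0.003 MB):
print(summarize_remove_verify(23.79, 15.85, 0.003))
```

If the predicted free space after removal is too small for your workload, or the mapping memory is large, cancel and add new storage before removing the device.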

Using VMDKs

When using VMDKs, deleting the wrong VMDK could cause data loss. Therefore, it is highly advisable to detach the device, then verify that the Delphix Engine continues to operate correctly, and lastly delete the VMDK.

Getting the UUID of an RDM Disk from VMware via the vSphere GUI

If the disk serial number displayed in Delphix does not match the UUID in VMware, the Delphix Engine must be powered off and back on in order for VMware to provide the correct values to the guest operating system (Delphix). This has been necessary after changing a UUID with vmkfstools setuuid: even when the guest OS is forced to re-read the SCSI sense data for the device, and even after a simple reboot, VMware still presents the original UUID values. Only after the VM was explicitly powered off and back on did VMware present the new UUID to the guest, after which the UUIDs matched between the vmkfstools getuuid command and the CLI output.

In the ESX graphical user interface (GUI), select your VM. Then:

  1. Click Edit settings.
  2. If not already displayed, select the Hardware tab.
  3. Select the device you want to remove.
  4. Click Manage Paths.

The UUID of the device appears in the title bar.

Getting the UUID of a VMDK from VMware, via ssh to the ESX server

  1. ssh onto the ESX server as the root user.
  2. Navigate to the directory containing the .vmdk files for the Delphix VM.
  3. Use the vmkfstools -J getuuid <.vmdk filename> command to obtain the UUID, for example:

    /vmfs/volumes/25894daa-f7b2b044/delphix01-2356 # vmkfstools -J getuuid delphix01-2356_1.vmdk
    UUID is 60 00 C2 91 01 bc 8e 72-31 a4 cd b0 b3 f6 e5 74
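To compare the vmkfstools output above with the serial shown in the Delphix CLI, strip the spacing and the dash and lowercase the result. This small helper is illustrative, based on the two formats that appear in this document:

```python
# Hedged sketch: normalize a vmkfstools getuuid string (spaced, dashed,
# mixed case) into the flat lowercase form shown as `serial` in the
# Delphix CLI device listing.
def vmk_uuid_to_serial(vmk_uuid: str) -> str:
    return vmk_uuid.replace(" ", "").replace("-", "").lower()

print(vmk_uuid_to_serial("60 00 C2 91 01 bc 8e 72-31 a4 cd b0 b3 f6 e5 74"))
# 6000c29101bc8e7231a4cdb0b3f6e574
```

The same normalization (dropping dashes and lowercasing) applies when comparing the backing UUIDs returned by PowerCLI below.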

Getting the UUID of a VMDK from VMware, via VMware PowerCLI

PS C:\> Connect-VIServer -Server durban -Protocol https -Username root -Password root_password
Name                           Port  User
----                           ----  ----
durban                         443   root
PS C:\> Get-VM delphixVM | Get-HardDisk | select name,filename,@{name="UUID";expr={$_.extensiondata.backing.uuid}}
Name                                        Filename                                    UUID
----                                        --------                                    ----
Hard disk 1                                 [zfs_delphixVM] dlpx- 6000C294-a115-0327-e417-02560d86e944
Hard disk 2                                 [zfs_delphixVM] dlpx- 6000C299-38fe-5050-1eb2-1ee6db62b257
Hard disk 3                                 [zfs_delphixVM] dlpx- 6000C294-662d-c674-8957-03e0514b7006
Hard disk 4                                 [zfs_delphixVM] dlpx- 6000C29d-0719-1072-0f85-96da2efef4a3
PS C:\> Disconnect-VIServer
Are you sure you want to perform this action?
Performing operation "Disconnect VIServer" on Target "User: root, Server: durban, Port: 443".
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"): Y

Related Topics