This topic describes how to configure data replication between Delphix Engines. Replication is configured with Replication Profiles that contain options such as the replication schedule, the hostname of the target Engine, and the selected objects that will be replicated.
- Version Requirements: The replication target must be running the same version as, or a newer version than, the replication source.
- Engine Communication: The target Delphix Engine must be reachable from the source Engine.
- Storage Allocation: The target Delphix Engine must have sufficient free storage to receive the replicated data.
- Privilege Requirements: The user in the replication profile must have administrative privileges on the source and the target engines.
Configuring the Network
Delphix Replication uses a private network protocol to communicate between two Delphix Engines. You can control which network interface replication uses by configuring routing so that traffic to the target is directed over that interface.
The replication network protocol uses TCP port 8415. If there is a firewall between the source and target that is blocking this port, then there are two possible solutions:
- Open port 8415 on the firewall to allow connections from the source to the target.
- Connect through a SOCKS proxy, if one exists. Configure the SOCKS proxy address and port by connecting to the command-line interface (CLI) as a system administrator user and navigating to "service proxy" to update the SOCKS configuration. SOCKS port 1080 is used by default, but this can be overridden.
Example of a SOCKS Proxy
```
dlpx-engine> service proxy
dlpx-engine service proxy> update
dlpx-engine service proxy update *> set socks.enabled=true
dlpx-engine service proxy update *> set socks.host=10.2.3.4
dlpx-engine service proxy update *> set socks.username=someuser
dlpx-engine service proxy update *> set socks.password=somepassword
dlpx-engine service proxy update *> commit
dlpx-engine service proxy> get
    type: ProxyService
    https:
        type: ProxyConfiguration
        enabled: false
        host: (unset)
        password: (unset)
        port: 8080
        username: (unset)
    socks:
        type: ProxyConfiguration
        enabled: true
        host: 10.2.3.4
        password: ********
        port: 1080
        username: someuser
```
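Before enabling replication, it can help to confirm that TCP port 8415 on the target is reachable from the source network. The snippet below is a minimal sketch using Python's standard socket module, run from any host on the source side; the target hostname is a placeholder for your environment.

```python
import socket

def is_port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder hostname; substitute your target Delphix Engine.
if is_port_reachable("target-engine.example.com", 8415):
    print("Port 8415 is reachable; no firewall changes needed.")
else:
    print("Port 8415 is blocked; open it on the firewall or configure a SOCKS proxy.")
```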
Configuring the Replication Source Delphix Engine
- On the source Delphix Engine, click System, then Replication.
- In the left-hand navigation section, click Create Profile.
- Enter the following required fields:
- Name of the replication profile
- The hostname or IP address of the target Delphix Engine
Replication Profile Options
There are several configuration options for your replication profiles. These give you more granular control over options such as when replication runs, how much bandwidth it may use, and which objects are replicated. Details for each option are described below.
The following options are static: they cannot be changed for a replication job that is already running. You can modify them while a replication spec is being executed, but the new values apply only to the next execution.
- Automatic Replication: A policy that runs replication automatically on a schedule you define. By default, automatic replication is disabled, meaning that you must trigger replication updates manually. To enable it, click the Enabled checkbox, then enter the frequency and time for replication updates to the target Delphix Engine. Automatic replication uses Quartz, a job scheduling tool (http://www.quartz-scheduler.org/), for scheduling, which can be configured via the Advanced option.
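For reference, a Quartz cron expression has seven fields: seconds, minutes, hours, day-of-month, month, day-of-week, and an optional year. A couple of illustrative schedules (plain Quartz cron strings, not Delphix-specific syntax; note that one of day-of-month and day-of-week must be `?`):

```
0 0 2 * * ?        every day at 02:00
0 30 23 ? * FRI    every Friday at 23:30
```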
- Traffic Options: Various traffic and bandwidth options are available. For example, you may want to enable encrypted traffic or limited bandwidth during replication updates.
Encrypting Traffic: By default, replication streams are unencrypted, which provides maximum performance on a secure network. However, this setting allows you to encrypt traffic during replication.
Encrypting the replication stream will consume additional CPU resources and may limit the maximum bandwidth that can be achieved.
- Network Connections: Allows setting the number of underlying network connections that can be used by replication.
Limiting Bandwidth: By default, replication will run at the maximum speed permitted by the underlying infrastructure. In some cases, particularly when a shared network is being used, replication can increase resource contention and may impact the performance of other operations. This option allows administrators to specify the maximum bandwidth that replication can consume.
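When choosing a bandwidth limit, a quick back-of-the-envelope estimate of transfer time can help. This is a generic calculation, not a Delphix feature; the dataset size and cap below are made-up numbers, and real throughput will also depend on protocol overhead and compression.

```python
def transfer_hours(size_gib: float, cap_mbps: float) -> float:
    """Estimate hours to send size_gib gibibytes at a cap of cap_mbps megabits/s.

    Ignores protocol overhead and compression, so treat the result as a
    lower bound on wall-clock time.
    """
    bits = size_gib * 2**30 * 8          # payload size in bits
    seconds = bits / (cap_mbps * 1e6)    # sustained throughput at the cap
    return seconds / 3600

# Example: a 500 GiB initial sync capped at 100 Mb/s.
print(f"{transfer_hours(500, 100):.1f} hours")  # roughly 11.9 hours
```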
Objects Being Replicated: In the right-hand column, under Objects Being Replicated, select the objects you want to replicate from the source engine to the target engine. Some selected objects may have dependencies: other objects that will be pulled into replication because they share data.
- This is not guaranteed to be the full set of dependent objects. The full set of objects and their dependents will be calculated at the time of replication.
- You cannot save a Replication Profile without selecting at least one object, and the last object in a Replication Profile cannot be removed. If you need to remove the last object, you must delete that Replication Profile.
- The object is removed from the replication target namespace only after a subsequent replication job is executed for the associated replication specification.
- Remove the object from the specification on the source
- Execute the replication specification
- The object is removed as a part of the replication job.
The object is not removed when you modify the replication specification, only when the replication job runs. To add the object back, the entire object needs to be sent again.
- When replicating a group, all dSources and VDBs currently in the group, or added to the group at a later time, will be included.
- If you select a Delphix Self-Service data template, all data containers created from that template will be included. Likewise, if you select a data container, its parent data template will be included.
- Regardless of whether you select a VDB individually or as part of a group, the parent dSource or VDB (and any parents in its lineage) are automatically included.
- This is required because VDBs share data with their parent object.
- In addition, any environments containing database instances used as part of a replicated dSource or VDB are included as well.
- When replicating individual VDBs, only those database instances and repositories required to represent the replicated VDBs are included. Other database instances that may be part of the environment, such as those for other VDBs, are not included.
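The dependency rules above amount to a transitive closure over parent links: selecting a VDB pulls in every ancestor in its provisioning lineage. The sketch below models that idea with a made-up parent map; it is an illustration only, not Delphix's actual implementation.

```python
def replication_closure(selected, parent_of):
    """Return the selected objects plus every ancestor in their lineage.

    parent_of maps each object to its parent (dSource or VDB), or None
    for root objects such as dSources.
    """
    included = set()
    for obj in selected:
        # Walk up the lineage until we hit a root or an already-seen object.
        while obj is not None and obj not in included:
            included.add(obj)
            obj = parent_of.get(obj)
    return included

# Hypothetical lineage: dsource1 -> vdb1 -> vdb2
parent_of = {"dsource1": None, "vdb1": "dsource1", "vdb2": "vdb1"}
print(sorted(replication_closure({"vdb2"}, parent_of)))
# ['dsource1', 'vdb1', 'vdb2']
```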
Configuring the Target Delphix Engine
Additional configuration on the target engine is not needed. Replicated objects will appear in an alternate received replica (or namespace) that mirrors the original object layout.
To view replicated objects from the Delphix Engine:
- Click System, then select Replication.
- Look under Received Replicas. All replicated objects are read-only until the replica is failed over. For more information about managing replicas and how to activate a replica, see the topics Replicas and Failover and Controlled Failover.
You can create and manage objects on the target server without affecting subsequent updates, though this can cause conflicts on failover that require additional time to resolve. For disaster recovery use cases, it is recommended to keep the target passive and not create any local objects. This will avoid conflicts and guarantee a smooth failover operation.
Multiple sources can replicate to the same target, allowing for the flexible geographical distribution of data. This is not a recommended practice for disaster recovery, because it increases the probability of conflicts on failover and may oversubscribe resources on the target if multiple replicas are failed over and there is insufficient infrastructure to support the combined workloads.