This topic describes configuration options to maximize the performance of a target host in a Delphix Engine deployment.
On Solaris, by default the maximum I/O size used for NFS read or write requests is 32K. When Oracle does I/O larger than 32K, the I/O is broken down into smaller requests that are serialized. This may result in poor I/O performance. To increase the maximum I/O size:
As superuser, add the following to the /etc/system file:
* For Delphix: change the maximum NFS block size to 1M
set nfs:nfs3_bsize=0x100000
To apply the change to the running kernel without a reboot, run this command:
# echo "nfs3_bsize/W 100000" | mdb -kw
It is necessary to install a new Service Management Facility (SMF) service that tunes TCP parameters after every boot. Sample files for creating the service (the dlpx-tcptune method script and the dlpx-tune.xml manifest referenced in the commands below) are available for download.
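For orientation only, the start method of such a service amounts to raising the four TCP parameters shown in the log output further below. A minimal sketch of that idea, assuming the Solaris 10 ndd interface (the downloadable dlpx-tcptune script and dlpx-tune.xml manifest are the authoritative versions), might look like:
#!/sbin/sh
# Illustrative sketch only -- not the Delphix-provided dlpx-tcptune file.
# Raises the TCP buffer and window limits to 4 MB at boot.
case "$1" in
start)
    echo "Tuning TCP Network Parameters"
    for prop in tcp_max_buf tcp_cwnd_max tcp_xmit_hiwat tcp_recv_hiwat; do
        old=`/usr/sbin/ndd -get /dev/tcp $prop`
        /usr/sbin/ndd -set /dev/tcp $prop 4194304
        echo "$prop adjusted from $old to 4194304"
    done
    ;;
esac
exit 0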
- As superuser, download the files and install them in the paths listed in the Installation location column of the table.
Run the commands:
# chmod 755 /lib/svc/method/dlpx-tcptune
# /usr/sbin/svccfg validate /var/svc/manifest/site/dlpx-tune.xml
# /usr/sbin/svccfg import /var/svc/manifest/site/dlpx-tune.xml
# /usr/sbin/svcadm enable site/tcptune
Verify that the SMF service ran after being enabled by running the command:
# cat `svcprop -p restarter/logfile tcptune`
You should see output similar to this:
[ May 14 20:02:02 Executing start method ("/lib/svc/method/dlpx-tcptune start"). ]
Tuning TCP Network Parameters
tcp_max_buf adjusted from 1048576 to 4194304
tcp_cwnd_max adjusted from 1048576 to 4194304
tcp_xmit_hiwat adjusted from 49152 to 4194304
tcp_recv_hiwat adjusted from 128000 to 4194304
[ May 14 20:02:02 Method "start" exited with status 0. ]
On Solaris 11 and later, the same TCP buffer limits can be set persistently with ipadm. As superuser, run the following commands:
# ipadm set-prop -p max_buf=4194304 tcp
# ipadm set-prop -p _cwnd_max=4194304 tcp
# ipadm set-prop -p send_buf=4194304 tcp
# ipadm set-prop -p recv_buf=4194304 tcp
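As a quick check, the public properties can be read back with ipadm (the private _cwnd_max property can be queried the same way):
# ipadm show-prop -p max_buf,send_buf,recv_buf tcp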
In Linux, the number of simultaneous NFS requests is limited by the Remote Procedure Call (RPC) subsystem. The maximum number of simultaneous requests defaults to 16. Maximize the number of simultaneous requests by changing the kernel tunable sunrpc.tcp_slot_table_entries to 128.
As superuser, run the following command to change the instantaneous value of simultaneous RPC commands:
# sysctl -w sunrpc.tcp_slot_table_entries=128
Edit the file /etc/modprobe.d/modprobe.conf.dist and change the line:
install sunrpc /sbin/modprobe --first-time --ignore-install sunrpc && { /bin/mount -t rpc_pipefs sunrpc /var/lib/nfs/rpc_pipefs > /dev/null 2>&1 || :; }
to
install sunrpc /sbin/modprobe --first-time --ignore-install sunrpc && { /bin/mount -t rpc_pipefs sunrpc /var/lib/nfs/rpc_pipefs > /dev/null 2>&1 ; /sbin/sysctl -w sunrpc.tcp_slot_table_entries=128; }
As superuser, run the following command to change the instantaneous value of simultaneous RPC commands:
# sysctl -w sunrpc.tcp_slot_table_entries=128
If it doesn't already exist, create the file /etc/modprobe.d/rpcinfo with the following contents:
options sunrpc tcp_slot_table_entries=128
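As a quick check after either procedure, you can read the value back once the sunrpc module is loaded; it should report 128:
# sysctl sunrpc.tcp_slot_table_entries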
Beginning with RHEL 6.3, the number of RPC slots is dynamically managed by the system and does not need to be tuned. Although the sunrpc.tcp_slot_table_entries tunable still exists, it has a default value of 2 instead of 16 as in prior releases. The maximum number of simultaneous requests is determined by the new tunable, sunrpc.tcp_max_slot_table_entries, which has a default value of 65535.
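On these releases you can inspect the current ceiling directly, for example:
# sysctl sunrpc.tcp_max_slot_table_entries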
As superuser, add or replace the following entries in /etc/sysctl.conf. Note: the *rmem and *wmem parameter values are minimum recommendations, so no change is needed if they are already set to higher values.
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = <Customer Default>
net.core.wmem_default = <Customer Default>
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_mem = <Customer Default>
net.ipv4.tcp_rmem = 4096 4194304 16777216
net.ipv4.tcp_wmem = 4096 4194304 16777216
Run the following command to apply the settings:
# sysctl -p
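You can spot-check the applied values afterwards, for example:
# sysctl net.core.rmem_max net.ipv4.tcp_rmem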
On AIX, by default the maximum I/O size used for NFS read or write requests is 64K. When Oracle does I/O larger than 64K, the I/O is broken down into smaller requests that are serialized. This may result in poor I/O performance. IBM can provide an Authorized Program Analysis Report (APAR) that allows the I/O size to be configured to a larger value.
Determine the appropriate APAR for the version of AIX you are using:
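For example, the exact AIX level, which you will need when matching against the APAR list, can be displayed with:
# oslevel -s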
Check if the required APAR is already installed by running this command:
# /usr/sbin/instfix -ik IV24594
If the APAR is installed, you will see a message similar to this:
All filesets for IV24594 were found.
If the APAR is not yet installed, you will see a message similar to this:
There was no data for IV24594 in the fix database.
Download and install the APAR as necessary. To find the APARs, use the main search function at http://www.ibm.com/us/en/, specifying the name of the APAR identified above.
Configure the maximum read and write sizes using the commands below:
# nfso -p -o nfs_max_read_size=524288
# nfso -p -o nfs_max_write_size=524288
Confirm the correct settings using the command:
# nfso -L nfs_max_read_size -L nfs_max_write_size
You should see output similar to this:
NAME                      CUR    DEF    BOOT   MIN    MAX    UNIT    TYPE
     DEPENDENCIES
--------------------------------------------------------------------------------
nfs_max_read_size         512K   64K    512K   512    512K   Bytes   D
--------------------------------------------------------------------------------
nfs_max_write_size        512K   64K    512K   512    512K   Bytes   D
--------------------------------------------------------------------------------
On HP-UX, by default the maximum I/O size used for NFS read or write requests is 32K. When Oracle does I/O larger than 32K, the I/O is broken down into smaller requests that are serialized. This may result in poor I/O performance.
As superuser, run the following command:
# /usr/sbin/kctune nfs3_bsize=1048576
Confirm the changes have occurred and are persistent by running the following command and checking the output:
# grep nfs3 /stand/system
tunable nfs3_bsize 1048576
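The running value can also be queried directly with kctune, for example:
# kctune nfs3_bsize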
As superuser, edit the /etc/rc.config.d/nddconf file, adding or replacing the following entries:
TRANSPORT_NAME[0]=tcp
NDD_NAME[0]=tcp_recv_hiwater_def
NDD_VALUE[0]=4194304
#
TRANSPORT_NAME[1]=tcp
NDD_NAME[1]=tcp_xmit_hiwater_def
NDD_VALUE[1]=4194304
Run the following command to apply the settings from nddconf:
# ndd -c
Confirm the settings:
# ndd -get /dev/tcp tcp_recv_hiwater_def
4194304
# ndd -get /dev/tcp tcp_xmit_hiwater_def
4194304
These are our recommendations for Windows iSCSI initiator configuration. Note that the parameters below affect all applications running on the Windows target host, so make sure that these recommendations do not conflict with best practices for other applications running on the host.
For targets running Windows Server, the iSCSI initiator driver parameters can be found under HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<Instance Number>\Parameters. See How to Modify the Windows Registry on the Microsoft Support site for details about configuring registry settings.
Registry Value | Type | Default | Recommended | Comments |
---|---|---|---|---|
MaxTransferLength | REG_DWORD | 262144 | 131072 | Controls the maximum data size of an I/O request. A value of 128K is optimal for the Delphix Engine, as it reduces segmentation of the packets as they go through the stack. |
MaxBurstLength | REG_DWORD | 262144 | 131072 | The negotiated maximum burst length. 128K is the optimal size for the Delphix Engine. |
MaxPendingRequests | REG_DWORD | 255 | 512 | Controls the maximum number of outstanding requests allowed by the initiator. At most this many requests will be sent to the target before receiving a response for any of them. |
MaxRecvDataSegmentLength | REG_DWORD | 65536 | 131072 | The negotiated MaxRecvDataSegmentLength. |
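For example, one of these values can be set from an elevated command prompt with reg.exe. Note that the instance number (0000 below) is only an illustration and varies by host, so locate the correct instance under the driver class key first:
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0000\Parameters" /v MaxTransferLength /t REG_DWORD /d 131072 /f
A reboot, or a restart of the iSCSI initiator service, is typically required for the new values to take effect.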