Data Control Tower takes aggregated metadata from all of your connected Delphix Engines and surfaces usage metrics on the Insights page. This section covers each of the graphs displayed and how each metric is calculated.

Managed Source Data

The graph below shows the aggregate volume of source data accessible via any of the connected engines. At a high level, this graph is an indicator of organizational adoption.


  • These calculations are a best estimate. Oracle, SQL Server, and SAP ASE data sources provide an accurate representation; however, source volume data for other Delphix-supported data sources (Db2, EBS, Postgres, and SAP HANA), along with EDSIs, is currently unavailable.
  • Please reference the Delphix Ingestion Pricing Document for more detail on how Source Data is calculated for a particular data source.

Virtualization Object Count

Virtualization object count is a rough measure of how source data is being leveraged in downstream processes.

Engine Count

This is a count of the Virtualization and Masking Engines configured on the Infrastructure page in Data Control Tower.

This does not indicate whether an engine has a live connection; connection state can be determined via the engine connection status indicator on the Infrastructure page.

Managed Source Data Types

This graph is a current point-in-time count of the source data types across your Delphix Engines. 



These calculations are a best estimate. Oracle, SQL Server, and SAP ASE provide an accurate representation; however, source volume data for other Delphix-supported data sources (Db2, EBS, Postgres, and SAP HANA), along with EDSIs, is currently unavailable.
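
As a rough illustration only (hypothetical inventory data, not the Data Control Tower data model), the point-in-time count by data type amounts to a simple tally of the sources reported by each connected engine:

    from collections import Counter

    # Hypothetical list of source databases reported by the connected engines.
    sources = ["Oracle", "Oracle", "SQL Server", "SAP ASE", "Oracle", "SQL Server"]

    # Point-in-time count of each managed source data type.
    print(Counter(sources))  # Counter({'Oracle': 3, 'SQL Server': 2, 'SAP ASE': 1})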

User Count

Data Control Tower counts users who have logged into Data Control Tower at least once, as well as those who have used Single Sign-On (SSO) to access any of the connected engines. Total Users is the aggregate count over the entire life of the Data Control Tower tenant, whereas Active Users represents users who have registered some activity with Data Control Tower or a connected engine in the past 30 days. From a business value perspective, this serves as an indicator of general use and adoption.

Activating Users and Groups on an engine will cause a spike in Total Users as they are imported into the system, if they have not previously used Data Control Tower.
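
As a rough sketch of the distinction (hypothetical user records and field names such as last_activity, not an actual Data Control Tower API), Total Users and Active Users could be derived from per-user activity timestamps like this:

    from datetime import datetime, timedelta, timezone

    # Hypothetical records: anyone who has logged into Data Control Tower
    # or used SSO to reach a connected engine at least once.
    users = [
        {"name": "amy", "last_activity": datetime(2023, 5, 20, tzinfo=timezone.utc)},
        {"name": "raj", "last_activity": datetime(2023, 3, 1, tzinfo=timezone.utc)},
        {"name": "lee", "last_activity": datetime(2023, 5, 28, tzinfo=timezone.utc)},
    ]

    now = datetime(2023, 6, 1, tzinfo=timezone.utc)
    window = timedelta(days=30)

    total_users = len(users)  # aggregate count over the life of the tenant
    active_users = sum(1 for u in users if now - u["last_activity"] <= window)

    print(total_users, active_users)  # 3 2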

Operation Count

The graphs below represent aggregate monthly activities across all connected Virtualization and Masking Engines. For Virtualization, provision, refresh, and rewind operations are tracked; for Masking, masking, profiling, and tokenization operations are tracked. From a business value perspective, this is critical to understanding the ratio of activities taking place and how they map to your own internal use guidelines. For example, a DevOps-focused organization will emphasize provisioning and delete/provision (aka “tearing down/spinning up”) of VDBs for every testing cycle, which will skew the ratio toward being provision heavy. Other organizations may have long-standing VDBs with strict refresh policies; these would have a refresh-heavy emphasis. With your policy in mind, you can view how activity across your Delphix deployment tracks against it.

Virtualization activities

Masking activities
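
As a rough sketch of the monthly roll-up described above (hypothetical operation records, not the Data Control Tower data model), activities could be grouped by month, engine type, and operation like this:

    from collections import Counter
    from datetime import date

    # Hypothetical operation log: (date, engine type, operation).
    operations = [
        (date(2023, 5, 2), "virtualization", "provision"),
        (date(2023, 5, 2), "virtualization", "refresh"),
        (date(2023, 5, 9), "virtualization", "rewind"),
        (date(2023, 5, 11), "masking", "masking"),
        (date(2023, 5, 15), "masking", "profiling"),
        (date(2023, 6, 1), "virtualization", "provision"),
    ]

    # Aggregate counts per (year, month, engine type, operation).
    monthly_counts = Counter(
        (d.year, d.month, engine, op) for d, engine, op in operations
    )

    for key, count in sorted(monthly_counts.items()):
        print(key, count)

A provision-heavy ratio in the virtualization counts would suggest a DevOps-style tear-down/spin-up pattern, whereas a refresh-heavy ratio would suggest long-standing VDBs.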

Median Time Since Last Refresh

The graph below starts to look at more advanced usage data. Median time since the last refresh is a strong indicator of:

  • Storage Management - VDBs provisioned from a snapshot effectively pin that snapshot, preventing it from being deleted. As the production database drifts from the VDB origin snapshot, more storage is required to support the timeline from that snapshot to the newest one. Large numbers of old VDBs can drive up storage costs, whereas keeping a low median time since the last refresh provides maximum storage savings.
  • Quality of VDB Data - for testing use cases, unless a VDB contains fresh data, you run the risk of using stale data in your testing cycles, which could lead to quality issues down the line.

With this graph, as a Delphix Administrator, you can keep track of both of these critical elements by ensuring that, in general, the median time since the last refresh remains low.
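
A minimal sketch of this metric, assuming hypothetical per-VDB refresh timestamps (not the exact Data Control Tower implementation):

    from datetime import date
    from statistics import median

    # Hypothetical timestamps of the most recent refresh for each VDB.
    last_refresh = {
        "vdb-finance": date(2023, 5, 25),
        "vdb-hr": date(2023, 4, 1),
        "vdb-qa": date(2023, 5, 30),
    }

    today = date(2023, 6, 1)
    days_since_refresh = [(today - d).days for d in last_refresh.values()]

    # A low median indicates fresher data and fewer long-pinned snapshots.
    print(median(days_since_refresh))  # 7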

Median VDB Lifespan

The set of graphs below illustrates two different usage patterns for VDB lifespan. The graph on the left represents a profile for agile development, where the number of VDBs rises and falls with the regular release cadence. On the other hand, a DevOps profile would look more like the graph on the right: an overall low but consistent count, as VDBs are spun up and destroyed with every code commit. The business value can be extrapolated from the “median time since last refresh” graph, as Median VDB Lifespan can indicate storage savings and freshness of data. In addition, this graph serves as a check on overall system health and an indicator of VDB discipline by showing deviations from whatever your usage profile should look like.

Median VDB lifespan is calculated by first computing each VDB's lifespan for any given month (M):

VDB lifespan = M - VDB creation date

From there, the median value is calculated across all of the lifespans for VDBs active during that month. An example:

  • VDB1 is created on Jan 1 and deleted on Feb 1; VDB2 is created on Jan 1 and deleted on March 1; VDB3 is created on Feb 1 and is never deleted.
  • At the end of January, Data Control Tower determines that VDB1 is one month old, and VDB2 is one month old, thus the median VDB lifespan is one month for January.
  • At the end of February, Data Control Tower determines that VDB2 is two months old and VDB3 is one month old, thus the median VDB lifespan for February is 1.5 months.
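
A minimal sketch of this calculation, using the hypothetical VDB records from the example above and counting the creation month as month one (illustrative only, not the exact Data Control Tower implementation):

    from datetime import date
    from statistics import median

    # VDB records from the example above: (created, deleted or None if still active).
    vdbs = {
        "VDB1": (date(2023, 1, 1), date(2023, 2, 1)),
        "VDB2": (date(2023, 1, 1), date(2023, 3, 1)),
        "VDB3": (date(2023, 2, 1), None),
    }

    def age_in_months(created, month_end):
        # Whole-month age at the end of month M, counting the creation month as one.
        return (month_end.year - created.year) * 12 + (month_end.month - created.month) + 1

    def median_lifespan(month_end):
        # Consider VDBs that still exist at the end of month M.
        lifespans = [
            age_in_months(created, month_end)
            for created, deleted in vdbs.values()
            if created <= month_end and (deleted is None or deleted > month_end)
        ]
        return median(lifespans)

    print(median_lifespan(date(2023, 1, 31)))  # 1.0 (VDB1 and VDB2, each one month old)
    print(median_lifespan(date(2023, 2, 28)))  # 1.5 (VDB2 two months, VDB3 one month)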

Caveats

While there are numerous business insights and takeaways that can be gleaned from the Insights dashboard, there are some considerations about how the data is collected that should be highlighted:

  • Connection Timer - if Data Control Tower does not hear from an engine in 10 days, it will stop counting data from that engine.
  • Expected Variances - As you add new engine connections to Data Control Tower, you should see variances in the graphs. This is expected due to the influx of new engine metadata.
  • Compute Cadence - Insights data is not computed on the fly; it is updated once daily.