Continuous Data Protection (CDP) is a technology that has been around for a while. Many companies have looked to leverage it within their products and, as IT professionals, the advantage it gives us in the protection of critical workloads is sought after as the holy grail of workload and data protection. That said, not many companies have gotten it right. In my opinion, this is because the technical fundamentals of delivering a CDP solution are challenged by the very nature of the workloads it protects.

In this post, rather than looking at a specific CDP technology like the one Veeam released as part of Veeam Backup & Replication v11, I wanted to step back and go over some fundamentals around CDP and why businesses may or may not require it as part of their data protection and recovery plans.

What is Continuous Data Protection?

Firstly, the concept of CDP can be compared to any sort of replication technology. We have a critical item or subset of data that needs to be highly available in case of some event that might put that data at risk. In its most basic form, I think about DFS-R in Windows and how files and folders were replicated in near real time from one server to another. The same can be said for any form of replication technology that achieves the same thing. Over the past ten to fifteen years, though, CDP has become synonymous with virtual machines and the ability to replicate whole VMs from a source site to a target site with ultra-low Recovery Point Objectives (RPOs). The other trait of CDP for virtual machines is that the Recovery Time Objective (RTO) is low as well.

That means these technologies aim to reduce data loss while being able to get to the data quicker than traditional backup mechanisms.

CDP maintains a journal of data changes which makes it possible to restore a machine to any previous point in time
With most CDP technologies, the replicated data is kept in a journal that allows granular point-in-time recovery, letting you roll back to any point within the journal. The intervals are dictated by the RPO, which is either set by the administrator or dictated by the underlying technology.
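To make the journal idea concrete, here is a minimal sketch of how a change journal and point-in-time restore might look. The `ChangeJournal` structure and its methods are purely illustrative assumptions, not any vendor's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class JournalEntry:
    timestamp: float      # when the change was captured at the source
    block_id: int         # which block/offset changed
    data: bytes           # the new contents of that block

@dataclass
class ChangeJournal:
    """Hypothetical CDP-style journal: an ordered log of block changes."""
    entries: list[JournalEntry] = field(default_factory=list)

    def record(self, entry: JournalEntry) -> None:
        # Entries arrive in time order as changes are intercepted at the source.
        self.entries.append(entry)

    def restore_to(self, point_in_time: float) -> dict[int, bytes]:
        """Rebuild the latest state of every block at or before the chosen time."""
        image: dict[int, bytes] = {}
        for e in self.entries:
            if e.timestamp > point_in_time:
                break  # entries are time ordered, nothing later applies
            image[e.block_id] = e.data  # later writes overwrite earlier ones
        return image
```

Every change recorded before the chosen timestamp is replayed, which is why the journal length and the interval between entries directly determine how granular recovery can be.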

CDP solves the problem of the “backup window”, where organizations risk losing data in the black hole created between two scheduled backups
Being able to get extremely granular with an RPO means that the window between the data changing and the last replication or backup interval is greatly reduced. Depending on the rate of change and other compute, storage and networking factors, CDP allows workloads to be protected in near real time.

CDP provides powerful protection against threats like ransomware, as well as malicious or accidental deletion of data
With the sophistication of ransomware and other attacks becoming more and more advanced, critical systems are at risk of attacks that take out data and can leave businesses dead in the water. Because CDP keeps a journal of changes, recovery can be rolled back to a point just before the infection or deletion occurred, rather than to the last scheduled backup.

CDP is used for compliance with stringent data protection regulations.
Compliance and regulation have pushed organizations to look to CDP to deliver low RPOs and ensure core line-of-business applications pass those requirements. Certain verticals like finance and medical were key targets for CDP replication in the early days, mainly because of the criticality of the data being created. This data needed to be protected with minimal data loss… and this is where CDP came into play. In today’s world, data is critical for all organizations, meaning that CDP also becomes critical in their backup and data protection strategies.

The Two Types of CDP

  • Realtime CDP – Replicates data with every change, allowing an organization to achieve a Recovery Point Objective (RPO) of zero. This is hard to achieve in reality given the raw physics of computing platforms; latency and other factors always come into play, meaning that realtime CDP with zero RPO is somewhat unachievable. There are some hardware-based systems that can achieve it with synchronous replication techniques that require write confirmation from the target. Storage-based replication can also be done asynchronously, which can also be implemented in software and leverages point-in-time snapshots. Generally, software alone will not be able to achieve realtime CDP.
  • Near CDP – This relates to frequent replication, meaning it comes close to achieving the effect of realtime continuous data protection. The RPO will be higher than zero and equal to the interval between configured replication points. This is achieved by being always on and replicating (generally dictated by a set RPO) only the changed data to the target. Because it is always on it does not need to be scheduled, doesn’t use snapshots, and writes to the source storage don’t have to wait for acknowledgement from the target storage (see the sketch after this list).
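As a rough illustration of the near CDP model described above, here is a minimal sketch of an always-on replication loop that ships only changed blocks to the target at each RPO interval. The `get_changed_blocks` and `send_to_target` functions are hypothetical placeholders, not any specific product’s API:

```python
import time

RPO_SECONDS = 15  # the configured RPO dictates how often changes are shipped

def get_changed_blocks(since: float) -> dict[int, bytes]:
    """Placeholder: return blocks written at the source since the given time."""
    raise NotImplementedError  # would hook into an I/O filter or change-tracking layer

def send_to_target(blocks: dict[int, bytes]) -> None:
    """Placeholder: asynchronously transmit changed blocks to the target journal."""
    raise NotImplementedError

def near_cdp_loop() -> None:
    last_cycle = time.time()
    while True:
        time.sleep(RPO_SECONDS)          # always on, no scheduled jobs or snapshots
        changed = get_changed_blocks(last_cycle)
        last_cycle = time.time()
        if changed:
            send_to_target(changed)      # source writes never wait on this acknowledgement
```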

Overheads

Like everything in the world, all good things come with a cost and CDP is no different. There are trade-offs with moving to a data protection strategy that leverages CDP technologies. The lower the RPO, the more network throughput is required to handle the streams of data arriving at smaller intervals. There are technologies that look to optimize data transfer between the source and the target, but CDP does mean more network pressure. Storage is the other major factor to consider, in that the journal holds more points in time, again dictated by the RPO. The longer the journal window, the more storage is required, and the lower the RPO, the more points in time are captured and stored in that journal. There is also the potential for more compute resources being required in terms of CPU and memory, as there are more calculations required to move the data from source to target… though this is less of a concern than networking and storage.
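To give a feel for how the RPO and journal window drive these overheads, here is a rough back-of-the-envelope sizing sketch. All the input numbers are assumptions for illustration only, not recommendations:

```python
# Back-of-the-envelope sizing for CDP overheads. All figures are assumptions
# for illustration; substitute your own change rate, RPO and journal window.

change_rate_mb_per_min = 200      # average data change rate at the source (assumed)
rpo_seconds = 15                  # configured RPO / replication interval (assumed)
journal_window_hours = 24         # how far back the journal must allow recovery (assumed)

# Each cycle ships only the data that changed since the last one
data_per_cycle_mb = change_rate_mb_per_min * (rpo_seconds / 60)

# A lower RPO means more restore points have to be tracked in the journal
restore_points = int((journal_window_hours * 3600) / rpo_seconds)

# Journal storage grows with the retention window and the overall change rate
journal_size_gb = (change_rate_mb_per_min * 60 * journal_window_hours) / 1024

print(f"~{data_per_cycle_mb:.1f} MB shipped every {rpo_seconds}s cycle")
print(f"{restore_points} restore points held in a {journal_window_hours}h journal")
print(f"~{journal_size_gb:.1f} GB of journal storage for that window")
```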

Benefits

Granular recovery and quick restore times are the main benefits of CDP. This ensures business continuity in case of an event taking place that puts the data or workload at risk. Not only can recovery take place at the set RPO intervals, the time to recover is almost instant due to the fact that whole virtual machines are ready to be brought up at the target end without having to restore from a backup repository. This allows businesses to meet internal and external service level agreements (SLAs) that result in an expected outcome after an event takes place. Leveraging CDP allows the granular point-in-time recovery of applications, data and servers. One of the other benefits of CDP over more traditional replication technologies, especially in a virtual world, is that VMs are not impacted by snapshot creation or consolidation stuns.

Considerations

So when is CDP required and what types of workloads are candidates for CDP? The idea that every workload can be protected with CDP is fine… but the reality differs in practicality. Because of the overheads mentioned above and the associated costs involved, CDP should be used to protect the most critical Tier-1 workloads as dictated by business requirements. Not all workloads need per-second RPOs and this is where the categorization of workloads comes into play. The idea being that you apply different data protection policies to sub-sets of virtual machines and only add those deemed to be most critical to the business to the CDP policies (a simple sketch of this tiering follows below). The majority of workloads should be able to be backed up with more traditional point-in-time snapshots and even non-CDP based replication leveraging snapshots.
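As a simple illustration of that categorization, here is a minimal sketch that maps workload tiers to data protection policies. The tier names, policy values and VM inventory are entirely hypothetical:

```python
# Hypothetical tiering of workloads to protection policies. Only Tier-1
# workloads get CDP; everything else falls back to snapshot-based protection.

POLICIES = {
    "tier-1": {"method": "CDP",                  "rpo": "15 seconds"},
    "tier-2": {"method": "snapshot-replication", "rpo": "1 hour"},
    "tier-3": {"method": "backup",               "rpo": "24 hours"},
}

# Example inventory: each VM tagged with the tier the business has assigned it
vms = [
    {"name": "sql-prod-01",  "tier": "tier-1"},   # critical line-of-business database
    {"name": "web-front-02", "tier": "tier-2"},   # important but tolerates an hour of loss
    {"name": "dev-test-07",  "tier": "tier-3"},   # can be rebuilt from nightly backups
]

for vm in vms:
    policy = POLICIES[vm["tier"]]
    print(f'{vm["name"]}: protect with {policy["method"]} (RPO {policy["rpo"]})')
```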