In the prior post in this series, I raised various issues regarding deduplication and recommended a set of questions that all backup vendors – including both Unitrends and Veeam – should be asked.  In that post dedicated to deduplication, I advised buyers to ask a question about replication – specifically:

Ask about deduplication and replication.  Vendors with per-job deduplication often need to re-deduplicate prior to replication to the cloud or another system.  The reason is that poor local deduplication practices lead to missed RPOs (Recovery Point Objectives), both because the amount of duplicate data sent overwhelms WAN bandwidth and because local re-deduplication takes time, processor, memory, and I/O resources.  Ask hard questions about precisely how deduplication and replication interact, and ask specifically whether data is being “more deduplicated” prior to replication; if so, it will tend to consume far more backup system resources and require an extra, complex step in the backup software before replication can occur.

When Veeam talks about Unitrends, its claim is that Unitrends doesn’t have “built-in WAN acceleration.”  As I noted in an overview post, I think this may simply be Veeam not understanding how strong forms of deduplication work with replication and WAN acceleration (sometimes also called WAN optimization) to perform replication more efficiently over less WAN bandwidth while consuming less CPU, memory, and I/O on the servers performing backup and replication.

The figure above depicts a simple example of a per-job deduplication and replication architecture versus a global deduplication and replication architecture.  In a per-job deduplication architecture, a separate step is needed to ensure that you do not send duplicate blocks.  This appears to be at least some of what Veeam calls “WAN Optimization” – an interesting marketing spin on the term as it is typically used – but which is actually just the re-deduplication that must occur prior to replication.  Re-deduplication can be anything from creating a new storage area to creating a list of all unique blocks – but any such technique tends to consume time and resources before replication can begin.  In a global deduplication architecture, the work of deduplication has already been done once – for local storage – and source deduplication can then be used to minimize the amount of data that must be sent.  Basically, with global deduplication you take one pass at deduplication and replication rather than the two passes required by a per-job deduplication and replication scheme.
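To make the difference concrete, here is a minimal sketch in Python.  It is a hypothetical illustration – block sizes, hash indexes, and function names are my own, not either vendor’s implementation – but it shows why per-job deduplication needs a second merge pass before replication while global deduplication does not:

```python
import hashlib


def blocks(data, size=4096):
    # Split a byte stream into fixed-size blocks (hypothetical 4 KiB size).
    return [data[i:i + size] for i in range(0, len(data), size)]


def per_job_backup(jobs):
    """Per-job dedup: each job keeps its own index, so a block shared
    by two jobs is stored once *per job* rather than once overall."""
    stores = []
    for job in jobs:
        index = {}
        for b in blocks(job):
            index.setdefault(hashlib.sha256(b).hexdigest(), b)
        stores.append(index)
    return stores


def re_dedupe_for_replication(stores):
    """The extra pass a per-job scheme needs before replication:
    merge every job's index into one list of unique blocks."""
    merged = {}
    for index in stores:
        merged.update(index)
    return merged


def global_backup(jobs):
    """Global dedup: one index across all jobs, so replication can
    consult it directly -- no second pass required."""
    index = {}
    for job in jobs:
        for b in blocks(job):
            index.setdefault(hashlib.sha256(b).hexdigest(), b)
    return index
```

With two jobs that share a common block, the per-job stores hold that block twice and must be merged before replication, while the global index already holds each unique block exactly once.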

So what about WAN acceleration apart from source-level deduplication?  In addition to deduplication (and compression), other functions associated with WAN acceleration in Unitrends include latency optimization, checkpointing and source querying, connection limits, simple rate limits, and more.  So with better deduplication and replication we’ve got the technical win by a mile, right?  I think it’s actually more important to consider the value that world-class modern replication offers users via enhanced continuity – whether in the Unitrends Cloud, in one of our Managed Service Providers’ clouds, or at a second site with our purpose-built backup appliances or purpose-built backup software.  Enhanced continuity with world-class replication means less WAN bandwidth for more data transmitted – which means more WAN bandwidth left for your business, while also benefitting from VMware, Hyper-V, and physical/virtual Windows DRaaS (Disaster Recovery as a Service) spin-up with recovery assurance enabling application-level automation and orchestration.  In short, it means you don’t have to worry about the plumbing – Unitrends has you covered.
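The source-querying idea mentioned above can also be sketched simply: before sending anything, the source asks the replication target which block hashes it already holds and transmits only the missing blocks.  (Again a hypothetical sketch under my own naming – in a real system the hash comparison is a query over the WAN, not an in-memory set operation.)

```python
import hashlib


def replicate(source_blocks, target_hashes):
    """Source-side deduplicated replication (sketch).

    target_hashes is the set of block hashes the target already holds;
    only blocks whose hashes are missing get sent over the WAN."""
    unique = {hashlib.sha256(b).hexdigest(): b for b in source_blocks}
    missing = set(unique) - target_hashes  # one exchange of hashes
    sent_bytes = 0
    for h in missing:
        # ...transmit unique[h] over the WAN...
        target_hashes.add(h)
        sent_bytes += len(unique[h])
    return sent_bytes
```

Because the target’s existing blocks are never re-sent, a second replication pass over unchanged data transmits nothing at all – which is the “more data for less WAN bandwidth” effect described above.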

As always, I’d love to hear any thoughts you have with respect to deduplication, replication, continuity, or even – as my mom used to say – the price of tea in China.
