Adaptive Data Deduplication

MORE DATA, MORE PROBLEMS


IDC predicts that over the next 10 years the amount of data requiring storage will grow at 100X the rate of available IT staff (50X versus 0.5X growth). All-in-one data protection appliances help manage this avalanche of data, and their core component is the technology by which backup strategy and deduplication are realized.

SOMETIMES, LESS (STORAGE) IS MORE (RETENTION)

Unitrends uses a technology we call Adaptive Deduplication™ that results in more efficient backup, capacity utilization, restore, and archiving, as well as faster replication. Adaptive Deduplication combines global byte-level deduplication and inline deduplication with source-side deduplication (VMware), reducing storage requirements by up to 95%. Global byte-level deduplication leverages all job content (synthetic and incremental forever) and incorporates a content- and context-sensitive, variable block-size deduplication algorithm to speed up backups, optimize storage capacity, and improve WAN data transfers. Inline deduplication deduplicates data on ingest, before writing to disk, eliminating the need to land data before deduplication and significantly improving capacity utilization.
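
To make the variable block-size idea concrete, here is a minimal sketch, in Python, of content-defined chunking combined with inline deduplication on ingest. The rolling hash, window size, boundary mask, and chunk-size limits below are illustrative assumptions; Unitrends does not publish its actual algorithm or parameters.

import hashlib

# Illustrative parameters only; the production values are assumptions.
WINDOW = 48            # rolling-hash window, in bytes
MASK = 0x1FFF          # boundary when the low 13 bits are zero (~8 KB average chunk)
MIN_CHUNK = 2 * 1024   # avoid pathologically small chunks
MAX_CHUNK = 64 * 1024  # force a boundary in data that never matches
BASE, MOD = 257, 1 << 32
POW = pow(BASE, WINDOW, MOD)   # BASE**WINDOW, used to drop the oldest byte

def chunk_boundaries(data: bytes):
    """Yield (start, end) offsets of content-defined, variable-size chunks."""
    start = rolling = 0
    for i, byte in enumerate(data):
        rolling = (rolling * BASE + byte) % MOD
        if i >= WINDOW:
            rolling = (rolling - data[i - WINDOW] * POW) % MOD
        size = i - start + 1
        if (size >= MIN_CHUNK and rolling & MASK == 0) or size >= MAX_CHUNK:
            yield start, i + 1
            start = i + 1
    if start < len(data):
        yield start, len(data)

def ingest(data: bytes, store: dict) -> list:
    """Inline deduplication on ingest: only never-seen chunks are stored."""
    recipe = []
    for s, e in chunk_boundaries(data):
        digest = hashlib.sha256(data[s:e]).digest()
        store.setdefault(digest, data[s:e])   # a new chunk is written to disk once
        recipe.append(digest)                 # the recipe rebuilds the backup later
    return recipe

Because boundaries are derived from the content itself, inserting a few bytes early in a stream shifts only the nearby chunk boundaries; later chunks still hash to known values and are never stored twice.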

LESS STORAGE MEANS MORE RETENTION

We use less storage because we understand the context and content of what we're backing up, adapting the deduplication algorithm to the type of data. Adaptive Deduplication is smart enough to use global byte-level post-processing and inline deduplication at the same time across different data types. Inline deduplication is the default and applicable for most data types, but certain applications, such as NDMP, Oracle, and SharePoint, do not deduplicate well; for these, ingest performance is the priority, so global byte-level post-processing is the preferred method. Combining inline and global byte-level post-processing deduplication balances capacity optimization against performance for all data types, minimizing backup times and maximizing retention.
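
As a rough illustration of how such a policy could be expressed, the sketch below maps a backup source type to a deduplication mode, defaulting to inline. The type names and the policy table are hypothetical, not Unitrends' actual configuration.

from enum import Enum

class DedupMode(Enum):
    INLINE = "inline"                # dedupe on ingest, before writing to disk
    POST_PROCESS = "post-process"    # land data first, dedupe globally afterward

# Hypothetical policy table: sources that dedupe poorly get fast ingest.
POLICY = {
    "ndmp": DedupMode.POST_PROCESS,
    "oracle": DedupMode.POST_PROCESS,
    "sharepoint": DedupMode.POST_PROCESS,
}

def choose_mode(source_type: str) -> DedupMode:
    """Inline is the default; known poor-dedupe sources fall back to
    post-processing so ingest is never throttled by the dedup engine."""
    return POLICY.get(source_type.lower(), DedupMode.INLINE)

assert choose_mode("VMware") is DedupMode.INLINE
assert choose_mode("NDMP") is DedupMode.POST_PROCESS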

FASTER RESTORES & ARCHIVES

Deduplicated backups must be rehydrated, that is, reassembled from their deduplicated chunks, before they can be restored or archived. Our archives are integrated with Adaptive Deduplication, which includes Deduplication Acceleration, an in-memory hashing algorithm that speeds up rehydration and improves restore and archive performance by up to 100%. Reading data directly from memory also improves replication throughput, providing up to 15 MB/s (~120 Mb/s), faster than the performance of most WANs. This allows Recovery Point Objectives (RPOs) to be met even as data requirements continue to grow. Furthermore, combining inline deduplication with in-place synthesis creates synthetic backups 5-10x faster from memory, freeing up critical backup resources while ensuring consistently fast recovery, backup, replication, and archiving.
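
As an illustration of why in-memory hashing helps rehydration, the sketch below rebuilds a backup from its chunk recipe while keeping recently used chunks in an in-memory, hash-keyed LRU index, so repeated chunks avoid disk reads. The cache interface and capacity are assumptions for illustration, not Unitrends' Deduplication Acceleration implementation.

from collections import OrderedDict

class RehydrationCache:
    """LRU cache of recently read chunks, keyed by their hash.

    Rehydration walks a backup's recipe (the ordered list of chunk
    hashes); deduplicated data repeats chunks, so a warm cache turns
    many disk reads into memory reads.
    """
    def __init__(self, chunk_store: dict, capacity: int = 4096):
        self.store = chunk_store          # hash -> chunk bytes, on disk
        self.cache = OrderedDict()
        self.capacity = capacity

    def get(self, digest: bytes) -> bytes:
        if digest in self.cache:
            self.cache.move_to_end(digest)    # memory hit, no disk I/O
            return self.cache[digest]
        chunk = self.store[digest]            # cache miss: read from disk
        self.cache[digest] = chunk
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return chunk

def rehydrate(recipe: list, cache: RehydrationCache) -> bytes:
    """Rebuild the original stream from its ordered chunk hashes."""
    return b"".join(cache.get(d) for d in recipe)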

UNITRENDS VIRTUAL BACKUP

Note that Adaptive Deduplication is a feature of our Recovery-Series physical backup appliances and our Unitrends Enterprise Backup (UEB) virtual backup appliances. Unitrends Virtual Backup uses a different deduplication system; for details, see the Unitrends Virtual Backup product section.

*Features of Adaptive Deduplication, including inline deduplication, Deduplication Acceleration, VMware source-side deduplication, and in-place synthesis, are part of Release 8.2 and available only with the purchase of new appliances (Recovery-Series or UEB). Upgrades to these features will be available in a future release.





DEDUPLICATION AND CONTINUITY

Adaptive Deduplication: Lower Storage Costs Combined with Faster Backup and Recoveries. Unitrends CTO Series, presented by Dr. Mark Campbell.

DON'T GET DUPED BY DEDUPE

The purpose of deduplication is to provide more storage, particularly backup storage, for less money, right? Then wouldn't it be ridiculous if deduplication vendors were demanding that their customers pay more per terabyte of storage? Or if they were simply pushing the task of integrating, monitoring, and managing deduplication back onto their users? This white paper helps you understand the various approaches to deduplication and the strengths and weaknesses of each, and introduces a different approach: Adaptive Deduplication.
