A lot of folks believe that a disaster means a fire, flood, terrorist attack, disgruntled employee, or something along those lines.  And those things certainly are disasters.  But there are other types of disasters as well.  Common examples are the accidental deletion of data and software errors in vendors' products.

Over at Spiceworks, there was a post recently from someone who was pretty angry that a software error in their vendor's product resulted in the loss of their data.  The thread is here; the applicable reply is from David Shepherd, who writes:

I threw out XXX after a month of playing around with it, because in literally one second, because of one network glitch, “WE LOST ALL DATA FROM EVERY SERVER”.

The global deduplication they are so hot about, is their worst enemy when combined with the fact that they put ALL DATA IN THE SAME DATABASE, so if one thing is corrupted, it can all be just plain GONE.

The moment that happened, I said to the rep online, “Surely there is some way to recover this.”

“Well, we can try to send it to the recovery guys, but there’s no promises there.”

(Note: I’ve XXX’ed the name of the vendor – that’s not what’s important here.)

This obviously is a disaster.  And it’s no less of a disaster because it wasn’t a fire or flood – it’s simply another form of disaster.

So what do you do about it?  I think David hits the heart of it when he talks about all the data being in one database.  I’d go further – I don’t believe it matters whether you put it in 1 database or 20 databases if those databases are replicated with each other – which of course they would need to be to have a consistent data set.  The key is to use disaster recovery techniques to protect BOTH your data and your metadata (the metadata is the data about your data, which is typically kept in the database).
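To make that concrete, here’s a minimal sketch of the idea.  The paths are hypothetical and I’m assuming a SQLite catalog purely for illustration – this isn’t any particular vendor’s layout.  The point is that a dated, point-in-time copy of the metadata lives outside the replication path, so corruption in the live database has no way to follow it there:

import hashlib
import sqlite3
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical locations; the catalog path and archive target are
# illustrative, not any particular product's layout.
CATALOG_DB = Path("/var/backups/catalog.db")   # the metadata ("data about data")
ARCHIVE_DIR = Path("/mnt/archive/metadata")    # a target that is NOT replicated

def snapshot_catalog() -> Path:
    """Take a dated, point-in-time copy of the metadata database.

    Replication faithfully copies corruption to every replica; an
    independent snapshot taken before the corruption does not.
    """
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = ARCHIVE_DIR / f"catalog-{stamp}.db"

    # sqlite3's online backup API yields a consistent copy even if
    # the catalog is being written to at the time.
    src, dst = sqlite3.connect(CATALOG_DB), sqlite3.connect(dest)
    try:
        src.backup(dst)
    finally:
        src.close()
        dst.close()

    # Store a checksum alongside, so a bad snapshot is caught when you
    # verify the archive, not discovered in the middle of a disaster.
    digest = hashlib.sha256(dest.read_bytes()).hexdigest()
    dest.with_name(dest.name + ".sha256").write_text(f"{digest}  {dest.name}\n")
    return dest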

This is where archiving comes into play.  In most implementations, archiving consists of nothing more than a secondary (or tertiary, or whatever) copy of the backups.  However, an archive can also contain the metadata itself, so that an archive (or set of archives) can be used not only to restore your old backups but also to restore your old system state.  Unitrends implements this type of archiving.
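As an illustration of what that buys you – and to be clear, this is not Unitrends’ actual archive format, just the general idea – here’s a sketch of an archive that bundles the backup data with a metadata snapshot and a small manifest, so a restore can rebuild system state and not just files.  The paths are made up:

import io
import json
import tarfile
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical inputs: the backup files plus a metadata snapshot like
# the one produced earlier.  Neither path reflects a real product.
BACKUP_DIR = Path("/var/backups/data")
CATALOG_SNAPSHOT = Path("/var/backups/catalog-snapshot.db")

def write_archive(dest: Path) -> None:
    """Write one self-describing archive: backups, metadata, manifest."""
    manifest = {
        "created": datetime.now(timezone.utc).isoformat(),
        "backups": sorted(p.name for p in BACKUP_DIR.iterdir()),
        "includes_metadata": True,
    }
    payload = json.dumps(manifest, indent=2).encode()
    with tarfile.open(dest, "w:gz") as tar:
        tar.add(BACKUP_DIR, arcname="data")                       # the backups
        tar.add(CATALOG_SNAPSHOT, arcname="metadata/catalog.db")  # the metadata
        info = tarfile.TarInfo("manifest.json")                   # what's inside
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))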

What I like about archiving is the fact that it’s NOT replicated – that it’s an offline (always logically, and quite often physically if you’re doing rotational archiving) copy of that metadata.  I’m a big believer that you can’t be too careful when it comes to backup.
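If you’re curious what “rotational” might look like in practice, here’s a tiny sketch that keeps only the newest N dated snapshots.  The naming follows the earlier example, and the retention count is an assumption you’d tune to your own policy:

from pathlib import Path

ARCHIVE_DIR = Path("/mnt/archive/metadata")  # same hypothetical target as above
KEEP = 7  # retention is a policy choice; seven copies is just an example

def rotate_snapshots() -> None:
    """Keep the newest KEEP catalog snapshots; delete the rest.

    These copies are written once and never synced back, so corruption
    in the live system has no path into them.
    """
    # Timestamped names sort lexicographically in chronological order.
    snapshots = sorted(ARCHIVE_DIR.glob("catalog-*.db"), reverse=True)
    for stale in snapshots[KEEP:]:
        stale.unlink()
        stale.with_name(stale.name + ".sha256").unlink(missing_ok=True)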

Do you have any horror stories about losing metadata – or alternative solutions for ensuring its safety?  I would love to hear about them.

Comments

  1. I tried AppAssure a few months back and found one really bad issue: if your backup server shuts down incorrectly (or, in one case, when I had simply restarted the server), the global dedup database does an integrity check after the server restarts. The really big problem with that is that if your dedup repository is large, such as 3TB, it can take upwards of 24 hours or more to finish that integrity check – nullifying an entire day’s worth of your backup window.

    I liked the idea that you can have multiple repositories, but that integrity check was a deal breaker for me. I could get no one to respond regarding whether the check can be cancelled, why it takes so long, etc.

    Don’t even get me started on Backup Exec 2012! You do not have enough character space here for me to vent my rage about a horribly executed upgrade mechanism, horrendous support, recovery issues, etc.

    Looking forward to checking Unitrends out.
