Gitlab suffered a well-publicized data loss and failed recovery incident.  What’s the fundamental lesson from it all?  The Gitlab backup lesson is best summarized by the following excerpt from the transcript:

So in other words, out of 5 backup/replication techniques deployed none are working reliably or set up in the first place

[from the Gitlab.com database instance Google document]

Five = zero.  Ironic? Certainly.  Atypical?  Not on your life.

We see it all the time: the belief that if one backup process, product, or technology is good, two must be twice as good; that if you simply slather on layer after layer of backup protection, you're golden.  It's like sunscreen – more is better.  Unfortunately, it's just not true.  As the Gitlab backup incident teaches us, more is not always better.  In fact, more is all too often less.

Gitlab Backup Lesson #1: More Can Be Less

When it comes to backup, many people think complex products, processes, and technology are better.  That isn’t the case.  The more complex backup is, the more that can go wrong.  As the Gitlab backup issue demonstrates, it’s easy to assume that multiple methods of backup are protecting you when in fact there is no coverage whatsoever.  The old saying “Don’t keep all your eggs in one basket” has to be updated to Mark Twain’s version: “Keep all your eggs in one basket – and then watch that basket.”

Gitlab Backup Lesson #2: Automation and Orchestration

True recovery assurance technology – technology that uses automation and orchestration to automatically test backups – is incredibly important.  Everyone understands that automated backup is an essential part of any data protection strategy, but what most don’t realize is that automated recovery and automated recovery testing are equally important.
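The principle is simple: a backup doesn't count until a restore of it has been verified.  As a minimal sketch of what automated recovery testing means in practice – using plain Python and illustrative function names, not any real backup product's API – one can periodically restore an archive to a scratch directory and compare checksums against the source:

```python
# Minimal sketch of automated recovery testing: back up a directory,
# then prove the backup is restorable by extracting it to a scratch
# location and comparing file checksums. All names are illustrative.
import hashlib
import os
import tarfile
import tempfile

def checksums(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    sums = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                sums[rel] = hashlib.sha256(f.read()).hexdigest()
    return sums

def make_backup(source_dir, archive_path):
    """Write a gzipped tar archive of source_dir."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(source_dir, arcname=".")

def verify_restore(archive_path, source_dir):
    """Restore the archive to a temp dir; True if contents match source."""
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive_path, "r:gz") as tar:
            tar.extractall(scratch)
        return checksums(scratch) == checksums(source_dir)
```

A real deployment would run `verify_restore` on a schedule and alert when it fails – the point being that the test exercises the full restore path, not just the backup job's exit code.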

Gitlab Backup Lesson #3: IT Professionals Matter

The singular role of “backup administrator” is an endangered one – it has often become just one of many roles that an IT professional must play.  Enabling a multi-tasking IT professional to take advantage of the all-in-one enterprise backup and continuity solutions available these days is a trade-off of capital expense well worth considering.

As always, would love to hear your thoughts on this.

Comments

  1. Would love to know what happened, what technology they were using, and what went wrong. I’m sure there’s invaluable second-hand experience to be gained there.

    1. Agreed; it’s fascinating. Their transparency was amazing; they deserve a lot of credit for being as open as they were.

Comments are closed.