The Ongoing Battle against Ignorance, Part II
or How I Learned to Stop Fearing and Love Backups
by Travis Ogle, Service Coordinator
I have always considered myself a (at least fairly) well-informed person. That all changed when I began working here at Greystone. I know now that I am a member of the cohort of the technologically ignorant. But that is slowly changing, and I have been charged with chronicling my transformation from ignorati to intelligisti for you, dear readers. The main vehicle of this astonishing transformation is a series of technology education meetings headed by our president, Peter Melby. Recently, my esteemed colleague Lohr wrote about the difference between DNS and DHCP. This month, the subjects were the different but related topics of Backups and Virtualization.
Backups have always been an integral part of any business. For everything from everyday accidental deletions to full-blown catastrophes, backups have you… well… backed up. But there have been some big developments in the way backups work over the past five years or so. The biggest, from my standpoint at least, is the elimination of magnetic tapes as the preferred storage medium.
Now, I was under the impression that tapes went out with the Clinton administration. Not so for data backups; until relatively recently, tapes provided more “bang for your buck” in storage than any other medium. Once a day (or at whatever interval you chose), a complete copy of all your data would be transferred to those tapes. They would then be removed and stored offsite. The tapes were eventually replaced with hard drives, but the principle remained the same: copy your data to a removable, physical medium, then move it offsite. This method had a few drawbacks. The primary downside was that it only backed up your server files (i.e., the shared folder). If a catastrophe were to occur, in order to get your data back, you needed an exact replica of the server that failed (or burned down or blew away or whatever).
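For the mechanically inclined among you, dear readers, here is a toy sketch of that old routine in Python. The folder paths and the once-a-day trigger are hypothetical stand-ins, not anyone's actual backup script; the point is simply “copy everything, date it, carry it offsite.”

```python
# A toy sketch of the old-school "full copy" backup. Paths are hypothetical.
import shutil
from datetime import date

SHARED_FOLDER = "/srv/shared"          # the server files being protected
REMOVABLE_DRIVE = "/mnt/backup_drive"  # the tape or external drive, mounted here

def nightly_full_backup() -> str:
    """Copy the entire shared folder to the removable medium, stamped with today's date."""
    destination = f"{REMOVABLE_DRIVE}/full-{date.today().isoformat()}"
    shutil.copytree(SHARED_FOLDER, destination)
    return destination  # this drive then gets carried offsite

if __name__ == "__main__":
    print("Backed up to", nightly_full_backup())
```

Notice what it does not do: it copies files, not the server itself. Settings, applications, the operating system — none of that comes along for the ride, which is why a restore demanded a matching machine.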
These days, we do things a little differently (see our TotalRescue solution). The technology is more complicated, but more efficient. First, we take a snapshot of your entire server, including settings, flow rules, etc. (by etc. I mean all the other super wizard stuff), not just your shared files. That initial snapshot is loaded onto the local vault and transferred to the cloud. Then, a piece of software takes images once a day, figures out what has changed since the last image, and uploads only the differences to the cloud. This way, if your server fails (or blows up, or gets sucked into a sharknado), the latest image can be booted remotely. You have your data back without having to procure an exact replica of your old hardware.
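If you would like to peek behind the curtain (no blood sacrifice required, I promise), here is a minimal sketch of the “upload only what changed” idea: chop each day's image into fixed-size blocks, hash them, and compare against yesterday's hashes. The image file names, block size, and upload stand-in are hypothetical illustrations, not the actual TotalRescue software.

```python
# Toy sketch of incremental imaging: hash fixed-size blocks of a disk image,
# compare them to the previous day's hashes, and send only the changed blocks.
import hashlib

BLOCK_SIZE = 4096  # bytes per block; real products use larger blocks

def manifest(image_path: str) -> dict[int, str]:
    """Map each block index in the image to a hash of its contents."""
    hashes = {}
    with open(image_path, "rb") as f:
        index = 0
        while block := f.read(BLOCK_SIZE):
            hashes[index] = hashlib.sha256(block).hexdigest()
            index += 1
    return hashes

def changed_blocks(old: dict[int, str], new: dict[int, str]) -> list[int]:
    """Indices of blocks that are new or whose contents differ from last time."""
    return [i for i, h in new.items() if old.get(i) != h]

if __name__ == "__main__":
    # Hypothetical daily run: only the differences travel to the cloud.
    yesterday = manifest("server-image-monday.img")
    today = manifest("server-image-tuesday.img")
    for block_index in changed_blocks(yesterday, today):
        pass  # upload_to_cloud(block_index) -- stand-in for the real upload step
```

Because only the handful of changed blocks goes over the wire each day, the daily upload stays small even when the full image is enormous.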
I can hear your responses now: “Impossible!” says one. “Witchcraft!” says another. And I don’t blame you, dear readers. At first glance, the dark arts seem to be involved, indeed. But that is not the case. “If not blood sacrifice by the light of the full moon to the malevolent lord, then how?” you may ask.
Why, virtualization, of course!
Virtualization is a process that allows us to partition a server and assign different functions to those partitions. Think of it like an old Victorian house that has been made into apartments (thanks to Brian for the analogy): where once it housed one family, it can now house several, each operating separately from the others. A server can have an “apartment” dedicated to backups, one to email, and yet another to what have you. This way, if you accidentally delete a file, it is still on the local server in the backup partition. And if your server is completely destroyed, the virtualization software translates the data for the new machine, so you don’t need an exact replica of the old hardware. It may even be time to get an upgrade!
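To stretch Brian's analogy one floor further, here is a toy model of the house-into-apartments idea. The class and role names are purely illustrative (a real hypervisor is far more sophisticated), but the principle of one physical host carving out separate, resource-limited tenants is the same.

```python
# A toy model of the "Victorian house" analogy: one physical host, several
# isolated "apartments" (virtual machines), each with its own job.
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    role: str        # e.g. "email", "backup", "file-share"
    memory_gb: int   # this apartment's share of the house

@dataclass
class PhysicalServer:
    total_memory_gb: int
    tenants: list[VirtualMachine] = field(default_factory=list)

    def add_vm(self, vm: VirtualMachine) -> None:
        """Move a tenant in, but never promise more memory than the house has."""
        used = sum(t.memory_gb for t in self.tenants)
        if used + vm.memory_gb > self.total_memory_gb:
            raise MemoryError("The house is full; no room for another tenant.")
        self.tenants.append(vm)

if __name__ == "__main__":
    house = PhysicalServer(total_memory_gb=64)
    for role, ram in [("backup", 16), ("email", 16), ("file-share", 16)]:
        house.add_vm(VirtualMachine(role, ram))
    print([t.role for t in house.tenants])  # three tenants, one roof
```

Each tenant runs as if it had the whole house to itself, which is exactly why a backed-up “apartment” can be moved into a brand-new building without the neighbors ever noticing.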