Government IT leaders know that efficient data backup and recovery are integral to mission success. But agencies must overcome a few challenges before they can truly secure and access their data.
The first generation of backup and recovery solutions was built to address the challenges of application tiers running on diverse infrastructures.
To control costs, backup systems needed to move large amounts of data across sprawling environments and manage it across multiple media types, such as disk and tape. Traditional backup systems also had to satisfy long-term retention requirements, generally through offsite tape archives. The focus was less on recovering up-to-the-minute data quickly and more on simply storing large volumes of backup data in one place at the lowest possible cost.
With traditional systems, specially trained engineers were required to schedule backups and handle other routine tasks. They also had to monitor these solutions, because the tools lacked the intelligence to optimize resources or avoid failed backups, which led to ongoing tuning and occasional re-architecting. That level of complexity also turned backup software upgrades into major undertakings that carried real risk and pulled time away from other projects.
Today, that model doesn’t make sense for government. Agencies don’t have the staff to dedicate to manually scheduling and architecting backup and recovery. They also don’t have the budget, especially given the growing amount of data they hold.
The cost of backup and recovery has always been a significant part of the IT budget. As data has grown exponentially, so has the cost of backing up and storing data. Sometimes, data protection even costs more than primary storage. Experienced IT organizations would often budget for 2-3x the primary data cost to cover data protection and backup.
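As a rough, back-of-the-envelope illustration of that 2-3x rule of thumb, the short Python sketch below uses purely hypothetical storage volumes and per-terabyte costs to show how protection spend can quickly overtake primary storage spend.

    # Back-of-the-envelope estimate of data protection spend, assuming the
    # common 2-3x budgeting rule described above. All figures are illustrative.

    PRIMARY_TB = 500        # hypothetical primary data footprint, in terabytes
    COST_PER_TB = 300.0     # hypothetical annual cost per terabyte of primary storage, in dollars

    primary_cost = PRIMARY_TB * COST_PER_TB
    protection_low = 2 * primary_cost    # low end of the 2-3x budgeting rule
    protection_high = 3 * primary_cost   # high end of the 2-3x budgeting rule

    print(f"Primary storage:   ${primary_cost:,.0f}/year")
    print(f"Protection (low):  ${protection_low:,.0f}/year")
    print(f"Protection (high): ${protection_high:,.0f}/year")

Even with modest illustrative figures, the protection line item lands well above the primary storage line item, which is exactly the budget pressure agencies feel as data volumes grow.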
Another constraint of traditional systems is their limited interoperability with virtualized environments.
Most backup systems were originally designed to support physical hosts. Virtualization was the last major wave of computing innovation before cloud technologies entered the scene. Before virtualization, systems’ RAM and CPU were underutilized, which left resources free during off hours for backups. Virtualization drove overall RAM and CPU usage much higher and moved storage onto a central array. Without careful planning or newer technology, backups can push virtualized systems past their resource limits.
Finally, virtualization and cloud have made IT infrastructures more complex than ever before. Plus, many organizations are adopting technologies like the internet of things and DevOps that alter the way data is received and used. That makes orchestrating backup and recovery all the more challenging.
Traditional systems were created to interact with some components of IT, but not all of the ones that exist today. Without virtualized platforms such as the cloud to manage data, agencies can struggle to process data and navigate their new applications and solutions.
Plus, almost all vendors require professional services to install and configure a backup system before it functions optimally. To use the system, administrators often must attend a week of training and then dedicate significant time to deployment and management.
That’s not sustainable. Backup and recovery needs to be intuitive for the average administrator to use. And it should be automated as much as possible to ensure that new systems are automatically protected when they are added.
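As a minimal sketch of what “automatically protected when added” can look like, the Python example below scans a hypothetical inventory for systems with no backup policy and assigns them a default one. The inventory, policy name and helper functions are illustrative placeholders, not the API of any particular backup product.

    # Minimal sketch of auto-protecting newly added systems.
    # The inventory data, policy name and helpers are hypothetical placeholders,
    # not the interface of any specific backup product.

    DEFAULT_POLICY = "standard-daily"   # hypothetical default protection policy

    # Hypothetical inventory: system name -> currently assigned backup policy (or None)
    inventory = {
        "app-server-01": "standard-daily",
        "db-server-02": None,       # newly added, not yet protected
        "file-share-03": None,      # newly added, not yet protected
    }

    def assign_policy(system: str, policy: str) -> None:
        """Stand-in for a real API call that attaches a backup policy to a system."""
        inventory[system] = policy
        print(f"Protected {system} with policy '{policy}'")

    def protect_new_systems() -> None:
        """Find systems with no backup policy and give them the default one."""
        for system, policy in inventory.items():
            if policy is None:
                assign_policy(system, DEFAULT_POLICY)

    if __name__ == "__main__":
        protect_new_systems()

In practice, a check like this would run on a schedule or be triggered by the platform’s own inventory events, so a new system never sits unprotected simply because no one remembered to add it to a backup job.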
Since IT departments are increasingly adopting hybrid cloud models, they also need hyperconverged infrastructures with modular scalability and increasing levels of virtualization. And lastly, backup and recovery must prioritize security across the entire data lifecycle.
To understand how to achieve those results, check out our recent course, The Next Generation of Data Backup and Recovery, on GovLoop Academy.