In a recent GovLoop industry perspective, Pivot3 Federal Sales Engineer Matt Demas explained the basics of what hyper-convergence is and how it works. Put simply, “hyper-convergence is taking all the complexity of a data center and simplifying it into a box, where each functionality works as a building block,” Demas said.
In traditional IT architectures, networking, storage and compute capabilities are built and maintained separately, using different software and hardware, even as these capabilities interact with one another to support common functions. Each component is assigned its own element manager, resulting in greater network complexity and a patchwork of multi-vendor solutions. Ultimately, IT infrastructures become encumbered with limited configurations and redundant components that prevent systems from delivering their full capabilities.
In a hyper-converged infrastructure, compute, storage, networking and virtualization resources are integrated into a single commodity server and made available throughout an entire IT enterprise. Managers can run all IT workloads through a single vendor, on one easy-to-use management platform.
But just because these functions are consolidated does not mean they must all be used in the same way or at the same scale. Because a hyper-converged architecture is software-centric, each of these capabilities remains modular, and IT professionals can update or scale them as needed without sacrificing performance or availability.
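To make the building-block idea concrete, consider the minimal sketch below. It is an illustration of the scale-out model in general, not Pivot3's software: the Node and Cluster names, and the resource figures, are hypothetical. The point is simply that capacity grows by adding identical boxes to a pool, rather than by upgrading separate storage, compute and networking silos.

```python
# Minimal sketch of the hyper-converged "building block" model.
# Node, Cluster and the resource figures are hypothetical illustrations,
# not Pivot3's actual API or specifications.

from dataclasses import dataclass


@dataclass
class Node:
    """One commodity server contributing compute, storage and networking."""
    cpu_cores: int = 32
    storage_tb: int = 48
    network_gbps: int = 10


class Cluster:
    """A pool of identical nodes; capacity grows by adding building blocks."""

    def __init__(self):
        self.nodes: list[Node] = []

    def add_node(self, node: Node) -> None:
        # Scaling out means racking another box, not re-architecting silos.
        self.nodes.append(node)

    @property
    def total_cpu_cores(self) -> int:
        return sum(n.cpu_cores for n in self.nodes)

    @property
    def total_storage_tb(self) -> int:
        return sum(n.storage_tb for n in self.nodes)


cluster = Cluster()
for _ in range(3):  # start with a three-node cluster
    cluster.add_node(Node())

print(cluster.total_cpu_cores)   # 96 cores pooled across the cluster
print(cluster.total_storage_tb)  # 144 TB pooled across the cluster
```

In this model, adding a fourth node raises every pooled total at once, which mirrors the modularity described above: each resource scales with the cluster rather than being tuned and expanded in its own silo.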
What’s wrong with traditional storage?
For government agencies, the need for hyper-converged solutions is clear: Traditional IT architectures are becoming increasingly untenable, given budget constraints, the rapid proliferation of dynamic new technologies, and the performance issues that arise when web-scale technologies are integrated with legacy infrastructure.
The most obvious concern for most public-sector organizations is cost. Buying separate components for compute, networking and storage functions is more time- and labor-intensive than procuring an integrated solution, and procurement officials can’t leverage economies of scale in their pricing requests. Once deployed, these separate solutions accrue even more costs, as IT professionals must be trained and paid to manage multiple products.
Yet even if cost were not an issue, the traditional approach to IT would remain unsustainable. The pace of technological change is only accelerating, and legacy systems cannot keep up because they are siloed and complex to manage. Furthermore, replacing or updating those systems on a one-off basis provides only temporary, incomplete fixes for agencies’ technology gaps and increases, rather than reduces, IT complexity.
This piecemeal approach to updating IT architectures can also cause other organizational problems. “A lot of [technology] implementations are extremely complex,” Pivot3 Vice President of Global VDI Sales Mike Dunbar explained. “When you set up separate storage, compute, and networking, there’s a lot of fine-tuning that has to happen, which can lead to significant performance issues.”
Demas gave an example of how a minor error can trigger a chain reaction that leads to bigger, more noticeable issues. “You’ll have major booting issues just because of a small networking tweak that went wrong somewhere in the setup or because the storage wasn’t configured 100 percent properly,” he said.