
How Virtualization is Optimizing Government Data Centers

This blog is an excerpt from GovLoop’s recent industry perspective, The Value of Virtualization: Optimizing Government Data Centers. Download the full perspective here.

The Demand for Optimization

Governments at all levels — federal, state and local — are under pressure to do more with less. This does not mean merely cutting budgets. Real savings increasingly come from optimizing IT infrastructure to improve performance, productivity and flexibility.

One of the driving advances in federal IT today is virtualization. Virtualization, the separation of an application from the hardware it runs on and the underlying operating system, allows agencies to rapidly provision resources, adapt quickly to changing requirements and optimize the use of existing computing capacity.
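To make the idea of rapid provisioning concrete, here is a minimal sketch of how a scheduler can place virtual machines onto whatever physical capacity is already free, precisely because a VM is described by resource requirements rather than tied to a specific server. The class and function names are illustrative assumptions, not any hypervisor's actual API.

```python
# Minimal sketch: VMs are defined by resource needs, so a scheduler can pack them
# onto existing capacity. Names and numbers are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_free: int      # vCPUs still available
    mem_free_gb: int   # RAM still available

@dataclass
class VMRequest:
    name: str
    cpu: int
    mem_gb: int

def provision(vm: VMRequest, hosts: list[Host]) -> str:
    """Place a VM on the first host with enough spare CPU and memory."""
    for host in hosts:
        if host.cpu_free >= vm.cpu and host.mem_free_gb >= vm.mem_gb:
            host.cpu_free -= vm.cpu
            host.mem_free_gb -= vm.mem_gb
            return host.name
    raise RuntimeError(f"No capacity available for {vm.name}")

hosts = [Host("esx-01", cpu_free=16, mem_free_gb=128),
         Host("esx-02", cpu_free=8, mem_free_gb=64)]
print(provision(VMRequest("app-server", cpu=4, mem_gb=16), hosts))  # esx-01
```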

GovLoop partnered with Tintri, an industry leader in building VM-aware storage specifically for virtualized applications, to discuss why a system designed from the ground up to provide storage for virtualized workloads is needed to address the issues created by virtualization.

The Legacy Challenge

Legacy IT systems were designed around individual physical servers, and the infrastructure was built to meet the requirements of siloed applications. That infrastructure does not provide an agile, flexible IT environment – and it often discourages change.

Even with the benefits of virtualization, many IT shops in legacy environments struggle to keep up with growing business demands and application proliferation. Virtualization maximizes utilization of each server's CPU and memory, but unless storage technology is also optimized for virtual applications, those benefits cannot be fully realized or even accurately measured.
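The consolidation gain on the compute side is easy to see with back-of-the-envelope math; the numbers below are illustrative assumptions, not figures from the source. The same math also explains why storage is stressed: the I/O of all the consolidated workloads now lands on a shared storage system.

```python
# Illustrative consolidation math: many lightly loaded siloed servers collapse onto a
# few well-utilized virtualization hosts. Numbers are assumptions for illustration.
import math

physical_servers = 20
avg_utilization = 0.10        # 10% average CPU use on siloed hardware
target_utilization = 0.70     # conservative ceiling for a virtualization host

effective_load = physical_servers * avg_utilization           # 2.0 servers' worth of work
hosts_needed = math.ceil(effective_load / target_utilization)  # 3 hosts

print(f"{physical_servers} siloed servers -> {hosts_needed} virtualization hosts")
# Note: the disk I/O of all 20 workloads is now concentrated on shared storage.
```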

Although virtualization can maximize the usage of computing resources, the demands of virtual applications can introduce performance issues and management complexity. Traditional storage was not designed to handle the workloads of virtual applications. And because the various elements of the new environment are each managed separately, there is no overall visibility into how those elements perform.
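One common way to describe the performance problem (an assumption here, not a term used in the source) is the "I/O blender" effect: each VM issues orderly, sequential I/O within its own virtual disk, but the shared array receives the interleaved stream, which looks random and is far harder to serve from spinning disks. A small sketch:

```python
# Sketch of the "I/O blender" effect: per-VM sequential reads arrive at the shared
# array interleaved, so the combined stream is no longer sequential.
def vm_stream(base_block: int, count: int = 4):
    """One VM reading sequential blocks starting at its virtual disk's offset."""
    return [base_block + i for i in range(count)]

vm_a = vm_stream(1000)   # [1000, 1001, 1002, 1003]
vm_b = vm_stream(5000)
vm_c = vm_stream(9000)

# Round-robin interleaving approximates how concurrent VM I/O reaches the array.
blended = [blk for trio in zip(vm_a, vm_b, vm_c) for blk in trio]
print(blended)  # [1000, 5000, 9000, 1001, 5001, 9001, ...] -- no longer sequential
```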

The storage industry has attempted to address the virtual workload issue with technologies such as flash storage to replace traditional disk arrays; hybrid storage consisting of both flash and disks; and tiered storage, which prioritizes data based on how it is being used. These can help improve performance, but they do not address management complexity. And when performance problems persist, the lack of overall system visibility leaves no effective way to troubleshoot them.
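The tiering idea is simple to illustrate: frequently accessed data is promoted to flash, colder data stays on disk. The threshold and names below are assumptions for illustration, not any vendor's actual policy engine, and the sketch also shows the limitation noted above: tier placement says nothing about which virtual machine is driving the I/O.

```python
# Illustrative tiering policy: hot data goes to flash, cold data stays on disk.
def assign_tier(accesses_per_day: int, hot_threshold: int = 100) -> str:
    return "flash" if accesses_per_day >= hot_threshold else "disk"

workload = {"boot-volume": 450, "database-log": 1200, "archive-share": 3}
placement = {name: assign_tier(rate) for name, rate in workload.items()}
print(placement)
# {'boot-volume': 'flash', 'database-log': 'flash', 'archive-share': 'disk'}
```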

Hyper-convergence is another attempted storage solution. It is a software-centric architecture that integrates compute, storage, networking and virtualization resources in a commodity hardware box. This converged tier can, in theory, reduce the pain of managing storage separately, but in many environments it is difficult to maintain the control over the whole infrastructure that is needed to make the solution scale adequately.
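Part of the scaling tension is that a hyper-converged node adds compute and storage together in a fixed ratio, so growing one dimension can force over-provisioning of the other. The node specifications below are illustrative assumptions used only to show the arithmetic.

```python
# Sketch of coupled scaling in a hyper-converged cluster: compute and storage can
# only be bought together, node by node. Specs are illustrative assumptions.
import math

NODE_CPU_CORES = 32
NODE_STORAGE_TB = 20

def nodes_required(needed_cores: int, needed_tb: int) -> int:
    """Nodes needed when compute and storage come in fixed per-node increments."""
    return max(math.ceil(needed_cores / NODE_CPU_CORES),
               math.ceil(needed_tb / NODE_STORAGE_TB))

# A storage-heavy workload: modest compute, lots of capacity.
n = nodes_required(needed_cores=64, needed_tb=200)
print(n, "nodes ->", n * NODE_CPU_CORES, "cores purchased for a 64-core workload")
# 10 nodes -> 320 cores purchased for a 64-core workload
```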

Virtualizing multiple workloads makes it harder for administrators to see the impact of new workloads, find bottlenecks and identify problems such as misconfiguration of virtual machines and shared infrastructure. Traditional storage architectures provide only limited visibility into a virtual environment. Performance can be evaluated on the level of the logical unit number (LUN) of the storage device being addressed or of the volume or file system. But these architectures cannot isolate the performance of virtual machines or provide insight into that performance. In an effort to better understand and control the impact of virtualized workloads, administrators sometimes resort to allocating a single LUN or volume for a single virtual machine. But because of limitations in scaling and the increased management overhead of traditional storage architectures, this is not a practical solution.
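The visibility gap is easy to see in miniature: a LUN-level counter averages the I/O of every VM sharing that LUN, so one misbehaving workload disappears into the mean, while per-VM metrics make the culprit obvious. The latency figures below are invented for illustration.

```python
# LUN-level aggregation hides the noisy VM; per-VM metrics expose it.
vm_latency_ms = {          # per-VM average latency on one shared LUN (illustrative)
    "web-01": 2.1,
    "web-02": 2.3,
    "reporting-db": 38.0,  # the misbehaving workload
    "file-svc": 2.0,
}

lun_view = sum(vm_latency_ms.values()) / len(vm_latency_ms)
print(f"LUN-level view: {lun_view:.1f} ms average")             # ~11.1 ms, cause unclear
worst = max(vm_latency_ms, key=vm_latency_ms.get)
print(f"VM-level view : {worst} at {vm_latency_ms[worst]} ms")  # reporting-db at 38.0 ms
```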

Because of these limitations, some agencies avoid mixing workloads on the same storage system, resulting in siloed architectures that undermine the effectiveness of virtualization.

The VM-Aware Storage Solution

The Tintri approach to virtualization provides VM-aware storage that is simple to deploy and built to handle the disparate and often performance-intensive workloads created by virtualization and multiple applications in a modern IT infrastructure. Through its performance analytics, Tintri provides end-to-end visibility from the virtual application server through the network infrastructure to the underlying storage. This allows virtualization administrators to see and measure real-time performance, latency and throughput for individual applications. The system also uses these performance analytics to assign performance reserves that guarantee performance to VMs and to implement VM Scale-out, which uses 30 days of history to predict demand and place VMs on the right system or datastore. Because Tintri was designed for virtualized workloads, it provides predictable application performance through individual quality-of-service (QoS) performance lanes for each virtual application.
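A minimal sketch of two of the ideas above — a per-VM end-to-end latency breakdown (host, network, storage) and a per-VM performance reserve — can look like the following. The field and method names are invented for illustration and do not represent Tintri's actual data model or API.

```python
# Hedged sketch: per-VM latency broken down end to end, plus a simple check of a
# per-VM performance reserve. All names and numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VMPerf:
    name: str
    host_ms: float      # latency attributed to the hypervisor host
    network_ms: float   # latency attributed to the network
    storage_ms: float   # latency attributed to the storage system
    reserved_iops: int  # performance reserve (QoS floor) for this VM
    observed_iops: int

    def total_latency(self) -> float:
        return self.host_ms + self.network_ms + self.storage_ms

    def bottleneck(self) -> str:
        parts = {"host": self.host_ms, "network": self.network_ms,
                 "storage": self.storage_ms}
        return max(parts, key=parts.get)

    def reserve_met(self) -> bool:
        return self.observed_iops >= self.reserved_iops

vm = VMPerf("training-sim-07", host_ms=0.4, network_ms=0.3, storage_ms=6.8,
            reserved_iops=2000, observed_iops=1450)
print(vm.total_latency(), vm.bottleneck(), vm.reserve_met())  # ~7.5, 'storage', False
```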

The U.S. Department of Defense has realized the benefits of VM-aware storage from Tintri in several installations, improving efficiency, reducing the time for deployment of new virtual machines and easing management of storage systems.

The U.S. Army Joint Systems Integration Lab (JSIL) turned to Tintri to eliminate slowdowns in its existing SAN storage solution, which had been built for capacity rather than speed. JSIL experienced very slow read/write times, resulting in sluggish responses and unpredictable performance in the virtual machines supporting the virtualized training environment.

Virtualizing and consolidating multiple workloads can provide better utilization of IT resources, but it requires a storage architecture built specifically to support virtualized workloads. With a comprehensive view of virtual machines, including end-to-end tracking of real-time performance across the data center infrastructure, administrators gain the metrics they need to ensure optimized performance for each virtual machine, along with the scale-out capability to grow compute and storage infrastructure independently as business demands increase.
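As a final illustration, scale-out placement can be reduced to a headroom comparison: given projected capacity and performance utilization for each datastore (for example, derived from recent history), place the new or migrating VM where the tighter of the two dimensions leaves the most room. The projection and field names below are assumptions, not the product's actual algorithm.

```python
# Illustrative history-based placement: choose the datastore with the most projected
# headroom, limited by whichever dimension (capacity or performance) is tighter.
datastores = {
    "datastore-a": {"capacity_used": 0.81, "perf_used": 0.74},
    "datastore-b": {"capacity_used": 0.55, "perf_used": 0.40},
    "datastore-c": {"capacity_used": 0.62, "perf_used": 0.90},
}

def headroom(stats: dict) -> float:
    return 1.0 - max(stats["capacity_used"], stats["perf_used"])

best = max(datastores, key=lambda name: headroom(datastores[name]))
print(best)  # datastore-b
```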

Download the full perspective here.
