Twenty-five years ago this month, Helsinki, Finland, became the birthplace of what would grow into a major technological revolution. There, a young developer, Linus Torvalds, introduced a free operating system that he described as “a hobby” and solicited requests for the “features most people would want.”
Those words, written to a Usenet newsgroup at the time of the Linux operating system’s launch, may seem humble and quaint, but they still reflect the mindset behind today’s version of Linux. Twenty-five years later, the open source OS continues to benefit from contributions by scrappy developers, eager to tinker with the code and add features that make the software more powerful and user-friendly.
The difference today, of course, is that Linux and open source software in general have evolved from a “hobby” to preferred solutions for many organizations, including government agencies. The Department of Defense and Department of Veterans Affairs are two high-profile examples of agencies that have embraced open source software in recent years. That’s a far cry from a decade ago, when open source was rarely, if ever, seen as a viable software alternative for government agencies.
As more agencies adopt Linux, administrators should keep in mind that although the OS is certainly different from standard, proprietary operating systems, some of the same challenges still apply. Administrators must still monitor the performance of their applications running on Linux. And there is still the risk of server hardware failure, or of simply slow response times due to CPU overload, insufficient memory or exhausted disk space.
Yes, the more things change…well, administrators know the drill. Linux offers many benefits, including the ability to run on virtually any infrastructure and the support of a vast developer community. But IT managers must still have processes in place to ensure the OS continues to run smoothly. These processes should include:
Monitor the performance of the Linux server. Just like any other operating system, Linux can suffer slowdowns from latency, packet loss and resource contention that degrade application response times. Administrators should use monitoring solutions that alert them to these issues before users feel them. With that visibility, they can identify processes that are hogging resources and terminate them immediately, so the server continues to operate efficiently. All resources, including CPU, RAM and storage, should be monitored, and IT professionals should use the data they gather to forecast future capacity needs (a script sketch follows the next paragraph).
Administrators should then take things a step further by monitoring the overall health of the server hardware itself: fan speed, power supply state, component temperatures and similar vital signs. Alerts and notifications that fire when one of these readings enters a critical state can help them head off hardware failure before it happens.
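To make this concrete, here is a minimal sketch of what these resource and hardware checks might look like in a script. It assumes the third-party psutil library is installed (pip install psutil); the thresholds and the plain print-style alerts are illustrative placeholders, not recommendations from this post, and most agencies would get this from a dedicated monitoring platform rather than a hand-rolled script.

```python
#!/usr/bin/env python3
"""Sketch of basic Linux server resource and hardware checks.

Assumes the third-party psutil library; thresholds are hypothetical
values an administrator would tune for their own environment.
"""
import psutil

CPU_LIMIT = 90.0   # percent, hypothetical threshold
MEM_LIMIT = 90.0   # percent, hypothetical threshold
DISK_LIMIT = 90.0  # percent, hypothetical threshold


def check_resources():
    """Flag CPU, RAM and disk usage that exceeds the thresholds."""
    cpu = psutil.cpu_percent(interval=1)
    mem = psutil.virtual_memory().percent
    disk = psutil.disk_usage("/").percent
    for name, value, limit in (("CPU", cpu, CPU_LIMIT),
                               ("RAM", mem, MEM_LIMIT),
                               ("Disk /", disk, DISK_LIMIT)):
        if value >= limit:
            print(f"ALERT: {name} at {value:.1f}% (limit {limit:.0f}%)")


def top_processes(count=5):
    """List the heaviest CPU consumers so resource hogs stand out.

    Note: the first cpu_percent sample per process is 0.0; a real
    monitor would poll repeatedly rather than run once.
    """
    procs = [p.info for p in psutil.process_iter(["pid", "name", "cpu_percent"])]
    procs.sort(key=lambda info: info["cpu_percent"] or 0.0, reverse=True)
    for info in procs[:count]:
        print(f'{info["pid"]:>7}  {info["cpu_percent"]:>5.1f}%  {info["name"]}')


def check_hardware():
    """Report critical temperatures and stopped fans where exposed.

    On Linux these calls read from /sys and may return empty dicts on
    hardware or kernels without sensor support.
    """
    for chip, readings in psutil.sensors_temperatures().items():
        for r in readings:
            if r.critical and r.current >= r.critical:
                print(f"ALERT: {chip}/{r.label or 'temp'} at {r.current}°C")
    for chip, fans in psutil.sensors_fans().items():
        for f in fans:
            if f.current == 0:
                print(f"ALERT: fan {chip}/{f.label or 'fan'} reads 0 RPM")


if __name__ == "__main__":
    check_resources()
    top_processes()
    check_hardware()
```

A script like this would typically run on a schedule (cron or a systemd timer), with the print statements replaced by whatever alerting channel the team already uses.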
Keep applications running on the server healthy too. Beyond physical components, administrators should also actively monitor the performance of the applications running on their Linux servers. Hundreds of applications run perfectly well on Linux (including traditional mainstays like web and email servers), but those applications are often subject to slow response times and bottlenecks, much like the Linux server itself.
It’s often difficult to pinpoint the cause of a problem; it could be network bandwidth, server resources or other factors. To rectify this, administrators should deploy application monitoring tools designed to detect anomalies and diagnose problems. These tools can help managers isolate root causes, regardless of whether their applications are running on a physical or virtual server. They can also map the contextual relationships between applications (i.e., the “application stack”), which aids troubleshooting.
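As a rough illustration of the response-time side of this, here is a small probe using only Python’s standard library. The health-check URL and the two-second threshold are hypothetical values chosen for the example; a real deployment would lean on a purpose-built application monitoring tool, as described above.

```python
#!/usr/bin/env python3
"""Minimal application response-time probe (sketch).

The URL and threshold below are hypothetical placeholders; tune them
per application.
"""
import time
import urllib.error
import urllib.request

APP_URL = "http://localhost:8080/health"  # hypothetical endpoint
SLOW_SECONDS = 2.0                        # hypothetical alert threshold


def probe(url=APP_URL, timeout=10):
    """Time a single request and flag failures and slow responses."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()  # include body transfer in the measured time
    except urllib.error.HTTPError as exc:
        print(f"ALERT: {url} returned HTTP {exc.code}")
        return
    except urllib.error.URLError as exc:
        print(f"ALERT: {url} unreachable: {exc.reason}")
        return
    except OSError as exc:  # e.g. a timeout while reading the body
        print(f"ALERT: {url} failed: {exc}")
        return
    elapsed = time.monotonic() - start
    if elapsed >= SLOW_SECONDS:
        print(f"ALERT: {url} answered in {elapsed:.2f}s (limit {SLOW_SECONDS}s)")
    else:
        print(f"OK: {url} answered in {elapsed:.2f}s")


if __name__ == "__main__":
    probe()
```

Run against each application endpoint on a schedule, even a probe this simple separates “the application is slow” from “the application is down,” which is the first fork in any root-cause investigation.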
Although these strategies may seem similar to those used for proprietary operating systems, it’s important to note that Linux is, and always has been, a very different kind of beast. Because it is open source, it is constantly evolving, thanks to the tireless work of the open source community and its developers. That pace of change can be extremely difficult to keep up with using manual monitoring procedures alone. It also means that Linux has, in some eyes, become far more complex than it was originally intended to be. Automated monitoring for Linux environments is therefore becoming a must.
Still, the fact that I’m even writing about Linux on a government community blog is actually pretty amazing. It shows how far agencies have come in just a few short years. Linux adoption within the public sector is a testament to the commitment of these agencies to use whatever it takes to create the most efficient and interoperable IT environment possible. Here’s to hoping they continue to monitor their adopted systems so they can keep steadily moving along the path that Linus Torvalds began building in 1991.
Joe Kim is part of the GovLoop Featured Blogger program, where we feature blog posts by government voices from all across the country (and world!).