This interview is an excerpt from our recent research guide, “Continuous Diagnostics and Mitigation (CDM) and Einstein: The Foundations of Federal Civilian Cyberdefense.” To download the complete guide, click here.
As cybersecurity has evolved into an arms race between hackers and protectors, continuous monitoring has quickly replaced periodic security posture assessments. But according to Mark Zalubas, Vice President of Engineering for Merlin International, a cybersecurity and IT solutions provider, many agencies are charging into these large efforts before fully understanding their unique characteristics and implications.
Continuous monitoring solutions generate a lot of data. It’s analogous to your doctor outfitting you with a device that monitors your heart rate six times a minute instead of just once a year when you see her for a checkup. Agencies must be prepared to handle this massive influx of data.
According to Zalubas, continuous monitoring solutions receive their deluge of data in ways that fall along a spectrum. At one end is the “fire hose scenario,” where massive amounts of data are rapidly and constantly delivered to your solution from a single location. Full network packet capture, or PCAP, solutions are an example of this end of the spectrum.
On the other end of the spectrum are solutions that receive small amounts of data from a large number of devices. Governance, risk, and compliance, or GRC, solutions are an example of this end of the spectrum. Either way, the end result is the same: “You are still dealing with a lot of data.”
Agencies embrace new continuous monitoring technologies and their ability to deliver better insight and thus better protection. “But with a lot more data also comes a lot more storage, processing, bandwidth, configuration, and thus a lot more expense,” said Zalubas. “These are complex solutions that are more challenging in configuration and performance than what most IT professionals have dealt with in the past.”
For example, a PCAP solution on a 10 Gbps network tap could generate over 100 TB of storage per day. Likewise, a GRC solution in an enterprise with 500,000 endpoints must configure every one of those disparate endpoints to forward the required data at the desired interval. The attention to detail required to properly construct, configure, and instrument these solutions for continuous monitoring is considerable.
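As a rough sanity check on that figure, the arithmetic is straightforward. The sketch below assumes the link runs fully saturated around the clock, which is the worst case rather than a typical day:

```python
# Back-of-the-envelope PCAP storage estimate for the 10 Gbps tap above.
# Assumes full link utilization 24 hours a day; actual capture volumes
# vary with traffic load and any compression applied.

link_gbps = 10               # tap speed from the example above
seconds_per_day = 86_400

bytes_per_day = link_gbps * 1e9 / 8 * seconds_per_day
print(f"{bytes_per_day / 1e12:.0f} TB/day")  # -> 108 TB/day at line rate
```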
“But that is only half of the operational battle,” Zalubas said. “The solutions aren’t just there to collect data. You need to be able to make use of it too. You can’t really buffer this cyber data and process it later, because collection never really stops.”
The ability to scan, alert, query, analyze, and take action on the data is where real value is derived. The system must be designed to query data as it is continuously coming in, or contention for physical resources will reduce the solution’s performance.
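One common design pattern for querying data while it streams in (a hedged sketch of the general idea, not any particular vendor’s implementation) is to partition records by time window, so queries scan only sealed partitions while the writer appends to the current one:

```python
import time
from collections import defaultdict

# Minimal sketch of time-partitioned ingestion: the writer appends to
# the current ("hot") window, and queries scan only sealed windows, so
# readers and writers never contend for the same partition. Production
# systems layer indexing, persistence, and retention policies on top.

WINDOW_SECONDS = 60

class PartitionedStore:
    def __init__(self):
        self.partitions = defaultdict(list)   # window start -> records

    def _window(self, ts):
        return int(ts // WINDOW_SECONDS) * WINDOW_SECONDS

    def ingest(self, record, ts=None):
        ts = time.time() if ts is None else ts
        self.partitions[self._window(ts)].append(record)

    def query(self, predicate, now=None):
        now = time.time() if now is None else now
        hot = self._window(now)
        return [r for start, records in self.partitions.items()
                if start < hot                 # skip the hot window
                for r in records if predicate(r)]

store = PartitionedStore()
store.ingest({"event": "port-scan"}, ts=0)     # lands in window [0, 60)
print(store.query(lambda r: True, now=120))    # window 0 is sealed by now
```

Because queries never touch the partition being written, ingestion and analysis stop competing for the same physical resources.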
The complexity of analysis rules and correlation algorithms must also be considered. Data from these systems flow at rates that humans can’t possibly keep up with, so automated rules must be configured to alert humans to only those events that are worthy of manual investigation.
But what do you look for? Which correlations of firewall, antivirus, and intrusion prevention events are concerning? Which Windows misconfigurations should trigger an immediate alert? These rules must be coded and kept up to date within a constantly evolving IT infrastructure.
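To make that concrete, here is a minimal sketch of one such correlation rule. The event fields, source names, and five-minute window are illustrative assumptions, not any agency’s actual ruleset:

```python
from datetime import datetime, timedelta

# Illustrative correlation rule: flag a host for manual investigation
# when it produces both a firewall block and an antivirus detection
# within a 5-minute window. The schema ("host", "source", "time") and
# the window size are assumptions for the sketch.

WINDOW = timedelta(minutes=5)

def correlate(events):
    """Yield hosts whose firewall and antivirus events cluster in time."""
    latest = {}  # host -> {source: last event time}
    for e in sorted(events, key=lambda ev: ev["time"]):
        seen = latest.setdefault(e["host"], {})
        seen[e["source"]] = e["time"]
        fw, av = seen.get("firewall"), seen.get("antivirus")
        if fw and av and abs(fw - av) <= WINDOW:
            yield e["host"]

events = [
    {"host": "ws-042", "source": "firewall",  "time": datetime(2017, 5, 1, 9, 0)},
    {"host": "ws-042", "source": "antivirus", "time": datetime(2017, 5, 1, 9, 3)},
]
print(list(correlate(events)))  # -> ['ws-042']
```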
Zalubas advised, “Agencies should assess their overall cybersecurity posture and attempt to balance cybersecurity initiatives so that no one area receives too much attention and no area too little.”
He has encountered agencies trying to “perfect” continuous monitoring solutions, unaware that they have reached a point of diminishing returns. For instance, it doesn’t make sense to store PCAPs for exceptionally long periods if their valuable shelf life is days to weeks, especially if that means shortchanging another cybersecurity solution that isn’t close to its point of diminishing returns.
Finally, Zalubas noted that many agencies are surprised by the challenge of continuous remediation. Continuous monitoring efforts are fruitless without the corresponding capabilities to remediate the flaws and vulnerabilities that the analysis uncovers. Continuous remediation can cost just as much as, if not more than, continuous monitoring in technical and operational terms.
For most government organizations, balancing the demands of other enterprise functions with their efforts to craft complex data architecture is overwhelming. That’s why agencies like the Department of Veterans Affairs and the Department of Health and Human Services have turned to Merlin International for a variety of continuous monitoring and cybersecurity solutions.
Merlin’s monitoring tools and expertise give government leaders deep insight into a continuous stream of near real-time snapshots of the risk to their security, data, networks, endpoints, and even cloud devices and applications, supporting better risk management decisions.
As these myriad tools and technologies are applied, Merlin’s engineers can also carefully integrate them into your current IT architecture and ensure the new data flows don’t overload your agency.