The Need for Analysis in Intelligence-Driven Defense

I recently ran across a very interesting paper by Dan Guido titled "A Case Study of Intelligence-Driven Defense".  Dan's research points out the fallacy of the current implementation of the Defender's Paradigm, and how attackers, however unknowingly, are exploiting this paradigm.  Throughout the paper, Dan makes his very valid points based on analysis of data, rather than on vague statements and speculation.

In the paper, Dan takes the stance, in part, that:

- Over the past year, MITRE's CVE identified and tracked more than 8,000 vulnerabilities
- Organizations expend considerable resources (man-power, money, etc.) attempting to address those vulnerabilities
- In 2010, only 13 vulnerabilities "were exploited to install SpyEye, Zeus, Gozi, Clampi and other info-stealing Trojans in massive exploitation campaigns".

As such, his point is that rather than focusing on compliance and attempting to address all 8,000+ vulnerabilities, a more effective use of resources would be to focus on those vulnerabilities that are actually being included in the crimeware packs used for mass malware distribution.  Based on the information Dan includes in the paper, this approach would also work for targeted attacks by many of the "advanced" adversaries.
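The triage Dan describes can be sketched in a few lines: cross-reference the tracked vulnerabilities against those known to be exploited in the wild, and spend resources on the intersection first.  This is only an illustrative sketch; the CVE identifiers and the exploited-in-the-wild list below are hypothetical placeholders, not data from Dan's paper.

```python
# Minimal sketch of "focus on what's exploited" vulnerability triage.
# Inputs are hypothetical; in practice the exploited-in-the-wild list
# would come from threat intelligence (e.g., crimeware pack analysis).

def prioritize(tracked_cves, exploited_in_wild):
    """Split tracked CVEs into a patch-first tier (actively exploited)
    and a patch-later tier (everything else)."""
    exploited = set(exploited_in_wild)
    patch_first = [cve for cve in tracked_cves if cve in exploited]
    patch_later = [cve for cve in tracked_cves if cve not in exploited]
    return patch_first, patch_later

tracked = ["CVE-2010-0001", "CVE-2010-0002", "CVE-2010-0003"]
in_wild = ["CVE-2010-0002"]  # fed by attacker-side intelligence

first, later = prioritize(tracked, in_wild)
print(first)  # the short list that deserves immediate resources
```

The point of the sketch is the ratio: a patch-first list of a dozen entries is actionable in a way that a list of 8,000 is not.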

Dan goes on to say:

"Analysis of attacker data, and a focus on vulnerabilities exploited rather than vulnerabilities discovered, might yield more effective defenses."

This sounds very Deming-esque, and I have to agree.  Admittedly, Dan's paper focuses on mass malware, but I think the approach he advocates can be applied to a number of other data breach and compromise issues.  For example, many of the folks working PCI forensic audits are very likely still seeing the same issues across the board...the same or similar malware placed on systems that are compromised using some of the same techniques.  Dan's intel-driven approach to defense can be applied just as effectively to these other aspects of DFIR as it can to mass malware infections.

Through the analysis he described in his paper, Dan was able to demonstrate how focusing on a few, low-cost (or even free), centrally-managed updates could have significantly reduced the attack surface of an organization and limited, inhibited, or even stopped mass malware infections.  The same approach could clearly be applied to many of the more targeted attacks, as well.

Where the current approach to infosec defense falls short is the lack of adequate detection, response, and analysis.

Detection - Look at the recent reports from TrustWave, Verizon, and Mandiant, and consider the percentage of their respective customers for whom "detection" consisted of third-party notification.  Simply determining that an organization is infected or compromised at all can be difficult.  Detection often requires that monitoring be put in place, which can seem daunting and expensive to those who don't already have it.  However, tools like Carbon Black (Cb) can provide a great deal of ROI, as Cb is not just a security monitoring tool.  When used as part of a security monitoring infrastructure, Cb retains copies of binaries (many of the binaries downloaded to infected systems are run and then deleted) as well as information about the processes themselves...information that persists after the process has exited and the binary has been deleted as part of the attack.  The next version of Cb will include Registry modifications and network initiations, which means the entire monitored infrastructure can be searched for other infected systems from a central location, without adding anything to the systems themselves.

Response - What happens most often when systems are found to be infected?  The predominant response methodology appears to be that systems suspected to be compromised or infected are taken offline, wiped, and the operating system and data are reinstalled.  As such, critical information (and intelligence) is lost.

Analysis - Analysis of mass malware is often impossible because a sample isn't collected.  Even when a sample is available, critical information about "in the wild" artifacts is often not documented; AV vendor write-ups based on submitted samples give only a partial view of the malware's capabilities, and very often include self-inflicted artifacts.  I've found malware on compromised systems that, without any accompanying intelligence, a malware analyst would have had a very difficult time analyzing in any useful way.

In addition, the systems themselves are most often not subject to analysis...memory is not collected, nor is an image acquired from the system.  In most cases, this simply isn't part of the response process; even when it is, these actions aren't taken because the necessary training and/or tools aren't available, or aren't on hand at the time.  Military members are familiar with the term "immediate actions"...actions that are drilled into members until they become "muscle memory" and can be performed effectively under stress.  A similar "no-excuses" approach needs to be applied as part of a top-down security posture driven by senior management.
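An "immediate action" for responders can be as simple as a pre-staged script that preserves a suspect binary and its metadata before a system is wiped, so that analysis remains possible later.  The sketch below is a minimal, hypothetical illustration (the paths and the `collect()` interface are mine, not from any tool named above); a real response plan would also capture memory, logs, and other volatile data.

```python
# Minimal sketch of a scripted "immediate action": preserve a suspect
# file plus basic metadata before the host is wiped and reimaged.
import hashlib
import json
import os
import shutil
import time

def collect(suspect_path, evidence_dir):
    """Copy a suspect file into an evidence directory and record its
    hash, size, and collection time for later analysis."""
    os.makedirs(evidence_dir, exist_ok=True)
    with open(suspect_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    dest = os.path.join(evidence_dir, os.path.basename(suspect_path))
    shutil.copy2(suspect_path, dest)  # copy2 preserves file timestamps
    record = {
        "source": suspect_path,
        "sha256": digest,
        "size": os.path.getsize(suspect_path),
        "collected_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    with open(os.path.join(evidence_dir, "collection.json"), "w") as f:
        json.dump(record, f, indent=2)
    return record
```

The value isn't in the code itself but in having it staged and drilled ahead of time, so preservation happens by reflex rather than being skipped under pressure.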

Another important aspect of analysis is sharing.  The cycle of developing usable, actionable intelligence depends on analysis being performed, and that cycle moves much faster if the analysis methodologies are shared (and open to review and improvement), and if the results of the analysis are shared as well.  Consider the current state of affairs...as Dan points out, the development of crimeware packs for mass malware infections is highly specialized and compartmentalized, and clearly has an economic stimulus.  That is, someone who's really good at one specific step in the chain (i.e., locating vulnerabilities and developing exploits, or pulling the exploits together into a nice, easy-to-use package) performs that function and passes the product along to the next step, usually for a fee.  There's economic motivation to provide a quality product and remain relevant.  Ultimately, the final product in the chain is deployed against infrastructures with little monitoring in place (as evidenced by the prevalence of third-party notification).  When the IT staff (who are, by definition, generalists) are notified of an issue, the default reaction is to take the offending systems offline, wipe them, and get them back into service.  As such, analysis is not being done and a great deal of valuable intelligence is lost.

Why is that?  Is it because keeping systems up and running, and getting systems back online as quickly as possible are the primary goals of infected organizations?  Perhaps this is the case.  But what if there were some way to perform the necessary analysis in a timely manner, either because your staff has the training to do so, or because you've collected the necessary information and have a partner or trusted adviser who can perform that analysis?  How valuable would it be to you if that partner could then provide not only the results of the analysis in a timely manner, but also provide additional insight or intelligence to help you defend your organization due to partnership with law enforcement and other intel-sharing organizations?

Consider this...massive botnets (Kelihos, Zeus, etc.) have been taken down when small, dedicated groups have worked together, with a focused approach, toward a common goal.  This approach has proven highly effective...and it doesn't have to be restricted to the folks involved in those groups.  It has simply been a matter of a few people saying, "we can do this", and then doing it.

The state of DFIR affairs is not going to get better unless the Defender's Paradigm is changed.  Information needs to be collected and analyzed, and from that, better defenses can be put in place in a cost-effective manner.  From the defender's perspective, the security landscape may seem chaotic, but that's because we're not operating from an intelligence-driven perspective...there are too many unknowns, and those unknowns exist because the right resources haven't been focused on the problem.

Here are the thoughts of Shawn Henry (former executive assistant director at the FBI) on sharing intel:
“We have to share threat data. My threat data will help enable you to be more secure. It will help you be predictive and will allow you to be proactive. Despite our best efforts we need to assume that we will get breached, hence we need to ensure our organisations have consequence management in its systems that allow us to minimise any damage.” 

Resources
Dan's Exploit Intel Project video