Modern Intrusion Detection (part 1)
Intrusion detection is at a crossroads in the modern security environment. Most computers rely on outdated anti-virus software to keep them malware-free, and most computers nonetheless have malware of one sort or another installed. To quote Sun Tzu, “Know thy self, know thy enemy. A thousand battles, a thousand victories.” (The Art of War, Sun Tzu). Applied to security practice, this quote roughly translates into whitelisting and blacklisting. These methodologies can be implemented within different systems, with scalable options, based on specific cost and security needs. In this post I will investigate intrusion detection and prevention systems (IDPS) in search of a better defensive solution than the hyped and ineffective precept of “detect, remove, prevent”, with the end goal of presenting best-practice and theoretical solutions with actionable outcomes, including a response plan.
The core methodologies of intrusion detection and prevention generally fall under three schools of thought: blacklisting, whitelisting, and heuristic approaches. All of these methods typically require an administrator who updates, analyzes, and takes action. Although each methodology has inherent weaknesses, stacking them provides defense in depth, safeguarding the system through tiered security environments. To determine where and how best to apply these methodologies, I will first investigate their respective strengths and weaknesses.
Blacklisting compares the signatures of known threats against the system in an attempt to identify incidents. This is effective for detecting previously known threats, but not for detecting unknown threats, called zero-day threats. It is the most common methodology adopted by anti-virus vendors, and the reason those vendors are more ‘reactive’ than ‘proactive’. Unfortunately, the number of zero-day threats in the wild is growing rapidly, a sad result of the ‘industrialization of hacking’ and of serious hackers writing toolkits for script kiddies. To convey the sheer number of unknown threats out there, McAfee offers this research: “Today when we quantify the malware world, the consensus is to use the number of unique files in our collections distinguished by their MD5 hash (or checksum). On June 30, we counted 43,337,677 unique binary files (starting from January 1st). Perhaps we’ll reach 54 million by the end of December.” (“Malware at Midyear: a Summary”, Francois Paget). Malware growth is clearly out of control, which is why simple blacklisting alone is no longer a solution. Many security researchers call this “the death of anti-virus”, but that is far from true: anti-virus is not dead, it is just ineffective as a stand-alone solution.
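To make the mechanic concrete, here is a minimal sketch in Python of hash-based blacklisting, in the spirit of the MD5-counting approach McAfee describes above. The KNOWN_BAD_MD5 set is a hypothetical stand-in for a vendor’s signature database; a real one holds tens of millions of entries.

```python
import hashlib

# Hypothetical stand-in for a vendor signature database of
# MD5 digests of known-malicious files.
KNOWN_BAD_MD5 = {
    "44d88612fea8a8f36de82e1278abb02f",  # widely published digest of the EICAR test file
}

def md5_of_file(path: str) -> str:
    """Hash a file in chunks so large binaries don't exhaust memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_blacklisted(path: str) -> bool:
    """A file is flagged only if its digest matches a known threat;
    anything not yet in the database (a zero-day) sails through."""
    return md5_of_file(path) in KNOWN_BAD_MD5
```

The weakness is visible in the last line: detection depends entirely on the signature already being in the set, which is exactly why blacklisting is reactive.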
Whitelisting defines the permissible software that is authorized to run on a host; these systems prevent any executable or application not specifically granted permission from executing. Marcus Ranum sums it up well: “In Marcus-land, where I come from, you decide what is a necessary application first, not after you have 40,000 employees who have gotten so used to it that they now think Twitter is a constitutionally protected right. Isn't a virus or malware just unauthorized execution that someone managed to sneak onto your machine? If we adopt a model whereby there are programs that are authorized (i.e., on a whitelist) and the operating system should terminate everything else, then malware and viruses are history, unless their authors can somehow fool the administrator into authorizing them to run.” (“Is AntiVirus Dead?”, Ranum). Whitelisting is certainly an enlightening concept, but it is not a panacea. Ranum himself presents one of whitelisting’s security holes at the end of his argument: the root of authorization can be corrupted or tricked. And there are more issues below the surface. Another vulnerability lies in the integrity of the whitelisted programs themselves, which is why signed cryptographic hashes are typically used to verify that an approved binary has not been altered. Whitelisting is also tough to scale: one of its major downfalls is admitting newly approved software, since the whitelist must be managed and updated continually.
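By contrast, a whitelist inverts the default. Here is a minimal sketch, assuming a flat manifest of approved paths and SHA-256 digests; the path and digest below are placeholders, and production systems typically rely on signed code and a protected policy store instead, but the default-deny integrity check is the same idea.

```python
import hashlib

# Hypothetical manifest mapping approved program paths to expected
# SHA-256 digests (the digest below is a placeholder, not a real binary's).
APPROVED = {
    "/usr/local/bin/backup": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of_file(path: str) -> str:
    """Hash a file in chunks, as in the blacklisting sketch."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def may_execute(path: str) -> bool:
    """Default deny: unknown programs never run, and a tampered copy
    of an approved program fails the integrity check as well."""
    expected = APPROVED.get(path)
    return expected is not None and sha256_of_file(path) == expected
```

Note that every legitimate software update changes the digest, which is the scaling pain described above: the manifest must be re-managed each time.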
The heuristic security model learns as it encounters new threats, creating a flexible solution that blends aspects of both whitelisting and blacklisting. This method is best at catching large deviations from expected behavior and obvious or ongoing intrusion attempts, while smaller deviations may slip past undetected. It starts from pre-defined roles for user groups or actions (behavior lists), then profiles subject actions (or traffic) and continually updates those profiles as events arrive. Each profile either fits within the larger definition of a pre-defined role, or falls outside it and raises a flag. Inevitably many flags are raised (remember, this is a scalable option, most effective in systems with many users), and many of them are false positives, so the system is usually tuned to alert only on clear intrusions. This also means that many smaller or one-time intrusions slip past a heuristic system unnoticed.
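The profiling loop can be sketched in a few lines. Assume a single numeric behavior metric per subject (say, file operations per hour); the 3-standard-deviation threshold and the 10-observation warm-up are arbitrary illustrative choices, not recommendations.

```python
from statistics import mean, stdev

class BehaviorProfile:
    """Toy per-subject profile: learn a baseline for one metric and
    flag observations that deviate wildly from it."""

    def __init__(self, threshold: float = 3.0, warmup: int = 10):
        self.history: list[float] = []
        self.threshold = threshold  # std deviations that count as 'wild'
        self.warmup = warmup        # observations needed before judging

    def observe(self, value: float) -> bool:
        """Return True if the observation looks anomalous, then fold it
        into the baseline -- the profile keeps learning either way."""
        anomalous = False
        if len(self.history) >= self.warmup:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.history.append(value)
        return anomalous
```

The trade-off from the paragraph above is baked into the threshold: raise it and false positives drop, but small or one-time deviations pass silently.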
No single answer is best. I believe Bruce Schneier put it well: “Security is never black and white. If someone asks, ‘for best security, should I do A or B?’ the answer almost invariably is both. But security is always a trade-off. Often it's impossible to do both A and B -- there's no time to do both, it's too expensive to do both, or whatever -- and you have to choose. In that case, you look at A and B and you make your best choice. But it's almost always more secure to do both.” (“Is AntiVirus Dead?”, Schneier). We should therefore apply multiple systems in a layered approach, known as defense in depth. Applied appropriately, this gives us the security of whitelisting, the flexibility of blacklisting, and the scalability of heuristics.
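Reusing the three sketches above, a layered decision might compose like this; the ordering (cheap, high-confidence checks first) is my own assumption, not a fixed rule.

```python
def evaluate(path: str, activity: float, profile: BehaviorProfile) -> str:
    """Defense in depth in miniature: each layer catches what the
    previous one misses, using the sketches defined earlier."""
    if is_blacklisted(path):
        return "block: matches a known-bad signature"
    if not may_execute(path):
        return "block: not on the whitelist"
    if profile.observe(activity):
        return "alert: behavior deviates from this subject's profile"
    return "allow"
```

A zero-day that slips past the blacklist still has to appear on the whitelist, and a whitelisted program being abused still has to behave normally to escape notice.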
I will follow this post up with one that details how these methodologies are integrated into individual systems, and then how those systems are combined in a tiered approach to provide defense in depth across an entire work environment for professional intrusion detection.