Misconceptions Regarding "Offensive Security Tools"
This post is a response to Andrew Thompson (@QW5kcmV3)'s post "Misconceptions: Unrestricted Release Of Offensive Security Tools", in which he argues that the open sourcing of 'offensive security tools' (OSTs) on the Internet has "no viable justification". The post offers little to no real solutions and is rife with misunderstandings. This debate has existed for a long time, from the Crypto Wars, to the vulnerability disclosure conversations, to the Wassenaar Arrangement, and it's all essentially different twists on the same conversation.
Should there be export controls or limited release on a piece of information due to the damage it may cause?
It's an intriguing debate, but not a new one, and it has produced a lot of important lessons that Andrew's argument fails to take into account. Andrew's main point is essentially that open sourcing 'offensive security tools' increases the capabilities of bad guys and should not be allowed to happen, whether through voluntary compliance or enforced regulation. This argument relies primarily on emotional scare tactics and the omission of key facts to try to rally readers to its cause. We are going to cover some of the major misconceptions in his article, as well as show why 'offensive security tools' serve a vital purpose by being open source and freely available.
Let's start with some misconceptions around increasing capabilities. Right off the bat, Andrew provides a simple formula to define a threat:
OPPORTUNITY + CAPABILITY + MOTIVE = THREAT
This equation is used regularly in criminology and I think it works well, depending on the situation. However, the way Andrew defines each variable of that equation is off the mark. He equates the discovery of a new vulnerability to an "OPPORTUNITY", and says that once it's automated or weaponized into a tool it becomes a "CAPABILITY". This is incorrect: the "OPPORTUNITY" in a criminal or threat scenario is the actual insecure element or victim in the scenario or environment that the threat actor plans to act on. That is to say, if I want to rob ATMs, the knowledge and tools are the "CAPABILITY", finding an insecure ATM is the "OPPORTUNITY", and my intent to commit crime or steal money is the "MOTIVE". So releasing a vulnerability is not an "OPPORTUNITY"; the understanding of a vulnerability is itself a "CAPABILITY", albeit one not yet matured and automated into a working exploit. Further, a writeup of a technique, with or without an accompanying tool, often requires a broader capability set (skill) just to understand it. The initial expertise required to pull off these hacks decreases over time from the technique's release, as more understanding, writeups, and tools come out regarding the technique. One could argue that incident response write-ups also proliferate capabilities, by showing other actors what works and what to avoid when a given consulting firm responds. Just as 'offensive security tools' would work at a number of locations, the techniques covered in IR write-ups often present new techniques or tools that would work at an enormous number of locations that have only the bare minimum of digital security. All this is to say: even with these capabilities, you still need to find the "OPPORTUNITY". For example, finding servers on the open Internet using scanning or Shodan would present multiple "OPPORTUNITIES". Thankfully, defenders can help minimize "OPPORTUNITY" through network posture or defense-in-depth strategies. This allows proactive defenders to shape this equation and reduce the likelihood that threats will be successful in their environment.
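To make the split between "CAPABILITY" and "OPPORTUNITY" concrete, here is a minimal sketch of the kind of exposure check that both attackers and defenders run. The target addresses and port list are placeholders, and any real scan should only be pointed at systems you are authorized to assess:

```python
import socket

# Placeholder targets (RFC 5737 documentation addresses) and a few
# commonly exposed service ports -- adjust for your own environment.
TARGETS = ["192.0.2.10", "192.0.2.11"]
PORTS = [21, 22, 445, 3389]

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in TARGETS:
    exposed = [p for p in PORTS if is_open(host, p)]
    if exposed:
        # Each reachable service is a candidate "OPPORTUNITY" to assess.
        print(f"{host}: exposed ports {exposed}")
```

The scanner itself is the capability; each reachable service it reports is a candidate opportunity, which is exactly the part defenders can take away through posture and defense in depth.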
Why is this important? Because technology itself is a capability enabler; it has been making operations in a number of verticals easier and more accessible, year after year. How those increasing capabilities are used is up to the person implementing them. For example, wireless cameras have become more available over recent years, and can be used either to set up CCTV or to surreptitiously spy on people. It's up to those who implement the technology to decide what their "MOTIVE" is. I'm glad Andrew acknowledges upfront that those who commit crimes are those who are truly culpable in this equation, not those producing the knowledge, tools, or capabilities. However, my point is that the open sourcing of these tech capabilities is done for good reason. These tools are often created as proofs of concept to demonstrate a post-exploitation technique. They are created as learning tools for newcomers and professionals alike, as well as verification tools so that technology-minded people can test their own computing environments for security issues. Vulnerability management is a critical function for an informed organization to defend itself from intrusion. Open sourcing tools that identify vulnerable infrastructure is critical for that informed world to assess its weaknesses, lest it continue to operate blind. Any effort to gate access to this information only handicaps the weakest and most vulnerable, and is not in line with a free, open, and transparent society. This ability to use these tools for vulnerability identification is certainly a "viable justification" for open source 'offensive security tools'. The public doesn't need a barrier to entry for vulnerability identification. The biggest misconception presented in the article is that if one were to fix all of the issues 'offensive security tools' identify, then "you may thwart all usefulness of a system to a threat actor and your users." This is a black-or-white fallacy, a cop out that refuses to acknowledge that these tools, in an open source and freely available form, serve a real purpose in identifying and verifying security misconfigurations and/or detection gaps. They are freely available to help consumers and corporations alike identify and fix these issues, and crucially to facilitate a community where lessons learned are contributed back in a meaningful way. By ignoring the value of open source tools, misidentifying the core issues of vulnerability, and dismissing the power of the free flow of information and knowledge, Andrew has chosen to place blame on open source software as opposed to insecure systems and the people responsible for their operational integrity. The article seems very out of touch with how 'offensive security tools' play a critical part in the information security lifecycle.
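As a small illustration of that verification use case, here is a minimal sketch of the kind of self-check anyone can run against their own infrastructure before an attacker does. The URL is a placeholder, and the header list is just an example of common hardening checks:

```python
import urllib.request

# Placeholder URL -- point this at infrastructure you own.
URL = "https://intranet.example.com/"

# Example hardening headers to verify; extend to match your own policy.
EXPECTED = ["Strict-Transport-Security", "X-Content-Type-Options"]

with urllib.request.urlopen(URL, timeout=5) as resp:
    missing = [h for h in EXPECTED if h not in resp.headers]
    banner = resp.headers.get("Server", "")

if missing:
    print(f"Missing security headers: {missing}")
if banner:
    # Version banners make the "OPPORTUNITY" hunt easier for attackers.
    print(f"Server discloses a version banner: {banner!r}")
```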
Andrew repeatedly promotes the notion that if only open source 'offensive security tools' were not freely available, the world would be a better place. One of the more popular OSTs today, Cobalt Strike, is used very actively by both red teams and real threat actors. Yet the tool actually has a highly restricted release model, requiring verification of buyers and refusing to sell to overseas parties. Despite that, the majority of Cobalt Strike servers on the open Internet are running clearly cracked versions with cracked license keys, showing that even these restricted licensing models are routinely circumvented. Another widely used OST that has had a massive impact on the security world is Mimikatz. Mimikatz was actually released as an open source tool after it was stolen from Benjamin Delpy by clandestine services at a security conference in Russia! He went on to make the world aware of the technique with stunning success, to the point that many defensive tools can now strongly counter Mimikatz. Another great example is the Shadow Brokers hacking the NSA and releasing tools like EternalBlue and the DanderSpritz framework. What happened when those tools were released? Multiple vulnerabilities were fixed (SMBv1), the tools were fingerprinted, and the world became a more secure place through the understanding of their techniques. While some of these exploit techniques were reused in large malware attacks, the public release and patches let many security-aware organizations get in front of and prevent attacks like WannaCry. As for the argument that restricted release is even feasible at an industry-wide level, the Shadow Brokers release paints a clear picture: one of the most sophisticated, well-funded, and secure hacking groups (NSA's TAO) had its toolbox compromised and dumped on the Internet. Just as we've seen in the previous conversations, restricted-release groups create a false sense of "security" all while restricting access to only those deemed "trusted". Most red teams I know try to keep their tools private to increase the tools' expected lifespan. Red team economics do not benefit from sharing tools publicly; release is usually a last step, driven by external factors. Most red teams keep their techniques private and only release them publicly when they are trying to spread education and awareness about the existence of a technique or piece of research. The CCDC Red Team is a great example of a team that has a treasure trove of private tools but releases snippets over time to aid blue teams. Why the secrecy? Because this is an all-volunteer red team: we don't have the budgets to pay for 0-days, licensed tools, or paid developers to churn out new tools. Our tools are developed on nights and weekends and are intended to emulate the APTs the students will face in the real world. My point here is that many great red teams already practice restricted release at smaller scales, and when they do open source something there is usually a deliberate reason behind the release.
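The fingerprinting point is worth dwelling on, because public tooling cuts both ways. For example, Fox-IT publicly documented that Cobalt Strike team servers (prior to version 3.13) relied on NanoHTTPD, which returned an extraneous space after the HTTP status code, and defenders used that quirk to enumerate servers in the wild. A minimal sketch of that style of check, against a placeholder address, might look like this:

```python
import socket

# Placeholder target -- probe only infrastructure you are authorized to test.
HOST, PORT = "203.0.113.5", 80

request = b"GET / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\n\r\n"

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    sock.sendall(request)
    # Grab just the status line from the response.
    status_line = sock.recv(4096).split(b"\r\n", 1)[0]

# Pre-3.13 Cobalt Strike's NanoHTTPD emitted "HTTP/1.1 404 Not Found "
# with a trailing space -- the anomaly Fox-IT keyed on.
if status_line.endswith(b" "):
    print(f"Trailing space in status line, possible team server: {status_line!r}")
else:
    print(f"No NanoHTTPD anomaly: {status_line!r}")
```

None of that defensive enumeration would have been possible if knowledge of the tool and its quirks had stayed locked behind a restricted-release wall.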
Restricting information and knowledge often hurts consumers in the long run, as we have already seen in previous conversations on these topics (exploit disclosure). A large group of legitimate professionals in the information security industry use this open research to identify vulnerabilities so that they can be remediated through various solutions. The elimination or restriction of these capabilities would dramatically hurt an industry that relies on this constant research and sharing to stay cutting edge and productive at the same time. It's telling that this conversation is so old, because the international community generally agrees on this too: see the Wassenaar Arrangement, which has begun adding exemptions for exactly such 'offensive security tools'. Ultimately, the current trend of computing is common utilities and programming concepts becoming more accessible while higher-end computing becomes more complex. To that end, these common OSTs are easily detected and defended against. This is especially true for commercial products in the EDR and detection space, as they should be studying and writing signatures for the most popular versions of these tools. Yes, you can set up a CNA (computer network attack) campaign more easily today with open source 'offensive security tools', but without a modest investment in customizing those tools you have a much higher likelihood of getting caught and tipping off defenders; it's all a trade-off the attackers have to make. It's actually a good thing that the barrier to entry for identifying weakness is at an all-time low, thanks to freely available knowledge and tools, but vulnerability identification has always been a critical effort organizations need to invest in before they are targeted by malicious actors. You will never be able to stop all of the people learning and sharing their new security insights on the Internet, the same way there is bulletproof and anonymous hosting today. It's much better to lean into the problem: take advantage of the free tools to check your own vulnerabilities, and exploit the tools' weaknesses for easy detection wins. Not being able to rely on analysis of custom, clandestine tooling when attackers use OSTs just means defenders need to leverage other forms of attribution. That's an unfortunate reality of some situations, but it's also a major trade-off on the offensive security side: using open source tools can speed up development time but also makes actors more susceptible to detection or exploitation. Essentially, you need to be flexible in how you respond to threats and attribute each case. You can't get mad if a case doesn't fit the model you want, or if attackers are using something you don't think is fair. Ultimately, there are a lot of misconceptions here, and I think Andrew is taking the wrong lessons from the offensive security community. I hope this post helped clear up some of the fear mongering about open source tool development in the security community. If you're part of the offensive security community, please don't feel shamed into not releasing your tools. I urge you to do more research and release more tools. The open research you do has value in the fact that it's transparent, so anyone can learn from those experiences. If I still haven't convinced you, listen to Haroon shed some light on the issue: