Updates, Links, etc.
RegRipper Plugin Updates
I updated a plugin recently and provided a new one, and thought I'd share some information about those updates here...
The updated plugin is environment.pl, originally written in 2011; the update that I added to the code was to specifically look for and alert on the value described in this blog post. So, four years later, I added a small bit of code to the plugin to look for something specific in the data.
I added the malware.pl plugin, which can be run against any hive; it has specific sections in its code that describe what's being looked for, in which hive, along with references as to the sources from which the keys, values or data in question were derived - why was I looking for them in the first place? Essentially, these are all artifacts I find myself looking for time and again, and I figured I'd just keep them together in one plugin. If you look at the plugin contents, you'll see that I copied the code from secrets.pl and included it.
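For anyone curious what a single check of this sort looks like under the hood, here's a minimal sketch using Parse::Win32Registry directly, the module that RegRipper is built on. This is NOT the malware.pl code itself...the actual plugin wraps checks like this in RegRipper's own reporting functions...and the key path shown is just an illustrative example for an NTUSER.DAT hive.

#! c:\perl\bin\perl.exe
# Minimal sketch of a single hive check (NOT the actual malware.pl code);
# the key path below is just an illustrative example for an NTUSER.DAT hive.
use strict;
use Parse::Win32Registry;

my $hive = shift || die "Usage: check.pl <hive file>\n";
my $reg  = Parse::Win32Registry->new($hive) || die "Cannot open $hive\n";
my $root = $reg->get_root_key();

my $key_path = "Software\\Microsoft\\Windows\\CurrentVersion\\Run";  # example only
if (my $key = $root->get_subkey($key_path)) {
  print "Key found : ".$key->get_path()."\n";
  print "LastWrite : ".$key->get_timestamp_as_string()."\n";
  foreach my $val ($key->get_list_of_values()) {
    my $name = $val->get_name() || "(Default)";
    print "  ".$name." -> ".$val->get_data_as_string()."\n";
  }
}
else {
  print $key_path." not found.\n";
}

The real value of keeping checks like this in one plugin is that the references live right next to the code, so six months from now you (or another analyst) can see why the key or value was worth looking for in the first place.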
There are a couple of other plugins I thought I'd mention, in case folks hadn't considered using them....
The sizes.pl plugin was written to address malware maintaining configuration information in a Registry value, as described in this Symantec post. You can run this plugin against any hive.
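The idea behind sizes.pl is simple enough that you could prototype it yourself; the sketch below (again, NOT the actual plugin code) walks every key in a hive and flags values whose data exceeds an arbitrary threshold, which is one way a config blob or encoded binary stashed in a value tends to stand out.

#! c:\perl\bin\perl.exe
# Sketch of the idea behind sizes.pl (NOT the actual plugin code): walk every
# key in a hive and flag values whose data exceeds a size threshold; large
# blobs of data in a value can indicate a stashed config or encoded binary.
use strict;
use Parse::Win32Registry;

my $hive      = shift || die "Usage: bigvalues.pl <hive file> [min bytes]\n";
my $threshold = shift || 5000;   # arbitrary example threshold, in bytes

my $reg  = Parse::Win32Registry->new($hive) || die "Cannot open $hive\n";
my @keys = ($reg->get_root_key());

while (my $key = shift(@keys)) {
  foreach my $val ($key->get_list_of_values()) {
    my $data = $val->get_data();
    next unless (defined $data);
    if (length($data) > $threshold) {
      my $name = $val->get_name() || "(Default)";
      print $key->get_path()." : ".$name." (".length($data)." bytes)\n";
    }
  }
  push(@keys, $key->get_list_of_subkeys());
}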
The rlo.pl plugin is an interesting one; its use was illustrated in this Secureworks blog post. As you can see in the figure to the left, there are two Registry keys that appear to have the same name.
In testing for this particular issue, I had specifically crafted two Registry key names, using the method outlined in the Secureworks blog post. This allowed me to create some useful data that mimicked what we'd seen, and provided an opportunity for more comprehensive testing.
As you can see from the output of the plugin listed below, I had also crafted a Registry value name using the same method, to see if the plugin would detect that, as well.
C:\Perl\rr>rip -r d:\cases\local\ntuser.dat -p rlo
Launching rlo v.20130904
rlo v.20130904
(All) Parse hive, check key/value names for RLO character
RLO control char detected in key name: \Software\gpu.etadp [gpupdate]
RLO control char detected in key name: \Software\.etadpupg [gpupdate]
RLO control char detected in value name: \Software\.etadpupg :.etadpupg [gpupdate]
Now, when running the rlo.pl plugin, analysts need to keep in mind that it's looking for something very specific; in this case, indications of the RLO Unicode control character. What's great about plugins like this is that you can include them in your process, run them every time you're conducting analysis, and they'll alert you when there's an issue.
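If you want to see how simple the core of a check like this can be, here's a rough sketch (not the plugin code itself) that walks a hive and flags any key or value name containing the RLO character, U+202E. It assumes Parse::Win32Registry hands back key and value names as decoded Unicode strings.

#! c:\perl\bin\perl.exe
# Sketch of the core rlo.pl check (NOT the plugin code itself): walk a hive
# and flag any key or value name containing the RLO control character, U+202E.
# Assumes Parse::Win32Registry returns names as decoded Unicode strings.
use strict;
use Parse::Win32Registry;

my $hive = shift || die "Usage: rlo_check.pl <hive file>\n";
my $reg  = Parse::Win32Registry->new($hive) || die "Cannot open $hive\n";
my $rlo  = chr(0x202E);   # RIGHT-TO-LEFT OVERRIDE
my @keys = ($reg->get_root_key());

while (my $key = shift(@keys)) {
  print "RLO char detected in key name: ".$key->get_path()."\n"
    if (index($key->get_name(), $rlo) > -1);
  foreach my $val ($key->get_list_of_values()) {
    print "RLO char detected in value name: ".$key->get_path()." :".$val->get_name()."\n"
      if (index($val->get_name(), $rlo) > -1);
  }
  push(@keys, $key->get_list_of_subkeys());
}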
Just as a PSA, I have provided these plugins but I haven't updated any of the profiles...I leave that up to the users. So, if you're downloading the plugins folder and refreshing it in place, do not expect to see the new plugins run as part of the existing profiles; you'll need to add them yourself, or run them individually via rip.
Anti-Forensic Malware
I ran across this InfoSecurity Magazine article recently, and while the title caught my attention, I was more than a bit surprised at the lack of substance.
There are a couple of statements in the blog post that I wanted to address, and share my thoughts on...
Increasingly, bad actors are using techniques that leave little trace on physical disks. And unfortunately, the white hats aren’t keeping up: There’s a shortage of digital forensics practitioners able to investigate these types of offensives.
As to the first sentence, sometimes, yes. Other times, not so much.
The second statement regarding "white hats" is somewhat ambiguous, don't you think? Who are the "white hats"? From my perspective, if "white hats" are the folks investigating these breaches, it's not so much that we aren't keeping up, as it is that the breaches themselves aren't being detected in a timely manner, due to a lack of instrumentation. By the time the "white hats" get the call to investigate the breach, a great deal of the potential evidence has been obviated.
Finally, I don't know that I agree with the final statement, regarding the shortage of practitioners. Sometimes, there's nothing to investigate. As I described in a recent blog post, when putting together some recent presentations, I looked at the statistics in annual security trends reports. One of the statistics I found interesting was dwell time, or the median time to detection. The point I tried to make in the presentations was that when consultants go on-site to investigate a breach, they're able to see the indicators that allow them to identify these numbers. For example, in the M-Trends 2015 report, there was an infrastructure that had been compromised 8 years before the compromise was detected.
I would suggest that it's not so much a shortage of practitioners able to investigate these breaches as it is a lack of management oversight that prevents the infrastructure from being instrumented in a manner that provides for timely detection of breaches. By the time some breaches are detected (many through external, third-party notification), the systems in question have likely been rebooted multiple times, potentially obviating memory analysis altogether.
If a crime is committed and the perpetrator had to walk across a muddy field to commit that crime (leaving footprints), and that field is dug up and paved over with a parking lot before the crime is reported, you cannot then say that there aren't enough trained responders able to investigate the crime.
...seen a rise in file-less malware, which exists only in volatile memory and avoids installation on a target’s file system.
"File-less malware"? Like Poweliks? Here's a TrendMicro blog post regarding PhaseBot, which references a TrendMicro article on Poweliks. Sure, there may not be a file on disk, but there's something pulled from the Registry, isn't there?
Malware comes from somewhere...it doesn't magically appear out of nowhere. If you take a system off of the main network, reboot it, and still find indications of malware persisting, then it's somewhere on the system. Just because the malware runs in memory and there are no obvious indications of it within the file system doesn't mean that it can't be found.
Hunting
At the recent HTCIA 2015 Conference, I attended Ryan's presentation on "Hunting in the Dark", and I found it fascinating that, at a sufficient level of abstraction, those of us who are doing "hunting" are doing very similar things; we may simply use different terms to describe it (what Ryan refers to as "harvesting and stacking", the folks I work with call "using strategic rules").
Ryan's presentation was mostly directed to folks who work within one environment, and was intended to address the question of, "...how do I get started?" Ryan had some very good advice for folks in that position...start small, take a small bite, and use it to get familiar with your infrastructure to learn what is "normal", and what might not be normal.
Along those lines, a friend of mine recently asked a question regarding detecting web shells in an environment using only web server logs. Apparently in response to that question, ThreatStream posted an article explaining just how to do this. This is an example of how someone can start hunting within their own environment, with limited resources. If you're hunting for web shells, there are a number of other things I'd recommend looking at, but the original question was how to do so using only the web server logs.
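As an example of how small a starting point can be, the sketch below stacks web server log entries by requested URI and flags pages hit by only one or two distinct client IPs...a page that only the attacker knows about tends to surface quickly this way. This is my own illustration, not the code from the ThreatStream article, and it assumes Apache common/combined log format; you'd adjust the parsing for IIS W3C logs.

#! c:\perl\bin\perl.exe
# Sketch of one log-stacking approach to web shell hunting (my illustration,
# NOT the code from the article): count distinct client IPs per requested URI;
# a page requested repeatedly by only one or two IPs is worth a closer look.
# Assumes Apache common/combined log format; adjust the parsing for IIS logs.
use strict;

my %uri_ips;

while (my $line = <>) {
  my ($ip)  = ($line =~ /^(\S+)/);               # client IP is the first field
  my ($uri) = ($line =~ /"\S+\s+(\S+)\s+HTTP/);  # "METHOD URI HTTP/x.x"
  next unless ($ip && $uri);
  $uri =~ s/\?.*$//;                             # drop any query string
  $uri_ips{$uri}{$ip}++;
}

foreach my $uri (sort { scalar(keys %{$uri_ips{$a}}) <=> scalar(keys %{$uri_ips{$b}}) } keys %uri_ips) {
  my $count = scalar(keys %{$uri_ips{$uri}});
  last if ($count > 2);                          # show only URIs hit by 1-2 distinct IPs
  print $uri." : ".$count." distinct client IP(s)\n";
}

Point it at your access logs (for example, perl webshell_stack.pl access.log) and start with the shortest lists; most of what you find will be legitimate, but that's how you learn what "normal" looks like in your environment.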
The folks at ThreatStream also posted this article regarding "evasive maneuvers" used by a threat actor group. If you read the article, you will quickly see that it is more about the obfuscation techniques used in the malware and its communication methods, which can significantly affect network monitoring. Reading the article, many folks will likely take a look at their own massive lists of C2 domain names and IP addresses, and append those listed in the article to that list. So, like most of what's put forth as 'threat intelligence', articles such as this are really more a means for analysts to say, "hey, look how smart I am, because I figured this out...". I'm sure that the discussion of assembly language code is interesting, and useful to other malware reverse engineers, but how does a CISO or IT staff utilize the contents of the third figure to protect and defend their infrastructure?
However, for anyone who's familiar with the Pyramid of Pain, you'll understand the limited efficacy of an ever-bigger list of indicators that might...and do...change quickly. Instead, if you're interested in hunting, I'd recommend looking for items such as the persistence mechanism listed in the article, as well as monitoring for the creation of new values (if you can).
Like I said, I agree with Ryan's approach to hunting, if you're new to it...start small, and learn what that set of artifacts looks like in your environment. I did the same thing years ago, before the terms "APT" and "hunting" were in vogue...back then, I filed it under "doing my job". Essentially, I wrote a small bit of code that would give me a list of all systems visible to the domain controllers, then reach out to each one and pull the values listed beneath the Run keys, for both the system and the logged-on user. The first time I ran this, I had a pretty big list, and as I started seeing what was normal and verifying entries, they got whitelisted. In a relatively short time, I could run this search during a meeting or while I was at lunch, and come back to about half a page of entries that had to be run down.
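For illustration, here's a rough reconstruction of that kind of sweep...not the original code...that reads hostnames from a file (rather than enumerating them from the domain controllers) and pulls the system Run key values from each one via the Remote Registry service, using Win32::TieRegistry. It assumes you have admin rights on the targets and that the Remote Registry service is running; covering the logged-on users' Run keys would mean connecting to the Users hive as well.

#! c:\perl\bin\perl.exe
# Rough reconstruction of that kind of sweep (NOT the original code): read a
# list of hostnames from a file (rather than enumerating them from the domain
# controllers) and pull each system's Run key values via the Remote Registry
# service. Assumes admin rights on the targets and that the service is running.
use strict;
use Win32::TieRegistry ( Delimiter => "/" );

my $hostfile = shift || die "Usage: runsweep.pl <file of hostnames>\n";
open(my $fh, "<", $hostfile) || die "Cannot open $hostfile: $!\n";

while (my $host = <$fh>) {
  chomp($host);
  next unless ($host);
  my $run = $Registry->Connect($host,
    "LMachine/Software/Microsoft/Windows/CurrentVersion/Run");
  if (!$run) {
    print $host." : could not connect to the remote Registry\n";
    next;
  }
  foreach my $entry (keys %$run) {
    next unless ($entry =~ m{^/});   # a leading delimiter marks a value name
    print $host."  ".substr($entry, 1)." -> ".$run->{$entry}."\n";
  }
}
close($fh);

The output is just host, value name, and value data; pipe it to a file, diff it against yesterday's run, and the new or changed entries become your short list to run down.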
Tools
I ran across this post over at PostModernSecurity recently, and I think that it really illustrates some things about the #DFIR community beyond just the fact that these tools are available for use.
The author starts his post with:
...I covet and hoard security tools. But I’m also frugal and impatient,..
Having written some open source tools, I generally don't appreciate it when someone "covets and hoards" what I've written, largely because in releasing the tools, I'd like to get some feedback as to whether and how the tool fills a need. I know that the tool meets my needs...after all, that's why I wrote it. But in releasing it and sharing it with others, I've very often been disappointed when someone says that they've downloaded the tool, and the conversation ends right there...suddenly, and in a very awkward manner.
Then there's the "frugal and impatient" part...I think that's probably true for a lot of us, isn't it? At least, sometimes, that is. However, there are a few caveats one needs to keep in mind when using tools like those the author has listed. For instance, what is the veracity of the tools? How accurate are they?
More importantly, I saw the links to the free "malware analysis" sites...some referenced performing "behavioral analysis". Okay, great...but more important than the information provided by these tools is how that information is interpreted by the analyst. If the analyst is focused on free and easy, the question then becomes, how much effort have they put into understanding the issue, and are they able to correctly interpret the data returned by the tools?
For example, look at how often the ShimCache or AppCompatCache data from the Windows Registry is misinterpreted by analysts. That misinterpretation then becomes the basis for findings that then become statements in reports to clients.
There are other examples, but the point is that if the analyst hasn't engaged in the academic rigor to understand something and they're just using a bunch of free tools, the question then becomes, is the analyst correctly interpreting the data that they're being provided by those tools?
Don't get me wrong...I think that the list of tools is a good one, and I can see myself using some of them at some point in the future. But when I do so, I'll very likely be looking for certain things, and verifying the data that I get back from the tools.