Thoughts and Comments

Exploit Artifacts
There was an interesting blog post on the MMPC site recently, regarding an increase in attacks against the Help and Support Center vulnerability. While the post describes an increase in attacks, as measured through signature detections, one thing I'm not seeing is any discussion of the artifacts of the exploit. I'm not talking about a secondary or tertiary download...those can change, and I don't want people to think, "hey, if you're infected with this malware, you were hit with the hcp:// exploit...". Look at it from a compartmentalized perspective...exploit A succeeds and leads to malware B being downloaded onto the system. If malware B can be anything...how do we go about determining the Initial Infection Vector? After all, isn't that what customers ask us? Okay, raise your hand if your typical answer is something like, "...we were unable to determine that...".
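To make this a bit more concrete, here's a minimal sketch of the sort of check I have in mind...sweeping files exported from an image (browser history, strings pulled from the pagefile, etc.) for hcp:// URIs. The paths and output format here are my own assumptions, not anything from the MMPC post:

```python
# hcp_scan.py - minimal sketch: sweep exported/carved files for hcp://
# URIs, which may indicate use of the Help and Support Center exploit.
import os
import re
import sys

HCP_RE = re.compile(br"hcp://[\x20-\x7e]{1,200}")

def scan_file(path):
    """Read a file and report any hcp:// strings found within it."""
    try:
        with open(path, "rb") as f:
            data = f.read()
    except (IOError, OSError):
        return
    for m in HCP_RE.finditer(data):
        print("%s : offset %d : %s" % (path, m.start(),
                                       m.group().decode("ascii", "replace")))

if __name__ == "__main__":
    # point this at a directory of files exported from the image
    for dirpath, _, filenames in os.walk(sys.argv[1]):
        for name in filenames:
            scan_file(os.path.join(dirpath, name))
```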

I spoke briefly to Troy Larson at the recent SANS Forensic Summit about this, and he expressed some interest in developing some kind of...something...to address this issue. I tend to think that a number of folks would benefit from this sort of resource, including law enforcement.

Awards
The Forensic 4Cast Awards were...uh...awarded during the recent SANS Forensic Summit, and it appears that WFA 2/e received the award for "Best Forensics Book". Thanks to everyone who voted for the book...I greatly appreciate it!

Summit Follow-up
Chris and Richard posted their thoughts on the recent SANS Forensic Summit. In his blog post, Richard said:

I heard Harlan Carvey say something like "we need to provide fewer Lego pieces and more buildings." Correct me if I misheard Harlan. I think his point was this: there is a tendency for speakers, especially technical thought and practice leaders like Harlan, to present material and expect the audience to take the next few logical steps to apply the lessons in practice. It's like "I found this in the registry! Q.E.D." I think as more people become involved in forensics and IR, we forever depart the realm of experts and enter more of a mass-market environment where more hand-holding is required?

Yes, Richard, you heard right...but please indulge me while I explain my thinking here...

In the space of a little more than a month, I attended four events...the TSK/Open Source conference, the FIRST conference, a seminar that I presented, and the SANS Summit. In each of these cases, while someone was speaking (myself included), I noticed a lot of blank stares. In fact, at the Summit, Carole Newell stated at one point during Troy Larson's presentation that she had no idea what he was talking about. I think that we all need to get a bit better at sharing information and making it easier for the right folks to get (understand) what's going on.

First, I don't think for an instant that, from a business perspective, the field of incident responders and analysts is saturated. There will always be more victims (i.e., folks needing help) than there are folks or organizations qualified and able to really assist them.

Second, one of the biggest issues I've seen during my time as a responder is that regardless of how many "experts" speak at conferences or directly to organizations, those organizations that do get hit/compromised are woefully unprepared. Hey, in addition to having smoke alarms, I keep fire extinguishers in specific areas of my house because I see the need for immediate, emergency response. I'm not going to simply wait for the fire department to arrive, because even with their rapid response, I can do something in the meantime to contain the damage and losses. My point is that if we can get the folks who are on-site on board, maybe we'll have fewer intrusions and data breaches that (a) come to light via third-party notification weeks after the fact, and (b) have no preserved data available when we (responders) show up.

If we, as "experts", were to do a better job of bringing this stuff home and making it understandable, achievable and useful to others, maybe we'd see the needle move just a little bit in the good guy's favor when it comes to investigating intrusions and targeted malware. I think we'd get better responsiveness from the folks already on-site (the real _first_ responders) and ultimately be able to do a better job of addressing incidents and breaches overall.

Tools
As part of a recent forensic challenge, Wesley McGrew created pcapline.py to help answer the questions of the challenge. Rather than focusing on the tool itself, what I found interesting was that Wesley was faced with a problem/challenge, chose to create something to help him solve it...and then provided it to others. This is a great example of the "I have a problem and created something to solve it...and others might see the same problem as well" approach to analysis.

Speaking of tools, Jesse is looking to update md5deep, and has posted some comments about the new design of the tool. More than anything else, I got a lot out of just reading the post and thinking about what he was saying. Admittedly, I haven't had to do much hashing in a while, but when I was doing PCI forensic assessments, this was a requirement. We'd get lists of file names, and lists of hashes...many times, separate lists...and I remember looking at those lists and thinking to myself that there had to be a better way to do this stuff. "Here, search for this hash..."...but nothing else. No file name, path or size, and no context whatsoever. Why does this matter? Well, there were also time constraints on how long you had before you had to get your report in, so anything that would intelligently speed up the "analysis" without sacrificing accuracy would be helpful.
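Just to illustrate what I mean by context, here's a quick sketch...the hash travels with the file size and full path, rather than arriving as a bare value. The CSV-style output is simply my assumption about a useful format:

```python
# hash_with_context.py - sketch: emit file size, MD5 hash, and full
# path together, so the hash arrives with some context attached.
import hashlib
import os
import sys

def md5_file(path, bufsize=65536):
    """Hash a file in chunks so large files don't consume memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(bufsize), b""):
            h.update(chunk)
    return h.hexdigest()

for dirpath, _, filenames in os.walk(sys.argv[1]):
    for name in filenames:
        full = os.path.join(dirpath, name)
        print("%d,%s,%s" % (os.path.getsize(full), md5_file(full), full))
```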

I also think that Jesse is one of the few real innovators in the community, and has some pretty strong arguments (whether he's aware of it or not) for moving to the new format he mentioned as the default output for md5deep. It's much faster to check the size of the file first...like he says, if the file size is different, you're gonna get a different hash, so there's no need to read and hash the file's contents at all. As disk sizes increase, and our databases of hashes increase, we're going to have to find smart ways to conduct our searches and analysis, and I think that Jesse's got a couple listed right there in his post.
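Here's a rough sketch of that size-first idea...the size check is a cheap metadata lookup, so files whose sizes don't appear in the known-file list are never read or hashed at all. The one-pair-per-line "size,md5" input format is just my own invention for the example:

```python
# size_first_match.py - sketch of checking file size before hashing:
# only files whose size appears in the known-file list get hashed.
import hashlib
import os
import sys

# load known files: one "size,md5" pair per line (made-up format)
known = {}
with open(sys.argv[1]) as f:
    for line in f:
        size, digest = line.strip().split(",")
        known.setdefault(int(size), set()).add(digest)

for dirpath, _, filenames in os.walk(sys.argv[2]):
    for name in filenames:
        full = os.path.join(dirpath, name)
        if os.path.getsize(full) not in known:
            continue  # cheap metadata check; skip the expensive read
        with open(full, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()
        if digest in known[os.path.getsize(full)]:
            print("MATCH: %s" % full)
```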

New Attack?
Many times when I've been working an engagement, the customer wants to know if they were specifically targeted...did the intruder go after them based on their data and/or resources, or was the malware specifically designed for their infrastructure? When you think about it, these are valid concerns. Brian Krebs posted recently on an issue where that's already been determined to be the case...evidently, there's an issue with how Explorer on Windows 7 processes shortcut files, and the malware in question apparently targets specific SCADA systems.

At this point, I don't know which is more concerning...that someone knows enough about Siemens systems to write malware for them, or that they know that SCADA systems are now running Windows 7...or both?

When I read about this issue, my first thoughts went back to the Exploit Artifacts section above...what does this "look like" to a forensic examiner?
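I don't have a definitive answer yet, but as a starting point for triage, something like the following might help...walking a mounted volume (or a USB device), flagging shortcut files, and noting any sibling files that carry a PE header, regardless of extension. To be clear, the shortcut-plus-sibling-executable pairing is my own heuristic, not a published indicator:

```python
# lnk_triage.py - rough triage sketch: walk a mounted volume, flag
# shortcut (.lnk) files, and note any sibling files that begin with
# an MZ header (i.e., PE executables), regardless of extension.
import os
import sys

def is_pe(path):
    """Check for the 'MZ' magic at the start of a file."""
    try:
        with open(path, "rb") as f:
            return f.read(2) == b"MZ"
    except (IOError, OSError):
        return False

for dirpath, _, filenames in os.walk(sys.argv[1]):
    lnks = [n for n in filenames if n.lower().endswith(".lnk")]
    if not lnks:
        continue
    for n in lnks:
        print("shortcut   : %s" % os.path.join(dirpath, n))
    for n in filenames:
        full = os.path.join(dirpath, n)
        if not n.lower().endswith(".lnk") and is_pe(full):
            print("sibling PE : %s" % full)
```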

Hidden Files
Not new at all, but here's a good post from the AggressiveVirusDefense blog that provides a number of techniques that you can use to look for "hidden" files. Sometimes "hidden" really isn't...it's just a matter of perception.
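For instance, files flagged with the hidden (or system) attribute aren't hidden from the API at all...a few lines of Python will surface them. This sketch uses os.stat()'s st_file_attributes, which is Windows-only and requires Python 3.5 or later:

```python
# find_hidden.py - sketch: list files with the hidden and/or system
# attributes set (Windows-only; Python 3.5+ for st_file_attributes).
import os
import stat
import sys

FLAGS = stat.FILE_ATTRIBUTE_HIDDEN | stat.FILE_ATTRIBUTE_SYSTEM

for dirpath, _, filenames in os.walk(sys.argv[1]):
    for name in filenames:
        full = os.path.join(dirpath, name)
        try:
            attrs = os.stat(full).st_file_attributes
        except OSError:
            continue
        if attrs & FLAGS:
            print(full)
```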

DLL Search Order as a Persistence Mechanism
During the SANS Forensic Summit, Nick Harbour mentioned the use of MS's DLL Search Order as a persistence mechanism. Now he's got a very good post up on the M-unition blog...I won't bother trying to summarize it, as it won't do the post justice. Just check it out. It's an excellent read.
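That said, the underlying mechanism is simple enough to illustrate: because an application's own directory is searched for DLLs before the system directories, a DLL dropped next to an EXE can shadow a legitimate system DLL. Here's a naive sketch (my own, not Nick's method) that flags DLLs sitting beside an application that share a name with a DLL in System32:

```python
# dll_shadow_check.py - naive sketch: flag DLLs sitting next to an EXE
# that share a name with a DLL in %SystemRoot%\System32. Since the
# application directory is searched first, such a DLL can be loaded in
# place of the legitimate one.
import os
import sys

system32 = os.path.join(os.environ.get("SystemRoot", r"C:\Windows"),
                        "System32")
sys_dlls = {n.lower() for n in os.listdir(system32)
            if n.lower().endswith(".dll")}

for dirpath, _, filenames in os.walk(sys.argv[1]):
    names = [n.lower() for n in filenames]
    if not any(n.endswith(".exe") for n in names):
        continue  # only look at directories that hold an application
    for n in names:
        if n.endswith(".dll") and n in sys_dlls:
            print("possible search-order shadow: %s"
                  % os.path.join(dirpath, n))
```

Expect false positives...plenty of applications legitimately ship their own copies of common DLLs, so treat this as a lead generator, not a verdict.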