Updates

NoVA Forensics Meetup
Last night's meetup went pretty well...there's nothing wrong with humble beginnings.  We had about 16 people show up, and a nice mix of folks...some vets, some new to the community...but it's all good.  Sometimes having new folks ask questions in front of those who've done it for a while gets the vets to think about/question their assumptions.  Overall, the evening went well...we had some good interaction, good questions, and we gave away a couple of books. 

I think that we'd like to keep this on a Wed or Thu evening, perhaps once a month...maybe spread it out over the summer due to vacations, etc. (we'll see).  What we do need now is a facility with presentation capability.  Also, I don't think that we want to have the presentations fall on just one person...we can do a couple of quick talks of a half hour each, or just have someone start a discussion by posing a question to the group.

Besides just basic information sharing, these can be good networking events for the folks who show up.  Looking to add to your team?  Looking for a job?  Looking for advice on how to "break in" to the business?  Just come on by and talk to folks.

So, thanks to everyone who showed up and made this first event a success.  For them, and for those who couldn't make it, we'll be having more of these meetups...so keep your eyes out and don't hold back on the thoughts, comments, or questions.


Volatility
Most folks familiar with memory analysis know about the simply awesome work provided through the Volatility project.  For those who don't know, this is an open source project, written in Python, for conducting memory analysis.

Volatility now has a Python implementation of RegRipper built-in, thanks to lg, and you can read a bit more about the RegListPlugin.  Gleeda's got an excellent blog post regarding the use of the UserAssist plugin.
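As a quick illustration of what's going on under the hood with the UserAssist data: the value names beneath the UserAssist key are ROT13-encoded, which is part of what plugins like Gleeda's decode for you. Here's a minimal Python sketch of just the decoding step (the encoded value name shown is a hypothetical example):

```python
import codecs

def decode_userassist_name(value_name):
    """UserAssist value names are stored ROT13-encoded in the registry;
    decoding them reveals the path of the executable that was launched."""
    return codecs.decode(value_name, "rot_13")

# Hypothetical example of an encoded value name as it might appear in the hive
encoded = "HRZR_EHACNGU:P:\\Jvaqbjf\\flfgrz32\\abgrcnq.rkr"
print(decode_userassist_name(encoded))
# -> UEME_RUNPATH:C:\Windows\system32\notepad.exe
```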

I've talked a bit in my blog, books, and presentations about finding alternate sources of forensic data when the sources we're looking for (or at) may be insufficient.  I've talked about XP System Restore Points, and I've pulled together some really good material on Volume Shadow Copies for my next book.  I've also talked about carving Event Log event records from unallocated space, as well as parsing information regarding HTTP requests from the pagefile.  Volatility provides an unprecedented level of access to yet another excellent resource...memory.  And not just memory extracted from a live running system...you can also use Volatility to parse data from a hibernation file, which you may find within a (laptop) image.

Let's say that you're interested in finding out how long a system has been compromised; i.e., you're trying to determine the window of exposure.  One of the sources I've turned to is crash dump logs...while the crash dump file itself is overwritten with each crash, the log is appended with information about each crash, including a pslist-like listing of processes.  Sometimes you may find references to the malware in these listings, or in the specific details regarding the crashing process.  Now, assume that you're looking at a laptop and find a hibernation file...you know when the file was created, and using Volatility, you can parse that file and find out specifically which processes were running at the time the system went into hibernation.
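To illustrate the crash dump log point, here's a minimal Python sketch that pulls the pslist-like task listings out of a Dr. Watson log (drwtsn32.log); the section marker and line layout below are assumptions based on typical XP-era logs, so verify them against your own data before relying on the output:

```python
import re

# Assumed section header for the process listing in drwtsn32.log
TASK_LIST_MARKER = "*----> Task List <----*"

def parse_task_lists(log_path):
    """Yield one list of (pid, process_name) tuples per crash entry."""
    procs, in_list = [], False
    with open(log_path, "r", errors="replace") as fh:
        for raw in fh:
            line = raw.strip()
            if line == TASK_LIST_MARKER:
                in_list, procs = True, []
            elif in_list:
                m = re.match(r"(\d+)\s+(.+)", line)
                if m:
                    procs.append((int(m.group(1)), m.group(2)))
                elif procs:
                    # first non-matching line after entries ends the listing
                    yield procs
                    in_list = False
    if in_list and procs:
        yield procs

# Each successive listing is a snapshot of running processes at crash time,
# which can help bracket the window of exposure.
for i, plist in enumerate(parse_task_lists("drwtsn32.log")):
    print("Crash entry %d: %d processes" % (i, len(plist)))
```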

And that's not all you can use Volatility for...Andre posted to the SemperSecurus blog about using Volatility to study a Flash 0-day vulnerability. 

If you haven't looked at Volatility, and you do have access to memory, you should really consider diving in and giving it a shot.  

Best Tool
Lance posted to his blog, asking readers what they consider to be the best imaging and analysis tools.  As of the time I'm writing this post, there are seven comments (several are pretty much just "agree" posts), and even after reading through the thoughts and comments, I keep coming back to the same conclusion...that the best tool available to an analyst is that grey matter between their ears.

This brings to mind a number of thoughts, particularly because last week I had two opportunities to consider topics related to analyst training, education, and experience.  During one of those opportunities, I was reminded that when I (like many other analysts) "came up through the ranks", there were no formal schools available to non-LE analysts, aside from vendor-specific training.  Some went that route, but others couldn't afford it.  For myself, I took the EnCase v.3.0 Introductory course in 1999...I was so fascinated by the approach taken to file signature analysis that I went home and wrote my own Perl code for it; not to create a tool, per se, but to really understand what was happening "under the hood".  Over the years, knowing how things work and knowing what I needed to look for helped me a lot...it was never a matter of having to have a specific tool as much as it was knowing the process, and being able to justify the purchase of a product if need be.
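For those who haven't run into it, here's a minimal sketch (in Python rather than the Perl I wrote back then) of the core idea behind file signature analysis: compare a file's extension against the "magic numbers" in its header.  The signature table here is intentionally tiny; a real tool carries a much larger one.

```python
import os

# A few well-known magic numbers; a real tool would carry a far larger table.
SIGNATURES = {
    b"\xff\xd8\xff": [".jpg", ".jpeg"],
    b"\x89PNG\r\n\x1a\n": [".png"],
    b"%PDF": [".pdf"],
    b"PK\x03\x04": [".zip", ".docx", ".xlsx"],
    b"MZ": [".exe", ".dll", ".sys"],
}

def check_signature(path):
    """Compare a file's header bytes against its extension; returns True
    if they agree, False on a mismatch, None if the header is unknown."""
    with open(path, "rb") as fh:
        header = fh.read(16)
    ext = os.path.splitext(path)[1].lower()
    for magic, exts in SIGNATURES.items():
        if header.startswith(magic):
            if ext in exts:
                return True
            print("%s: header says %s, but extension is %s"
                  % (path, "/".join(exts), ext or "(none)"))
            return False
    return None  # header not in our (intentionally small) table

# e.g., check_signature("suspicious.jpg") flags a renamed executable
```

Writing something like this yourself is the point...once you understand what the tool is doing, the tool itself becomes a convenience rather than a crutch.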

Breaches
If the recent spate of breaches hasn't yet convinced you that no one is safe from computer security incidents, take a look at this story from The State Worker, which talks about the PII/PCI data of 2000 LE retirees being compromised.  I know, 2000 seems like such a small number, but hey...regardless of whether it's 77 million or 2000, if you're one of those people whose data was compromised, it's everything.

While the story is light on details (i.e., how the breach was identified, when the IT staff reacted in relation to when the incident actually occurred, etc.), if you read through it, you'll see a statement that's common throughout these types of announcements; specifically, "...taken steps to enhance security and strengthen [the] infrastructure...".  The sequence of events for incidents like this (and keep in mind, these are only the ones that get reported) is: breach, time passes, someone is notified of the breach, then steps are taken to "enhance security".  We find ourselves coming to this dance far too often.

Incident Preparedness
Not long ago, I talked about incident preparation and proactive IR...recently, CERT Societe Generale (the French CERT) posted a 6 Step IRM Worm Infection cheat sheet.  I think that things like this are very important, particularly because the basic steps necessarily assume certain things about your infrastructure.  For example, step 1 of the PDF includes several of the basic components of a CSIRP...if you already have all of the stuff outlined in the PDF covered, then you're almost to a complete CSIRP, so why not just finish it off and formalize the entire thing?

Step 3, Containment, mentions neutralizing propagation vectors...incident responders need to understand malware characteristics in order to respond effectively to these sorts of incidents.

One note about this, and these sorts of incidents...worms can be an especially virulent strain of malware, but much of this applies to malware in general...relying on your AV vendor to be your IR team is a mistake.  Incident responders have seen this time and again, and it's especially difficult for folks who do what I do, because we often get called in after response efforts through the AV vendor have been ineffective and have exhausted the local IT staff.  I'm not saying that AV vendors can't be effective...what I am saying is that, in my experience, throwing signature files at an infrastructure based on samples provided by on-site staff doesn't work.  AV vendors are generally good at what they do, but AV is only part of the overall security solution.  Malware infections need to be responded to with an IR mindset, not through an AV business model.

Firefighters don't learn about putting out a fire during a fire.  Surgeons don't learn their craft during surgery.  Organizations shouldn't hope to learn IR during an incident...and the model of turning your response over to an external third party clearly doesn't work.  You need to be ready for that big incident...as you can see just from the media, it's a wave on the horizon headed for your organization.