Linkilicious in 2010

RegRipper
Paul Stutz sent me a nice email recently, telling me that he'd not only run RegRipper's rip.pl on Linux, but had also used it to work the NIST CFReDS hacking case. Paul posted his write-up here, and gave me the go-ahead to post the link. Check it out! What Paul does is use rip.pl and some scripting on Linux to create something of a CLI version of RegRipper, automating a good deal of what RegRipper is capable of in order to solve the challenge. This is a really good view into what someone can do with RegRipper and its associated tools.
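
As a rough illustration (this is my own sketch, not Paul's actual script), here's the kind of automation he describes...running rip.pl against a set of hives and saving each report. It assumes rip.pl is on the PATH and uses RegRipper's -r (hive) and -f (profile) switches; the mounted-image paths are hypothetical:

    #!/usr/bin/env python
    # Sketch: run rip.pl against several hives from a mounted image and
    # write one report per hive. Assumes rip.pl is executable and on the
    # PATH; the hive paths below are hypothetical examples.
    import subprocess

    hives = {
        "system": "/mnt/case/WINDOWS/system32/config/system",
        "software": "/mnt/case/WINDOWS/system32/config/software",
        "ntuser": "/mnt/case/Documents and Settings/user/NTUSER.DAT",
    }

    for profile, hive in hives.items():
        # -r names the hive file, -f names the plugin profile to run
        out = subprocess.run(["rip.pl", "-r", hive, "-f", profile],
                             capture_output=True, text=True)
        with open("%s_report.txt" % profile, "w") as f:
            f.write(out.stdout)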

SafeBoot
Didier has written a program that creates an undeletable SafeBoot key. His point is dead on...there is malware that deletes the SafeBoot key so that the user or admin cannot boot into Safe Mode and recover the system. This may not be as much of an issue on XP (thanks to Restore Points), but it can be a pain if your IR plan includes booting into Safe Mode.
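
If you want a quick triage check along these lines (my own sketch, not Didier's tool), here's one that verifies the SafeBoot subkeys are still present on a live system, using Python's standard winreg module:

    # Sketch: confirm the SafeBoot subkeys exist, since some malware
    # deletes them to block Safe Mode recovery. Run on a live Windows
    # system with Python's standard winreg module.
    import winreg

    for profile in ("Minimal", "Network"):
        path = r"SYSTEM\CurrentControlSet\Control\SafeBoot\%s" % profile
        try:
            key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
            winreg.CloseKey(key)
            print("%s: present" % path)
        except OSError:
            print("%s: MISSING -- possible tampering" % path)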

I have to tell you, Didier really comes out with some pretty amazing things...check out his article in the first edition of the IntoTheBoxes newsletter, on Windows 7 UserAssist keys. Also, did you know that Didier's pdfid.py tool is used by VirusTotal?

Processes
Claus also has a very good post on tools you can use to view and manage processes on Windows. A number of the tools are GUI-based, but some are CLI tools, and all of them could be useful during incident response. Take a look at what Claus has compiled...there are some very good options available.
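
Just to show how little it takes to get a basic process listing at the command line (and why the purpose-built tools Claus lists are worth having), here's a bare-bones sketch that wraps the built-in tasklist utility:

    # Sketch: minimal CLI process listing via the built-in tasklist
    # utility; a starting point only, not a replacement for the tools
    # Claus covers.
    import csv, subprocess

    out = subprocess.run(["tasklist", "/fo", "csv"],
                         capture_output=True, text=True)
    for row in csv.reader(out.stdout.splitlines()[1:]):  # skip header
        if row:
            name, pid = row[0], row[1]
            print("%6s  %s" % (pid, name))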

Windows Firewall and Open Ports
Speaking of Claus...he posted recently about opening ports in the Windows Firewall, and that got me thinking about the times I've examined a Windows system following an incident (intrusion??) and found that there were custom entries in the Registry that allowed traffic to or from the system.
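
For reference, on XP/2003 those custom entries live under the FirewallPolicy key in the SYSTEM hive (Vista and later store firewall rules elsewhere). Here's a sketch that enumerates the GloballyOpenPorts list on a live system; on a post-mortem image, you'd run RegRipper against the SYSTEM hive instead:

    # Sketch: list custom firewall port openings on an XP/2003-era
    # system by reading the GloballyOpenPorts values from the live
    # Registry.
    import winreg

    path = (r"SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters"
            r"\FirewallPolicy\StandardProfile\GloballyOpenPorts\List")
    try:
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
    except OSError:
        print("No custom open ports key found.")
    else:
        i = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, i)
            except OSError:
                break  # no more values
            print("%s -> %s" % (name, value))
            i += 1
        winreg.CloseKey(key)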

Why CIRTs Should Fail
David Bianco's blog post on why your CIRT should fail has some excellent points, and the post is well worth the read. Take a look and let me know what you think about what he says.

In short, if a CIRT reaches what appears to be "success", there's no need for "lessons learned", right? Wrong. Ninety-nine times out of a hundred, when reviewing an incident response performed by a customer (or another analyst), there isn't a logical progression of steps, from A to B to C. Most often, it's step A, make one or two findings, and then speculate your way to Z, at which point the book is closed on the incident. Seriously. I've been in a customer's war room when someone who hadn't been involved in the incident up to that point walked in (while the incident manager was out of the room) and announced to those in the room that the malware included a keystroke logger. When asked how he knew that, the response was, "that's what hackers do...". Hey, how about we do something different...like maybe collect and analyze a sample of the malware, as well as an infected system, and see if it actually includes keystroke logging capabilities? How about if we work from fact, rather than speculation?

This is the sort of thing that can come out of a lessons learned review: What did we do well, and what could have gone better? Did we go with speculation because it was quicker? Could we have performed the analysis better, or could we have been better prepared?

Finally, there's a lot of talk about "best practices", and going through a lessons learned review is one of those best practices for CIRTs; it should be part of the CSIRP. If you're going to talk about best practices, why not follow them?

DLP
Richard Bejtlich has an excellent post that addresses thoughts on some excerpts from Randy George's "Dark Side of DLP". I have to say that having been involved in a good number of response engagements that involved the (potential or real) exfiltration of sensitive (pick your appropriate definition of "sensitive", be it PCI, HIPAA, NCUA, state notification laws, etc.) data, a lot of this post rings true. First, why bother scanning for where sensitive data is stored if you don't have a policy that states what's appropriate? Really. Okay, so some IT guy runs a scan and finds sensitive data stored on a system, used by a guy in marketing. The IT guy says, "Hey, you can't do that," and the marketing guy's response is, "Says who?"

I also think that Mr. George has a good point with the statement, "Managing DLP on a large scale can drag your staff under like a concrete block tied to their ankles." However, I would suggest that "DLP" can be replaced with anything else that extends outside of IT and doesn't have a policy to support it. Like incident response.

I have seen first-hand how data discovery tools have been used to great effect by organizations, just as I have seen how they've been misused. In one instance, the organization used a DLP tool to locate a very limited amount of PCI data on their network, and when they suffered an intrusion, they were able to use that information (along with other information and analysis) to demonstrate that it was extremely unlikely that the PCI data was discovered by the intruder or exposed. Thanks to the picture we were able to paint for Visa, the customer only received a small fine for having the data in the first place...rather than a huge fine for the data being exposed, then having to deal with notification costs, as well as all of the costs associated with that sort of activity.

One of the biggest things I've seen when responding to incidents is that when it's time to prioritize response, no one seems to know where the sensitive data is stored or processed. Knowing what data you have and use, and having the necessary policies in place to describe its storage, processing, authorized access, etc., can go a long way toward preventing exposure during an incident, as well as helping you focus your response when (not if) an incident occurs.
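
To make the "know where your data is" point concrete, here's a toy data-discovery sketch...walking a hypothetical share and flagging files that contain 16-digit sequences passing the Luhn check. Real DLP tools do this at far greater scale and fidelity, but the idea is the same:

    # Sketch: toy PCI data discovery pass. Flags files containing
    # 16-digit sequences (with optional space/hyphen separators) that
    # pass the Luhn check. The scan path is hypothetical.
    import os, re

    PAN = re.compile(r"\b(?:\d[ -]?){16}\b")

    def luhn_ok(digits):
        # Standard Luhn checksum over a string of digits
        total = 0
        for i, d in enumerate(reversed(digits)):
            n = int(d)
            if i % 2 == 1:
                n *= 2
                if n > 9:
                    n -= 9
            total += n
        return total % 10 == 0

    for root, _, files in os.walk("/data/share"):  # hypothetical path
        for fname in files:
            full = os.path.join(root, fname)
            try:
                text = open(full, errors="ignore").read()
            except OSError:
                continue
            for m in PAN.finditer(text):
                if luhn_ok(re.sub(r"\D", "", m.group())):
                    print("possible PAN in %s" % full)
                    break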