Training, and Host-Based Analysis

I posted recently on the topic of host-based analysis, and just yesterday, I finished teaching the first offering of the ASI Intro to Windows Forensic Analysis course, which focuses on host-based analysis.  I'm really looking forward to teaching the Timeline Analysis course in a week and a half.  As with analysis engagements, I tend to take what I've learned from a recent engagement and apply it to future work; the same is true of the training I provide.  I've spent a considerable amount of time over the past 12 or so hours going back over the course I just taught, looking for ways to improve not only the next iteration of that course, but also the next course I teach.

Both courses are two days long, and I try to include a good deal of hands-on work and exercises.  As such, folks planning to attend should have some experience performing forensic analysis, and should be comfortable (not expert, just comfortable) working at the command line.  I've found that a great way to demonstrate the value of a particular artifact is to provide tools and exercises built around that artifact.  To that end, I provide a number of my own tools, updated with new functionality, along with various sample files so that attendees can practice using them.

As an example, I provide the tools discussed here, including pref.exe, along with sample Prefetch files extracted from an XP system and others extracted from a Vista system.  In the Intro course, we walk through each artifact and cover several means of extracting the data of value from it.  In some cases, I only discuss various tools and provide screenshots, because license agreements prevent me from distributing copies of the tools themselves.
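
To give a sense of what a tool like pref.exe pulls from these files, here's a minimal sketch (this is not pref.exe itself) that reads the executable name, last run time, and run count from the header of an uncompressed Prefetch file.  The sample filename is hypothetical, and the offsets assume format versions 17 (XP/2003) and 23 (Vista/Win7):

```python
import struct
from datetime import datetime, timedelta, timezone

# Header offsets differ by Prefetch format version:
#   version 17 = XP/2003, version 23 = Vista/Win7
OFFSETS = {
    17: {"last_run": 0x78, "run_count": 0x90},
    23: {"last_run": 0x80, "run_count": 0x98},
}

def filetime_to_dt(ft):
    # FILETIME: 100-nanosecond intervals since 1601-01-01 UTC
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)

def parse_prefetch(path):
    with open(path, "rb") as f:
        data = f.read()
    version = struct.unpack_from("<I", data, 0)[0]
    if data[4:8] != b"SCCA":
        raise ValueError("not an uncompressed Prefetch file")
    if version not in OFFSETS:
        raise ValueError("unsupported Prefetch version: %d" % version)
    # Executable name: 60 bytes of UTF-16LE at offset 0x10, NUL-terminated
    exe = data[0x10:0x4C].decode("utf-16-le").split("\x00")[0]
    off = OFFSETS[version]
    last_run = struct.unpack_from("<Q", data, off["last_run"])[0]
    run_count = struct.unpack_from("<I", data, off["run_count"])[0]
    return exe, filetime_to_dt(last_run), run_count

if __name__ == "__main__":
    # Hypothetical sample file name
    exe, last_run, count = parse_prefetch("CMD.EXE-087B4001.pf")
    print("%s  last run: %s  run count: %d" % (exe, last_run, count))
```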

Both courses start off with the core concepts: the why of what we're doing, before we step off into the how.  In the Intro course, I spend a good deal of time talking about using multiple data sources to demonstrate that something occurred.  In our first exercise scenario, we look at how to determine the active users on a system and discuss the various sources we can look to for data.  While some of these sources may appear redundant, we want to use them to validate our other data, and to provide a means of analysis in the face of counter- or anti-forensics activities, however unintentional those activities may be.  The reasoning here is two-fold.  First, some data sources are more easily mutable than others, and may change over the course of time while the system is active, or be changed intentionally; I've had exams where one of the initial steps taken by administrators, prior to contacting our IR team, included removing user accounts.  Second, redundant data sources address those times when the one source we usually rely on isn't available, or isn't something we necessarily trust.
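
As one concrete illustration of checking redundant sources, the sketch below compares the profiles recorded in the SOFTWARE hive's ProfileList key against the profile directories actually present on disk; a mismatch between the two is worth a closer look.  It assumes Willi Ballenthin's python-registry module is installed, and the mounted-image paths are hypothetical:

```python
import os
from Registry import Registry  # python-registry, by Willi Ballenthin

def profilelist_users(software_hive):
    """SIDs and profile paths recorded in the SOFTWARE hive."""
    reg = Registry.Registry(software_hive)
    key = reg.open("Microsoft\\Windows NT\\CurrentVersion\\ProfileList")
    users = {}
    for sk in key.subkeys():
        try:
            users[sk.name()] = sk.value("ProfileImagePath").value()
        except Registry.RegistryValueNotFoundException:
            continue
    return users

def profile_dirs(users_dir):
    """Profile directories actually present on disk."""
    return set(os.listdir(users_dir))

if __name__ == "__main__":
    # Hypothetical paths into a mounted image
    reg_users = profilelist_users(r"F:\Windows\System32\config\SOFTWARE")
    on_disk = profile_dirs(r"F:\Users")
    for sid, path in reg_users.items():
        name = path.split("\\")[-1]
        flag = "" if name in on_disk else "  <-- profile dir missing"
        print("%s  %s%s" % (sid, path, flag))
```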

Another area we look at in the Intro course is indications of program execution.  We look to a number of different locations within a Windows system for indications of programs having been executed (either simply executed on the system, or executions that can be associated with a specific user), and to do so, we use a number of RegRipper plugins that you won't find anywhere else, as they're only provided in conjunction with the courses.  There's one to parse the AppCompatCache value data (and check for any program paths containing 'temp'), another to display that information in TLN format for inclusion in a timeline, as well as others to query, parse, and display other Registry data relevant to program execution and other categories of activity.
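
The plugins themselves ship only with the courses, but the five-field TLN format they emit is straightforward: time (as a Unix epoch, UTC)|source|host|user|description.  As a sketch, here's how parsed entries, using hypothetical data in the style of AppCompatCache path/last-modified pairs, might be written out in TLN, with a flag on any path containing 'temp':

```python
import calendar
from datetime import datetime

def to_tln(dt, source, host, user, desc):
    """Emit one five-field TLN line: epoch|source|host|user|description."""
    epoch = calendar.timegm(dt.timetuple())  # dt is treated as UTC
    return "%d|%s|%s|%s|%s" % (epoch, source, host, user, desc)

# Hypothetical entries, as (path, last-modified time) pairs of the sort
# parsed from the AppCompatCache value data
entries = [
    (r"C:\Windows\system32\calc.exe", datetime(2012, 6, 1, 14, 3, 22)),
    (r"C:\Users\bob\AppData\Local\Temp\svch0st.exe", datetime(2012, 6, 2, 9, 17, 5)),
]

for path, mtime in entries:
    desc = "AppCompatCache - %s" % path
    if "temp" in path.lower():
        desc += "  [path contains 'temp']"  # flag common staging dirs
    print(to_tln(mtime, "REG", "HOSTNAME", "", desc))
```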

We also discuss Volume Shadow Copies, as well as means of accessing them when analyzing an acquired image.  In the course, we focus on the VHD method for accessing VSCs, but we discuss other means as well.  I stress throughout the course that the purpose of accessing various artifacts in this manner is to let analysts see what's available, so that they're better able to select the appropriate tool for the job.  For example, if you're accessing VSCs a great deal, it may be worth using techniques such as those Corey has blogged about, or something like ShadowKit, or perhaps taking advantage of the VSC functionality included in TechPathway's ProDiscover.
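
For reference, here's a minimal sketch of one commonly documented way to expose VSCs once an image has been attached to a Windows analysis system (e.g., via the VHD method): enumerate the shadow copy device paths with vssadmin, then create directory symlinks to them with mklink.  This is not the course material itself; it must be run from an elevated prompt, the C:\vscN link names are hypothetical, and the trailing backslash on the device path is required:

```python
import re
import subprocess

def list_shadow_devices():
    """Parse 'vssadmin list shadows' output for shadow copy device paths.
    Requires an elevated prompt on the Windows analysis system."""
    out = subprocess.run(
        ["vssadmin", "list", "shadows"],
        capture_output=True, text=True, check=True,
    ).stdout
    return re.findall(r"\\\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy\d+", out)

def link_shadow(device, link):
    """Expose a shadow copy via a directory symlink.  mklink is a cmd
    builtin, so it's invoked through cmd.exe; note the required trailing
    backslash on the device path."""
    subprocess.run(["cmd", "/c", "mklink", "/d", link, device + "\\"], check=True)

if __name__ == "__main__":
    for i, dev in enumerate(list_shadow_devices()):
        link_shadow(dev, r"C:\vsc%d" % i)  # then browse C:\vsc0, C:\vsc1, ...
```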

One of the things I really like about these training courses is engaging with others and discussing how to approach host-based analysis based on identified goals.