Links and Updates

iTunes Forensic Analysis
Via DFINews, I ran across a very interesting read regarding the forensic analysis of an iTunes installation.  One of the things I see consistently within the community is that folks really want to see how someone else has done something, how they've gone about conducting an exam or investigation, and this is a good example of that.

Volatility Updates
Keep your eye on the Volatility site for updates that include support for Windows 8, thanks to the efforts of @iMHLv2, @gleeda, @moyix, and @attrc.

Speaking of Volatility, the folks at p4r4ni0d take a look at Morto.  Great work, using a great tool set.  If you want to see how others are using Volatility, take a look at the blog post.

NetworkMiner
A new version of NetworkMiner has been released.  If your work involves pcap capture and analysis, this is one tool that I'd definitely recommend that you have in your kit.

Registry
Andrew Case (@attrc) put together a very good paper (blog post here) on how he went about recovering and analyzing deleted Registry hives.  Now, this is not recovering deleted keys from within hive files...Andrew recovered entire hive files from unallocated space after (per his paper) the system had been formatted and the operating system reinstalled.  Take a look at the process he went through to do this...this may be something that you'd want to incorporate into your tool kit.

If you've read Windows Registry Forensics, you'll understand Andrew's approach; Registry hive files (including those from Windows 8) start with 'regf' in the first 4 bytes of the file.  The hive files are broken into 4K (4096-byte) pages; the first page begins with the 'regf' header, and the subsequent pages each start with 'hbin'.
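Just to illustrate what those signatures look like on disk (this is NOT Andrew's tool...it's a quick, hypothetical Perl sketch, and it assumes a raw, dd-style image with the data laid out on 512-byte sector boundaries), scanning for 'regf' headers and counting the 'hbin' pages that follow could be as simple as:

#! /usr/bin/perl
# Hypothetical sketch only: scan a raw image for 'regf' signatures on sector
# boundaries, then count the contiguous 4096-byte 'hbin' pages that follow
use strict;
use warnings;

my $file = shift || die "Usage: regf_scan.pl <raw image>\n";
open(my $fh, "<", $file) || die "Cannot open $file: $!\n";
binmode($fh);

my $sector = 512;
my $page   = 4096;
my $offset = 0;
my $buf;

while (read($fh, $buf, $sector) == $sector) {
    if (substr($buf, 0, 4) eq "regf") {
        # The 4096-byte base block starts here; count the 'hbin' pages
        # that immediately follow it
        my $count = 0;
        my $hdr;
        while (1) {
            seek($fh, $offset + $page + ($count * $page), 0);
            last unless (read($fh, $hdr, 4) == 4 && $hdr eq "hbin");
            $count++;
        }
        printf "regf header at offset 0x%x, followed by %d contiguous hbin page(s)\n",
            $offset, $count;
    }
    $offset += $sector;
    seek($fh, $offset, 0);
}
close($fh);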

I've done something similar with respect to Windows XP Event Logs, carving specifically for individual records rather than entire Event Log (.evt) files.  In much the same way, Andrew looked at the goals of his examination, and then used the tools he had available to accomplish those goals.  Notice that in the paper, he didn't discuss re-assembling every possible hive file, but instead only those that might contain the data/information of interest to his examination.  Nor did he attempt to carve every possible file type using scalpel; he only went after the types of files that he thought were necessary.

When I wrote my event record carving tool, I had the benefit of knowing that each record contains the record size as part of its metadata; Andrew opted to grab 25MB of contiguous data from the identified offset, and from his paper, he appears to have been successful. 
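For anyone curious what carving on a record's embedded size looks like, here's a stripped-down, hypothetical sketch (not the actual tool I wrote)...it assumes you've already extracted the data of interest to a file, and the sanity check on the record size is purely arbitrary:

#! /usr/bin/perl
# Hypothetical sketch: carve Windows XP/2003 event records from a blob of
# data by locating the "LfLe" magic number and using the 4-byte record
# length that immediately precedes it
use strict;
use warnings;

my $file = shift || die "Usage: evtcarve.pl <data file>\n";
open(my $fh, "<", $file) || die "Cannot open $file: $!\n";
binmode($fh);
my $data = do { local $/; <$fh> };
close($fh);

my $pos = 0;
while (($pos = index($data, "LfLe", $pos)) > -1) {
    if ($pos >= 4) {
        # The record length is the DWORD immediately preceding the magic number
        my $len = unpack("V", substr($data, $pos - 4, 4));
        # Arbitrary sanity check on the size before carving the record
        if ($len >= 0x38 && $len <= 0x10000) {
            my $rec = substr($data, $pos - 4, $len);
            printf "Possible event record at offset 0x%x, %d bytes\n", $pos - 4, $len;
            # ...parse the record structure or write $rec out to a file here...
        }
    }
    $pos += 4;
}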

Also, page 4 includes a statement that is extremely important: "This is necessary as USBSTOR keys timestamps are not always reliable."  As you're reading through the paper, you'll notice that Andrew focused on the USBStor keys in order to identify the devices he was looking for, but as you'll note from other sources (WRF, as well as Rob Lee's documentation), the time stamps on the USBStor keys are NOT a reliable indicator of when a USB device was last connected to (or inserted into) a system.  This is extremely important, and I believe it is very often misunderstood.

More importantly, I think that Andrew deserves a great big "thanks" for posting his process so clearly and concisely.  This isn't something that we see in the DFIR community very often...I can only think of a few folks who do this work who've stepped up to share this sort of information.  Clearly, this is a huge benefit to the community, as I would think that there will be folks reading his paper who will think to themselves, "wow, I could've used that on that examination!", just as others will likely be using it before 2011 closes out.  Notice that there's nothing in the write-up that specifically identifies a customer or really gives away any case-specific data.

Andrew's paper is an excellent contribution to the community, and provides a great look at a thorough process for approaching and solving a problem using what you, as an examiner, have available to you.  Something else to consider would be to look for remnants of the (for Windows XP) setupapi.log file, which would provide an indication of devices that had been connected to (plugged into, attached to, or inserted into) the system.  I've done something similar with both the pagefile and unallocated space...knowing what I was looking for, I used Perl to locate indications of the specific artifacts, and then grab X number of bytes on either side of that artifact.  As an example, you could use the following entry from a setupapi.log file:

#I121 Device install of

Now, search for all instances of the above string, and then grab 200 or more bytes on either side of that offset and write it to a file.  This could provide some very useful information, as well.
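A quick, hypothetical Perl sketch of that search might look like the following (the 200-byte span is just the value mentioned above, and slurping the entire file into memory is a simplification you wouldn't want to use against a multi-GB pagefile):

#! /usr/bin/perl
# Hypothetical sketch: locate "#I121 Device install of" entries in the
# pagefile or unallocated space and dump 200 bytes on either side of each hit
use strict;
use warnings;

my $file = shift || die "Usage: devgrep.pl <pagefile or unallocated data>\n";
my $tag  = "#I121 Device install of";
my $span = 200;

open(my $fh, "<", $file) || die "Cannot open $file: $!\n";
binmode($fh);
# Slurping the whole file is fine for a sketch, not for very large files
my $data = do { local $/; <$fh> };
close($fh);

my $pos = 0;
while (($pos = index($data, $tag, $pos)) > -1) {
    my $start = ($pos > $span) ? $pos - $span : 0;
    my $chunk = substr($data, $start, (2 * $span) + length($tag));
    # Replace non-printable characters so the output is easier to review
    $chunk =~ s/[^\x20-\x7e]/./g;
    printf "Offset 0x%x:\n%s\n\n", $pos, $chunk;
    $pos += length($tag);
}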

Timelines
Corey has an excellent post up regarding the thought processes behind putting a timeline together.  I'd posted recently on how to go about creating mini-timelines from a subset of data; in his post, Corey discusses the thought process that he goes through when creating timelines, in general...he also provides an example.  If you look at the "Things to consider" section of his post, you'll notice some similarity to stuff I've written, as well as to Chris Pogue's Sniper Forensics presentations; in particular, the focus on the goals of the examination.

In his post, Corey mentions two approaches to timelines; the "kitchen sink" (including everything you can, and then performing analysis) and the "minimalist" approach.  From my perspective, the minimalist approach is very similar to what Corey describes in his post; you can add data sources to a timeline via a "layering" approach, in that you can start with specific data sets (file system metadata, Event Logs, Prefetch file metadata, Registry metadata, Jump Lists, etc.), and then as you begin to develop a more solid picture, add successive layers, or even just specific items, to your timeline.  The modular approach to the tools I use and have made available makes this approach (as well as creating mini-timelines) extremely easy.

For example, during an examination involving a SQL injection attack, I put together a timeline using just file system metadata and pertinent web server log entries.  For an incident involving user activity on a system, I would create a timeline using file system metadata, Registry key LastWrite times (as well as specific entries, such as UserAssist data), Event Log entries, Prefetch file metadata, and if the involved system were Windows 7, Jump List metadata (including parsing the DestList stream and sorting the entries in MFU/MRU order).  In a malware detection case, I may not initially be interested in the contents of any user's web surfing activity, with the exception of the LocalService or "Default User" user accounts.
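Just as an illustration of what that layering looks like mechanically, here's a hypothetical sketch (not one of my released tools) that merges several TLN-style events files (time|source|system|user|description, with the time as a Unix epoch value) into a single, time-sorted listing:

#! /usr/bin/perl
# Hypothetical sketch: merge several "layers" (TLN-style events files, one
# per data source) into a single listing sorted by time
use strict;
use warnings;

my @events;
foreach my $layer (@ARGV) {
    open(my $fh, "<", $layer) || die "Cannot open $layer: $!\n";
    while (<$fh>) {
        chomp;
        next unless (/\|/);
        my ($time) = split(/\|/, $_, 2);
        push(@events, [$time, $_]) if ($time =~ /^\d+$/);
    }
    close($fh);
}

# Sort the combined layers by time and print each event
foreach my $ev (sort { $a->[0] <=> $b->[0] } @events) {
    print scalar(gmtime($ev->[0])), "  ", $ev->[1], "\n";
}

Run it with something like "perl merge_layers.pl fs_events.txt evt_events.txt reg_events.txt > timeline.txt" (the file names are made up for the example), and simply add or drop layer files as the picture develops.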

This is not to say that one way is better or more correct than another; rather, the approach used really depends upon the needs of the examination, skill set of the analyst, etc.  I've simply found, through my own experience, that adding everything available to a timeline and then sorting things out doesn't provide me with the level of data reduction I'm looking for, whereas a more targeted approach allows me to keep focused on the goals of the examination.