TimeLine Analysis part II (Sources)
I posted earlier on TimeLine Analysis, and wanted to add some thoughts that went into the entire process with respect to sources of timeline data. Most of us think of a host system, then of an acquired image, and then of something like the TSK tools (fls) or EnCase when we think of extracting timeline data from the file system. Historically, file MAC times have been extremely important with respect to forensic analysis, but over time, we've been adding other sources of timeline data.
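As a rough illustration of pulling file system times into a timeline, here's a minimal sketch of parsing one line of the pipe-delimited "bodyfile" output that fls -m produces in TSK 3.x (field layout: MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime, with the four timestamps as Unix epoch seconds). The function name and the sample line are my own, for illustration only.

```python
def parse_bodyfile_line(line):
    """Parse one TSK 3.x bodyfile line into a dict of timeline fields."""
    fields = line.rstrip("\n").split("|")
    return {
        "name": fields[1],
        "size": int(fields[6]),
        # The four timestamps are Unix epoch seconds
        "atime": int(fields[7]),
        "mtime": int(fields[8]),
        "ctime": int(fields[9]),
        "crtime": int(fields[10]),
    }

# Hypothetical sample line for demonstration
entry = parse_bodyfile_line(
    "0|C:/boot.ini|1234|r/rrwxrwxrwx|0|0|211|"
    "1226756382|1190533679|1190533679|1190533679"
)
```

From there, each of the four timestamps becomes a separate event in the timeline.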
For example, Lance recently posted on using MFT entries to detect the use of utilities to alter timestamps.
Of course, there's the Registry, and a number of ways for extracting timeline data from hive files, including key LastWrite times (including from deleted keys), and timestamps embedded in Registry values (UserAssist keys, MRUs, etc.). RegRipper gets a lot of this data now, so it's readily available.
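Registry key LastWrite times (like most Windows timestamps) are stored as FILETIME values: 100-nanosecond intervals since January 1, 1601 UTC. A minimal sketch of converting one to a Unix epoch time, so it can be normalized alongside file MAC times:

```python
# Seconds between 1601-01-01 and 1970-01-01 (the Unix epoch)
EPOCH_DIFF = 11644473600

def filetime_to_epoch(filetime):
    """Convert a 64-bit Windows FILETIME to Unix epoch seconds."""
    # 10,000,000 hundred-nanosecond intervals per second
    return filetime // 10_000_000 - EPOCH_DIFF
```

For example, the FILETIME value 116444736000000000 corresponds exactly to the Unix epoch (1 Jan 1970 00:00:00 UTC).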
Event Logs are a great source of timeline information, as the event records are in the right format to define an event.
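Whatever the source, each event ultimately gets normalized into a single line. A hedged sketch of one such layout, a pipe-delimited time|source|host|user|description record (the exact fields here are illustrative, not a fixed standard):

```python
def to_timeline_line(epoch, source, host, user, description):
    """Normalize one parsed event into a pipe-delimited timeline record."""
    return "|".join([str(epoch), source, host, user, description])

line = to_timeline_line(1190533679, "EVT", "SERVER01", "jdoe",
                        "Service Control Manager/7035 - service started")
```

Sorting these lines numerically on the first field gives the timeline.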
Over the past 2 yrs or so, I've been associated with a number of examinations involving SQL injection, many of which relied on the IIS web server logs as the initial source of information.
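IIS W3C extended logs record date and time as separate fields, in UTC by default, which makes them straightforward to fold into a timeline. A sketch of converting those two fields to an epoch time (a real parser should read the field positions from the #Fields: directive in the log header rather than hard-coding them):

```python
from datetime import datetime, timezone

def parse_iis_time(date_field, time_field):
    """Convert W3C log 'date' and 'time' fields (UTC) to Unix epoch seconds."""
    dt = datetime.strptime(date_field + " " + time_field,
                           "%Y-%m-%d %H:%M:%S")
    return int(dt.replace(tzinfo=timezone.utc).timestamp())
```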
Memory dumps have timestamped data that can be extremely useful in timeline analysis.
What about sources outside of a host system? Network packet captures, firewall or device logs, etc., all include timestamped information.
Of course, when collecting and normalizing timestamped data, an analyst has to address issues such as timezones (I recently worked with some data where some of the logs were presented in EST format and others in GMT), as well as clock skew. Also, there may be events that do not lend themselves to easy extraction via some automated means, such as using a Perl script; for these events, having a dialog box where the analyst can enter all of the necessary information would likely be the best approach.
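The timezone and skew normalization can be sketched along these lines: convert each source's local timestamps to UTC before merging, and apply any known clock skew as a simple offset. This example treats EST as a fixed UTC-5 (no DST handling) and is illustrative only:

```python
from datetime import datetime, timezone, timedelta

# EST as a fixed offset; DST handling is deliberately omitted here
EST = timezone(timedelta(hours=-5))

def est_to_epoch(timestr, skew_seconds=0):
    """Convert an 'EST' timestamp string to UTC epoch, correcting skew."""
    dt = datetime.strptime(timestr, "%Y-%m-%d %H:%M:%S").replace(tzinfo=EST)
    return int(dt.timestamp()) + skew_seconds
```

With everything reduced to UTC epoch seconds, logs from different sources sort into a single coherent sequence.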
I also wanted to point out HogFly's blog post with respect to data representation. Very cool. I think it tells the story about what happened, getting dry, technical information across while at the same time engaging the non-technical reader.