Timelines
With a couple of upcoming presentations that address timelines (TSK/Open Source Conference, SANS Forensic Summit, etc.), I thought that it might be a good idea to revisit some of my thoughts and ideas behind timelines.
For me, timeline analysis is a very powerful tool, something that can be (and has been) used to great effect, when it makes sense to do so. Not every investigation is going to benefit from timeline analysis, but for those cases where a timeline would be beneficial, it has proven to be an extremely valuable analysis step. However, creating a timeline isn't necessarily a point-and-click proposition.
I find creating timelines to be extremely easy using the tools I've provided (in the Files section of the Win4n6 Yahoo group). For my uses, I prefer the modular nature of the tools, in part because I don't always have everything I'd like to have available to me. In instances where I'm doing some IR or triage work, I may only have selected files and a directory listing available, rather than access to a complete image. Or sometimes I use the tools to create micro- or nano-timelines, using just the Event Logs, or just the Security Event Log...or even just specific event IDs from the Security Event Log. Having the raw data available and NOT being hemmed in by a commercial forensic analysis application means that I can use the tools to meet the goals of my analysis, not the other way around. See how that works? The goals of the analysis define what I look for, what I look at, and what tools I use...not the tool. The tool is just that...a tool. The tools I've provided are flexible enough to create a full timeline, or to create these smaller timelines, which also serve a very important role in analysis.
As an example, evtparse.pl parses an Event Log (.evt, NOT .evtx) file (or all .evt files in a given directory) into the five-field TLN format I like to use. Evtparse.pl is also a CLI tool, meaning that the output goes to STDOUT and needs to be redirected to a file. But that also means that I can look for specific event IDs, using something like this:
C:\tools>evtparse.pl -e secevent.evt -t | find "Security/528" > sec_528.txt
Another option is to simply run the tool across the entire set of Event Logs and then use other native tools to narrow down what I want:
C:\tools>evtparse.pl -d f:\case\evtfiles -t > evt_events.txt
C:\tools>type evt_events.txt | find "Security/528" > sec_528.txt
Pretty simple, pretty straightforward...and I wouldn't be able to do something like that with a GUI tool. I know that a CLI tool seems more cumbersome to some folks, but for others, it's immensely more flexible.
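For folks who'd rather post-process the output in a script, here's a minimal sketch in Python that does the same filtering as the find.exe examples above. It assumes the five-field TLN layout (time|source|system|user|description, with the time as a Unix epoch) and the evt_events.txt file name from the example:

from datetime import datetime, timezone

def filter_tln(path, needle):
    # Yield parsed TLN lines containing 'needle', mirroring the find.exe filter
    with open(path) as fh:
        for line in fh:
            if needle not in line:
                continue
            fields = line.rstrip("\n").split("|")
            if len(fields) != 5:
                continue  # skip blank or malformed lines
            epoch, source, system, user, desc = fields
            when = datetime.fromtimestamp(int(epoch), tz=timezone.utc)
            yield when, source, system, user, desc

if __name__ == "__main__":
    # Pull the successful logons (event ID 528 on XP/2003), as in the
    # examples above
    for when, source, system, user, desc in filter_tln("evt_events.txt", "Security/528"):
        print(when.isoformat(), system, user, desc, sep="  ")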
One thing I'm not a huge proponent of when it comes to creating timelines is collecting all possible data automagically. I know that some folks have said that they would like to be able to point a tool at an image (maybe a mounted image), push a button, and let it collect all of the information. From my perspective, that's too much, simply because you don't really want everything that's available; you'll end up with a huge timeline with way too much noise and not enough signal.
Here's an example...a while back, I was doing some work with respect to a SQL injection issue. I was relatively sure that the issue was SQLi, so I started going through the web server logs and found what I was looking for. SQLi analysis of web server logs is somewhat iterative: there are things that you search for initially, then you get the source IP addresses of the requests, then you start to see things specific to the injection, and each time you go back and search the logs again. In short, I narrowed the entries down to what I needed and was able to discard 99% of the requests, as they were normally occurring web traffic. I then created a timeline using the web server log excerpts and the file system metadata, and I had everything I needed to tell a complete story about what happened, when it started, and what the source of the intrusion was. In fact, looking at the timeline was a lot like looking at a .bash_history file, but with time stamps!
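Just to give a rough idea of that iterative narrowing, here's a sketch; the indicator list, the IIS-style log layout, the field positions, and the log file name below are assumptions for illustration, not the actual case data. The first pass flags requests matching some common SQLi indicators and collects the source IPs making them, and the second pass pivots on those IPs to catch whatever the indicator list missed:

import re

# A starter list of common SQLi indicators...not exhaustive by any means
INDICATORS = re.compile(r"(xp_cmdshell|DECLARE\s|CAST\(|;--|%27|%3B)", re.I)

def first_pass(log_path):
    # Pass 1: flag requests matching the indicators, collect source IPs
    hits, sources = [], set()
    with open(log_path) as fh:
        for line in fh:
            if line.startswith("#"):  # skip IIS W3C header lines
                continue
            parts = line.split()
            if INDICATORS.search(line) and len(parts) > 2:
                hits.append(line.rstrip())
                sources.add(parts[2])  # assumes c-ip is the third field
    return hits, sources

def second_pass(log_path, sources):
    # Pass 2: pull ALL requests from the suspect source IPs
    out = []
    with open(log_path) as fh:
        for line in fh:
            parts = line.split()
            if not line.startswith("#") and len(parts) > 2 and parts[2] in sources:
                out.append(line.rstrip())
    return out

hits, ips = first_pass("ex090101.log")
for entry in second_pass("ex090101.log", ips):
    print(entry)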
Now, imagine what I would have had if I'd run a tool that went through the file system and collected everything possible. File system metadata, Event Logs, Registry key LastWrite times, .lnk files, all of the web server logs, etc...the amount of data would have been completely overwhelming, possibly to the point of hiding or obscuring what I was looking for, and possibly crashing any tool used to view the data.
The fact is, based on the goals of your analysis, you won't need all of that other data...more to the point, all of that data would make your analysis, and producing an answer, much more difficult. I'd much rather take a reasoned approach to the data that I'm adding to a timeline, particularly as it can grow to be quite a bit of information. In one instance, I was looking at a system which had apparently been involved in an incident, but had not been responded to for some time (i.e., it had simply been left in service and used). I was looking for indications of tools being run, so the first thing I did was view the audit settings on the system via RegRipper, and found that auditing was not enabled. I then created a micro-timeline of just the available Prefetch files, and that proved equally fruitless, as the original .pf files had apparently been deleted and lost.
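As an aside, a Prefetch micro-timeline like that is simple enough to script. Here's a minimal sketch, assuming XP-era .pf files (format version 17, where the last-run time is a FILETIME at offset 0x78 and the run count a DWORD at offset 0x90); the directory path and hostname are placeholders:

import glob
import os
import struct

EPOCH_DELTA = 11644473600  # seconds between 1601-01-01 and 1970-01-01

def pf_last_run(path):
    # Return (Unix epoch last-run time, run count) for an XP-format .pf file
    with open(path, "rb") as fh:
        data = fh.read(0x94)
    filetime = struct.unpack_from("<Q", data, 0x78)[0]
    run_count = struct.unpack_from("<I", data, 0x90)[0]
    return filetime // 10000000 - EPOCH_DELTA, run_count

for pf in sorted(glob.glob(r"F:\case\Prefetch\*.pf")):
    epoch, count = pf_last_run(pf)
    # Emit five-field TLN: time|source|system|user|description
    print("%d|PREF|HOSTNAME||%s last run (run count %d)"
          % (epoch, os.path.basename(pf), count))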
In another instance, I was looking for indications that someone had logged into a system. Reviewing the audit configuration, I saw that what was being audited had been set shortly after the system had been installed, and logins were not being audited. At that point, it didn't make sense to add the Security Event Log information to the timeline.
I do not see the benefit of running regtime.pl to collect all of the Registry key LastWrite times. Again, I know that some folks feel that they need to have everything, but from a forensic analysis perspective, I don't subscribe to that line of thinking...I'm not saying it's wrong, I'm simply saying that I don't follow that line of reasoning. The reason for this is threefold: first, there's no real context to the data you're looking at if you just grab the Registry key LastWrite times. A key was modified...how does that help you without context? Second, you're going to get a lot of noise...particularly due to the lack of context. Looking at a bunch of key LastWrite times, how will you be able to tell if a modification was the result of user or system action? Third, there is a great deal of pertinent time-based information within the data of Registry values (MRU lists, UserAssist subkeys, etc.), all of which is missed if you just run a tool like regtime.pl.
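To illustrate that third point, here's a minimal sketch using Willi Ballenthin's python-registry module (my choice for the sketch, not one of the tools discussed above) against the XP-era UserAssist key; the hive file name is a placeholder:

import codecs
from Registry import Registry  # pip install python-registry

hive = Registry.Registry("NTUSER.DAT")
ua = hive.open("Software\\Microsoft\\Windows\\CurrentVersion\\"
               "Explorer\\UserAssist")

# The key LastWrite time says only that *something* changed, with no
# context as to what, or whether a user or the system did it...
print("UserAssist LastWrite:", ua.timestamp())

# ...while the value data underneath (ROT13-encoded names, with run counts
# and last-run FILETIMEs packed into the binary data) is where the analytic
# value lives...and a LastWrite-only tool never sees it.
for guid_key in ua.subkeys():
    for value in guid_key.subkey("Count").values():
        name = codecs.decode(value.name(), "rot13")
        print(name, "->", len(value.value()), "bytes of packed data")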
A lot of what we do with respect to IR and DF work is about justifying our reasoning. Many times when I've done work, I've been asked, "Why did you do that?"...not so much to second-guess what I was doing, but to see if I had a valid reason for doing it. The days of backing a truck up to the loading dock of an organization and imaging everything are long past. There are engagements where you have a limited time to get answers, so some form of triage is necessary to justify which systems and/or data (i.e., network logs, packet captures, etc.) you're going to acquire. There are also times when, if you grab too many of the wrong systems, you're going to spend a great deal of time and effort justifying that to the customer when it comes to billing. The same is true when you scale down to analyzing a single system and creating a timeline. You can take the "put in everything and then figure it out" approach, or you can take an iterative approach, adding layers of data as it makes sense to do so. Sometimes previewing that data prior to adding it, performing a little pre-analysis, can also be revealing.
Finally, don't get me wrong...I'm not saying that there's just one way to create a timeline...there isn't. I'm just sharing my thoughts and reasoning behind how I approach doing it, as this has been extremely successful for me through a number of incidents.
Resources
log2timeline
Cutaway's SysComboTimeline tools