Random Stuff
Host-Based Digital Analysis
There are a lot of folks with different skill sets and specialties involved in targeted threat analysis and in threat intel collection and dissemination, including researchers who specialize in network traffic analysis, malware reverse engineering, and so on.
One of the benefits I find in host-based analysis is that the disk is one of the least volatile data sources. Ever been asked to answer, definitively, "what data left our organization"? Most often, if you weren't conducting full packet capture at the time the data was leaving, you really have no way of knowing for sure. Information in memory persists longer than what's on the wire, but if you're not there to collect memory within a reasonable time frame, you're likely going to miss the artifacts you're interested in, just the same. While the contents of the disk won't tell you definitively what left the system, artifacts on disk persist far longer than those available via other sources.
With malware RE or dynamic analysis, you're getting a very limited view of what could have happened on the infected host, rather than a view of what did happen. A malware RE analyst with only a sample to work with will be able to tell you what that sample was capable of, but not what actually happened on the infected host. They can tell you that the malware included the capability to perform screen captures and keystroke logging, but they can't tell you whether either or both of those capabilities were actually used.
One of the aspects of targeted threat incidents is the longevity of these groups. During one investigation I worked on a number of years ago, our team found that the original compromise had occurred via a phishing email opened by three specific employees, 21 months prior to our being called for assistance. More recently, I've found evidence of the creation of and access to web shells going back a year prior to the activity that caught our attention in the first place. Many of those who respond to these types of incidents will tell you that it is not at all unusual to find that the intruders had compromised the infrastructure several months (sometimes even a year or more) before the activity that got someone's attention (C2 comms, etc.) was generated, and it's often host-based analysis that will demonstrate that.
Also, what happens when these groups no longer use malware? If malware isn't being used, then what will be monitored on the network and looked for in logs? That's when host-based analysis becomes really important. While quite a few analysts know how to use application prefetch (*.pf) files in their analysis, what happens when the intruder accesses a server, where application prefetching is typically disabled by default? There is a great deal of information available within a system image that can provide insight into what the intruder was doing, what they were interested in, etc., if you know where to go to get it. For example, I've seen intruders use Windows Explorer to access FTP sites, and the only place artifacts of that activity appear is in the user's shellbags.
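Since shellbags come up so often in these cases, here's a minimal sketch (in Python, assuming the python-registry module and a USRCLASS.DAT hive exported from the image) that simply walks the BagMRU structure and prints key LastWrite times. Actually reconstructing the folder paths the user browsed requires decoding the shell item blobs with a dedicated parser; this sketch doesn't attempt that, it just shows that the structure and its timestamps are there to be examined.

```python
# Minimal sketch: enumerate BagMRU keys and their last-write times from an
# exported USRCLASS.DAT hive, using the python-registry module
# (pip install python-registry). The hive name and key path are assumptions
# based on a Vista-and-later user profile.
from Registry import Registry

HIVE = "USRCLASS.DAT"   # assumption: hive exported from the image
BAGMRU = "Local Settings\\Software\\Microsoft\\Windows\\Shell\\BagMRU"

def walk(key, depth=0):
    # Print the key name and its LastWrite time, then recurse into subkeys.
    print("{}{}  (LastWrite: {})".format("  " * depth, key.name(), key.timestamp()))
    for subkey in key.subkeys():
        walk(subkey, depth + 1)

reg = Registry.Registry(HIVE)
walk(reg.open(BAGMRU))
```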
Used appropriately, host-based analysis can assist in scoping an incident, and it can be extremely valuable for collecting detailed information about an intruder's activities, even going back several months.
Now, some folks think that host-based analysis takes far too long to get answers and is not suitable for use in high-tempo environments. When used appropriately, this aspect of analysis can provide some extremely valuable insights. Like the other aspects of analysis (memory, network), host-based analysis can provide findings unique to that aspect that are not available via the others. Full disk acquisition is not always required; nor is completely indexing the image, or running keyword searches across the entire image. When done correctly, answers to critical questions can be retrieved from limited data sources, allowing the response team to take appropriate action based on those findings.
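As an illustration of what I mean by "limited data sources", here's a rough sketch that pulls a handful of high-value artifacts from a mounted image rather than acquiring and indexing the full disk. The mount point, destination, and paths are my own assumptions based on a typical Windows 7-era layout with a single user profile, not a definitive collection list.

```python
# Rough sketch: copy a short list of high-value artifacts from a read-only
# mounted image instead of working from a full disk acquisition.
import os
import shutil

MOUNT = r"F:\mounted_image"   # assumption: image mounted read-only
DEST = r"D:\triage\host01"    # assumption: local triage folder
TARGETS = [
    r"Windows\System32\config\SYSTEM",
    r"Windows\System32\config\SOFTWARE",
    r"Windows\System32\winevt\Logs\Security.evtx",
    r"Windows\Prefetch",
    r"Users\user\NTUSER.DAT",
    r"Users\user\AppData\Local\Microsoft\Windows\UsrClass.dat",
]

for rel in TARGETS:
    src = os.path.join(MOUNT, rel)
    dst = os.path.join(DEST, rel)
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    if os.path.isdir(src):
        shutil.copytree(src, dst, dirs_exist_ok=True)   # Python 3.8+
    else:
        shutil.copy2(src, dst)
```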
RTFM
I recently received a copy of RTFM that I'd purchased, and I have to say, I really like the layout of this book. It is definitely a "field manual": something that can be taken on-site and used to look up common command line options for widely-used tools (particularly when there is no, or limited, external access), and something that an analyst can write their own notes and reminders in. For example, the book includes some common WMIC and PowerShell commands that can be used to quickly collect information from a compromised system. In a lot of ways, it reads like one of the O'Reilly "...in a Nutshell" books...just the raw facts, assuming a certain level of competency in the reader, and no fluff.
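Just as an example of the kind of quick collection the book describes...the commands below are my own illustrative picks, not reproduced from RTFM...here's a short Python wrapper that runs a few WMIC and PowerShell queries on a live system and saves the output with a timestamp.

```python
# Sketch of a "quick collection" pass: run a handful of WMIC and PowerShell
# commands and save each command's output to a timestamped text file.
import datetime
import subprocess

COMMANDS = {
    "processes":  ["wmic", "process", "get",
                   "ProcessId,ParentProcessId,CommandLine", "/format:csv"],
    "services":   ["wmic", "service", "get", "Name,State,PathName", "/format:csv"],
    "autostarts": ["wmic", "startup", "get", "Caption,Command,User", "/format:csv"],
    "ps_procs":   ["powershell", "-NoProfile", "-Command",
                   "Get-Process | Select-Object Id,ProcessName,Path | Format-Table -AutoSize"],
}

stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
for name, cmd in COMMANDS.items():
    result = subprocess.run(cmd, capture_output=True, text=True)  # Python 3.7+
    with open("{}_{}.txt".format(stamp, name), "w") as out:
        out.write(result.stdout)
```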
As anyone who has read my books knows, I have a number of checklists that I use (included in the book materials), and it occurred to me that they'd make a great field manual when pulled together in a similar format. For example, I have a cheatsheet that I use for timeline creation...rather than printing it out over and over, I could put something like this into a field manual that I could then reference when I need to, without having to have an Internet connection or look it up on my system.
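As a taste of what one step from that cheatsheet might look like in practice, here's a rough sketch that walks a mounted image and writes file last-modified times out in the five-field TLN format (Time|Source|System|User|Description). The mount point and host name are placeholders, and this covers only the file system portion of a timeline.

```python
# Sketch: emit file last-modified times from a mounted image as TLN events.
import os

MOUNT = r"F:\mounted_image"   # assumption: image mounted read-only
HOST = "VICTIM-PC"            # placeholder host name

with open("fs_events.tln", "w") as out:
    for root, dirs, files in os.walk(MOUNT):
        for name in files:
            full = os.path.join(root, name)
            try:
                mtime = int(os.stat(full).st_mtime)
            except OSError:
                continue      # skip files we can't stat
            # Time|Source|System|User|Description ("M..." = modified time only)
            out.write("{}|FILE|{}||M... {}\n".format(mtime, HOST, full))
```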
I think that having a field manual that includes commonly used command line options is a great idea. Also, sometimes it's hard to remember all of the different artifacts that fall into different categories, such as 'program execution', or the things to look for if you're interested in determining lateral movement within an infrastructure. Many times, it's hard for me to remember which artifacts on which versions of Windows fall into these categories, and having a field manual would be very useful. There are a number of useful tidbits in my blog that I cannot access if I don't have Internet access, and I can't remember everything (which is kind of why I write stuff into my blog in the first place). A reference guide would be extremely beneficial, I think...and I already have a couple of great sources for this sort of information - my case notes, my blog, etc.
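That kind of lookup can even be kept as data rather than prose. The snippet below is a partial, from-memory mapping of a couple of categories to artifacts, meant as a starting point to extend with your own case notes rather than an exhaustive reference.

```python
# A partial mapping of analysis categories to Windows artifacts, kept as data
# so it can be grepped, extended, or printed on-site.
ARTIFACT_CATEGORIES = {
    "program execution": [
        "Prefetch (*.pf)",                       # often disabled by default on servers
        "UserAssist (NTUSER.DAT)",
        "AppCompatCache/ShimCache (SYSTEM hive)",
        "AmCache.hve (Windows 8 and later)",
    ],
    "lateral movement": [
        "Windows Event Logs (logon types 3 and 10)",
        "Scheduled Tasks",
        "Shellbags / MRU keys showing UNC path access",
    ],
}

def lookup(category):
    # Return the artifact list for a category, or an empty list if unknown.
    return ARTIFACT_CATEGORIES.get(category.lower(), [])

if __name__ == "__main__":
    for item in lookup("program execution"):
        print(item)
```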
Actually, I think that a lot of us have a whole bunch of little tidbits that we don't write down and share, and that don't occur to us during the heat of the moment (because analysis can often be a very high-energy event...), but that would be extremely valuable if they were shared somehow.
I'm not one of those people with an eidetic memory who can remember file and Registry paths, particularly on different versions of Windows, unless I'm using them on a regular basis. The same is true for things like tools available for different artifacts...different tools provide different information, and are useful in different circumstances.
Telling A Story
Chris Pogue recently published a post on the blog of his new employer, Nuix, describing how an investigator needs to be a storyteller. Chris makes some very important points that many of us who work in this field likely see borne out over and over, in particular the three he lists right about the middle of the post. Chris's article is worth a read. And congratulations to Chris on his new opportunity...I'm sure he'll do great.
Something to keep in mind, as well, is that when developing our story...when translating what we've done (log analysis, packet capture and analysis, host-based analysis) into something that C-suite executives can digest...we must be very mindful to do so based on the facts that we've observed. That is to say, we must be sure not to fill in gaps in the story with assumption or embellishment. As "experts" (in the client's eyes), we were asked to provide answers...so when telling our story, expecting the client to just "get it", or giving them a reference to go look up or research, really isn't telling our story. It's being lazy. Our job is to take a myriad of highly technical facts and findings and weave them into a story that allows the C-suite executives to make critical business decisions in a timely manner. That means we need to be correct, accurate, and timely. To paraphrase what I learned in the military, many times a good answer now is better than the best answer delivered too late. We need to keep in mind that while we're looking at logs, network traffic, the output of Volatility plugins, or parsed host-based data, there's a C-suite executive who has to report to a compliance or regulatory board, to whom bits and bytes, flags and Registry values mean absolutely nothing.
All of this also means that we need to be open to exposure and criticism. What Chris says in his article is admittedly easier said than done. How often do we get feedback from clients?
Mentoring
So how do we get better at telling our story, particularly when each response engagement is as different from the others we've done as snowflakes? This leads us right into a thread over on Twitter where mentoring was part of the topic. Our community is in dire need of mentoring. Mentoring is a great way to go about improving what we do, because many times we're so busy and engaged in response and analysis that we don't have the time to step back and see the forest for the trees, as it were. Sometimes it takes an outside influence to get us to see the need to change, or to show us a better way. However, I do not get the impression that many of the folks in our community are open to mentoring, and that impression has very little to do with distance.
First, mentoring should be...in fact, needs to be...an active, give-and-take relationship, and my experience in the community at large (as an analyst, writer, presenter, etc.) has been that there is a great deal of passivity. We rarely see thoughtful reviews of things such as books, presentations, and conferences in this community. People don't want to share their thoughts, or have their name associated with doing so, and without that give-and-take we're missing a great opportunity for overall improvement and advancement in this industry.
Second, mentoring opens the mentee to exposing what they're currently doing. Very few in this community appear to want or seek out that kind of exposure, even if it's limited to just the mentor. Years ago, I was part of a team and our manager instructed everyone to upload the reports that they'd sent to clients to a file share. After several months, I accessed the file share to upload my most recent report, and found that the folder for that quarter was empty, even though I knew that other analysts had been working really hard and billing a great deal of hours. Our manager conducted an audit and found that only a very few of us were following his instructions. While there was never any explanation that I was aware of for other analysts not uploading their reports, my thought remains that they did not want to expose what they were doing. As Chris mentioned in his article, he's been tasked with reviewing reports provided to clients by other firms. When we were on the IBM ISS ERS team together, I can remember him reviewing two such reports. I've been similarly tasked during my time in this field, and I've seen a wide range of what's been sent to clients. I've taken those experiences and tried to incorporate them into how I write my reports; I covered a great deal of this in chapter 9 of Windows Forensic Analysis 4/e.