Proactive IR
There are a couple of things that are true about security in general, and IR specifically. One is that security seems to be difficult for some folks to understand (it's not generally part of our culture), so those of us in the industry tend to use a lot of analogies in an attempt to describe things to others who aren't in our area of expertise. Sometimes this works, sometimes it doesn't.
Another thing that's true is that the current model for IR doesn't work. For consulting companies, it's hard to keep a staff of trained, dedicated, experienced responders available and on the bench, because if they sit unused they get pulled off into other areas (because those guys "do security stuff"), and like many areas of information security (web app assessments, pen testing, malware RE, etc.), the hard-core technical skills are perishable. Most companies that need such skills simply don't keep these sorts of folks around; they look to consulting companies to provide this service.
Why doesn't this work? Think about it this way...who calls emergency incident responders? Well, those who need emergency incident response, of course. Many of us who work (or have worked) as incident responders know all too well what happens...the responders show up, often well after the incident actually occurred, and have to first develop an understanding of not just what happened (as opposed to what the customer thinks may have happened), but also "get the lay of the land"; that is, understand what the network infrastructure "looks like", what logs may be available, etc. All of this takes time, and that time means that (a) the incident isn't "responded to" right away, and (b) the clock keeps ticking as far as billing is concerned. Ultimately, what's determined with respect to the customer's needs really varies; in fact, the questions that the customer had (i.e., "what data left our network?") may not be answered at all.
So, if it doesn't work, what do we do about this? Well, the first thing is that a cultural shift is needed. Now, follow me here...all companies that provide a service or product (which is pretty much every one of them) have business processes in place, right? There's sales, customer validation, provisioning and fulfillment, and billing and collections...right? Companies have processes in place (documented or otherwise) for providing their product or service to customers, and then getting paid. Companies also have processes in place for hiring and paying employees...because without employees to provide those products or services, where would you be?
Ever since I started in information security, one of the things I've seen across the board is that most companies do not have information security as a business process. Companies will process, store and manage all manner of sensitive data...PCI, PHI, PII, critical intellectual property, manufacturing processes and plans, etc...and not have processes for protecting that data, or responding to incidents involving the possible exposure or modification of that data.
Okay, how about those analogies? Like many, I consider my family to be critical, so I have smoke alarms and fire extinguishers in my home, we keep basic first aid materials on hand, etc. So, essentially, we have measures in place to prevent certain incidents and detect others, and we've taken steps to ensure that we can respond appropriately to protect those items we've deemed "critical".
Here's another analogy...when I was an undergraduate, we were required to take boxing. If you're standing in a class and see everyone in line getting punched in the face because they don't keep their gloves up, what do you do? Do you stand there and convince yourself that you're not going to get punched in the face? When you do get punched in the face because you didn't keep your gloves up, do you blame the other guy ("hey, dude! WTF?!?!") or do you accept responsibility for getting punched in the face? Or do you see what's happening, realize that it's inevitable, listen to what you're being told, develop a culture of security, and get your gloves up? The thing about getting punched in the face is that no matter what you say or do afterward, the fact remains...you got punched in the face.
Here's another IRL example...I recently ran across a WaPo article that describes how farms in Illinois are pre-staging critical infrastructure information in an easily accessible location for emergency responders; the intention is to "prevent or reduce property damage, injuries and even deaths" in the event of an incident. Variations of the program have reportedly been rolled out in other states, and seem to be effective. What I find interesting about the program is that in Illinois, aerial maps are taken to each farm, and the farmers (those who established, designed, and maintain the infrastructure) assist in identifying structures, etc. This isn't a "here's $40K, write us a CSIRP"...instead, the farmer has to take some ownership in the process, and I'd guess they do it because a one-hour or one-afternoon interview can mean the difference between minor damage and losing everything.
Sound familiar?
As a responder, I'm aware of various legislation and regulatory bodies that have mandated the need for incident response capabilities...Visa PCI, NCUA, etc. States have laws for notification in the case of PII breaches, which indirectly require an IR capability. Right now, who's better able to respond to a breach...local IT staff who know and work in the infrastructure every day (and just need a little bit of training in incident response and containment) or someone who will arrive on-site in anywhere between 6 and 72 hours, and will still need to develop an understanding of your infrastructure?
If the local IT staff knew how to respond appropriately, and were able to contain the incident and collect the necessary data (because they had the training, tools, and processes for doing so), analysis performed by that trusted third-party adviser could begin much sooner, reducing response time and overall cost. If the local IT staff (under the leadership of a C-level executive, like the farmer) were to take steps to prepare for the incident...identify and correct shortfalls in the infrastructure, determine where configuration changes to systems or the addition of monitoring would assist in preventing and detecting incidents, determine where critical data resides and transits, develop a plan for response, etc...just as is mandated in compliance requirements, then the entire game would change. Incidents would be detected by the internal staff closer to when they actually occur...rather than months later, by an external third party. Incident response would begin much more quickly, and containment and scoping would follow suit.
Let's say you have a database containing 650K records (PII, PCI, PHI, whatever). According to most compliance requirements, if you cannot explicitly determine which records were exposed, you have to report on ALL of them. Think of the cost associated with that...direct costs of reporting and notification, followed by indirect costs of cleanup, fines, lawsuits, etc. Now, compare that to the cost of doing something like having your DBA write a stored procedure (one that includes authorization and logging) for accessing the data, rather than simply allowing direct access to it.
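Just to make that last point concrete, here's a minimal sketch of what mediated, logged access might look like. Everything in it (the SQLite backend, the table and column names, the roles) is hypothetical and purely for illustration; the point is that every read goes through one authorized, logged path, so after an incident you can say exactly which records were touched instead of having to report all 650K.

```python
# Minimal sketch of mediated, logged access to sensitive records.
# Hypothetical example: SQLite with made-up "customers" and "access_log" tables.
import sqlite3
import datetime

AUTHORIZED_ROLES = {"billing", "support"}   # hypothetical roles permitted to read this data

def get_customer_record(conn, user, role, customer_id):
    """Fetch one record through a single gated path, logging every access."""
    if role not in AUTHORIZED_ROLES:
        raise PermissionError(f"{user} ({role}) is not authorized to read customer data")
    # Record who touched which record, and when...this is what lets you scope exposure later
    conn.execute(
        "INSERT INTO access_log (ts, user, role, customer_id) VALUES (?, ?, ?, ?)",
        (datetime.datetime.utcnow().isoformat(), user, role, customer_id),
    )
    row = conn.execute(
        "SELECT customer_id, name, card_number FROM customers WHERE customer_id = ?",
        (customer_id,),
    ).fetchone()
    conn.commit()
    return row

# Self-contained demo setup (in-memory database with one fake record)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (customer_id INTEGER, name TEXT, card_number TEXT)")
conn.execute("CREATE TABLE access_log (ts TEXT, user TEXT, role TEXT, customer_id INTEGER)")
conn.execute("INSERT INTO customers VALUES (1, 'Test Customer', '4111-xxxx-xxxx-1111')")
print(get_customer_record(conn, "jsmith", "billing", 1))
```

In a real environment this logic would likely live in the database itself (the stored procedure the DBA writes), but the idea is the same: one controlled path in, with authorization and logging on that path.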
Being ready for an incident is going to take work, but it's going to be less costly in the long run when (not if) an incident occurs.
What are some things you can do to prepare? Identify logging sources and, if necessary, modify them appropriately (add Process Tracking to your Windows Event Logs, increase log sizes, set up a means for centralized log collection, etc.). Develop and maintain accurate network maps, and know where your critical data is located. The problem with hiring someone to do this for you is that you don't have any ownership; when the job's done, you have a map that is an accurate snapshot, but how accurate is it 6 months later? Making incident detection and tier 1 response (i.e., scoping, data collection) a business process, with the help of a trusted adviser, is going to be quicker, easier and far less costly in the long run, and those advisers will be there when you need the tier 3 analysis completed.
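For the Windows logging pieces mentioned above, here's a rough sketch using the native auditpol and wevtutil tools. It assumes an elevated prompt; verify the switches against your own environment before relying on them.

```python
# Rough sketch of local Windows logging prep: enable process creation auditing
# (the events folks refer to as Process Tracking, e.g. event ID 4688) and
# increase the Security event log size. Run from an elevated prompt.
import subprocess

def run(cmd):
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Enable Process Creation auditing via the advanced audit policy
run(["auditpol", "/set", "/subcategory:Process Creation", "/success:enable"])

# Bump the Security event log to 1 GB so events aren't overwritten within days
run(["wevtutil", "sl", "Security", "/ms:1073741824"])

# Centralized collection (Windows Event Forwarding, syslog, a SIEM, etc.) still
# has to be set up separately; this sketch only covers the local pieces.
```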
What about looking at things like Carbon Black? Cb has a number of uses beyond just IR, and can help you solve other problems as well. However, with respect to IR, it can not only tell you what was run and when, but it can also keep a copy of the executable for you...so when it comes to determining the capabilities of the malware downloaded to your system, you already have a copy available; call that trusted adviser and have them analyze it for you.
Remember the first Mission: Impossible movie? After his team was wiped out, Ethan made it back to the safe house and, as he reached the top of the stairwell, took the light bulb out of the socket, crushed it in his jacket, and spread the shards on the floor as he backed toward his room. What this does is provide a free detection mechanism...anyone approaching the room isn't going to know that the shards are there until they step on them and alert Ethan to their presence; incident detection.
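Here's a digital version of the light bulb...a hypothetical "canary" file tripwire. The path and the bait content below are made up, and since access-time updates are often disabled, this sketch only alerts on modification or deletion; the point is simply that detection doesn't have to be expensive.

```python
# Hypothetical decoy/canary file sketch: plant a fake credentials file, then
# poll it and alert if anyone tampers with it or removes it.
import os
import time

DECOY = r"C:\Users\Public\passwords.txt"    # hypothetical decoy location

def plant_decoy():
    """Drop the bait file if it isn't already there."""
    if not os.path.exists(DECOY):
        with open(DECOY, "w") as f:
            f.write("backup_admin : Winter2024!\n")   # fake content...nothing real goes in a decoy

def watch(interval=30):
    """Poll the decoy; alert if it's modified or deleted (runs until tripped)."""
    baseline = os.stat(DECOY).st_mtime
    while True:
        time.sleep(interval)
        if not os.path.exists(DECOY):
            print("ALERT: decoy file was deleted")
            return
        if os.stat(DECOY).st_mtime != baseline:
            print("ALERT: decoy file was modified")
            return

plant_decoy()
watch()
```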
So what are you going to do? Wait until an incident happens, or worse, wait until someone tells you that an incident happened, and then call someone for help? You'll have to find someone, sign contracts, get them on-site, and then help them understand your infrastructure so that they can respond effectively. When they first arrive, you're not going to trust them (they're new, after all) and you're not going to speak their language. In most cases, you're not going to know the answers to their questions...do we even have firewall logs? What about DHCP...do we log that? What will happen is that you will continue to hemorrhage data throughout this process.
The other option is to have detection mechanisms and a response plan in place and tested, and have a trusted adviser that you can call for assistance. Your local IT staff needs to be trained to perform the initial response, scoping and assessment, and even containment. While the IT director is on the phone with that trusted adviser, designated individuals are collecting and preserving data...because they know where it is and how to get it. The questions that the trusted adviser (or any other consulting firm) would ask are being answered before the call is made, not afterward ("Uh...we had no idea that you'd ask that..."). That way, you don't lose the whole farm, and if you do get punched in the face, you're not knocked out.
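As an example of what "collecting and preserving data" might look like while that call is happening, here's a bare-bones sketch of a tier 1 collection script for a Windows host. Again, it's a sketch under assumptions (elevated prompt, native tools available), not a complete answer.

```python
# Bare-bones tier 1 collection sketch for a Windows host: grab some volatile
# data, export the Security event log, and hash everything so the collected
# data can be preserved and handed to the analyst later.
import hashlib
import os
import subprocess
import datetime

outdir = "triage_" + datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
os.makedirs(outdir, exist_ok=True)

commands = {
    "tasklist.txt": ["tasklist", "/v"],     # running processes
    "netstat.txt": ["netstat", "-ano"],     # network connections with owning PIDs
    "ipconfig.txt": ["ipconfig", "/all"],   # network configuration
}

for name, cmd in commands.items():
    with open(os.path.join(outdir, name), "w") as f:
        subprocess.run(cmd, stdout=f, stderr=subprocess.STDOUT)

# Export the Security event log for later analysis
subprocess.run(["wevtutil", "epl", "Security", os.path.join(outdir, "Security.evtx")])

# Hash everything collected so integrity can be demonstrated later
with open(os.path.join(outdir, "hashes.txt"), "w") as manifest:
    for name in sorted(os.listdir(outdir)):
        if name == "hashes.txt":
            continue
        path = os.path.join(outdir, name)
        digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
        manifest.write(f"{digest}  {name}\n")
```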
By the way...one final note. This doesn't apply solely to large companies. Small businesses are losing money hand over fist, and some are even going out of business...you just don't hear about it as much. These same things can be done inexpensively and effectively, and they need to be done. The difference is, do you get it done, even if you have to set up a payment plan, or do you sit by and wait for an incident to put you out of business and lay off your employees?