When First Responders Attack!!
It still happens...an "event" or "incident" occurs within an organization, and the initial response from the folks on-site (most often, the organization's IT staff) obliterates some or all of the "evidence" of the event. Consultants are then called in to determine "how they got in", "how far they got" into the infrastructure, and "what data was taken", and are unable to completely answer those questions (if at all) due to what happened in the first hours (or, in some cases, days) after the incident was discovered.
Check out Ignorance wrecking evidence, from AdelaideNow in Australia. It's an excellent read from the perspective of law enforcement, but a good deal of what's said applies across the board.
One of the things that consultants see very often is a disparity between what first responders say they did during the initial interview and what the analyst sees during the examination. Very often, the consultant is told that the first responders took the system offline but didn't do anything else. However, analysis of the image shows that anti-virus and anti-spyware tools were installed and run, files were deleted, and files were even restored from backup. A great deal of this becomes visible once the approximate timeline of the incident is determined...and very often, you'll see an administrator log in, install or delete/remove things, etc., and then say that they didn't do anything.
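To make that concrete, here's a rough, minimal sketch of the kind of quick-and-dirty filesystem timeline that surfaces that sort of activity. The mount point and the output format are just assumptions for illustration, not any particular tool's conventions; in practice you'd pull this from an image with purpose-built tools and fold in Event Log and Registry data as well.

```python
#!/usr/bin/env python3
# Quick-and-dirty filesystem timeline sketch: walk a (read-only!) mounted
# image and emit one line per file per timestamp, sorted chronologically,
# so that clusters of "admin activity" (AV installs, deletions, logins)
# stand out. The default mount point below is purely hypothetical.
import os
import sys
from datetime import datetime, timezone

MOUNT_POINT = sys.argv[1] if len(sys.argv) > 1 else "/mnt/image"  # assumption

events = []
for root, dirs, files in os.walk(MOUNT_POINT):
    for name in files:
        path = os.path.join(root, name)
        try:
            st = os.stat(path, follow_symlinks=False)
        except OSError:
            continue  # unreadable entry; skip it
        # m = last modified, a = last accessed, c = metadata change/creation
        for label, ts in (("m", st.st_mtime), ("a", st.st_atime), ("c", st.st_ctime)):
            events.append((ts, label, path))

# Sort by time so related activity lands next to each other in the output
for ts, label, path in sorted(events):
    stamp = datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    print(f"{stamp}Z  [{label}]  {path}")
```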
Why would this matter? Let's take a look...
Many analysts still rely on traditional examination techniques, focusing on file MAC times, etc. An admin logs into a system and runs an AV or anti-spyware scan (or both...or just "pokes around"...something that happens a LOT more than I care to think about...), and now all of the file access times on the system have been modified, and perhaps some files have been deleted. Anyone remember this article on anti-forensics that appeared in CIO Magazine? Why worry about that stuff when just as much activity of this nature occurs due to either the operating system itself or regular, day-to-day IT network ops?
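To see why "just poking around" matters, here's a minimal sketch showing that merely reading a file can overwrite the last-accessed time an examiner might have wanted to rely on. This assumes the filesystem is actually recording access times (a Linux volume mounted noatime, or NTFS with last-access updates disabled, won't show the effect); the scratch file and the thirty-day back-dating are just there to simulate "old" evidence.

```python
#!/usr/bin/env python3
# Demonstrate access-time stomping: reading a file (the "AV scan" or the
# admin "poking around") replaces the last-accessed timestamp.
# Assumes the filesystem records access times at all.
import os
import tempfile
import time
from datetime import datetime

# Create a scratch file and back-date its timestamps to simulate old evidence
fd, path = tempfile.mkstemp()
os.write(fd, b"evidence")
os.close(fd)
old = time.time() - 30 * 24 * 3600  # thirty days ago
os.utime(path, (old, old))          # (atime, mtime)

before = os.stat(path).st_atime
with open(path, "rb") as f:         # the innocent "look around"
    f.read()
after = os.stat(path).st_atime

print("atime before:", datetime.fromtimestamp(before))
print("atime after: ", datetime.fromtimestamp(after))
print("stomped!" if after > before else "unchanged (atime updates disabled?)")

os.remove(path)
```

Now multiply that by every file an AV scanner touches, and the "last accessed" column of your timeline tells you about the responder, not the intruder.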
So what's the solution? Education and training, starting with senior management. They have to make it important. After all, they're the ones who tell IT that systems have to stay up, right? If senior management were really aware of how many times (and how easily) their organization got punked or p0wned by some 15-year-old kid, then maybe they'd put some serious thought and effort into protecting their organization, their IT assets, and, more importantly, the (read: YOUR) data that they store and process.