Updates
Office Maldocs, SANS Macros
HelpNetSecurity had a fascinating blog post recently on a change in tactics that they'd observed (actually, it originated from a SANS handler diary post), in that an adversary was using a feature built into MS Word documents to infect systems, rather than embedding malicious macros in the documents. The "feature" is one in which links embedded in the document are updated when the document is opened. In the case of the observed activity, the link update downloaded an RTF document, and things just sort of took off from there.
I've checked my personal system (Office 2010) as well as my corp system (Office 2016), and in both cases, this feature is enabled by default.
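As a quick triage step (my own addition, not something from the SANS diary or the HelpNetSecurity post), keep in mind that a .docx file is just a zip archive, and any external link targets show up in the package relationship files marked TargetMode="External". Here's a minimal sketch along those lines; the file name is a placeholder:

```python
# Minimal sketch: flag external link targets in a .docx (which is a zip archive).
# The file name "suspect.docx" is a placeholder.
import zipfile
import xml.etree.ElementTree as ET

NS = "{http://schemas.openxmlformats.org/package/2006/relationships}"

def external_targets(docx_path):
    """Return (relationship_type, target) pairs marked TargetMode='External'."""
    hits = []
    with zipfile.ZipFile(docx_path) as z:
        for name in z.namelist():
            if not name.endswith(".rels"):
                continue
            root = ET.fromstring(z.read(name))
            for rel in root.findall(NS + "Relationship"):
                if rel.get("TargetMode") == "External":
                    hits.append((rel.get("Type"), rel.get("Target")))
    return hits

if __name__ == "__main__":
    for rtype, target in external_targets("suspect.docx"):
        print(rtype, "->", target)
```

Anything that comes back from a document that has no business reaching out to the Internet is worth a closer look.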
This is a great example of an evolution of behavior, and illustrates the "arms race" that is going on every day in the DFIR community. We can't detect all possible means of compromise...quite frankly, I don't believe there's a comprehensive list out there that we could use as a basis, even if we wanted to. So, the blue team perspective is to instrument in a way that makes sense, so that we can detect these things and then respond as thoroughly as possible.
WMI Persistence
TrendMicro recently published a blog post that went into some detail discussing WMI persistence observed with respect to cryptocurrency miner infections. While such infections aren't necessarily damaging to an organization, in the sense that they don't deprive or restrict the organization's ability to access its own assets and information (I've observed several that went undetected for months...), they are the result of someone breaching the perimeter and obtaining access to a system and its resources.
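If you want to check a live system for this sort of persistence, the place to look is the root\subscription namespace...the event filters, the consumers, and the bindings that tie them together. Here's a minimal sketch using the third-party Python "wmi" package; this is my own illustration, not code from the TrendMicro post:

```python
# Minimal sketch: enumerate WMI persistence objects in root\subscription on a
# live Windows system. Requires the third-party "wmi" package (pip install wmi).
import wmi

def dump_wmi_persistence():
    conn = wmi.WMI(namespace="root\\subscription")
    for cls in ("__EventFilter", "CommandLineEventConsumer",
                "ActiveScriptEventConsumer", "__FilterToConsumerBinding"):
        print("===", cls, "===")
        for obj in conn.query("SELECT * FROM " + cls):
            print(obj)

if __name__ == "__main__":
    dump_wmi_persistence()
```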
Matt Graeber tweeted that on Windows 10, the creation of the WMI persistence mechanism appears in the Windows Event Logs. While I understand that organizations cannot completely ignore their investment in systems and infrastructure, there needs to be some means by which older OSs are rolled out of inventory as they reach the end of vendor support. I have seen, or known that others have seen, active Windows XP and 2003 systems as recently as August 2017; again, I completely understand that organizations have invested a great deal of money, time, and other resources into maintaining the infrastructure that they'd developed (or legacy infrastructures), but from an information security perspective, there needs to be an eye toward (and an investment in) updating systems that have reached end-of-life.
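If I remember correctly, the records in question land in the Microsoft-Windows-WMI-Activity/Operational log (event ID 5861 for new filter-to-consumer bindings); treat that channel and event ID as an assumption to verify on your own Windows 10 systems. A quick way to pull them:

```python
# Minimal sketch: pull recent WMI consumer-binding records from the
# Microsoft-Windows-WMI-Activity/Operational log via wevtutil. The channel
# name and event ID 5861 are assumptions to verify on your own Win10 builds.
import subprocess

CHANNEL = "Microsoft-Windows-WMI-Activity/Operational"
QUERY = "*[System[(EventID=5861)]]"

def wmi_binding_events(count=20):
    cmd = ["wevtutil", "qe", CHANNEL, "/q:" + QUERY,
           "/c:" + str(count), "/rd:true", "/f:text"]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(wmi_binding_events())
```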
I'd had a blog post published on my previous employer's corporate site last year; we'd discovered a similar persistence mechanism as a result of creating a mini-timeline to analyze one of several systems infected with Samas ransomware. In this particular case, prior to the system being compromised and used as a jump host to map the network and deploy the ransomware, the system had been compromised via the same vulnerability and a cryptocoin miner installed. There was a WMI persistence mechanism created at about the same time, and another artifact (i.e., the LastWrite time on the Win32_ClockProvider Registry key had been modified...) on the system pointed us in that direction.
InfoSec Program Maturity
Going back just a bit to the topic of the maturity of IT processes and, by extension, infosec programs, with respect to ransomware...one of the things I've seen a lot of over the past year to 18 months, beyond the surge in ransomware cases that started in Feb 2016, is the questions that clients who've been hit with ransomware have been asking. These have actually been really good questions, such as, "...was sensitive data exposed or exfiltrated?" In most ransomware cases, the immediate urge was to respond, "...no, it was ransomware...", but after pausing for a bit, the real answer was, "...we don't know." Why didn't we know? Because the systems weren't instrumented, and we didn't have the necessary visibility to be able to answer the questions. Not just definitively...at all.
More recently, with the NotPetya incidents, we'd see cases where the client had Process Tracking enabled, so that the Security Event Log was populated with pertinent records, albeit without the full command lines. As such, we could see the sequence of commands associated with NotPetya, and we could say with confidence that no additional commands had been run, but without the full command lines, we couldn't state definitively that nothing else untoward had also been done.
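As a point of reference (and as an assumption about where the setting lives, not something specific to these engagements), getting full command lines into the 4688 process creation records comes down to the "Include command line in process creation events" policy, which is visible in the Registry; a quick check might look like this:

```python
# Minimal sketch: check whether full command lines are being recorded in
# Security event ID 4688 records. The key/value names reflect the documented
# "Include command line in process creation events" policy; verify locally.
import winreg

KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit"
VALUE = "ProcessCreationIncludeCmdLine_Enabled"

def cmdline_logging_enabled():
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
            data, _ = winreg.QueryValueEx(key, VALUE)
            return data == 1
    except OSError:
        return False

if __name__ == "__main__":
    print("Full command-line logging enabled:", cmdline_logging_enabled())
```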
So, some things to consider when thinking about or discussing the maturity of your IT and infosec programs include asking yourself, "...what are the questions we would have in the case of this type of incident?", and then, "...do we have the necessary instrumentation and visibility to answer those questions?" Anyone who holds sensitive data (PHI, PII, PCI, etc.) is going to get the question, "...was sensitive data exposed?"...so how would you determine that? Were you tracking full process command lines to determine if sensitive data was marshaled and prepared for exfil?
Another aspect of this to consider is, if this information is being tracked because you do, in fact, have the necessary instrumentation, what's your aperture? Are you covering just the domain controllers, or have you included other systems, including workstations? Then, depending on what you're collecting, how quickly can you answer the questions? Is it something you can do easily, because you've practiced and tweaked the process, or is it something you haven't even tried yet?
Something that's demonstrated (to me) on a daily basis is how mature the bad guy's process is, and I'm not just referring to targeted nation-state threat actors. I've seen ransomware engagements where the bad guy got into an RDP server, and within 10 minutes escalated privileges (his exploit included the CVE number in the file name), deployed ransomware, and got out. There are plenty of blog posts that talk about how targeted threat actors have been observed reacting to stimulus (i.e., attempts at containment, indications of being detected, etc.), and returning to infrastructures following eradication and remediation.
WEVTX
The folks at JPCERT recently (June) published their research on using Windows Event Logs to track lateral movement within an infrastructure. This is really good stuff, but it's dependent upon system owners properly configuring their systems so that the log records referenced in the report are actually generated (we just talked about infosec programs and visibility above...).
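As one small illustration of putting that sort of research to work (the event IDs below are just common examples, not a summary of the JPCERT list), you could sweep an exported Security.evtx for records of interest; this sketch assumes the third-party python-evtx package:

```python
# Minimal sketch: sweep an exported Security.evtx for event IDs commonly
# associated with lateral movement (4624 logons, 4648 explicit-credential
# logons, 4672 special privileges). Requires the third-party python-evtx
# package; the IDs chosen here are examples, not the full JPCERT list.
from xml.etree import ElementTree as ET
from Evtx.Evtx import Evtx

NS = "{http://schemas.microsoft.com/win/2004/08/events/event}"
INTERESTING = {"4624", "4648", "4672"}

def sweep(evtx_path):
    """Yield (timestamp, event_id) for records matching INTERESTING."""
    with Evtx(evtx_path) as log:
        for record in log.records():
            root = ET.fromstring(record.xml())
            eid = root.find("./{0}System/{0}EventID".format(NS))
            if eid is not None and eid.text in INTERESTING:
                yield record.timestamp(), eid.text

if __name__ == "__main__":
    for ts, eid in sweep("Security.evtx"):
        print(ts, eid)
```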
This is also an inherent issue with SIEMs...no amount of technology will be useful if you're not populating it with the appropriate information.
New RegRipper Plugin
James shared a link to a one-line PowerShell command designed to detect the presence of the CIA's AngelFire infection. After reading this, it took me about 15 min to write a RegRipper plugin for it and upload it to the GitHub repository.
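I won't reproduce the one-liner or the plugin here, but the general shape of turning an indicator like that into a scriptable check is pretty simple; the key path below is purely a hypothetical placeholder, not the actual AngelFire artifact:

```python
# Minimal sketch of turning a Registry indicator into a scriptable check.
# The key path below is a hypothetical placeholder standing in for the real
# AngelFire artifact; substitute the indicator from the published one-liner
# or the RegRipper plugin.
import winreg

SUSPECT_KEY = r"SYSTEM\CurrentControlSet\Services\ExampleSuspectDriver"  # placeholder

def key_exists(path):
    try:
        winreg.CloseKey(winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path))
        return True
    except OSError:
        return False

if __name__ == "__main__":
    print("Indicator present:", key_exists(SUSPECT_KEY))
```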