More DFIR Brain Droppings
Ransomware (TL;DR)
This past week, the City of Atlanta was hit with ransomware. This is not unusual...I've been seeing a lot of municipalities in the US...cities, counties, etc...getting hit since last fall. Some of them aren't all that obvious to anyone who isn't trying to obtain government services, of course, and the media articles themselves may not be all that easy to find. Here are just a few examples:
Mecklenburg County update
Spring Hill, TN update
Murfreesboro, TN update
Just a brief read of the above articles, and maybe a quick Google search for other articles specific to the same venues, will make it clear how services in these jurisdictions are affected by these attacks.
The ArsTechnica article that covered the Atlanta attack mentioned that the ransomware used in the attack was Samsam. I've got some experience with this variant, and Kevin Strickland wrote a great blog post about the evolution of the ransomware itself, as well. (Note: both of those blog posts are almost two years old at this point.) The intel team at SWRX followed that up with some really good information about the Samas ransomware campaigns recently. The big take-away from this is that the Samsam (or Samas) ransomware has not historically been email-borne. Instead, it's delivered (in most cases, rather thoughtfully) after someone has gained access to the organization, escalated privileges (if necessary), located the specific systems that they want to target, and then manually deployed the ransomware. All of these actions could have been detected early in the attack cycle. Think of it this way...instead of being delivered via mail (like a letter), it's more like Amazon delivery folks dropping the package...for something that you never ordered...off in your house...only you didn't have one of those special lock-and-camera combinations that they talked about; there was just a door or window left open. Yeah, kind of like that.
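As an aside...here's a minimal sketch of what early detection of that sort of activity might look like. This is my own illustration, not something from the referenced posts; it assumes you've exported Security.evtx from a suspect system and have the python-evtx module installed. The event ID (4624) and logon type (10, RemoteInteractive/RDP) are standard Windows values; the script itself is hypothetical.

    import sys
    from Evtx.Evtx import Evtx  # pip install python-evtx

    # Hypothetical triage sketch: flag interactive RDP logons (Event ID
    # 4624, LogonType 10) in an exported Security.evtx file...the sort
    # of activity that precedes a manual Samas deployment.
    def rdp_logons(evtx_path):
        with Evtx(evtx_path) as log:
            for record in log.records():
                xml = record.xml()
                if ">4624<" in xml and '"LogonType">10<' in xml:
                    yield xml

    if __name__ == "__main__":
        for event_xml in rdp_logons(sys.argv[1]):
            print(event_xml)

Run against the Security.evtx from a jump box or an Internet-facing RDP server, something like this (or the equivalent query in your SIEM) surfaces the interactive logons that come well before the ransomware deployment itself.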
On the topic of costs, the CNN article includes:
The Federal Bureau of Investigation and Department of Homeland Security are investigating the cyberattack...
...and...
The city engaged Microsoft and a team from Cisco's Incident Response Services in the investigation...
Okay, so they're getting a lot of help, and that's good. But they're getting a lot of help, and that's going to be expensive.
Do you, the reader, think that when the staff and leadership for the City of Atlanta sat down for their budget meetings last year, they planned for this sort of thing? When something like this occurs, the direct costs include not only the analysts' time, but food, lodging, etc. Not only does it add up over time, it's multiplied...$X per analyst per day, times Y analysts, times Z days.
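Just to illustrate that multiplication (these numbers are made up for illustration, not taken from the Atlanta engagement), a quick back-of-the-envelope calculation:

    # Hypothetical back-of-the-envelope IR cost estimate: $X per analyst
    # per day, times Y analysts, times Z days, plus food/lodging/travel.
    daily_rate = 2500        # $X: assumed consulting rate per analyst per day
    expenses_per_day = 350   # assumed food, lodging, travel per analyst per day
    analysts = 4             # Y: assumed team size
    days = 21                # Z: assumed engagement length

    total = (daily_rate + expenses_per_day) * analysts * days
    print(f"Estimated direct IR cost: ${total:,}")  # -> $239,400

Three weeks with a four-person team and you're already well north of $200K...and that's before downtime, recovery, and legal costs.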
From the USAToday article on the topic:
Such attacks are increasingly common. A report last year from the Ponemon Institute found that half of organizations surveyed had had one or more ransomware incidents in 2017, and 40% had experienced multiple attacks.
An IBM study found that 70% of businesses have been hit with ransomware. Over half of those paid more than $10,000 to regain their data and 20% paid more than $40,000.
In January, an Indiana hospital system paid a $50,000 ransom to hackers who hijacked patient data. The ransomware attack accessed the computers of Hancock Health in Greenfield through an outside vendor's account Thursday. It quickly infected the system by locking out data and changing the names of more than 1,400 files to "I'm sorry."
Something else that's not addressed in that Ponemon report, or at least not in the quote from the USAToday article, is that even when the ransom is paid, there's no guarantee that you'll get your files back. Just over a year ago, I did some DFIR analysis work for a client that paid the ransom, didn't get all of their files back, and then paid us to come in and try to provide some answers. So the costs keep stacking up.
What about other direct and/or indirect costs associated with ransomware attacks? There are hits to productivity and the ability to provide services, costs of downtime, costs to bring consultants in to assist in discovery and recovery, etc. While we don't have the actual numbers, we can see these costs stacking up. But what about other costs? Not too long ago, another municipality got hit with ransomware, and a quote from one of the articles was along the lines of, "...we have no evidence that sensitive information was accessed."
Yes, of course you don't. There was no instrumentation or visibility in place to detect the early stages of the attack, and as a consequence, nothing about the attack was recorded. On the surface, that sounds like a statement meant to comfort those whose personal data was held by that organization, but focusing just 1nm beyond that statement reveals an entirely different picture: there is "no evidence" because no one was watching.
So what can we expect to see in the future? We're clearly going to see more of these attacks, because they pay; there's a monetary incentive for conducting these attacks, and an economic benefit (to the attacker) in doing so. As such, there is no doubt in my mind that two things are going to happen. One is that with the advent of GDPR and other similar legislation, this is going to open up a whole new avenue for extortion. Someone's going to gain access to an infrastructure, collect enough information to have solid proof that they've done so, and use that proof as the basis for extortion. Why is this going to work? Because going public with that information is going to put the victim organization in the legislative spotlight in a way that they will not be able to avoid.
The other thing that's going to happen is that when statements regarding access to 'sensitive data' are made, there are going to be lawsuits demanding proof. I don't think that most individuals (regardless of country) have completely given up on "privacy" yet. Someone...or a lot of someones...is going to go to court to demand definitive answers. A hand-waving platitude isn't going to be enough; in fact, it's going to be the catalyst for more questions.
Something else I thought was pretty interesting from the CNN article:
When asked if the city was aware of vulnerabilities and failed to take action, Rackley said the city had implemented measures in the past that might have lessened the scope of the breach. She cited a "cloud strategy" to migrate critical systems to secure infrastructure.
This is a very interesting question (and response), in part because I think we're going to see more questions just like it. We'll never know if this question was a direct result of the Equifax testimony (by the former CEO, before Congress), but it's plausible enough to assume that's the case.
And the inevitable has occurred...the curious have started engaging in their own research and posting their findings publicly. Now, this doesn't necessarily mean that this was the avenue used by the adversary, but it does speak to the question from the CNN article (i.e., "...were you aware of vulnerabilities...").
Attacker Sophistication
Here's a fascinating GovTech article that posits that some data breaches may be the result of professional IT pride. As I read through the article, I kept thinking of the Equifax breach, which reportedly occurred for want of a single patch, and then my mind pivoted over to what was found recently via online research conducted against the City of Atlanta.
From the article, "...it's sometimes an easy out to say: 'the bad guys are just too good.'" Yes, we see that a lot...statements made without qualification about the "sophisticated" attacker. But for those of us who've been in the trenches, analyzed the host and log data, and determined the initial avenue that the attacker used to get into the infrastructure...how "sophisticated" does one have to be to guess the password for an Internet-accessible Terminal Services/RDP account when that password is on every password list? Or to use a publicly available exploit against an unpatched, unmanaged system? "Hey, here's an exploit against JBoss servers up through and including version 6, and I just found a whole bunch of JBoss servers running version 4...hold my beer."
In my experience, the attacker only needs to be as sophisticated as the target requires. I worked an engagement once where the adversary got in, collected and archived data, and exfiltrated the archive out of the infrastructure. Batch files were left in place, and the archive was copied (not moved) from system to system, without being deleted from the previous location. The archive itself didn't even have a password on it. Someone said that the adversary was sloppy and not sophisticated. However, the client had been informed of the breach via third-party notification, weeks after the data was taken...sloppy or not, the adversary was sophisticated enough to get the job done without being detected.
Tool Testing
Daniel Bohannon has a great article up about testing your tools.
I'd like to add to his comments about tool testing...specifically, if you're going to effectively test your tools, you need to understand what the tools do (or at least what they're supposed to do...) and how they work. Perhaps not at the bit level (you don't have to be the tool developer), but there are some really simple troubleshooting steps you can follow if you have an issue with a tool, before you either ask a question or report the issue.
For one...and Jamie Levy can back me up on this..."don't work" don't work. What I mean is, if you're going to go to a tool developer (or bypass the developer and just go public), it's helpful to understand how the tool works so that you can describe how it appears to not be working. A long time ago, I learned that Volatility does NOT work if the 8GB memory dump you received is just 8GB of zeroes. Surprising, I know.
Oddly enough, the same is true for RegRipper; if you extract a hive file from an image and for some reason it's just a big file full of zeroes, RegRipper will throw errors. This is also true if you use reg.exe to export hive files, or portions thereof; RegRipper is intended to be run against the raw hive file, NOT text files with a .reg extension. Instead of using "reg export", use "reg save".
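A couple of quick sanity checks can save everyone a bug report. Here's a minimal sketch (my own, not from either tool's documentation) that verifies a memory dump isn't just a run of zeroes, and that a hive file starts with the "regf" signature that raw Registry hives carry. "reg save" (i.e., reg save HKLM\Software software.hiv) produces a raw hive with that signature; "reg export" produces a .reg text file, which RegRipper cannot parse.

    import sys

    # Hypothetical pre-flight checks before running Volatility or RegRipper.
    def is_all_zeroes(path, sample_size=1024 * 1024):
        # Read the first 1MB; a memory dump that's nothing but zeroes
        # here is almost certainly a bad acquisition, not a tool problem.
        with open(path, "rb") as f:
            chunk = f.read(sample_size)
        return not chunk.strip(b"\x00")

    def is_raw_hive(path):
        # Raw Registry hive files ("reg save" output, or hives extracted
        # from an image) begin with the 4-byte signature "regf".
        with open(path, "rb") as f:
            return f.read(4) == b"regf"

    if __name__ == "__main__":
        target = sys.argv[1]
        print("all zeroes:", is_all_zeroes(target))
        print("raw hive:  ", is_raw_hive(target))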
The overall point is that it's important to test the tools you're using, but it's equally important to understand what the tools are supposed/designed to do.
Rattler
Speaking of tools, not long ago I ran across a reference to Rattler, which is described as "...a tool that automates the identification of DLL's which can be used for DLL preloading attacks." Reading through the blog post that describes the issue that Rattler addresses leads me to believe that this is the DLL search order hijacking issue: if you place a malicious DLL in the same folder as the executable, and the executable loads a DLL with that same name from another folder (e.g., C:\Windows\system32), then your malicious DLL will be loaded instead of the legit DLL.
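To illustrate the idea (this is my own rough sketch, not necessarily how Rattler itself works), here's a check that walks an executable's import table with the pefile module and lists any imported DLL names that don't already exist alongside the EXE...i.e., names under which a malicious DLL could potentially be planted:

    import os
    import sys
    import pefile  # pip install pefile

    # Rough sketch of the preloading idea: list the DLLs an EXE imports
    # by name; any that aren't already present in the EXE's own folder
    # could, in principle, be shadowed by a malicious DLL planted there,
    # because the application directory is searched before system32.
    # (KnownDLLs and absolute-path loads are exceptions to this.)
    def candidate_dlls(exe_path):
        exe_dir = os.path.dirname(os.path.abspath(exe_path))
        pe = pefile.PE(exe_path)
        for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
            name = entry.dll.decode("ascii", errors="replace")
            if not os.path.exists(os.path.join(exe_dir, name)):
                yield name

    if __name__ == "__main__":
        for dll in candidate_dlls(sys.argv[1]):
            print("possible preloading candidate:", dll)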