As Artificial Intelligence Evolves, So Does Its Criminal Potential

By JOHN MARKOFF
OCT. 23, 2016
Imagine receiving a phone call from your aging mother seeking your help because she has forgotten her banking password.
Except it’s not your mother. The voice on the other end of the phone call just sounds deceptively like her.
It is actually a computer-synthesized voice, a tour de force of artificial intelligence technology that has been crafted to make it possible for someone to masquerade via the telephone.
Such a situation is still science fiction — but just barely. It is also the future of crime.
The software components necessary to make such masking technology widely accessible are advancing rapidly. Recently, for example, DeepMind, the Alphabet subsidiary known for a program that has bested some of the top human players in the board game Go, announced that it had designed a program that “mimics any human voice and which sounds more natural than the best existing text-to-speech systems, reducing the gap with human performance by over 50 percent.”
The irony, of course, is that this year the computer security industry, with $75 billion in annual revenue, has started to talk about how machine learning and pattern recognition techniques will improve the woeful state of computer security.
But there is a downside.
“The thing people don’t get is that cybercrime is becoming automated and it is scaling exponentially,” said Marc Goodman, a law enforcement agency adviser and the author of “Future Crimes.” He added, “This is not about Matthew Broderick hacking from his basement,” a reference to the 1983 movie “WarGames.”
The alarm about the malevolent use of advanced artificial intelligence technologies was sounded earlier this year by James R. Clapper, the Director of National Intelligence. In his annual review of security, Mr. Clapper underscored the point that while A.I. systems would make some things easier, they would also expand the vulnerabilities of the online world.
The growing sophistication of computer criminals can be seen in the evolution of attack tools like the widely used malicious program known as Blackshades, according to Mr. Goodman. The author of the program, a Swedish national, was convicted last year in the United States.
The system, which was sold widely in the computer underground, functioned as a “criminal franchise in a box,” Mr. Goodman said. It allowed users without technical skills to deploy computer ransomware or perform video or audio eavesdropping with a mouse click.
The next generation of these tools will add machine learning capabilities that have been pioneered by artificial intelligence researchers to improve the quality of machine vision, speech understanding, speech synthesis and natural language understanding. Some computer security researchers believe that digital criminals have been experimenting with the use of A.I. technologies for more than half a decade.
That can be seen in efforts to subvert the internet’s omnipresent Captcha — Completely Automated Public Turing test to tell Computers and Humans Apart — the challenge-and-response puzzle invented in 2003 by Carnegie Mellon University researchers to block automated programs from stealing online accounts.
Both “white hat” artificial intelligence researchers and “black hat” criminals have been deploying machine vision software to subvert Captchas for more than half a decade, said Stefan Savage, a computer security researcher at the University of California, San Diego.
“If you don’t change your Captcha for two years, you will be owned by some machine vision algorithm,” he said.
Surprisingly, one thing that has slowed the development of malicious A.I. has been the ready availability of either low-cost or free human labor. For example, some cybercriminals have farmed out Captcha-breaking schemes to electronic sweatshops where humans are paid a tiny fee to decode the puzzles.
Even more inventive computer crooks have used online pornography as a reward for human web surfers who break the Captcha, Mr. Goodman said. Free labor is a commodity that A.I. software won’t be able to compete with anytime soon.
So what’s next?
Criminals, for starters, can piggyback on new tech developments. Voice-recognition technologies like Apple’s Siri and Microsoft’s Cortana are now used extensively to interact with computers. And Amazon’s Echo voice-controlled speaker and Facebook’s Messenger chatbot platform are rapidly becoming conduits for online commerce and customer support. As is often the case, whenever a communication advancement like voice recognition starts to go mainstream, criminals looking to take advantage of it aren’t far behind.
“I would argue that companies that offer customer support via chatbots are unwittingly making themselves liable to social engineering,” said Brian Krebs, an investigative reporter who publishes at krebsonsecurity.com.
Social engineering, which refers to the practice of manipulating people into performing actions or divulging information, is widely seen as the weakest link in the computer security chain. Cybercriminals already exploit the best qualities in humans — trust and willingness to help others — to steal and spy. The ability to create artificial intelligence avatars that can fool people online will only make the problem worse.
This can already be seen in the efforts of state governments and political campaigns that are using chatbot technology widely for political propaganda.
Researchers have coined the term “computational propaganda” to describe the explosion of deceptive social media campaigns on services like Facebook and Twitter.
In a recent research paper, Philip N. Howard, a sociologist at the Oxford Internet Institute, and Bence Kollanyi, a researcher at the Corvinus University of Budapest, described how political chatbots had a “small but strategic role” in shaping the online conversation during the run-up to the “Brexit” referendum.
It is only a matter of time before such software is put to criminal use.
“There’s a lot of cleverness in designing social engineering attacks, but as far as I know, nobody has yet started using machine learning to find the highest quality suckers,” said Mark Seiden, an independent computer security specialist. He paused and added, “I should have replied: ‘I’m sorry, Dave, I can’t answer that question right now.’”
A version of this article appears in print on October 24, 2016, on page B3 of the New York edition with the headline: As Artificial Intelligence Evolves, So Does Its Criminal Potential.
Artificially intelligent ‘judge’ developed which can predict court verdicts with 79 percent accuracy
A statue representing the scales of justice at the Old Bailey, Central Criminal Court in London 
Sarah Knapton, Science Editor
24 October 2016 • 12:05am
A computer ‘judge’ has been developed which can correctly predict verdicts of the European Court of Human Rights with 79 percent accuracy.
Computer scientists at University College London and the University of Sheffield developed an algorithm which can not only weigh up legal evidence, but also moral considerations.
As early as the 1960s, experts predicted that computers would one day be able to predict the outcomes of judicial decisions.
But the new method is the first to predict the outcomes of court cases by automatically analyzing case text using a machine learning algorithm.
“We don’t see AI replacing judges or lawyers, but we think they’d find it useful for rapidly identifying patterns in cases that lead to certain outcomes,” said Dr. Nikolaos Aletras, who led the study at UCL Computer Science.
“It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights.”
To develop the algorithm, the team allowed an artificially intelligent computer to scan the published judgments from 584 cases relating to torture and degrading treatment, fair trials, and privacy.
The computer learned that certain phrases, facts, or circumstances occurred more frequently when there was a violation of the European Convention on Human Rights. After analyzing hundreds of cases, the computer was able to predict a verdict with 79 percent accuracy.
“Previous studies have predicted outcomes based on the nature of the crime, or the policy position of each judge, so this is the first time judgments have been predicted using analysis of text prepared by the court,” said co-author Dr. Vasileios Lampos, also of UCL Computer Science.
“We expect this sort of tool would improve efficiencies of high level, in demand courts, but to become a reality, we need to test it against more articles and the case data submitted to the court.
“Ideally, we’d test and refine our algorithm using the applications made to the court rather than the published judgments, but without access to that data we rely on the court-published summaries of these submissions.”
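The idea the researchers describe, that certain phrases occur more often in judgments that found a violation, resembles a simple bag-of-words text classifier. The sketch below is illustrative only: the toy case summaries are invented, the word-frequency scoring is a naive-Bayes-style simplification, and it is not the team’s actual model.

```python
from collections import Counter

def train(labeled_docs):
    """Count how often each word appears under each label
    ('violation' vs 'no-violation'), mimicking the idea that
    certain phrases occur more often in violation judgments."""
    counts = {"violation": Counter(), "no-violation": Counter()}
    for text, label in labeled_docs:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Score a new case summary by multiplying per-label word
    frequencies (add-one smoothing keeps unseen words from
    zeroing out a score); return the higher-scoring label."""
    scores = {}
    for label, ctr in counts.items():
        total = sum(ctr.values()) + len(ctr)  # Laplace denominator
        score = 1.0
        for word in text.lower().split():
            score *= (ctr[word] + 1) / total
        scores[label] = score
    return max(scores, key=scores.get)

# Hypothetical toy data, not the real court corpus
training = [
    ("detention conditions degrading treatment", "violation"),
    ("prolonged detention without trial", "violation"),
    ("fair hearing procedural safeguards observed", "no-violation"),
    ("complaint manifestly ill-founded", "no-violation"),
]
model = train(training)
print(predict(model, "degrading detention conditions"))  # → violation
```

On real data a classifier like this would use phrase-level (n-gram) features and far more cases, but the principle is the same: the verdict is predicted from which side’s characteristic vocabulary dominates the text.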
The team found that judgments by the European Court of Human Rights are often based on non-legal facts rather than directly legal arguments, suggesting that judges are often swayed by moral considerations rather than simply sticking strictly to the legal framework.
Co-author Dr. Dimitrios Tsarapatsanis, a law lecturer at the University of Sheffield, said: “The study, which is the first of its kind, corroborates the findings of other empirical work on the determinants of reasoning performed by high-level courts.
“It should be further pursued and refined, through the systematic examination of more data.”
The research was published in the journal PeerJ Computer Science.