Tech industry must secure against 'unintended consequences': Elop


The path to disruption is paved by unintended consequences, Telstra group executive of Technology, Innovation and Strategy Stephen Elop has said, with the tech industry needing to secure machine-learning and artificial intelligence (AI) applications against unconscious biases and breaches of security and trust.

Speaking during the annual Telstra Vantage conference in Melbourne on Thursday, Elop said caring, creative, and supervisory roles will be the only ones to survive the rise of automation and AI.

According to Elop -- who served as CEO of Nokia before joining Telstra last year, when the telco created the new role of innovation head to oversee its CTO, chief scientist, software group, and corporate strategy functions -- while AI machines learn from the data fed into their systems, that data arrives tainted by the unconscious biases of the humans who select it.

"At the heart of artificial intelligence is big data, and the insights that can be gleaned from advanced data analytics ... how we use data, and the data we select to train our machines can have a profound outcome on our analytics," Elop said.

As a result, data badly selected by human developers -- for instance, the recent findings that women are less likely to be shown Google ads for higher-paying jobs, that certain facial-recognition software has difficulty tracking darker skin tones, and that predictive policing systems steer officers towards low socio-economic neighbourhoods -- has seen discrimination perpetuated through AI.

"Even when we rely on so-called big data and modern technology, the societal history of discrimination can live on in our digital platforms as a function of the data across which the system has learned, and the algorithms that have been designed," he explained.

Elop also pointed to one case in which an AI bot was deliberately manipulated by internet users: Microsoft's Tay chatbot on Twitter, which was taught racist behaviour through deliberately offensive tweets.

The rise of AI and the Internet of Things (IoT) must therefore contend both with what he called "Twitter trolls" and with unconscious developer biases if the digitisation of society is to avoid the unintended consequences of not only reinforcing discrimination, but also breaching the privacy and trust of users, Elop said.

"With Telstra launching what is arguably the largest continuous IoT network in the country, with the promise of everything from better energy management to safety and security, to efficiencies in the field, to the mines, to autonomous driving, all married to the power of big data, there is no question that the Internet of Things is going to have a profound impact on the outcomes that we deliver to our customers," he said.

"Privacy breaches could quickly become the unintended consequence of all of this cloud-based data."

The massive quantities of data collected for machine-learning and AI devices designed for smart homes and smart cities have the potential to be misused by bad actors, he said, particularly when data from several sources is combined.

"The collection of data in massive quantities leads to all sorts of unintended consequences in the area of privacy, and those privacy risks multiply with big data -- not only is it the primary data itself, but the new inferences that can be gleaned from combinations of big data from different sources," he explained.

"For example, mashing up data from the increasingly popular police licence plate readers with crime patterns in the city -- all sorts of interesting information could be gleaned from that."

An example of this is Amazon's Alexa, which stores every human interaction in order to learn and become a smarter, more useful AI assistant to its owner -- but the unintended consequence could be that this data is misused, he suggested.

"This device records and stores in the cloud everything I have ever said to it with an actual recording of my voice, with the recording beginning a few seconds before I even ask it to start listening," Elop said.

"It's only a matter of time before those recordings get used for an unintended purpose."

Most important of all, Elop said, is when developers fail to secure systems against the unintended consequence of a breach of trust.

Pointing to the recent Equifax breach, Elop said both the exposure of such highly personal information and the company's response to the event were a "fundamental violation" of his privacy and trust, and something that cannot simply be fixed after the fact.

People are also beginning to question what information they can believe, he added, in another sign of societal trust breaking down as a result of technological advancement.

"What can you trust? Do you trust what you read on the internet? Do you trust what you read on the internet if it aligns with your point of view? Do you trust what you read on the internet if it is tweeted by a world leader? Do you trust a device if there is a pre-programmed backdoor to the device? Do you trust the camera in your living room that could be part of a 650Gbps denial-of-service attack, or someone could be using it to surreptitiously monitor you?" he asked.

"When security breaks down, or the people entrusted to apply security fail in their roles, or when the average citizen can no longer tell the truth from fiction, we have a problem with trust.

"The unintended consequence of the failure to secure our world is a breakdown in trust in society -- and trust is the foundation of society itself."

According to Elop, the only way businesses can guard against the unintended consequences of breached trust and discrimination is by demonstrating the primary traits of the three roles he said would survive AI: care, creativity, and leadership.

Most importantly, those developing and deploying data-collecting machines and services must act with intentionality.

"We cannot accept the many benefits that our efforts confer to our customers and to our bottom lines without taking accountability for the full range of the resulting consequences," he argued.

"Unconscious bias can only be overcome with conscious effort -- with intentionality. And, as we've learned at Telstra, privacy-related risks cannot be fully addressed with abstract policy statements and whitepapers, or with experimentation. Every new product and service with use of your data must be carefully assessed in terms of privacy and unintended consequences."

The tech industry must work towards this in order to protect society at large, he concluded.

"Those that are caring, creative, and supervisory ... those are the roles that we need to fill," he said.

"Intentionality needs care; inclusivity needs creativity; and we need to lead society through the perils of unintended consequences."

Disclosure: Corinne Reichert travelled to Telstra Vantage in Melbourne as a guest of Telstra.


