
Artificial intelligence

The real threat from advances in AI may stem from our failure to create a policy framework for emerging technology.

Ian Kerr, University of Ottawa. Photography by Tony Fouhse

In the world of artificial intelligence, one expert’s astonished gasp at a recent breakthrough is another’s ho-hum look of indifference. One academic’s fear that supercomputers could one day threaten civilization is another’s “meh.”

There is little consensus in the scientific community about how close we are to creating AI that approaches or even exceeds human intelligence, also known as “the technological singularity.” It could take 20 or 100 years — or it might never happen. And these debates would have remained in academic corridors had certain technology and scientific luminaries not issued dire warnings that AI might someday destroy humanity.

They include the world’s best-known cosmologist, Stephen Hawking; Stuart Russell, co-author of the standard textbook on AI; and Silicon Valley titans Elon Musk of SpaceX and Skype co-founder Jaan Tallinn, who have also tweeted and commented on the “existential risk” that emerging technologies may pose. The latter two have given millions to think tanks and academic associations that explore this theme.

But how seriously should concerns about supercomputers and their intentions be taken? Many leaders in the field say there’s no need to panic — we are not even close to synthetically replicating the human brain. The immediate goal of policy makers, experts say, should be to understand how existing technology affects today’s society.

“Governments should consult AI researchers, as well as those focused on law and policy issues, about AI’s possible effects—long and short term,” says Ian Kerr, who holds the Canada Research Chair in Ethics, Law and Technology at the University of Ottawa.

“They should concentrate their efforts on developing frameworks for technologies that are here and now. That said, they absolutely must do so with a view to rapid expansion in the future.”

As the birthplace of some of the algorithms that have powered recent AI breakthroughs, Canada is well-situated to ask the right questions about the applications of new technologies. At the moment, however, the conversation is not taking place.

But as machines get smarter and public and commercial interest in this area continues to grow, policy makers will need to catch up. A wide range of issues, from how data is captured and used by firms to who is liable should an earthbound or airborne robot accidentally hurt someone, will create significant challenges for the legal community in the coming years. And lawyers and law-makers increasingly will be called upon to help shape the regulatory paradigm for AI as a whole.

“Collective discussion must take place about the use of the technology as it gradually improves, in ways that are socially positive,” Yoshua Bengio, AI pioneer and head of the Machine Learning Laboratory at the University of Montreal, said in an email interview. “That is more in the domain of politics but also requires scientists to be involved in the discussion.”

Many researchers on both sides of the Atlantic agree that while we shouldn’t dismiss the idea that sentient systems could exist someday, the current focus on supercomputers deflects attention from more immediate issues such as how to manage “big data” in the interest of society. “What we should really [concentrate on] are things like personal privacy and the fact that we are already dependent on machines” that can easily fail, said Tony Prescott, a professor of cognitive neuroscience and director of the Sheffield Centre for Robotics at the University of Sheffield in Britain.

“I do not agree with the panic-generating statements from Musk [and] Hawking, none of whom are experts on the subject,” added Bengio. “There is nothing to fear in the foreseeable future, when you look carefully at the current state of the technology.”

Passing the Turing Test

The 2014 bestseller Superintelligence: Paths, Dangers, Strategies by Oxford philosopher Nick Bostrom sparked many warnings about the potential threat of AI. An “intelligence explosion,” he wrote, could lead to machines exceeding human intelligence; these sentient systems would quickly become Earth’s dominant species, with catastrophic consequences for humanity – even if the machines have no evil intent.

SpaceX’s Musk, who advocates for regulatory oversight of AI and wants to colonize Mars in order to safeguard “the existence of humanity,” doesn’t shy away from using doomsday imagery when speaking of the risks associated with artificial intelligence.

“In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out,” the multi-billionaire told an audience at the Massachusetts Institute of Technology last October. Along with Tallinn, he has called for more studies on AI safety to monitor and contain these threats.

Cambridge physicist Hawking also weighed in: “It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction,” he warned in an article written with Berkeley’s Stuart Russell and MIT’s Max Tegmark and Frank Wilczek. “But this would be a mistake, and potentially our worst mistake in history.”

In an interview with the BBC in December, Hawking said AI “could spell the end of the human race.”

Superhuman AI may well be an achievable goal, wrote Russell, who co-authored the field’s standard textbook, Artificial Intelligence: A Modern Approach. “To feel comfortable that it’s not, you’d have to bet your future against the combined might of Google, Microsoft, Apple, IBM, and the world’s military establishments,” he wrote in the Huffington Post.

So far, no machine has passed the benchmark Turing Test, developed by Alan Turing, the English mathematician, code breaker and father of modern computing. The test entails creating a machine that can fool a person into thinking he or she is communicating with another human (though a computer program succeeded last year in convincing 10 of 30 judges at a Royal Society event that it was a 13-year-old Ukrainian boy speaking English as a second language).

The rise of machine learning

Right now, innovations in so-called deep learning are where the action is. And while no one is calling it a risk to humanity, it brings its own set of challenges for the legal profession and the rest of society.

Advances in deep learning, many of them pioneered at the universities of Toronto and Montreal, have generated a frenzy of interest from Google, Microsoft, Facebook, Yahoo, and Baidu. And with predictions that AI will transform all types of technology and alter the way people interact with machines, it’s also attracted a steady stream of venture capital dollars.

Inspired by the workings of the brain, deep learning has ushered in huge improvements in computer vision, speech recognition, and natural language processing — all things that can be used to “personalize” products, including better search engines, virtual assistants and more life-like robots.

Through deep learning, systems acquire knowledge through pattern recognition, building on what they have gleaned from the data they have already seen. The more information available to them, the more skilled they become; hence the need for enormous amounts of data. By combining powerful processors with layers of neural nets, computer programs can even learn to do certain tasks independently and with fewer inputs. Deep learning expert Geoffrey Hinton, who holds a Canada Research Chair in Machine Learning, said recently that Google is on a path to developing algorithms with “common sense.”
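To make that concrete, here is a minimal sketch, written for this discussion rather than drawn from any system mentioned in the article, of a tiny neural network that learns a pattern purely from examples. It assumes Python with the NumPy library; the XOR task, the layer sizes and every name in it are illustrative only.

import numpy as np

rng = np.random.default_rng(0)

# Toy "training data": the XOR pattern, which a single layer of weights cannot capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: a very small stack of neural-net layers.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training loop: show the examples repeatedly, nudging the weights to shrink the error
# (gradient descent via backpropagation).
for step in range(5000):
    hidden = sigmoid(X @ W1)       # first layer's activations
    output = sigmoid(hidden @ W2)  # the network's current predictions

    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_output
    W1 -= 0.5 * X.T @ d_hidden

print(np.round(output, 2))  # predictions should approach [0, 1, 1, 0]

Nothing in the program states the rule it ends up encoding; the weights simply shift until the predictions match the examples. Scaled up from four rows of numbers to millions of images or hours of speech, the same training-by-example idea underlies the advances described above.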

The underlying technology makes systems increasingly able to analyze an individual’s behaviour, mood and desires — in effect, profiling. And that has some observers worried.

Firms, as well as states, “are getting better and better at predicting people’s behaviour,” said Joanna Bryson, associate professor of artificial intelligence at the University of Bath. “That’s why regulation should focus on privacy issues – because people who are predicted can be exploited.”

“Machine learning makes it possible to mine these data to predict and [to some extent] understand humans,” Bernhard Schölkopf, director of the Max Planck Institute for Intelligent Systems in Germany, added in an email interview. Governments should focus on “the changes to our lives that will be implied by AI, and the need for [legislation] to deal with these changes.”

In the wake of AI breakthroughs during the last decade, Silicon Valley giants hoovered up the majority of deep learning scientists. In 2013, Google hired Hinton along with some of his former students from the University of Toronto after acquiring their AI start-up. Google also recently purchased London-based AI lab DeepMind. Its stated mission: “Solve intelligence.”

DeepMind’s biggest achievement to date is developing software that enabled a computer to teach itself nearly 50 vintage Atari games to a professional standard, without being taught the rules of any of them. That accomplishment is likely what earned founder Demis Hassabis a seat at this year’s Bilderberg conference, an annual private gathering of the world’s leaders from a variety of fields. AI topped the 2015 agenda.

In 2013, Facebook hired deep learning leader Yann LeCun, a professor at New York University, as head of its Artificial Intelligence Research Lab. In 2014, China’s Baidu, which has reportedly requested the assistance of the country’s military to help develop its artificial intelligence capabilities, hired Stanford University Professor Andrew Ng to head its deep learning lab in Silicon Valley.

The current limits of AI

AI gives computers the ability to understand questions, search vast databases, process the information and provide answers or directions. But while the current technology appears very advanced, independence is still an illusion.

Any time we communicate with a device — to book film tickets, pay a gas bill or listen to GPS directions — we still employ “weak” or “narrow” AI. Consider Apple’s Siri and Google’s self-driving cars, probably the most recognizable products using “weak” AI. They seem intelligent, but they perform only defined functions. They have no self-awareness.

“The concept of deep learning is based around training rather than creating — learning rather than instinct, to use a parallel from the animal world,” said Jos Vernon, director at WebSupergoo, and a former knowledge engineer in London.

Beyond learning to play Space Invaders like a pro, the most advanced system can only teach itself to recognize things like bacon — and count its calories. When IBM’s computer Watson beat out the human competitors on Jeopardy!, it did not know it had won.

“Strong” AI, also known as AGI or artificial general intelligence, would match or exceed human intelligence. To achieve “strong AI” status, a system must have the ability “to reason, represent knowledge, plan, learn, communicate in natural language and integrate all these skills toward a common goal,” according to an often-cited definition. Think HAL, the deranged computer in 2001: A Space Odyssey, or Lieutenant-Commander Data in Star Trek. 

Of course, “weak” AI is still very powerful.

High-speed share-trading algorithms — responsible for half of the volume of trades on Wall Street — helped cause the infamous 2010 flash crash, in which nearly $1-trillion was wiped off the benchmark indexes within minutes before markets recovered.

The technology that enables the NSA to develop very sophisticated data-mining “snooping” tools also falls into this category, as do autonomous weapons, which generally use the same technology as self-driving cars.

Losing control of an AI system – through programming errors in safety-critical systems, for example – actually represents a greater threat than the possibility of a sentient robot wanting to harm humanity, some observers say.

“This kind of scenario, it seems to me, is the most likely way in which an intelligent system will cause widespread harm,” Thomas Dietterich, a computer science professor at Oregon State University, said in an email interview.

 “The single most useful thing we can do is to commission the U.S. National Academy of Engineering to perform a study on the near- and long-term risks of AI and related technologies and develop policy recommendations for the U.S. government and industry.”

Even if nothing bad happens, experts worry we are relinquishing ever more decision-making powers to these “weak” AI machines: auto-pilots, medical diagnosis, even map-reading. This dependency must first and foremost be examined. “Society is becoming more and more dependent on these technologies,” says University of Ottawa’s Kerr, “and such dependencies could be just as socially devastating as the faraway possibility of a machine uprising.”

Professionals will rely more and more on AI decision-making without understanding the underlying technology that powers these decisions. “They will experience the same sort of existential dilemma as Abraham on the mountain who hears the voice of God but ultimately has to decide whether to take the leap of faith and depend upon a wisdom greater than his own or else defy the odds and go it alone. And going it alone will become an increasingly uncomfortable decision,” Kerr adds.

“As we try to make things more efficient, there will be a tendency to let such systems also take an increasing amount of decisions that affect our lives,” said Schölkopf. “Such systems can assign credit ratings, decide medical treatments, or decide who gets to pass through immigration at the airport. The possibilities are large.”

The focus on super-intelligent AI also comes as people realize that technologies like smartphones and search engines are helping to create an economic system that can thrive on large numbers of freelancers. Newly minted billion-dollar tech-reliant companies such as Uber and Airbnb require few employees to function.

Automation and globalization have already displaced many low-skilled jobs; now white-collar workers, including members of the legal profession, are also being affected.

“It’s one of the greatest fears [among EU residents] — that robots will steal their jobs,” said Erica Palmerini, professor of law at Scuola Superiore Sant’Anna in Pisa and co-ordinator of the EU-backed RoboLaw project. “It could be one of the biggest obstacles in the development of the robotics market.”

Ironically, the very people who create the vast data that are necessary for the algorithms to train on — translators, photographers, journalists and many others — are working toward their own economic demise. It remains to be seen whether the newest technological revolution will create more jobs than it destroys.

There is no doubt, however, that AI has the potential to improve humanity’s lot simply by making humans smarter and more efficient. Google’s director of engineering, Ray Kurzweil, who famously predicted that machines will pass the Turing Test by 2029, is evangelical. “AI today is advancing the diagnosis of disease, finding cures, developing renewable clean energy, helping to clean up the environment, providing high-quality education to people all over the world,” said Kurzweil, in a blog post entitled Don’t Fear Artificial Intelligence. “Virtually everyone’s mental capabilities will be enhanced by it within a decade.”

A legal vacuum

But with “weak” AI already everywhere, how close are we to developing “strong AI”? Not close enough to worry about a robot takeover, experts say.

“The majority of the researchers who are actually making significant technical contributions in deep learning are not worried at all about the potential dangers of an AI super-intelligence,” says Bengio at the University of Montreal. “Instead they are worried that their algorithms are still too dumb.”

“There is no evidence that we are progressing exponentially,” in terms of getting closer to general AI, added Sheffield’s Prescott. “If you ask people who are studying the brain, they will say that we are really only scratching the surface.”

Many observers say there is only so much we can accomplish given the current capacity of both software and hardware.

The assumption that intelligence is computational could prove to be incorrect. According to David Deutsch, a leading expert in quantum computing, we don’t even have a viable definition of intelligence, let alone any clear path to achieving it.

“Thinking consists of criticizing and correcting partially true guesses with the intention of locating and eliminating the errors and misconceptions in them, not generating or justifying extrapolations from sense data,” the Oxford professor wrote in an article for Aeon. “Therefore, attempts to work toward creating an AGI that would do the latter are just as doomed as an attempt to bring life to Mars by praying for a Creation event to happen there.”

“Right now, we are struggling to achieve the level of intelligence we see in an ant. I don’t see that changing substantially under the current paradigm,” said Vernon at WebSupergoo. “If you don’t see that level of intelligence on your desktop or phone, it doesn’t exist.”

Meanwhile, the field as a whole has attracted scant attention from the legal world. Some jurisdictions allow for self-driving cars. In the field of robotics, and particularly non-military drones, governments are grappling with legislation that allows for some freedom without compromising security. European legislators recently heard recommendations from the RoboLaw consortium of experts in law, engineering, and philosophy about how to manage the introduction of new robotic and human enhancement technologies into society. Last year, the consortium published a roadmap of sorts to regulating robotics. It contemplates a range of issues, from the role of ethics in robotics regulation to questions of liability. It even includes a discussion of whether robots should be granted legal personhood, as corporations are.

There is little legislation, however, that specifically contemplates AI, other than some e-commerce provisions that enable “electronic agents” to enter into contracts. Other “technology neutral” laws covering copyright and the collection of data are also relevant, Kerr says.

Silicon Valley titans, as well as some AI firms, have stepped in to fill the intellectual vacuum, forming think tanks and academic associations to explore not just “x-risk,” but also more immediate concerns, such as liability. Whether they represent the views of the majority of researchers in the field is a matter of debate.

Google says it has established an ethics board, reportedly at the request of DeepMind, to examine emerging technology. But there is no information regarding guiding principles or who has been invited to sit on the board. Google declined requests for more information.

Microsoft Director of Research Eric Horvitz is funding the One Hundred Year Study on Artificial Intelligence at Stanford University. Horvitz, former president of the Association for the Advancement of Artificial Intelligence, will help select a panel that will examine how AI will affect automation, national security, psychology, ethics, law, privacy, democracy and other societal issues.

The Boston-based Future of Life Institute is another newly formed organization dedicated to assessing the potential threats and benefits that AI poses. The volunteer group was co-founded by MIT’s Max Tegmark, a leading cosmologist who, along with Hawking, warned last year that AI might be humanity’s last invention unless “we learn how to avoid the risks.” Co-founder Tallinn provided initial funding.

“Things are really happening fast and this opens the possibility of human-level machine intelligence in our lifetime,” said Tegmark in an interview. “It’s really important to get it right, and now is the time to start thinking about it.”

In January, FLI organized a conference — closed to the press — “to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls.” This resulted in an open letter, subsequently signed by many AI experts, calling for more safety measures to prevent misuse of technology and a better understanding of how emerging technology will affect society.

After the event, Musk, who along with Hawking and actors Morgan Freeman and Alan Alda sits on the FLI’s scientific advisory board, donated $10-million to run a global research program aimed at keeping AI “beneficial to humanity.”

Tegmark said FLI has so far received about 300 proposals. It is unclear to what degree the winning research bids will focus on the societal impact of technology rather than exploring the risks of “strong AI.”

The Stanford program and FLI join a fairly crowded field of think tanks and other associations that focus on the future of AI and other threats to our species. Many of these organizations share funders like entrepreneur Tallinn as well as thought leaders like Bostrom and Tegmark.

In the U.K., Bostrom founded the Future of Humanity Institute at the University of Oxford in 2005, with money provided by futurologist and multibillionaire computer scientist James Martin. Bostrom also set up the Institute for Ethics and Emerging Technologies and is an adviser to the Centre for the Study of Existential Risk at Cambridge University, which was launched in 2012 and was co-founded by Tallinn.

In the U.S., FLI joins the Berkeley-based Machine Intelligence Research Institute, originally called the Singularity Institute, which looks at safety issues related to the development of “strong” AI. Tallinn has been a major donor there as well, and Bostrom is an adviser.

For the moment, these associations are shaping discussions on the AI safety paradigm. Ultimately, however, governments will need to participate in these talks, some observers say. In the meantime, people are likely to be better protected from all sorts of real risks and real problems the old-fashioned, low-tech way: help humans develop good ethics.

“AI itself is not the problem,” said Bath’s Bryson, who argues that AI is simply an extension of our culture and values. “The problems and solutions are us. AI enhances human power — it’s just a way of making us smarter, of letting us know more things sooner.”

“We need to be addressing any social problems directly, not just to make AI into the bogey man.”