Canadian regulators pondering the possibility of a truly autonomous car future must grapple with two alternate realities — one where the necessary technology is right around the corner and the other where it’s way down the road. Either way, the question is how safe is safe enough, and at what cost?
Car makers like BMW and Ford are unsurprisingly in the former camp, announcing plans to deliver fully automated vehicles (AVs) with Level 5 capabilities – or full automation – within the next five years. Tesla CEO Elon Musk tweeted in January that its cars will have that capability in “3 months maybe, 6 months definitely.” Uber, Google and other technology firms are also investing billions in this space. Self-driving advocates point not only to the potential profits, but also to the life-saving and environmental benefits this technology could eventually bring.
But many in the artificial intelligence and engineering community are skeptical about this timeline. They say the technology for AVs to drive in mixed traffic — presumably, without chaos — is still many years away. For all the teeth-gnashing and shirt-rending angst about how AI might eventually kill off the human race, right now it can’t even drive to a suburban mall, let alone handle junctions like Paris’ Etoile, the intersection of Lake Shore Boulevard and Lower Jarvis St. in Toronto, or pretty much anywhere in Italy.
Economists will tell you that trust is necessary for a stable economy. So what are we to make of growing concerns about what businesses do with the personal data of internet users?
Poll after poll shows that consumers simply don’t trust companies with their data and are losing faith in government’s ability to protect their personal information. Meanwhile, regulators around the world struggle to keep pace with technology and business models that are, by nature, anti-privacy; at the same time, they worry that overly stringent rules could dampen growth in the digital economy, which is expected to contribute an estimated $4-trillion to major economies.
“Today, the digital economy is the economy,” Navdeep Bains, Canada’s minister of innovation, science and economic development, said in a speech last year. In fact, mass data has become a valuable asset. Whether it’s vacuumed up by mobile or other internet devices, consumer data increasingly drives business decisions. Web-based giants like Google and Facebook are earning fat profits from targeted advertising.
Dismissed for years as a utopian thought experiment, the concept of a basic guaranteed income is now enjoying a renaissance as fears of automation and technological unemployment rise.
The speed at which basic income has snowballed onto the public agenda has been astonishing. How to translate the idea into law is another matter altogether. If the past is any indication, decisions on welfare will likely remain a task for elected lawmakers. But could we see the day when the courts step in and rule that the Canadian Charter protects economic rights, even a basic income guarantee, as fundamental to human survival?
“The very common response is that it’s beyond their jurisdiction. I don’t see that changing in the short-term,” said Margot Young, a professor in the Allard School of Law at the University of British Columbia and an expert in constitutional and social justice law. “Courts don’t take into account broad social changes overnight.”
For now, courts generally see the Charter’s main task as protecting individuals from state encroachment – described as negative rights – and not from destitution.
But popular opinion has been known to have an impact on judicial decision-making, as recent cases on assisted dying and the striking down of prostitution laws have shown.
In the world of artificial intelligence, one expert’s astonished gasp at a recent breakthrough is another’s ho-hum look of indifference. One academic’s fear that supercomputers could one day threaten civilization is another’s “meh.”
There is little consensus in the scientific community about how close we are to creating AI that exceeds or even approaches human intelligence, also known as “the technological singularity.” It could take 20 or 100 years — or it might never happen. And these debates would have remained in academic corridors had certain technology and scientific luminaries not issued dire warnings that AI might someday destroy humanity.
They include the world’s best-known cosmologist, Stephen Hawking; Stuart Russell, co-author of the standard textbook on AI; and Silicon Valley titans such as Elon Musk of SpaceX and Skype co-founder Jaan Tallinn, who have also tweeted and commented on the “existential risk” that emerging technologies may pose. The latter two have given millions to think tanks and academic associations that explore this theme.
But how seriously should concerns about supercomputers and their intentions be taken? Many leaders in the field say there’s no need to panic — we are not even close to synthetically replicating the human brain. The immediate goal of policy makers, experts say, should be to understand how existing technology affects today’s society.