
Can we learn to trust artificial intelligence systems?

Our well-being as humans must be the primary concern throughout the design stage.

[Image: Red-eye Hal, iStock]

“Artificial intelligence is the fourth revolution. It will [bring about a] redistribution of wealth and power. It will challenge the conception of [what] it is like to be human.” For Nicolas Economou, Chairman and CEO of H5, a company specializing in digital evidence management and search, a framework must be established around the use of autonomous and intelligent systems (AISs), including those that rely on artificial intelligence to make decisions: autonomous vehicles, certain weapons and loan-issuing programs, to name but a few.


A self-proclaimed cautious optimist, Economou spoke at the 17th International Conference on Artificial Intelligence and Law, held at Université de Montréal in June 2019. He worries that AISs are beginning to make decisions instead of humans, without any boundaries to keep them in check. He hopes to see people spurred into action, taking stock of the potential consequences of our inaction sooner rather than later.

“Societies are quietly surrendering to machines. Systems are making decisions in . . . secretive ways,” he said. Unfortunately, history is full of examples of dehumanized law enforcement practices and their excesses.

Economou chairs a committee within the Institute of Electrical and Electronics Engineers (IEEE), a professional association for the advancement of technology, with the mission of bringing human well-being to the forefront of ethical AIS design. He and his colleagues have studied the intersection between law and AISs and the challenges it raises.

One of the most pressing issues identified is the need for a framework that will give legal professionals confidence in AISs. Four essential guiding principles were identified: effectiveness, competence, accountability and transparency.

It may surprise you to see effectiveness at the top of the list, but Economou reminded listeners that it wasn’t long ago that digital evidence management and search systems were unreliable, never quite producing the expected results. To avoid falling into the trap of magical thinking, results must be measured objectively. As Economou pointed out, just because a system claims to do something doesn’t mean it actually does it. And the only way to know whether a system outputs undesirable or unacceptable results, such as discriminatory decisions, is by putting it through its paces.
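To make that idea concrete, here is a minimal, hypothetical sketch of what “putting a system through its paces” can look like in practice: comparing a system’s rate of favourable decisions across demographic groups. The function names, data, and the choice of metric (a simple demographic-parity gap) are illustrative assumptions, not any vendor’s or regulator’s actual test.

```python
# Illustrative sketch only: measure whether a decision system's rate of
# favourable outcomes differs across groups. A large gap is a signal to
# investigate, not a full fairness audit.
from collections import defaultdict

def positive_rate_by_group(decisions):
    """decisions: list of (group_label, favourable: bool) pairs."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            favourable[group] += 1
    return {g: favourable[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Difference between the highest and lowest favourable-outcome rates."""
    rates = positive_rate_by_group(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: group A is approved 2/3 of the time, group B 1/3.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(demographic_parity_gap(sample))
```

A real evaluation would of course use held-out data, multiple metrics, and domain expertise; the point is simply that the claim “the system is fair” can and should be checked against measured outputs.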

As for competence, Economou pointed to our southern neighbours: in the United States, algorithmic risk-assessment tools already help determine the length of jail sentences. Of course, not just anyone can write a presentence report, so the companies developing and deploying these systems need to call in professionals. The fact that qualified professionals, such as psychologists and social workers, contribute to the creation of AISs and are bound by professional standards is reassuring.

It is difficult to determine in advance who should be held responsible when things go wrong. Returning to the previous example, we will need to be able to sort out who is to blame when a discriminatory verdict comes down. Should the AIS manufacturer, the social worker who consulted on the development of the algorithm, the defendant’s lawyer who never questioned the algorithm’s potential for bias, and the judge who trusted its results all be held equally accountable?

We will also need to rethink the notion of transparency. Given the complexity of these systems, concrete steps must be taken to establish third-party oversight, and algorithm “interpreters” may be required. AISs will also need built-in mechanisms that record and explain each step they take and its effects, much like an aircraft’s black box.
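As a rough illustration of that black-box idea, the sketch below logs each decision together with its inputs, the model version, and a stated rationale, so that an external auditor could later reconstruct what happened. The class, field names, and example values are all hypothetical assumptions made for illustration; they are not drawn from any actual AIS.

```python
# Hypothetical "black box" recorder for an AIS: every decision is stored with
# its inputs, model version, outcome, and rationale for later third-party audit.
import json
import time

class DecisionRecorder:
    def __init__(self):
        self.records = []

    def log(self, model_version, inputs, output, rationale):
        """Append one auditable decision record and return it."""
        entry = {
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "rationale": rationale,
        }
        self.records.append(entry)
        return entry

    def export(self):
        # Serialized trail an external auditor could inspect.
        return json.dumps(self.records, indent=2)

# Illustrative use with made-up values.
rec = DecisionRecorder()
rec.log("risk-model-1.2", {"prior_offenses": 0}, "low risk",
        "score below intervention threshold")
print(rec.export())
```

In a real deployment such a trail would need to be tamper-evident and to protect personal data, but even this minimal form shows how a decision can be made explainable after the fact rather than disappearing into the system.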

Economou reiterated that legal professionals can no longer ignore the reality of the situation. Guiding principles must be established in a legitimate way in the eyes of the law and those of the general public. For people to trust the AISs used in the legal system, we must remember that trust is not faith. “[The public] must [be able] to view the legal system as an institution accountable to the citizen,” he concluded.

For more information on the IEEE’s work, download Ethically Aligned Design, which includes a full chapter on legal issues.