Is it OK for AI to be used to write a legal decision impacting someone else’s rights? According to a decision released this week by a Canadian Federal Court judge, the answer is yes.
The case (Haghshenas v. Canada (Citizenship and Immigration)) involved a refused immigration application to Canada. The applicant argued that the denial was written by AI and that relying on AI breached administrative law principles. In finding that the use of AI as a tool to write the decision was fair, Justice Brown provided the following reasons:
 As to artificial intelligence, the Applicant submits the Decision is based on artificial intelligence generated by Microsoft in the form of “Chinook” software. However, the evidence is that the Decision was made by a Visa Officer and not by software. I agree the Decision had input assembled by artificial intelligence, but it seems to me the Court on judicial review is to look at the record and the Decision and determine its reasonableness in accordance with Vavilov. Whether a decision is reasonable or unreasonable will determine if it is upheld or set aside, whether or not artificial intelligence was used. To hold otherwise would elevate process over substance.
 Regarding the use of the “Chinook” software, the Applicant suggests that there are questions about its reliability and efficacy. In this way, the Applicant suggests that a decision rendered using Chinook cannot be termed reasonable until it is elaborated to all stakeholders how machine learning has replaced human input and how it affects application outcomes. I have already dealt with this argument under procedural fairness, and found the use of artificial intelligence is irrelevant given that (a) an Officer made the Decision in question, and that (b) judicial review deals with the procedural fairness and or reasonableness of the Decision as required by Vavilov.