DECIPHERING ALGORITHMS: LAW AND ARTIFICIAL INTELLIGENCE

In a recent claim in the United Kingdom concerning the use of algorithmic systems in processing visa applications, the Joint Council for the Welfare of Immigrants (“the Claimant”) served judicial review proceedings on the Secretary of State for the Home Department (“the Defendant”), challenging the system used by the Defendant for “sorting visa applications and allocating them to decision makers”. The Defendant used what was described in the Claimant’s proceedings as a “discriminatory automated decision making algorithm (the “Streaming Tool” or “Algorithm”) to categorise visa applications as more or less risky”. Upon receipt of the judicial review proceedings, the Defendant opted to suspend the use of the Algorithm “pending a redesign of the process and the way in which visa applications are allocated for decision making”. The proceedings served by the Claimant contained a number of arguments, which are explained in greater detail below.

Factual Background

The Defendant used the Algorithm to assess visa applications and sort them into different categories. The Algorithm applied a risk rating to each application, and the “nationality of the applicant” was “a significant factor” in the determination of the risk associated with an application. The Algorithm would categorise an applicant as either Green (low risk), Amber (medium risk) or Red (high risk).

The Algorithm operated in such a way that particular nationalities were automatically categorised as “suspect” and were “thus more likely to have their visa applications rejected”. The Claimant argued that the Algorithm was unlawful and directly discriminatory on the grounds of race, contrary to the United Kingdom’s Equality Act 2010. The Claimant also argued that the Algorithm was unlawful on the basis that the Secretary of State for the Home Department had failed to assess “the data protection and equality implications” of implementing the Algorithm, contrary to its obligations under the General Data Protection Regulation 2016.

The “suspect” nationalities, as determined by the Algorithm, were subjected to a greater level of scrutiny by Home Office officials. The Algorithm operated in three stages, the first of which involved pre-screening an application. The Algorithm would apply a “high risk” rating to an applicant based upon factors including the applicant’s nationality, provided that the nationality appeared on the list of “suspect nationalities”. In such cases, the applicant would not progress to a further stage; “in other words, there are some nationalities that automatically……lead to the allocation of a (presumably ‘Red’) rating”. If an application did proceed to stage two, a further risk assessment was carried out by the Algorithm, which again used the applicant’s nationality as a determining factor. At stage two, the Home Secretary compiled data identifying how many breaches of immigration law were associated with a given nationality within the previous 12 months. This data would then be applied by the Algorithm to the individual applicant and a risk rating allocated. Where a risk rating was assigned at stage two, this would “constitute the application’s final risk rating” and the application would progress no further.
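The staged operation described in the proceedings can be illustrated with a short, purely hypothetical sketch in Python. The Home Department never disclosed the Streaming Tool’s actual rules or data, so the suspect-nationality list, breach counts and thresholds below are invented solely to show the structure of stages one and two as the Claimant described them.

from typing import Optional

# Purely illustrative: the real lists, counts and thresholds were not disclosed.
SUSPECT_NATIONALITIES = {"Examplestan"}  # hypothetical stage-one list
BREACHES_LAST_12_MONTHS = {"Examplestan": 420, "Samplia": 35, "Testonia": 2}  # hypothetical stage-two data

def stage_one(nationality: str) -> Optional[str]:
    # Stage one: a "suspect" nationality automatically receives the highest
    # (presumably "Red") rating and the application progresses no further.
    return "Red" if nationality in SUSPECT_NATIONALITIES else None

def stage_two(nationality: str) -> Optional[str]:
    # Stage two: a rating derived from nationality-level breach data for the
    # previous 12 months; where a rating is assigned here, it is final.
    breaches = BREACHES_LAST_12_MONTHS.get(nationality, 0)
    if breaches > 100:   # hypothetical threshold
        return "Red"
    if breaches > 10:    # hypothetical threshold
        return "Amber"
    return None

def stream_application(nationality: str) -> str:
    # Apply the stages in order; anything not rated at stages one or two would
    # go to the stage-three decision tree (sketched after the next paragraph).
    rating = stage_one(nationality) or stage_two(nationality)
    return rating if rating is not None else "Green"

print(stream_application("Examplestan"))  # "Red", determined solely by nationality

Even in this simplified sketch, nationality alone can fix the outcome at stages one and two, which mirrors the Claimant’s central complaint.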

If an application did progress to stage three, the Algorithm would produce a “decision tree” which required a case worker “to respond to a series of yes/no answers generated by the Tool”. At stage three, “an applicant’s nationality may or may not be a factor”; the Secretary of State for the Home Department “refused to explain or justify” the factors involved at this stage.
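Stage three is described only as a series of yes/no questions put to a case worker, and the factors behind those questions were not disclosed. Continuing the hypothetical sketch above, the fragment below shows a generic yes/no decision tree with placeholder questions standing in for the undisclosed factors.

# Hypothetical stage-three sketch: each node is either a final rating (a string)
# or a (question, node_if_yes, node_if_no) tuple. The questions are placeholders;
# the actual factors were never explained or justified by the Defendant.
DECISION_TREE = (
    "Placeholder question A (undisclosed factor)?",
    ("Placeholder question B (undisclosed factor)?", "Red", "Amber"),
    "Green",
)

def walk_tree(node, answer):
    # Follow the case worker's yes/no answers down the tree to a final rating.
    while isinstance(node, tuple):
        question, if_yes, if_no = node
        node = if_yes if answer(question) else if_no
    return node

print(walk_tree(DECISION_TREE, lambda question: True))   # "Red": yes to every question
print(walk_tree(DECISION_TREE, lambda question: False))  # "Green": no at the first question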

The Claimant alleged that the Defendant’s Algorithm was directly discriminatory on the grounds of race. The Claimant argued that the allocation of a “high risk” rating to what the Home Department described as “suspect” nationalities constituted less favourable treatment. The Claimant highlighted the fact that “non-suspect nationalities” received less scrutiny and their applications were less likely to be rejected.

Algorithms, Data Protection and Confirmation Bias

The Claimant noted that the allocation of Green, Amber or Red by the Algorithm introduced a significant risk that a decision maker would be prone to ‘confirmation bias’, described as “an unconscious disinclination on the part of the decision maker to look for or give appropriate weight to evidence that contradicts the streaming rating”. In other words, the Algorithm carried with it a risk that it would become a “de facto decision making tool”.

It was argued that the intention of the Home Department was that decision makers would use the Algorithm as an aid when arriving at a decision. However, the evidence suggested that decision makers were far more inclined to give “conscious or unconscious weight” to the data produced by the Algorithm, which would determine whether an application was granted or refused. The Claimant further argued that the Secretary of State for the Home Department had failed to carry out a lawful Data Protection Impact Assessment in relation to the Algorithm. This assessment is required to indicate whether the Algorithm would pose a “high risk to the rights and freedoms of natural persons”. The Claimant argued that a Data Protection Impact Assessment was required to ensure that the Algorithm was lawfully implemented and complied with Article 35 of the General Data Protection Regulation 2016. The Claimant argued that the Secretary of State for the Home Department did not properly guard against “the possible adverse consequences” of the Algorithm.

Outcome and Conclusion

The Claimant sought a declaration that the Algorithm was unlawful insofar as it used nationality as a criterion in assessing visa applications. The Claimant further sought an order prohibiting the use of the Algorithm pending a “substantive review of its operation”. As noted above, upon receipt of the proceedings, the Defendant opted to discontinue the use of the Algorithm “pending a redesign of the process” and the case never proceeded to hearing. However, the Claimant succeeded in highlighting that algorithms may have a significant and sometimes detrimental impact on the “mental processes” involved in “human decision” making. Specifically, the Claimant referred to a recent lecture entitled “Algorithms, Artificial Intelligence and the Law” in which Lord Sales stated that “algorithmic systems are so important in the delivery of commercial and public services, they need to be designed by building in human values and protection for fundamental human interests. For example, they need to be checked for biases based on gender, sexuality, class, age, ability…..”

Anyone wishing to read the judicial review proceedings served on the Secretary of State for the Home Department may do so by clicking on the following link: https://bit.ly/3jd6zwq. Should you wish to discuss any of the above matters, please get in touch by contacting us at (01) 833 8147 or by emailing us at [email protected].