On Nov. 6, the Ontario Human Rights Commission (OHRC) and the Law Commission of Ontario (LCO) released a new digital tool, The Human Rights Impact Assessment Tool (HRIA), to help developers and organizations assess their AI systems for human rights compliance.
Artificial Intelligence (AI) is a tool that can help with decision-making, reduce human error by analysing reliable data and information, and increase productivity in the workplace.
“It has great potential to benefit society and advance social and economic prosperity,” said the OHRC. The HRIA is designed to strengthen knowledge and understanding of human rights impacts, provide practical guidance on those impacts, and identify practical mitigation strategies and remedies to address bias and discrimination arising from AI systems.
Nye Thomas, the Executive Director of the Law Commission of Ontario, said AI systems can contain bias due to the data they are built on. Regardless, AI systems must still comply with provincial and federal human rights legislation.
“The missing piece to meet these obligations is a tool, some instrument to help developers assess human rights and evaluate human rights compliance in their AI systems,” Thomas said.
“HRIA is for people who don’t have human rights law backgrounds. It was a challenge to turn complicated legal concepts into a step-by-step guide,” said Thomas.
The HRIA has two parts. Part A assesses an AI system for human rights implications: organizations are asked questions about the purpose of the AI, the system’s significance, and its treatment of individual Canadians and communities. Part B addresses mitigation: after the AI system is categorized, Part B provides a series of questions to help organizations minimize the human rights issues identified in the system.
“The whole purpose is to ensure that the AI system an organization uses runs smoothly before it is made publicly accessible,” Thomas said. He added that the HRIA will help organizations continually think about promoting and protecting human rights in their AI systems.
Thomas said the OHRC and LCO are trying to give the AI community, developers, entrepreneurs, startups and large organizations a tool to help them meet their legal obligations and avoid human rights issues witnessed in the past. “The purpose of the HRIA is to be preventive, so it’s to help encourage organizations to comply with thinking about AI and human rights and to give them a step-by-step guide for how to achieve human rights compliance.”
“It is essential that the opportunities, benefits, and protections associated with the use of AI be available to all Ontarians without discrimination,” said the OHRC.
The use of AI in public sectors around the world has caused harm to individuals and communities, including algorithmic discrimination based on race, gender, and other personal attributes. AI systems carry a high risk of human rights violations when they produce results based on characteristics protected under human rights legislation, such as race, sex, age, or disability. This can lead to flawed, inaccurate, and unintentionally discriminatory actions by employers, service providers, and other organizations toward the public.
Before releasing the tool, Thomas said the OHRC and LCO consulted a range of individuals, including people from the public and private sectors and from the provincial and federal governments. These sectors were interested in the new technology because they saw the need for a tool that helps identify human rights obligations when using artificial intelligence in the workplace.
“The HRIA launched on Nov. 6 and initial responses from stakeholders in the AI sector have been positive. Given its recent launch, the OHRC is unaware of any organizations that have used the HRIA,” said the OHRC.
Thomas and the OHRC hope organizations and government sectors can use the HRIA to reduce human rights violations caused by AI systems in different workplace settings.
“Not all AI issues are black and white. It’s our job to recognize mistakes made by AI and address them accordingly,” Thomas said.