
Can we eliminate bias in AI? How Canada's commitment to multiculturalism could help it become a world leader

(illustration by Sébastien Thibault)
In human beings, intelligence is no inoculation against bias and bigotry. The same holds true for computers. Intelligent machines learn about the world through the filters of human language and historical behaviour, which means they can just as easily absorb humanity's worst values as its best.
 
Researchers who aim to develop ever-smarter machines have their work cut out for them to ensure that they're not inadvertently imbuing computers with misogyny, racism or other forms of bigotry.
 
"It's a huge risk," says Marzyeh Ghassemi, an assistant professor in the University of Toronto's department of computer science and Faculty of Medicine who focuses on health-care applications for artificial intelligence (AI). "Like all advances that leapfrog societies forward, there are large risks that we must decide to accept or not to accept."
 
Bias can creep into algorithms in many ways. In a highly influential branch of AI known as "natural language processing," problems can arise from the "text corpus": the source material the algorithm uses to learn about the relationships between different words.
 
Natural language processing, or "NLP," allows a computer to understand human-style speech: informal, conversational and contextual. NLP algorithms comb through billions of words of training text; the corpus might be, say, the entirety of Wikipedia. One algorithm works by assigning each word a set of numbers that reflects different aspects of its meaning. "King" and "queen," for instance, would have similar scores relating to the idea of royalty, but opposite scores relating to gender. NLP is a powerful system that allows machines to learn about relationships between words, in some cases without direct human involvement.
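To make the idea concrete, here is a minimal sketch of that kind of word-vector representation. The three-dimensional vectors and their values are invented purely for illustration; real systems learn hundreds of dimensions automatically from billions of words of text.

```python
import numpy as np

# Toy word "embeddings": one dimension loosely tracks royalty, one tracks
# gender, one tracks something else. These numbers are made up for
# illustration; real models learn them from a text corpus.
vectors = {
    "king":  np.array([0.9,  0.8, 0.1]),
    "queen": np.array([0.9, -0.8, 0.1]),
    "man":   np.array([0.1,  0.8, 0.2]),
    "woman": np.array([0.1, -0.8, 0.2]),
}

def cosine(a, b):
    """Similarity between two word vectors (1.0 means same direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king" and "queen" agree on the royalty dimension but sit at opposite
# ends of the gender dimension, exactly the pattern described above.
print(cosine(vectors["king"], vectors["queen"]))   # high overall similarity
print(vectors["king"][1], vectors["queen"][1])     # opposite gender scores
```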
 
"Even though we're not always teaching them specifically, what they learn is incredible," says Kawin Ethayarajh, a researcher who focuses partly on fairness and justice in AI applications. "But it's also a problem. In the corpus, the relationship between 'king' and 'queen' might be similar to the relationship between 'doctor' and 'nurse.'"
 
But of course, while all kings are men and all queens are women, not all doctors are men, and not all nurses are women.
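Readers who want to see this pattern in a real model can probe publicly available pretrained word vectors. The sketch below assumes the gensim library and an internet connection to download the public GloVe vectors; the exact neighbours it returns depend on the model, but vectors trained on stereotyped text tend to surface traditionally gendered occupations in just the way Ethayarajh describes.

```python
# Probe a pretrained embedding for the "doctor is to man as ? is to woman"
# pattern. Requires the gensim package; the model download is large the
# first time this runs.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")  # vectors trained on Wikipedia and news text

# Analogy arithmetic: start from "doctor", subtract "man", add "woman",
# and list the nearest words. Biased training text tends to rank words
# such as "nurse" near the top of this list.
print(model.most_similar(positive=["doctor", "woman"], negative=["man"], topn=5))
```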
 
When an algorithm absorbs the sexist tropes of historical human attitudes, it can lead to real-life consequences, as happened in 2014 when Amazon developed an algorithm to vet job applicants' resumés. The company trained its machines using 10 years of hiring decisions. But in 2015, it acknowledged that, in tests, the system was giving unearned preference to resumés from male applicants. The company tweaked the system to force it to ignore gender information, but ultimately shut down the project before putting it to use because it could not be sure the algorithm wasn't perpetuating other forms of discrimination.
 
Mitigating sexist source material can involve technological and methodological adjustments. "If we can understand exactly what underlying assumptions the corpus has that cause these biases to be learned, we can either select corpora without those biases or correct for them during the training process," says Ethayarajh.
 
It's common practice for researchers to design an algorithm that corrects prejudicial assumptions automatically. By adjusting the weight of the numbers it assigns to each word, the computer can avoid making sexist or racist associations.
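The article does not name a specific correction method, but one approach from the research literature, often called hard debiasing, removes the component of each word vector that lies along an estimated bias direction. The sketch below uses hypothetical numbers and is only one illustration of how such an adjustment might work.

```python
import numpy as np

def remove_bias(vector: np.ndarray, bias_direction: np.ndarray) -> np.ndarray:
    """Subtract the component of `vector` that lies along `bias_direction`."""
    unit = bias_direction / np.linalg.norm(bias_direction)
    return vector - (vector @ unit) * unit

# Hypothetical values: a gender direction estimated from word pairs such as
# ("man", "woman"), and an occupation word whose gender component we want
# to neutralize so the model treats it as gender-neutral.
gender_direction = np.array([0.0, 1.0, 0.0])
doctor = np.array([0.3, 0.4, 0.7])

print(remove_bias(doctor, gender_direction))  # [0.3, 0.0, 0.7]
```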
 
But what exactly are the assumptions that need correcting? What does a fair-minded AI really look like? Debates over privilege, bigotry, diversity and systemic bias are far from settled. Should a hiring algorithm have a stance on affirmative action? Should a self-driving car take special care if another vehicle has a "Baby on Board" sticker? How should an AI-driven analysis of legal documents factor in the historical treatment of Indigenous Peoples? Contentious societal issues don't disappear merely because machines take over certain recommendations or decisions.
 
Many people view Canada's flawed but relatively successful model of multiculturalism as a chance to lead in fair AI research.
 
"Canada certainly does have an opportunity," says Ronald Baecker, a professor emeritus of computer science and the author of Computers and Society: Modern Perspectives. He sees a role for government to redress the societal inequities, injustices and biases associated with AI by, for example, setting up protections for employees who choose to speak out against biased or unfair AI-driven products. "There's a need for more thinking and legislation with respect to the concept of what I would call 'conscientious objection' by high-tech employees."
 
He also believes that the computer scientists developing smart technologies should be required to study the societal impact of such work. "It's important that professionals who work in AI recognize their responsibility," he says. "We're dealing with life-and-death situations in increasingly important activities where AI is being used."
 
Algorithms that help judges set bail and sentence criminals can absorb long-standing biases in the legal system, such as treating racialized people as if they are more likely to commit additional crimes. The algorithms might flag people from certain communities as posing too high a risk to receive a bank loan. They also might be better at diagnosing skin cancer in white people than in people with darker skin, as a result of having been trained on skewed source material.
 
The stakes are incredibly high in health care, where inequitable algorithms could push people who have been poorly served in the past even further into the margins.
 
In her work at U of T and at the Vector Institute for Artificial Intelligence, Ghassemi, like other researchers, takes pains to identify potential bias and inequity in her algorithms. She compares the recommendations and predictions of her diagnostic tools against real-world outcomes, measuring their accuracy for different genders, races, ages and socio-economic factors.
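A bare-bones version of that kind of audit might look like the sketch below: compare a model's predictions against real outcomes, then break accuracy down by group. The column names and values are hypothetical placeholders, not Ghassemi's actual data or code.

```python
import pandas as pd

# Hypothetical audit data: a model's predictions and the real outcomes,
# tagged with a demographic attribute of interest.
results = pd.DataFrame({
    "sex":        ["F", "F", "F", "M", "M", "M"],
    "prediction": [1, 0, 1, 1, 1, 0],
    "outcome":    [1, 1, 0, 1, 1, 0],
})

# Accuracy computed separately for each group; a large gap between groups
# is a warning sign that the model serves some patients worse than others.
per_group_accuracy = (
    results.assign(correct=results["prediction"] == results["outcome"])
           .groupby("sex")["correct"]
           .mean()
)
print(per_group_accuracy)
```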
 
In theory, Canada offers a head start for researchers interested in health-care applications that reflect values of fairness, diversity and inclusion. Our universal health-care system creates a repository of electronic health records that provides a wealth of medical data that could be used to train AI-driven applications. This potential drew Ghassemi to Toronto. But the technology, information, formatting and rules to access these records vary from province to province, making it complicated to create the kind of data sets that can move research forward.
 
Ghassemi was also surprised to learn that these records only rarely include data about race. This means if she's using an algorithm to determine how well a given treatment serves different sectors of society, she could identify disparities between men and women, for example, but not between white people and racialized people. As a result, in her teaching and research, she's using publicly available American data that contains information about race.
 
"Auditing my own models [using American data], I can show when something has higher inaccuracy for people with different ethnicities," she says. "I can't make this assessment in Canada. There's no way for me to check."
 
Ghassemi is interested in creating AI applications that are fair in their own right, and that also can help human beings counteract their own biases. "If we can provide tools based on large diverse populations, we're giving doctors something that will help them make better choices," she says.
 
Women, for example, are significantly underdiagnosed for heart conditions. An AI could flag such a danger for a doctor who might overlook it. "That's a place where a technological solution can help, because doctors are humans, and humans are biased," she says.
 
Ethayarajh concurs with Ghassemi and Baecker that Canada has an important opportunity to press its advantage on fairness and bias in artificial intelligence research.
 
"I think AI researchers here are very aware of the problem," Ethayarajh says. "I think a part of that is, if you look around the office, you see a lot of different faces. The people working on these models will be end-users of these models. More broadly, I think there is a very strong cultural focus on fairness that makes this an important area for researchers in this country."
 
This story originally appeared in . 
 
