
Four AI trends to watch in 2024

“The advancement of AI is moving quickly, and the year ahead holds a lot of promise but also a lot of unanswered questions”

AI was a hot topic at this week’s annual meeting of the World Economic Forum in Davos, Switzerland (photo by Andy Barton/SOPA Images/LightRocket via Getty Images)

As artificial intelligence continues to develop rapidly, the world is watching with excitement and apprehension – as evidenced by the prominence of AI at this year’s World Economic Forum.

U of T researchers are using AI to advance scientific discovery, exploring how to mitigate potential harms and finding new ways to ensure the technology aligns with human values.

“The advancement of AI is moving quickly, and the year ahead holds a lot of promise but also a lot of unanswered questions,” says Monique Crichlow, executive director of the Schwartz Reisman Institute for Technology and Society (SRI). “Researchers at SRI and across the university are tackling how to build and regulate AI systems for safer outcomes, as well as the social impacts of these powerful technologies.”

“From health-care delivery to accessible financial and legal services, AI has the potential to benefit society in many ways and tackle inequality around the world. But we have real work to do in 2024 to ensure that happens safely.”

As AI continues to reshape industries and challenge many aspects of society, here are four emerging themes U of T researchers are keeping their eyes on in 2024:


1. AI regulation is on its way

""
U.S. Vice President Kamala Harris applauds as U.S. President Joe Biden signs an executive order on the safe, secure, and trustworthy development and use of artificial intelligence on Oct. 30, 2023 (photo by Brendan Smialowski/AFP/Getty Images)

As a technology with a wide range of potential applications, AI has the potential to impact all aspects of society – and regulators around the world are scrambling to catch up.

Set to pass later this year, the Artificial Intelligence and Data Act (AIDA) is the Canadian government’s first attempt to comprehensively regulate AI. Similar efforts elsewhere include the European Union’s AI Act and the recent executive order on AI in the United States.


In the coming year, legislators and policymakers in Canada will tackle many questions, including what counts as fair use when it comes to training data and what privacy means in the 21st century. Is it illegal for companies to train AI systems on copyrighted data, as a recent lawsuit from the New York Times alleges? Who owns the rights to AI-generated artworks? Will Canada’s new privacy bill sufficiently protect personal data?

On top of this, AI’s entry into other sectors and industries will increasingly affect and transform how we regulate other products and services. As Gillian Hadfield, a professor in the Faculty of Law and the Schwartz Reisman Chair in Technology and Society, policy researcher Jamie Sandhu and Faculty of Law doctoral candidate Noam Kolt explore in a recent report for CIFAR (formerly the Canadian Institute for Advanced Research), a focus on regulating AI through its harms and risks alone “obscures the bigger picture” of how these systems will transform other industries and society as a whole. For example: are current car safety regulations adequate to account for self-driving vehicles powered by AI?

2. The use of generative AI will continue to stir up controversy

""
Microsoft Bing Image Creator is displayed on a smartphone (photo by Jonathan Raa/NurPhoto/Getty Images)

From AI-generated text and pictures to videos and music, use of generative AI has exploded over the past year – and so have questions surrounding issues such as academic integrity, misinformation and the displacement of creative workers.

In the classroom, teachers are seeking to understand how generative AI is reshaping learning and academic integrity. Instructors will need to find new ways to embrace these tools – or perhaps opt to reject them altogether – and students will continue to discover new ways to learn alongside these systems.

At the same time, AI systems have, by some counts, generated more images than the entire 150-year history of photography. Online content will increasingly lack human authorship, and some researchers have proposed that by 2026, much of the content online could be machine-generated. Risks around disinformation will increase, and new methods to label content as trustworthy will be essential.

Many workers – including writers, translators, illustrators and designers – are worried about job losses. But a tidal wave of machine-generated text could also have negative impacts on AI development. In a recent study, Nicolas Papernot, an assistant professor in the Edward S. Rogers Sr. department of electrical and computer engineering in the Faculty of Applied Science & Engineering and an SRI faculty affiliate, and his co-authors found that training AI on machine-generated text led to the system becoming less reliable and subject to “model collapse.”
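The “model collapse” dynamic can be illustrated with a toy simulation – a simplified sketch for intuition, not the setup used in the study. Here, each “generation” fits a simple Gaussian model to the previous generation’s synthetic samples, then generates new data from that fit. Because each fit is estimated from a finite sample, small errors compound, and the fitted distribution steadily degenerates:

```python
import numpy as np

def simulate_collapse(n_samples=30, generations=1000, seed=0):
    """Toy illustration of model collapse: each generation 'trains' a
    Gaussian on the previous generation's synthetic output, then samples
    from the fit. Estimation noise compounds and variance decays."""
    rng = np.random.default_rng(seed)
    data = rng.normal(0.0, 1.0, n_samples)  # generation 0: "real" data, std ~ 1
    stds = []
    for _ in range(generations):
        mu, sigma = data.mean(), data.std(ddof=1)  # fit the current data
        stds.append(sigma)
        data = rng.normal(mu, sigma, n_samples)    # next generation sees only synthetic data
    return stds

stds = simulate_collapse()
print(f"fitted std, generation 1:    {stds[0]:.3f}")
print(f"fitted std, final generation: {stds[-1]:.6f}")
```

The fitted standard deviation shrinks dramatically over generations: the model forgets the diversity of the original data, a loose analogue of how models trained on machine-generated text can drift away from the distribution of human writing.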

3. Public perception and trust of AI are shifting

""
A person walks past a temporary AI stall in Davos, Switzerland (photo by Andy Barton/SOPA Images/LightRocket/Getty Images)

Can we trust AI? Is our data secure?

Emerging research on public trust of AI is shedding light on changing preferences, desires and viewpoints. Peter Loewen – the director of the Munk School of Global Affairs & Public Policy, SRI’s associate director and the director of the Munk School’s Policy, Elections and Representation Lab (PEARL) – is developing an index measuring public perceptions of and attitudes towards AI technologies.

Loewen’s team conducted a representative survey of more than 23,000 people across 21 countries, examining attitudes towards regulation, AI development, perceived personal and societal economic impacts, specific emerging technologies such as ChatGPT and the use of AI by government. They plan to release their results soon.

Meanwhile, 2024 is being called the biggest election year in history, with more than 50 countries headed to the polls – and concerns about AI-driven disinformation are mounting. How will citizens know which information, candidates and policies to trust?

In response, some researchers are investigating the foundations of trust itself. Beth Coleman, an associate professor at U of T Mississauga’s Institute of Communication, Culture, Information and Technology and the Faculty of Information who is an SRI research lead, is leading research on the role of trust in interactions between humans and AI systems, examining how trust is conceptualized, earned and maintained in our interactions with the pivotal technology of our time.

4. AI will increasingly transform labour, markets and industries 

""
A protester in London holds a placard during a rally in Leicester Square (photo by Vuk Valcic/SOPA Images/LightRocket via Getty Images)

Kristina McElheran, an assistant professor in the Rotman School of Management and an SRI researcher, and her collaborators may have recently found little evidence that AI is displacing workers so far – but there remains a real possibility that labour, markets and industries will undergo massive transformation.

U of T researchers who have published books on how AI will transform industry include: Rotman faculty members Ajay Agrawal, Joshua Gans and Avi Goldfarb, whose Power and Prediction argues that “old ways of doing things will be upended” as AI predictions improve; and the Faculty of Law’s Benjamin Alarie and Abdi Aidid, who propose in The Legal Singularity that AI will improve legal services by increasing ease of access and fairness for individuals.

In 2024, institutions – public and private – will be creating more guidelines and rules around how AI systems can or cannot be used in their operations. Disruptors will be challenging the hierarchy of the current marketplace. 

The coming year promises to be transformative for AI as it continues to find new applications across society. Experts and citizens must stay alert to the changes AI will bring and continue to advocate that ethical and responsible practices guide the development of this powerful technology.
