Mendoza School of Business

NSF grant will help address AI’s diversity problem

John Lalor and a multi-university research team received $1.2M from NSF to improve large language models for all demographics.

Published: November 14, 2024 / Author: Courtney Ryan




Artificial intelligence systems known as large language models (LLMs) have advanced rapidly in recent years, evolving from an innovative technology used primarily by researchers into public-facing tools adopted by practically every major company. When you ask Google a question, you are engaging with an LLM. When you sign up for internet service or schedule a doctor’s appointment via an automated chatbot, you may be working with an LLM.


John Lalor

LLMs can perform a wide variety of tasks thanks to the massive datasets engineers use to train these models to process and generate language. The technology is revolutionary, but because it is built on human knowledge, it is also vulnerable to the same biases and inaccuracies that humans fall prey to.

“The scale of data collection has increased significantly, with targeted human feedback to guide that training, but that means human annotators decide if the output satisfies their needs or answers their question,” said John Lalor, assistant professor of IT, Analytics, and Operations at the University of Notre Dame’s Mendoza College of Business. “It’s dependent on the individual doing the annotation.”
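For readers curious what that annotation step looks like in practice, here is a minimal, hypothetical sketch of a single human-feedback preference record; the field names and example values are illustrative assumptions, not the project’s actual data format.

```python
from dataclasses import dataclass, field

# Hypothetical preference record for feedback-based fine-tuning.
# Field names are illustrative, not the project's actual schema.
@dataclass
class PreferenceRecord:
    prompt: str           # question shown to the annotator
    response_a: str       # one candidate answer from the model
    response_b: str       # an alternative candidate answer
    preferred: str        # "a" or "b", chosen by the annotator
    annotator_context: dict = field(default_factory=dict)  # e.g., region or language

record = PreferenceRecord(
    prompt="Who is the best football team?",
    response_a="The Notre Dame Fighting Irish.",
    response_b="Real Madrid.",
    preferred="a",  # an annotator on the Notre Dame campus might choose "a"
    annotator_context={"region": "US-Midwest"},
)
```

The point of the sketch is that the “preferred” label, and therefore the training signal, depends on who the annotator happens to be.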

Lalor likes to use football as an example of how such biases might play out. Ask “Who is the best football team?” on the Notre Dame campus and the likely answer is the Fighting Irish. Ask in a U.S. newsroom and the answer will be a top-ranked NFL team. Ask the same question outside of America and any number of soccer teams could be the reply.

“If the models are tuned to answer a question in a North American-centric context, then those kinds of details might be embedded in the model beyond the specific question that was asked,” explained Lalor. This one-size-fits-all approach becomes a more serious issue when the LLM isn’t answering questions about football but instead is meant to help a diverse population access valuable information.

Lalor is part of a team that was awarded a $1.2 million National Science Foundation grant to address this gap between personalization and diversity in LLMs’ training data. Titled “Hard Data to the Model: Personalized, Diverse Preferences for Language Models,” the project’s team includes Jordan Boyd-Graber from the University of Maryland, Alvin Grissom II from Haverford College, and Robin Jia and Swabha Swayamdipta from the University of Southern California. Notre Dame will receive a total of $339,000 from the four-year grant, enabling Lalor to bring on a postdoctoral scholar to help pursue this research and oversee students in the Ph.D. in Analytics program who will assist with the project.

Lalor and the research team will address these shortcomings by expanding data collection and curation to include underrepresented groups and by developing novel methods to fine-tune LLMs, including adversarial prompts designed to elicit variation in answers.
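As a rough illustration of the prompt-variation idea (a sketch under stated assumptions, not the team’s actual method), one could pose the same question wrapped in different regional contexts and compare the model’s answers; the query_model function below is a placeholder that returns canned answers rather than calling a real LLM.

```python
# Sketch: check whether a model's answer to the same question shifts with context.
# query_model is a stand-in that returns canned answers so the example runs;
# in practice it would call an actual LLM API.
def query_model(prompt: str) -> str:
    if "Indiana" in prompt:
        return "The Notre Dame Fighting Irish."
    if "England" in prompt or "Brazil" in prompt:
        return "A top soccer club."
    return "A top-ranked NFL team."

question = "Who is the best football team?"
contexts = [
    "Answer for a reader in South Bend, Indiana.",
    "Answer for a reader in Manchester, England.",
    "Answer for a reader in São Paulo, Brazil.",
]

# Prepend each context so the same question is posed from different viewpoints;
# divergent answers flag a question whose "right" response depends on who is asking.
for context in contexts:
    answer = query_model(f"{context}\n\n{question}")
    print(context, "->", answer)
```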

“Projects like these take a lot of effort to pull off, especially when you’re trying to coordinate across multiple universities,” said Lalor. “With this grant, we now have the support to devote resources to this broader project rather than chasing what the next incremental innovation is. The work we’re doing at what we call the ‘sociotechnical intersection’ is fascinating and fast-moving, making contributions as these developments are happening. It’s super exciting.”