AIhub.org
 

Code^Shift lab aims to confront bias in AI and machine learning


13 July 2021




By Caitlin Clark

The algorithms underpinning artificial intelligence and machine learning increasingly influence our daily lives. They can be used to decide everything from which video we’re recommended to watch next on YouTube to who should be arrested based on facial recognition software.

But these algorithms, and the data used to train them, often replicate the harmful social biases of the engineers who build them. Eliminating this bias from technology is the focus of Code^Shift, a new data science lab at Texas A&M University that brings together faculty members and researchers from a variety of disciplines across campus.

It’s an increasingly critical initiative, said Lab Director Srividya Ramasubramanian, as more of the world becomes automated. Machines, rather than humans, are making many of the decisions around us, including some that are high-risk.

“Code^Shift tries to shift our thinking about the world of code or coding in terms of how we can be thinking of data more broadly in terms of equity, social healing, inclusive futures and transformation,” said Ramasubramanian, professor of communication in the College of Liberal Arts. “A lot of trauma and a lot of violence has been caused, including by media and technologies, and first we need to acknowledge that, and then work toward reparations and a space of healing individually and collectively.”

Bias in artificial intelligence can have major impacts. In just one recent example, a man sued the Detroit Police Department after he was arrested and jailed for shoplifting when the department’s facial recognition technology falsely identified him. The American Civil Liberties Union calls it the first case of its kind in the United States.

Code^Shift will attempt to confront this issue using a collaborative research model that includes Texas A&M experts in social science, data science, engineering and several other disciplines. Ramasubramanian said eight different colleges are represented, and more than 100 people attended the lab’s virtual launch last month.

Experts will work together on research, grant proposals and raising awareness among the broader public of bias in machine learning and artificial intelligence. Curricula may also be developed to educate professionals in the tech industry, including workshops and short courses on anti-racism literacy, gender studies and other topics that are not always covered in STEM fields.

The lab’s name references coding, which is foundational to today’s digital world. It’s also a play on code-switching – the way people change the languages they use or how they express themselves in conversation depending on the context.

As an immigrant, Ramasubramanian says she’s familiar with “living in two worlds.” She offers several examples of computer-based biases she’s encountered in everyday life, including an experience attempting to wash her hands in an airport bathroom.

Standing at the sink, Ramasubramanian recalls, she held her hands under the faucet. As she moved them back and forth and the taps stayed dry, she realized that the sensors used to turn the water on could not recognize her hands. It was the same case with the soap dispenser.

“It was something I never thought much about, but later on I was reading an article about this topic that said many people with darker skin tones were not recognized by many systems,” she said.

Similarly, when Ramasubramanian began to work remotely during the COVID-19 pandemic, she noticed that her skin and hair color made her disappear against the virtual Zoom backgrounds. Voice recognition software she attempted to use for dictation could not understand her accent.

“The system is treating me as the ‘other’ and different in many, many ways,” she said. “And in return, there are serious consequences of who feels excluded, and that’s not being captured.”

Co-director Lu Tang, an assistant professor in the College of Liberal Arts who examines health disparities in underserved populations, says her research shows that Black patients, for example, must present much more severe symptoms than non-Black patients in order to be assigned certain diagnoses by computer software used in hospitals.

She said this is just one instance of the disparities embedded in technology. Tang’s research also focuses on how machine learning algorithms used on social media platforms are more likely to expose people to misinformation about health.

“If I inhabit a social media space where a lot of my friends hold certain erroneous attitudes about things like vaccines or COVID-19, I will repeatedly be exposed to the same information without being exposed to different information,” she said.

Tang also is interested in what she calls the “filter bubble” – the phenomenon in which an algorithm steers a user on TikTok, YouTube or other platforms based on content they’ve watched in the past, or on what other people with similar viewing behaviors are watching at that moment. Watching just one video containing vaccine misinformation could prompt the algorithm to keep recommending similar videos. Tang said the filter bubble is another layer shaping the content people are exposed to.
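The feedback loop Tang describes can be sketched with a toy similarity-based recommender – a hypothetical illustration, not any platform’s actual algorithm. The catalog, tags and scoring rule below are all invented for the example; the point is only that scoring candidates by overlap with watch history makes one misinformation video pull in more of the same.

```python
# Toy illustration of a "filter bubble" feedback loop (hypothetical sketch,
# not any real platform's recommender).
from collections import Counter

# Invented catalog: video id -> topic tags.
CATALOG = {
    "v1": {"vaccines", "misinfo"},
    "v2": {"vaccines", "health"},
    "v3": {"misinfo", "covid", "vaccines"},
    "v4": {"cooking"},
    "v5": {"misinfo", "vaccines"},
}

def recommend(history, catalog):
    """Pick the unwatched video whose tags overlap most with the history."""
    seen_tags = Counter(tag for vid in history for tag in catalog[vid])
    candidates = [v for v in catalog if v not in history]
    return max(candidates, key=lambda v: sum(seen_tags[t] for t in catalog[v]))

# One misinformation video in the history...
history = ["v1"]
for _ in range(2):
    history.append(recommend(history, CATALOG))

# ...and the recommender keeps surfacing the same cluster.
print(history)  # → ['v1', 'v3', 'v5']
```

Nothing here is malicious by design: the recommender simply maximizes similarity, and the bubble emerges from that objective alone.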

“I think to really understand this society and how we are living today, we as social scientists and humanities scholars need to acknowledge and understand the way computers are influencing the way society is run today,” Tang said. “I feel like working with computer science engineers is a way for us to combine our strengths to understand a lot of the problems we have in this society.”

Computer Science and Engineering Assistant Professor Theodora Chaspari, another co-director of Code^Shift, agrees that minds from different disciplines are needed to design better systems.

To build an inclusive system, she said, engineers need to include representative data from all populations and social groups. This could help facial recognition algorithms better recognize faces of all races, she said, because a system “cannot really identify a face until it has seen many, many faces.” But engineers may not understand more subtle sources of bias, she said, which is why social and life sciences experts are needed to help with the thoughtful design of more equitable algorithms.
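Chaspari’s point about representative data can be made concrete with a small audit sketch. The group labels, dataset shares and threshold below are illustrative assumptions, not a real pipeline: the idea is simply to count each group’s share of a training set before training and flag any group that falls below a chosen minimum.

```python
# Minimal sketch of a dataset-representation audit (illustrative only;
# group labels and threshold are assumptions, not a real pipeline).
from collections import Counter

def audit_representation(samples, min_share=0.10):
    """Return groups whose share of the data falls below min_share."""
    counts = Counter(group for _, group in samples)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < min_share}

# Toy training set: (image_id, skin-tone group) pairs, 90/10 imbalance.
data = [(i, "lighter") for i in range(90)] + [(i, "darker") for i in range(10)]
print(audit_representation(data, min_share=0.2))  # → {'darker': 0.1}
```

A check like this catches only the crude, countable kind of imbalance; the subtler sources of bias Chaspari mentions are exactly where the lab’s cross-disciplinary expertise comes in.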

The goal of Code^Shift is to help bridge the gap between systems and people, Chaspari said. The lab will do this by raising awareness through not only research, but education.

“We’re trying to teach our students about fairness and bias in engineering and artificial intelligence,” Chaspari said. “They’re pretty new concepts, but are very important for the new, young engineers who will come in the next years.”

So far, Code^Shift has held small-group discussions on topics like climate justice, patient justice, gender equity and LGBTQ+ issues. A recent workshop focused on health equity and the ways big data and machine learning can be used to account for social structures and inequalities.

Ramasubramanian said a full grant proposal to the Texas A&M Institute of Data Science Thematic Data Science Labs Program is also being developed. The lab’s directors hope to connect with more colleges and make information accessible to more people.

They say collaboration is critical to the initiative. The people who create algorithms often come from a narrow set of backgrounds, Ramasubramanian said, and are not necessarily collaborating with social scientists. Code^Shift asks for more accountability in how systems are created: who has access to the data, who decides how it is used, and how it is shared.

“To me, we should also be leaders in thinking about the ethical, social, health and other impacts of data,” she said.

To join the Code^Shift mailing list or learn more about collaborating with the lab, contact Ramasubramanian.

AIhub focus issue on reduced inequalities


