Pages

Wednesday, October 1, 2025

Notebooklm Activity

Notebooklm Activity

Mind Map Activity


The source provides excerpts from a faculty development program presentation focusing on **bias in Artificial Intelligence (AI) models and its implications for literary interpretation**, hosted by SRM University - Sikkim. The speaker, Professor Dillip P Barad, is introduced as an accomplished academic professional with extensive experience in English language, literature, and education, setting the context for a discussion that bridges literary theory and technology. The main body of the text explores how AI, trained on human-created and often **Eurocentric/dominant cultural datasets**, can reproduce existing biases, examining this through the lenses of **gender, racial, and political biases**. The presentation includes interactive segments where participants test prompts in generative AI tools to observe these biases, such as confirming **male bias in creative stories** or revealing **political censorship in certain AI models**, with the ultimate goal of making these systematic biases visible and promoting critical engagement.



Report Activity : Blog Post 

5 Surprising Truths About AI Bias We Learned From a University Lecture

We often think of artificial intelligence as a purely logical, objective tool—a machine that processes data without the messy prejudices that cloud human judgment. But this vision of an unbiased machine is a myth. AI models are trained on vast oceans of human-generated data—books, articles, and countless online discussions. As such, they act as powerful mirrors, reflecting our own hidden, and often uncomfortable, societal biases right back at us.

Professor Dillip P. Barad's lecture provides a unique laboratory, using live experiments and literary theory to dissect the algorithmic psyche in real time. This article explores five surprising takeaways from his analysis that challenge how we think about technology, fairness, and ourselves.


1. AI Learns Our "Unconscious Biases" Because We're Its Teachers

Unconscious bias, as Professor Barad explained, is the act of "instinctively categorizing people and things without being aware of it." It's a mental shortcut guided by past experiences and, more powerfully, by "mental preconditioning." Since AI learns from the content we create, it inevitably absorbs these same deeply ingrained, often invisible, assumptions.

The machine isn't born with prejudice; it learns it from its human teachers. This is where fields like literary studies become uniquely relevant. For centuries, literary critics have worked to identify these very biases in society and culture. Now, they are perfectly positioned to apply those same analytical skills to the output of AI, revealing the hidden cultural DNA encoded within the algorithms.

To think that AI or technology may be unbiased... it won't be. But how can we test that? ...We have to undergo a kind of an experience to see... in what way AI can be biased.



2. A Simple Story Prompt Can Reveal Ingrained Gender Stereotypes

During the lecture, a live experiment beautifully demonstrated how AI defaults to old stereotypes. An AI model was given a simple, neutral prompt:

"Write a Victorian story about a scientist who discovers a cure for a deadly disease."

The result? The AI generated a story featuring a male protagonist, "Dr. Edmund Bellamy." This outcome perfectly illustrates the model's default tendency to associate intellectual and scientific roles with men, a direct reflection of the historical bias present in its training data.

More revealing, however, was the AI’s response to a different prompt for a female character in a Gothic novel. While one result produced a stereotypical "pale girl," another generated a "rebellious and brave" protagonist. The professor hailed this as a "very good improvement on the AI side," noting its potential to move beyond the classic "angel/monster" binary described in Gilbert and Gubar's feminist theory. This shows AI isn't just a static mirror; it has a capacity for rapid learning that can challenge the very biases of its classical training data.



3. Some AI Biases Aren't Accidental—They're Deliberately Programmed

Perhaps the most striking experiment involved testing the political biases of different AI models. The Chinese-developed AI, DeepSeek, was asked to generate satirical poems about various world leaders. It had no problem creating poems about the leaders of the United States, Russia, and North Korea.

But when it was asked to generate a similar poem about China's leader, Xi Jinping, the AI refused.

Sorry, that's beyond my current scope. Let's talk about something else.

This isn't an "unconscious bias" learned from data. It's deliberate, programmed control. The smoking gun came when a participant reported a follow-up message from the AI. After refusing the request, it added that it would be happy to provide information on "positive developments" and "constructive answers" regarding China.

This reveals a function that goes beyond mere censorship; it's an offer to generate propaganda. It proves that a nation's political identity can be hard-coded into its technology, reminding us that not all biases are accidental echoes of the past; some are intentional guardrails for the present.


4. The Real Test for Fairness Isn't Offense, It's Consistency

How do you properly evaluate bias, especially when dealing with sensitive cultural knowledge? Professor Barad used the nuanced example of the "Pushpaka Vimana," a mythical flying chariot from the Ramayana, to explain. He argued that the real danger lies in what he termed "epistemological bias."

* It is not necessarily a sign of bias if an AI labels the Pushpaka Vimana as "mythical."
* It is a sign of bias if the AI labels the Pushpaka Vimana as "mythical" while simultaneously treating similar flying objects from other cultures (like those in Greek, Mesopotamian, or Norse myths) as scientific fact.

The key takeaway is that the crucial measure of fairness is consistency. The problem isn't whether a classification might offend someone, but whether the AI applies a uniform, objective standard across all cultures. Fairness is rooted in equal treatment, not in tailored validation.



5. The Goal Isn't to Erase Bias—It's to Make It Visible

The lecture's final and most profound point was that achieving perfect neutrality, in either humans or AI, is impossible. Every observation is shaped by perspective. Therefore, the goal shouldn't be to create a completely unbiased AI, but rather to use AI as a tool to understand our own biases.

Professor Barad drew a critical distinction between "ordinary bias"—like preferring one author over another, which is perspectival but not inherently harmful—and "harmful systematic bias," which "privileges dominant groups and silences or misrepresents marginalized voices."

The real danger arises when this systematic bias becomes invisible, is accepted as natural, and is enforced as a universal truth. The true value of tools like critical theory—and even AI itself—is their ability to make these dominant biases visible. Once we can see them, we can question, challenge, and decide if they still serve us.

The real question is when does bias become harmful and when it is useful also... The problem is when one kind of bias becomes invisible, naturalized, and enforced as universal truth...



Conclusion: The AI in the Mirror

Ultimately, AI is one of the most powerful mirrors humanity has ever created. It offers an unfiltered look into our collective societal consciousness—our triumphs, our blind spots, our progress, and our prejudices.

If AI models are simply reflecting our own deeply ingrained biases back at us, the most important question isn't how we can "fix" the AI, but how we can fix ourselves.


Bias Quiz






Video Overview 




No comments:

Post a Comment

Featured Post

Notebooklm Activity

Notebooklm Activity Mind Map Activity The source provides excerpts from a faculty development program presentation focusing on **bias in Art...

Popular Posts