Artful intelligence

How can we develop artificial intelligence that respects our humanity? By engaging with the humanities, says Michele Elam.

By Anna Morrison


Illustration: Michelle Henry


Michele Elam recalls the moment when, as an untenured assistant professor at another university, she was asked to sign a waiver by a photographer capturing images of her for her department’s website.

“I remember feeling I didn’t have a right to say no or insist the photos could only be used for certain purposes,” she says. 

Two years later, her father-in-law spotted her face on the side of a Boston bus—in an ad presenting her as a non-English-speaking student returning to school. Unbeknownst to Elam, the photographer had sold the photos to Getty Images. The bus was just the beginning.

Though never affiliated with Harvard, she found herself on the university’s LGBTQ alumni website, in ads selling stocks and bonds, and even holding a deck of cards in an ad for a card dealer school.

Elam was most disturbed when she discovered her skin tone had been digitally altered to suit advertiser preferences. As she sought to curb the sale of these images, she spoke to a marketing executive who told her she had a “racially ambiguous” look that would help sell products: “White people won’t be threatened by you. People of color will feel like you’ve got a little something in you,” the executive said.

“I found myself on Harvard’s alumni website, selling stocks and bonds, and even holding playing cards to promote a card dealer school.”
Michele Elam

Michele Elam has developed undergraduate courses that explore how the arts influence how technology is designed and implemented. Photo: Jess Alvarenga

A person’s image is so intimate that Elam struggled to accept having no control over how hers could be used. The experience has left her wary of being photographed or recorded to this day, but she recognizes that advances in artificial intelligence now pose far more pervasive risks. Even if your image appears only on Facebook, you may not have the exclusive rights you imagine.

“They don’t ask you to sign waivers anymore,” Elam says. “There are AI platforms that grab faces from the web and use them as grist for creating something that looks very realistic, but that person technically does not exist.” 

Elam, Stanford’s William Robertson Coe Professor of Humanities and the former director of the African and African American Studies program in the School of Humanities and Sciences, recently analyzed platforms vending AI-generated images of people in an issue of the journal Daedalus dedicated to AI. “It seems innocent: It’s framed as an inexpensive way to have diversity and a legal workaround to copyright laws. But there’s a drop-down menu where you can build a person, and it looks like race is simply the sum of these parts,” she says. 


While AI-generated stock photography may not seem like the most insidious use of technology, Elam contends that these platforms’ reductive treatment of race—with a drop-down menu allowing users to specify features like skin tone, eye shape, and hair texture—is symptomatic of a larger concern about how AI understands race. 

Elam saw an opportunity to further explore this concern when she was invited to join the Stanford Institute for Human-Centered AI (Stanford HAI), where she was formerly an associate director and is now a senior fellow. AI systems are trained on massive datasets with static values, so they understand race primarily as data points corresponding to physical features. By contrast, humanities scholars view race as a social construct with lived realities shaped by cultural, economic, historical, and political pressures that change over time. Elam argues the gulf between these perspectives demonstrates how AI systems can dehumanize individuals by reducing them to a list of physical characteristics—with political and economic implications once those systems are deployed. As people, we know ourselves to be more than the sum of our physical parts. Can AI be designed to view us that way, too?

For AI to make that leap, Elam believes artists, scholars, and technologists must reconcile their disciplines’ disparate perspectives on what it means to be human. As she sees it, AI’s failure thus far to grasp complex, dynamic concepts such as race or gender isn’t just a technical hurdle—it’s a cultural and political dilemma demanding a humanistic approach.


From pixels to people

What does it mean to adopt a humanistic approach to AI? This question lies at the heart of Stanford HAI, which links computer scientists, surgeons, musicians, lawyers, philosophers, and other experts working at the intersection of AI and a wide variety of fields. By fostering collaborations, facilitating dialogues, and offering guidance, HAI serves everyone from educators grappling with AI’s implications for homework assignments to government policymakers.

At the institute’s quarterly arts and technology salon, in her classes, and at weekly seminars, Elam poses questions crucial to defining human-centered AI. As we consider emerging technologies that putatively optimize human experiences, she urges us to examine those technologies’ hidden assumptions about “normal” or standardized human behavior.

For example, some AI systems in autonomous vehicles do not recognize wheelchair users as humans. Similarly, facial recognition software—often trained predominantly on lighter skin tones—tends to be less accurate for people with darker skin, leading to wrongful arrests of Black people. Through narrow assumptions about the “normal” human, Elam argues, AI can exacerbate existing inequalities.

When algorithms are found to negatively impact underrepresented groups, it’s often because the experiences of those groups were overlooked during development. This underscores the need for diversity in the technology sector—and for technologists trained in the practice of considering other perspectives and experiences, Elam contends.

Before joining HAI, she attended an outside conference about storytelling and AI. Many of the technologists there saw storytelling as a way to “control the narrative” about AI, she recalls—a marketing concern. For Elam, meaningful storytelling is not so tightly restricted.

The best literary fiction invites readers to stop, reflect, and revisit what they read. Rather than presenting a single and easily digestible interpretation, literature and art create openings for multiple perspectives. For this and many other reasons, Elam believes the humanities can help technologists expand their understanding of what it means to be human, imagining experiences beyond their own.

“The humanities ask us not simply to internalize and repeat stereotypes and mindsets. Especially when dealing with technologies that shape our private and social realities so powerfully, it’s crucial that we engage with many different stories and perspectives.”
Michele Elam

Illustration: Michelle Henry

The work of art

Last May, Elam co-led a Stanford HAI symposium titled “Creativity in the Age of AI: AI Impacting Arts, Arts Impacting AI,” which brought together technologists, scholars, and artists. The conversations were fraught at times. In one panel, the professional illustrator Steven Zapata described the difficulty of finding work when clients can simply opt for image-generating services like DALL-E—which were trained on his art without credit, consent, or compensation. Hearing such concerns is crucial for both established and emerging technologists, Elam believes.

Elam challenges some technologists’ claims that generative AI will enhance artists’ creative endeavors by sparing them the demanding process of crafting their art.

“This implies artists are burdened by the reflection, care, and time in their work,” she says. “For many, the meaning and the value of their work is expressed through the process, not incidental to it.” Tools, she notes, aren’t always designed with our needs in mind, let alone the social good.

There are artists creating powerful work with AI, but Elam argues they do not replace their creative process with a prompt. Instead, their art explores new ways to engage with AI and comment on technology and humanity.

Elam points to the work of Catie Cuan, PhD ’23, a dancer-technologist with a doctoral degree in robotics who participated in HAI’s “Creativity in the Age of AI” symposium. Cuan performs with industrial robots through what she calls “choreo-robotics,” inviting viewers to consider how dancing with these machines alters our relationship with them. She was inspired to challenge conventional notions of human-robot interaction after seeing how terrified her father was by the medical machines aiding his recovery from a stroke. Cuan knows that robots will only play a bigger role in our lives; her work invites audiences to imagine a tech-positive future without fear.


Elam believes that in order for AI to grasp complex, dynamic concepts such as race or gender, artists, scholars, and technologists first must reconcile their disciplines’ disparate perspectives on what it means to be human. Photo: Jess Alvarenga

Pressing pause

Elam’s involvement with HAI inspired her to develop new undergraduate courses that explore how varied perspectives, including Indigenous, feminist, and decolonial worldviews, can change how technology is designed and implemented. In her course Black Mirror: A.I. Activism, she challenges students to step back from their chosen disciplines and consider other ways of viewing the world.

“At this point in their lives, they’re gaining expertise in a particular field—whether it’s anthropology or computer science. What they’re acquiring is one type of knowledge. It’s one way of looking at the world, and it may rub against other approaches,” says Elam. She adds that engaging with other disciplines requires students to work through their discomfort and embrace humility.

For many students, the concerns explored in this course bear directly on their future careers. When speakers from the arts and technology sectors visit, the students “want to know what this looks like in the real world, when they show up as lowly interns at Apple or Google and talk about ethical design,” Elam says. In essence, this is the same question at the root of HAI’s interdisciplinary mission: How can we, as individuals, draw on these other ways of knowing to create technology that benefits all humans?

Looking back on her stock photography ordeal, Elam notes that social and institutional pressures often make us feel we don’t have the right to pause, ask questions, or advocate for ourselves. She hopes her class and her work with HAI empower students to reflect rather than succumb to these pressures.

“This technology impacts us in the most intimate ways. Of course we should feel empowered to have something to say about it.”
Michele Elam

The opportunity

The Stanford Institute for Human-Centered Artificial Intelligence connects experts from diverse fields to address the complex ethical, social, and technical challenges posed by AI. Through its initiatives, Stanford HAI not only advances AI research and development but also strives to ensure that AI technologies align with human values and contribute positively to society.

