Voice actors at Comic-Con warn of AI peril facing culture and the arts
The OpenAI logo is displayed on a cell phone with an image on a computer monitor generated by ChatGPT’s Dall-E text-to-image model, Friday, Dec. 8, 2023, in Boston. The maker of ChatGPT is now diving into the world of AI-generated video. Meet Sora — OpenAI’s new text-to-video generator. The tool, which the San Francisco-based company unveiled on Thursday, Feb. 16, 2024, uses generative artificial intelligence to instantly create short videos based on written commands. | AP

SAN DIEGO—As an estimated 150,000 people descended on the city for Comic-Con to celebrate fandom and the arts, a number of panels tackled heavy topics concerning the future of the entertainment industry and the real-world political turmoil affecting the country. One such panel was the “Creators in the Age of A.I.” discussion. Top creative professionals from numerous disciplines, including voiceover, graphic design, visual art, and writing, discussed how artificial intelligence—and those controlling it—is impacting creators worldwide. At the center of the meeting was the threat AI poses to the future of the industry if left unchecked and powered by capitalist greed.

Making up the panel were experts from the National Association of Voice Actors (NAVA): host Linsay Rousseau, NAVA media affairs director; Tim Friedlander, president and co-founder of NAVA; voice director Philip Bache; J.P. Karliak, president of QueerVox; concept artist and costume designer Phillip Boutté, Jr.; and NAVA Director of Operations Matthew Parham. The central focus of the panel was how creators can protect themselves and their work from AI misuse, along with a discussion of new legislation and court rulings that positively impact artists.

There was no mincing of words when it came to the looming dangers artificial intelligence poses to entertainment and to the quality of art and storytelling.

In the lead-up to the convention, the battle over President Donald Trump’s so-called “Big Beautiful Bill” was raging in Washington, a law with unprecedented features that benefit the wealthy while slashing benefits for working people. The bill’s key items are a $4.5 trillion, 10-year tax cut for corporations and the rich, along with massive cuts in Medicaid, the Affordable Care Act, worker health and safety, education, and other domestic programs to pay for it.

What many may not have been aware of was that the bill initially included the AI Enforcement Pause, which would have required states and local governments to stop enforcing most AI-specific regulatory laws for at least the next five years. The U.S. Senate Committee on Commerce, Science, and Transportation had proposed tying the AI Enforcement Pause to states’ receipt of federal broadband funding. NAVA President Friedlander explained that this was a significant battle fought—and won—by the organization to get the clause removed from the legislation. He followed up by explaining that the strategy was a grassroots one, embedded in the community of creatives and allies. 

“[It was] an uprising of the creative community. Letters were sent, petitions signed. [We are in] a climate and culture where we have the ability to spark change through social media. [We have to] keep standing up and protecting the industry we all love,” Friedlander asserted. 

The panelists discussed the specifics of the damage AI is doing not only to the entertainment industry but to society and information consumption as a whole. AI programs comb through the internet for information they can consume and reconfigure, a process that speakers warned is leading society to a rapid descent into mediocrity and misinformation.

NAVA voice panel from left, Linsay Rousseau, Tim Friedlander, J. P. Karliak, Phillip Boutté, Jr., Matthew Parham, and Philip Bache | Chauncey K. Robinson/People’s World

Phillip Boutté, Jr., who has worked on projects such as Black Panther, explained how in a time when many people go to search engines like Google to find answers, AI programs are being used to answer the questions asked—often with mixed and downright inaccurate results. “AI muddles research,” Boutté said. “It makes it harder to figure out what is real and what’s not.” He went on to explain how AI has a tendency to copy existing work and then insist that it is original, noting that while researching projects, he’s had AI programs present his own work from previous projects back to him, claiming it was AI-generated.

Boutté’s story is not an isolated case. A recent study conducted by the Columbia Journalism Review (CJR) found that chatbots “provided incorrect answers to more than 60% of queries” when asked to identify the “corresponding article’s headline, original publisher, publication date, and URL.” This phenomenon is known as AI “hallucination,” in which chatbots give wrong answers that they themselves invent. Recent research conducted by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, raising concerns over accuracy and reliability. Yet large platforms and corporations, like Google, X (formerly known as Twitter), and Amazon, to name a few, have begun incorporating AI into their products, pushing it onto consumers.

Boutté noted that “many consumers don’t see” just how much AI is seeping into and affecting their everyday lives—especially in the art and information they take in. 

It was also noted that in the age of culture wars and political division, AI has built-in bias that is detrimental to marginalized communities. Because AI programs comb the internet for their “knowledge,” they often take in harmful stereotypes and misinformation on numerous topics. They then repeat that faulty or sometimes downright racist content to users, thus perpetuating those views among people unable to discern what’s factual or not. This situation, coupled with the Trump administration’s maneuvers to ensure that inclusive language and DEI (Diversity, Equity, and Inclusion) stay out of AI, may mean artificial intelligence remains a perpetuator of bigotry rather than innovation.

J.P. Karliak of QueerVox highlighted Trump’s executive order titled “Preventing Woke AI in the Federal Government,” which states: “When ideological biases or social agendas are built into AI models, they can distort the quality and accuracy of the output. One of the most pervasive and destructive of these ideologies is so-called ‘diversity, equity, and inclusion’ (DEI).” Karliak said that in the AI context, the suppression of DEI “includes the suppression or distortion of factual information about race or sex.”

The Trump order pushes the claim that DEI (somehow) “poses an existential threat to reliable AI.” Karliak noted that as long as executive orders or laws like this exist and AI goes unregulated, they will continue to be a barrier to AI ever learning inclusion.

President Donald Trump holds a signed executive order after speaking during an AI summit at the Andrew W. Mellon Auditorium, July 23, 2025, in Washington. | AP/Julia Demaree Nikhinson

Parham asserted that AI has become a “colonizing force” in creativity and noted that different communities—such as LGBTQ, Black, people of color, and veterans—must come together and stay alert. “That’s why they [those in power] try to sneak [AI deregulation] into bills, so we don’t organize to fight back,” he warned.

When speaking of what can be done to combat the harm of unregulated AI, panelists noted that there are several key pieces of legislation on the table that people can support and that AI “art” (generative work) should be called out and labeled as often as possible. They noted that people are being “fooled” by AI, and that it should be labeled just like the food we ingest so that consumers can make informed decisions. 

NAVA is currently waging a battle in support of the Transparency and Responsibility in AI Networks (TRAIN) Act, a bipartisan bill that helps creators protect their work from unauthorized use by artificial intelligence programs. The TRAIN Act would require AI companies to disclose whether copyrighted works were used to inform AI models, holding developers accountable for using creators’ work.

In a statement that went out Monday following the panel, Friedlander noted that “accountability is key to ensuring artists aren’t exploited by AI.” He said cooperation is needed “to lay the groundwork for legislation that protects human voices and ensures a safe and fair environment for AI use.” Friedlander called the TRAIN Act “an important step toward moving the AI narrative forward with transparency and responsibility.” 

The push for the legislation is led by U.S. Sen. Peter Welch, D-Vt., who told the press: “This is simple: If your work is used to train AI, there should be a way for you, the copyright holder, to determine that it’s been used by a training model, and you should get compensated if it was.”

Although the panelists explained that they were not in favor of banning AI completely, they noted that the technology will never replace the weight, influence, and depth of authentic art and creativity. Philip Bache proclaimed that although AI is being touted as the way of the future, it can only tell us where we have been, while art helps tell us where we are going. “Art has helped humanity as a whole. AI can only consume what is already out there.”



Chauncey K. Robinson

Chauncey K. Robinson is an award-winning journalist and film critic. Born and raised in Newark, New Jersey, she has a strong love for storytelling and history. She believes narrative greatly influences the way we see the world, which is why she's all about dissecting and analyzing stories and culture to help inform and empower the people.