AI Has a Dark Side in Gaming and We All Should Talk About It

The concept of the Black Box Problem in AI

Earlier this month, the strike led by actors represented by the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) ended in a major victory: after 11 months of a global-reaching movement, actors approved new contractual guidelines on the use of artificial intelligence technology by major publishers (thanks, NYT!).

According to the new terms, publishing companies like Electronic Arts (EA) and Activision, among others, are now required to obtain the written consent of performers before creating digital replicas—both visual and vocal. Also, studios that engage in these activities will have to pay the actors upon whom the replicas are based at a rate comparable to in-person work.

A major victory all around, and more power to them. But AI in gaming is still a very heated topic, and while we fully expect this victory to be the seed that leads the industry to better market practices, the truth is: AI tech still preys on video games, and it warrants more discussion.


AI in Gaming: Still the Problem That Needs Everyone’s Input

The Human Cost of Digital Replication

SAG-AFTRA actors protest against the unethical use of AI in gaming
Credit: Michael Buckner/Variety via Getty Images

The integration of AI in gaming has profoundly impacted the labor landscape, particularly concerning the rights and livelihoods of creative professionals. All of this for one simple reason: AI can replicate human performance—actors’ looks, voices, and mannerisms—with such precision that it has ignited significant disputes and forced a re-evaluation of traditional employment models.

Namely: why would a publisher hire an actor to motion-capture or voice a character when it could spend that money once to create a digital version of said actor and use it forever, never paying anyone a dime? And the thing is, legally speaking, it could.

The nearly year-long SAG-AFTRA strike was sparked by that very notion: starting in 2024, it was driven by deep concerns over AI replacing human performers without explicit consent or fair compensation. Luckily, as we stated above, the strike met its objective, and now we have the start of a legal framework for the use of AI in gaming, one that companies will, hopefully, adhere to.

However, this is one victory in a sea of problematic episodes: beyond performer concerns, the broader gaming industry has seen significant workforce restructuring, with AI often cited as a factor in widespread layoffs. Game studios collectively reduced workforces by 10% in the past year, with direct job displacement due to AI being a prominent concern among developers.

King, the Xbox-owned Candy Crush developer, is a prime example: the company laid off around 200 employees and, according to some sources, replaced a number of them with AI tools. And that’s not even the worst part: some of those tools were allegedly developed by the very workforce King laid off. In the words of an anonymous former employee: “Most of level design has been wiped… Now those AI tools are basically replacing the teams. Similarly the copywriting team is completely removing people since we now have AI tools that those individuals have been creating. The fact AI tools are replacing people is absolutely disgusting but it’s all about efficiency and profits even though the company is doing great overall.” (thanks, 80.lv)

Oh, and all of that happened before the most recent layoff rounds, in which 9,000-ish people across the globe lost their jobs at Microsoft, several studios closed down, and their games got cancelled. Granted, the latest round of job terminations had little to do with AI in gaming, but “streamlining processes” is, more often than not, used as an excuse to fire people in both cases.

The impact of AI is not uniform: writers, artists, actors, QA testers… Pretty much every department that has anything to do with creation might see “AI in gaming” as the one threat that will take their jobs, all for the sake of reducing costs and maximizing profit in an ever more expensive industry.

A Copyright Conundrum

AI in Gaming
Credit: Wizards of the Coast

The advent of generative AI in video game development has introduced profound complexities surrounding intellectual property rights and creative integrity, as traditional copyright law struggles to adapt.

Under current US Copyright Office policy, AI-generated content is generally not eligible for copyright protection unless it demonstrates “sufficient human authorship,” which is very difficult to confirm by regular means. 

In other words, AI-generated assets, without substantial human creative input, may not be protectable and could fall into the public domain.

A major ethical and legal flashpoint is the training of AI models on vast datasets, often “scraped” from the internet, including copyrighted material without consent or compensation. While AI developers might argue “fair use”, content creators contend it’s copyright infringement, leading to class-action lawsuits.

The following cases, for example, are not gaming-related, but the premise could be applied either way:

  • Ilya Shkipin and Dungeons & Dragons: In mid-2023, artist Ilya Shkipin admitted to using AI to “enhance” illustrations for a D&D book. Fans quickly noticed the flawed visuals and voiced their outrage, leading parent company Wizards of the Coast to review its artist guidelines and, finally, prohibit generative AI for D&D artwork.
  • Universal Music Group vs. Suno AI and Udio AI: In June 2024, UMG, Sony Music, and other major labels sued AI music generators Suno AI and Udio AI, alleging training on copyrighted sound recordings without licenses and producing “substantially similar” music. This pivotal case could redefine legal boundaries for AI in music and IP protection.

As for AI in gaming, several brands decided to take the lead and get ahead of the problem: Steam, for instance, demands proof of rights to all IP within a submitted game—including assets produced with AI. It doesn’t ban AI outright as an umbrella-wide decision, but if you, a game developer, use any AI asset in the creation of your game, those assets need to be under your control, copyright-wise.

But the topic still merits discussion: copyright laws are tailored for human creators, so how do we apply that legal framework to AI-produced works? Does the copyright belong to the AI itself? To the company that created the AI application? To the user?

How would you feel if an image from your game suddenly became OpenAI property because you used a prompt on DALL-E to build it?

The Player is NOT Safe

Children playing video games
Credit: FreePik

And then, there’s the player side. One would be forgiven for thinking that the user would be the least affected by all of this. Since that person is merely buying a game, there’s no concern over how it was made, right? Players gonna play, as Taylor Swift once said.

Well, one would be forgiven. But also, one would be wrong.

Video games routinely collect vast amounts of personal information: usernames, spending choices, consumer habits, even the way someone plays a game. All of it is fed into algorithms for analysis. Companies say it’s for a more personalized, tailor-made gaming experience.

At the same time, however, there’s nothing stopping companies from using this data to create assets that are eerily similar to the player. “Profiling” is not something you see only on Criminal Minds, after all. Emotion-inferring algorithms, used to gauge player emotions for personalized experiences, collect highly sensitive data like vocal intonations, facial expressions, and heart rate. This data, if identifiable, is protected by regulations like Europe’s GDPR or Brazil’s LGPD—both legal frameworks that try to curtail the unrestricted use of AI in gaming and other media platforms.
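To make that concern concrete, here is a minimal, purely hypothetical sketch (in Python) of the kind of telemetry profiling described above. Every field name, class, and threshold is invented for illustration; no real game’s pipeline is being described.

    from dataclasses import dataclass, field

    # Hypothetical telemetry event modeled on the data types named above:
    # spending, habits, playstyle, and an inferred emotional state.
    @dataclass
    class TelemetryEvent:
        player_id: str
        session_minutes: float
        purchases_usd: float
        deaths_per_hour: float
        inferred_frustration: float  # 0.0-1.0; emotion inference is the GDPR/LGPD-sensitive part

    @dataclass
    class PlayerProfile:
        events: list = field(default_factory=list)

        def record(self, event: TelemetryEvent) -> None:
            self.events.append(event)

        def offer_discount(self) -> bool:
            # Naive "personalization" rule: target frustrated players who
            # already spend money. Opaque, emotion-driven monetization like
            # this is exactly what the article warns about.
            recent = self.events[-10:]
            frustration = sum(e.inferred_frustration for e in recent) / max(len(recent), 1)
            spend = sum(e.purchases_usd for e in recent)
            return frustration > 0.7 and spend > 0.0

    profile = PlayerProfile()
    profile.record(TelemetryEvent("p123", 42.0, 4.99, 11.0, 0.85))
    print(profile.offer_discount())  # True: the player gets a "timely" offer

Notice that nothing in this sketch is illegal on its face; the problem is that the player never sees the rule, which is exactly where GDPR-style transparency requirements come in.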

But when the monetization potential far outweighs any legal blowback, this activity becomes too tempting for profit-seeking companies to pass up, even at the risk of privacy breaches and regulatory scrutiny. The lack of transparency in AI algorithms means players often don’t understand how their data is used or how AI-driven features function, eroding trust and raising legal concerns.

Bias, Discrimination, and Reinforced Harmful Stereotypes

Bad tropes from AI in Gaming
Credit: CD Projekt Red

Whether in or out of video games, AI does not understand nuance. It will reflect its users’ proclivities to a T. Why? Well, these tools are not designed to say “no.” And this is where AI in gaming can become more and more problematic.

You see, one of gaming’s most problematic stereotypes is the hypersexualization of women characters. And since most AI in gaming uses huge datasets for training, and most datasets are made up of very public information from the internet (up to and including forum posts and X/Twitter rants, for instance), these systems will pick up on misogynistic speech without recognizing it as misogyny.

For instance, some AI image generators produce hypersexualized women in rather unbecoming poses (think Stellar Blade, a pretty good game whose protagonist was overshadowed by skin-tight outfits and way-too-weird butt shots on camera) while disproportionately portraying men in smarter, more powerful roles (scientists, engineers, war heroes, and so on), reflecting the biases in their training data. There goes the diversity some people like to call “woke.”
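As a toy illustration of how that bias gets baked in, here is a sketch (in Python) of the crude idea behind dataset-bias audits: count which roles co-occur with gendered words in training text. The corpus and word lists are invented; real audits use far larger corpora and embedding-based measures.

    from collections import Counter

    # Tiny invented corpus standing in for scraped training text.
    corpus = [
        "the scientist explained his theory",
        "the engineer fixed his machine",
        "the war hero raised his medal",
        "the princess waited in her tower",
        "the nurse smiled at her patient",
    ]

    ROLES = {"scientist", "engineer", "hero", "princess", "nurse"}
    role_by_gender = Counter()

    for sentence in corpus:
        words = set(sentence.split())
        gender = "male" if "his" in words else "female" if "her" in words else None
        for role in ROLES & words:
            role_by_gender[(gender, role)] += 1

    # A model trained on text skewed like this reproduces the skew:
    # "scientist" and "engineer" end up associated with male pronouns.
    print(role_by_gender)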

Not only that: AI puts the integrity of video games themselves at risk, because these tools have found their way into cheating, such as aimbots and computer-vision prediction algorithms that read the footage on a screen—the list goes on, and they’re all nearly impossible to detect.

This undermines fair play, eroding trust and diminishing enjoyment for legitimate players. In response, game developers deploy AI-powered anti-cheat systems (Valve’s VACNet and Riot’s Vanguard, for instance), which use machine learning to detect suspicious activity. These systems are in a constant “AI arms race,” adapting to counter new cheat software.
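How VACNet or Vanguard actually work is not public, so here is only a deliberately naive sketch (in Python) of the statistical idea behind ML-based cheat detection: flag behavior that is implausibly fast and implausibly consistent for a human. All names and thresholds are invented.

    import statistics

    # Invented baselines for human flick-aim reaction times, in milliseconds.
    HUMAN_MEAN_MS = 180.0
    HUMAN_STDEV_MS = 40.0

    def looks_like_aimbot(flick_times_ms: list[float]) -> bool:
        """Flag aim timing that is both far faster and far steadier than
        human play. Real anti-cheat models learn these boundaries from
        millions of matches instead of hard-coding them."""
        mean = statistics.mean(flick_times_ms)
        spread = statistics.pstdev(flick_times_ms)
        n = len(flick_times_ms)
        too_fast = mean < HUMAN_MEAN_MS - 3 * (HUMAN_STDEV_MS / n ** 0.5)
        too_consistent = spread < 5.0  # humans are noisy; scripts are not
        return too_fast and too_consistent

    print(looks_like_aimbot([52.0, 54.0, 51.0, 53.0]))      # True: suspicious
    print(looks_like_aimbot([170.0, 240.0, 150.0, 300.0]))  # False: plausibly human

The “arms race” framing follows directly: cheat makers add randomized jitter to defeat exactly this kind of consistency check, and detectors retrain on the new patterns.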

And on the other end of the spectrum, players need to keep up with the technology, more often than not investing money they don’t have in ever more powerful computer parts, just to get a proper, average experience. When it comes to AI in gaming, these tools take a heavy toll and demand high-end PC specs.

Why Do We Put Up With S**t?

AI in Gaming
Credit: Ubisoft

But if AI in gaming can be so bad, why do we, as humans, journalists, gamers, and consumers, continue to use it and tolerate it? Well, the short answer is: AI gives us the good stuff, and gaming is nothing if not a very interactive way of achieving that dopamine fix.

Nothing particularly wrong with that, of course: as with any other habit, we crave the satisfaction of beating that difficult boss, the joy of knowing we got lucky enough to pick up a great game at a reduced price, or even the bragging rights over that friend we beat in our most recent Call of Duty deathmatch.

The longer answer is a complex interplay of economic incentives, technological momentum, consumer behavior, and the lagging pace of regulation.

Many view AI as an unstoppable tide of innovation toward that end. As one artist noted regarding AI ethics and the tech’s impact on digital art, “A digital artist cannot stop the tide of technological innovation any more than a Daguerreotype photographer could stop digital innovations in cameras and photography. However, artists can find ways to coexist with this technology…. This technology is not going away. We have to engage so that we can help create policies and practices that are beneficial to us.”

This “adapt or die” mentality, while pragmatic, can lead to a passive acceptance of potentially harmful technologies. The idea that “AI in games will do what humans do, except for cheaper” reflects a widespread belief that efficiency equates to ethical advancement, overlooking the broader societal implications.

To put it in concrete terms: if you use DALL-E to create a beautiful, hyper-realistic image of anything, does that make you a digital artist? Regardless of what you think, it’s safe to say that actual digital artists—as in, the people who studied the craft, went to art school, and mastered old and modern tools alike over years of work—would disagree with that notion. And yet, there’s still the debate over whether AI art looks “just as good” as a commissioned digital painter’s work.

Finally, consumer behavior and a lack of transparency play a significant role. Players are often unaware of the extent of data collection or the subtle manipulative tactics employed by AI in gaming. In fact, how that data is used is the kind of question that perhaps not even AI makers can fully answer.

And yet, the drive for ever more custom-made experiences is a major incentive for companies to push the technology—sometimes at the expense of actual workers and of players who, unknowingly, sacrifice an absurd amount of personal data just so a particular boss in an action/adventure game can suddenly adapt to their fighting strategy.
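For what it’s worth, that adaptive boss is usually some flavor of dynamic difficulty adjustment. Here is a deliberately simplified sketch (in Python) of how play data can steer it; every parameter is invented for illustration.

    # Simplified dynamic-difficulty sketch; all numbers are invented.
    class AdaptiveBoss:
        def __init__(self) -> None:
            self.aggression = 0.5  # 0.0 (passive) to 1.0 (relentless)

        def observe(self, player_deaths: int, damage_dealt_pct: float) -> None:
            # The boss "adapts to your strategy" by reading the same play
            # data the article describes: deaths, damage patterns, and so on.
            if player_deaths >= 3:
                self.aggression = max(0.1, self.aggression - 0.1)   # ease off
            elif damage_dealt_pct > 0.5:
                self.aggression = min(1.0, self.aggression + 0.15)  # push back

    boss = AdaptiveBoss()
    boss.observe(player_deaths=0, damage_dealt_pct=0.6)  # player is dominating
    print(boss.aggression)  # 0.65: the fight quietly gets harder

The player only ever sees the result—a boss that feels uncannily responsive—never the data feeding it.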

That opacity has a name, by the way: the “Black Box Problem.”

The thing is, until we have a robust legal framework that regulates AI in gaming (and other media platforms too) across the board, with severe penalties for those who violate it, this trend is likely to continue.

Just bear in mind: it will always come at the expense of someone.
