Beware the AI Wizards

Person in a wizard mask pointing a wand
Photo by Chase Clark / Unsplash

There's this moment of magic the first time you ask ChatGPT, Claude, or any of the other AI chatbots to do something you don’t know how to do. Two poorly written prompts about unresponsive code, and one minute later, your script is debugged and previewed. You suddenly feel like Professor Dumbledore, a flit of his wand conjuring up a buffet. You can’t cook code for squat, but with this wand, you can whip up a program in a near instant.

Then you think: what if so powerful a tool got into the hands of the wrong people? The smorgasbord of destruction it could wreak… Gasp!

Or maybe you had a less dramatic reaction than I did.

Regardless, it’s not hard to think of AI as either a savior or a destroyer. Popular views feed this perspective, hailing AI as “the most important – and best – thing our civilization has ever created” on the one hand, or casting it as a harbinger of doom on the other.

It can be easy to disengage from the magic of AI under such extremes. After all, so few of us are actual magicians.

Beware the Magicness

AI is not magic. But it does have all the telltale signifiers of magicness. The mechanisms underlying AI's output are poorly understood by most, no one quite knows its limits, and modern generative AI is relatively new (ChatGPT was released only about three years ago as of this writing). In the language of equations: Inexplicable + Boundless + Novel = Magicness.

As Arthur C. Clarke more artfully put it:

“Any sufficiently advanced technology is indistinguishable from magic.”

Magicness is a sense that what we’re experiencing doesn’t conform to how the physical world works. We know that the chatbot giving us workable code obeys the laws of physics. But it feels like it shouldn’t be able to deliver code so quickly and easily.

This magicness itself isn't a problem.

How we and others respond to it could be.

The bridge between magicness and misuse (no ness needed) is not very long. Renowned magician and skeptic James Randi built his reputation on debunking magician fraudsters. He knew that the powers to mystify and to dupe are close cousins. Not every magician uses those powers for entertainment.

No one is claiming AI is actual magic. But the power to wield magicness is still power. Magician fraudsters are not dangerous simply because they can get people to believe them. The danger lies in getting others to submit and follow.

Beware the Merchants of Magicness

Getting people to submit doesn’t require them to believe you’re a magician if you already have some measure of power. People merely need to let you run the show, and there’s good reason to believe people will do so with AI. First, AI is darn complicated, and many of us prefer what’s simple. Second, the impact of AI is unclear and likely enormous, and we tend to fear that kind of impact. Such overwhelm can easily tax the attention needed for intentional engagement.

Psychologist Daniel Kahneman, one of the most well-respected researchers and theorists of the human mind, proposed that people have two cognitive systems. System 1 is the quick-thinking automatic system we rely on to make snap judgments of events and others. System 2 is the slower, more deliberate system we rely on to reason about concepts.

System 1 is your suggestible sidekick, while System 2 is your skeptical advisor.

You can also think of System 1 as a half-drunk audience member, easily led by the voice at center stage. System 2 is more a sober James Randi, carefully watching for the magician’s sleight of hand. With our AI wizards dazzling us with so much on stage, it’s hard even for the teetotaling Randi to follow what’s actually going on.

Perhaps you’ve heard that an AI prompt consumes massive amounts of energy, that it could treat forms of cancer, that it is fundamentally biased, that it will dramatically impact the job market, that it could lead to the second Green Revolution, or that it improves an individual’s creativity.

That jumble of claims is only a small sample of views on AI. And yet it’s enough to leave many of us dizzy. An enterprising magician could easily lead us along under these conditions.

This puts most of us in a precarious position, with our inner advisor dazzled by so much that we haven’t the capacity for intentional engagement. Worrying, because our merchants of magicness may not wield real magic, but they do wield real power. And many of them are intentionally engaging with AI. If you rightfully trust their intentions, then there’s no worry. If you suspect differently, then there are many worries.

Beware of Overwhelm and Misdirection

This brings us back to what our most vocal merchants are selling us: a future where AI is either our savior or our destroyer. These would seem to be very different ends, but both exist in a future where AI’s power is awesome and beyond our understanding. This framing misdirects us, drawing our attention away from what we can control and know to what we cannot. That sense of helplessness is disempowering, and it’s the perfect opportunity for our merchants to steer attention wherever they’d prefer.

Pay no mind to where the data comes from or how we use it.

AI can solve climate change, but don’t ask us how much energy is used to that end.

What is the solution to such overwhelm and misdirection?

In his book Co-Intelligence, Professor Ethan Mollick offers one suggestion for how to better handle the destroyer mindset: “Rather than being worried about one giant AI apocalypse, we need to worry about the many small catastrophes that AI could bring.” Putting out the little fires may be tough, but it’s easier than taking out the world-ending blaze.

In one example of this, Dr. Joy Buolamwini, founder of the Algorithmic Justice League, describes an organizing effort led by the Brooklyn tenants of one apartment complex against the misuse of AI and facial recognition programs. In Unmasking AI, Dr. Buolamwini details how facial recognition programs are known to be biased against recognizing the faces of Black women. Using such technology to determine entry, as building management was planning to do, could have literally prevented Black women from entering their own homes.

The tenants didn’t try to take on an apocalypse on their own. They put out a relatively small fire together.

As for the savior mindset? We need only understand that the positive effect of AI—like the effect of nearly all tools, as any good scientist will tell you—depends. In 2023, economists Shakked Noy and Whitney Zhang showed that using ChatGPT improved writing productivity, but mostly among those who weren’t that great to begin with.

Even more sobering to me are the findings of Anil Doshi and Oliver Hauser, who asked groups of participants to write creative stories while either providing or withholding access to AI. Mirroring the results of the prior study, those who were previously assessed as less creative benefited the most. But those creative AI-enabled stories? They tended to be more similar to each other than the human-driven ones.

AI is unlikely to enable an age of creativity when its use restricts the richness of human expression.

Perhaps you read the above and thought, “Oh. Of course.” Much of what AI actually does is grounded in the normalcy of discrimination and dependency. Normal, not because it is right, but because it describes a world you are familiar with. The magicness of AI is fine to feel, but no, AI is not magic.

Be Aware

We are living in a world where power is afforded to those who shape reality, who distort and deny in the hopes of gaining yet more power. The simplest form of resistance is to not believe them.

Transdisciplinary scholar Dr. Ruha Benjamin asks that we not relegate the power of deciding what AI will be used for to a select few. Instead of being swayed by extreme perspectives on AI, she invites us to engage thoughtfully based on our own realities. Using the concept of a UStopia, a word coined by Margaret Atwood, she invites us to do so collectively.

“Whereas utopias are the stuff of dreams, dystopia the stuff of nightmares, UStopias are what we create together when we’re wide awake.” - Dr. Ruha Benjamin

There are already a host of people and organizations that exist to support that engagement.

For those wanting to keep up with AI and tech news, Casey Newton’s Platformer offers a mainstream entry point for understanding how AI and democracy intersect. The Center for Humane Technology’s podcast, Your Undivided Attention, provides a more in-depth analysis of the relationship between well-being and tech. And YK Hong’s writing on liberation and tech justice is a great next step to help contextualize Big Tech within the broader history (and present) of oppression while offering some practical tech tips.

Following organizations like Dr. Buolamwini’s Algorithmic Justice League, the Montreal AI Ethics Institute, and the AI Now Institute will help bring you into a bigger movement for less biased and more just AI and tech. And if you’re wondering whether community-based approaches to AI research are even possible, the Distributed AI Research Institute (DAIR Institute) might give you some hope.

The AI “wizards” might be happy to have people cower before the power of AI magic. But no, AI is not magic. That does not mean you need to stop dreaming or believing that magic exists somewhere. Just make sure it doesn’t cause you to turn away from your own power.