AI, like all technology, is double-sided

We've all heard both extremes when it comes to AI. "AI is the best thing to happen since the internet! It'll save us all!" and "AI is the worst thing to happen since industrialization. It'll be our downfall."

The reality is that it'll probably land somewhere in the middle, with a foot in each camp. All technology has pros and cons.

I'm not sure there's a technology that's 100% good or 100% bad. Even weapons can sit in the gray zone: they can be used for destruction, or they can be used for defense. It all depends on the situation and the people involved.

The same goes for AI, I think. AI might help neurodivergent people read, write, summarize, code, discover, or create. But it also steals original work from artists, fails to give proper credit, hallucinates inaccurate information, and uses a ton of water on an already severely mismanaged planet.

I don't fault anyone for falling on either side of that conversation. We all have different experiences, perspectives, and values.

For me personally, the part that puts me solidly on the "net negative" side of the AI conversation is that I think it will lead us to become less human. The special thing about Homo sapiens is our ability to create, tell, and understand stories, both made-up and true. Money is a story, religion is a story, gossip is stories, culture is stories, politics is a story. We tell, read, think about, and write stories all the time, every day. We think independently, come up with new and original ideas, and build on existing ones. We argue, we problem-solve, we invent. We see other points of view. And we do it together, as a social animal. There are other incredibly intelligent species on our home planet that can do amazing things we can't. But they can't do that.

My fear is that we've been outsourcing some of the skills that make us unique for some time now, and AI has the potential to accelerate that exponentially. I think humans, particularly student-age humans, should learn how to read long-form content, analyze it, think about it, form an opinion about it, and articulate that opinion. I think we should learn how to write well in many different contexts. I think we should learn how to search for information and evaluate its validity. I think we should have spaces (like Bear Blog) to hear other people's experiences, opinions, and arguments, then decide whether we agree, and be able to articulate why. I think we should use this magnificent, accidental brain of ours instead of outsourcing its power to computer models.

I've heard people say, "Well, we'll still use our brains in an AI future, but instead of writing code ourselves or reading long-form content, we'll use them to engineer AI prompts and manage the outputs." But even in that scenario, I think we're outsourcing our brainpower to a thing that we invented!

There's a theory floating around the NASA/SETI community that perhaps the reason we've never seen evidence of other intelligent civilizations is that intelligence itself is double-sided. In a blog post, Marc Kaufman wrote, "This potential explanation is among the most unsettling: that intelligent and technologically advanced beings are likely to ultimately destroy themselves. Along with the creativity, the prowess, and the gumption, intelligence brings with it an inherent instinct for unsustainable expansion and unintentional self-destruction."[^2]

I believe mass-consumption generative AI is a net negative for society because it moves us further away from what makes us human and closer toward our own destruction.

But everything is double-sided, right?