A recent post on X by conservative commentator Matt Walsh sparked a heated debate over the future of artificial intelligence, as he warned of a looming technological crisis with far-reaching societal consequences.

“AI is going to wipe out at least 25 million jobs in the next 5 to 10 years. Probably much more,” Walsh wrote. “It will destroy every creative field. It will make it impossible to discern reality from fiction. It will absolutely obliterate what’s left of the education system. Kids will go through 12 years of grade school and learn absolutely nothing. AI will do it all for them. We have already seen the last truly literate generation. All of this is coming, and fast. There is still time to prevent some of the worst outcomes, or at least put them off. But our leaders aren’t doing a single thing about any of this. None of them are taking it seriously. We’re sleepwalking into a dystopia that any rational person can see from miles away. It drives me nuts. Are we really just going to lie down and let AI take everything from us? Is that the plan?”

In response, a reader pushed back strongly on Walsh’s concerns, accusing him of technological fearmongering.

“Matt’s meltdown reads like a plea for luddism, stagnation, and mediocrity,” the commenter wrote. “Every time technology raises the floor of human capability, there’s always someone screaming that progress is theft. It isn’t. Progress makes the weak stronger, the poor richer, the average person more productive than the kings of the past… AI isn’t ‘taking everything from us.’ It is giving us leverage. It is giving the average person a printing press, a factory, a studio, a research assistant, and an education they never could have afforded. That is not dystopia. That is opportunity.”

The term “luddism,” used in the comment, refers to the Luddite movement of the early 19th century, in which English textile workers destroyed industrial machines that threatened their jobs. Today, it broadly describes resistance to new technology seen as disruptive. But applying it to Walsh’s critique may oversimplify a much more nuanced reality.

While Walsh’s tone is undeniably apocalyptic, many of his concerns are grounded in observable trends:

  • The speed of change is unmatched.  AI is advancing faster than any previous general-purpose technology. Its impact could exceed that of electricity, the internet, and smartphones in both scale and disruption.
  • Significant job displacement is likely.  Walsh’s estimate of 25 million jobs lost in a decade may be speculative, but substantial disruption across industries is widely expected.
  • Education is not ready for this shift.  Schools and universities have not meaningfully adapted to AI. Students are already using it, but curricula, policies, and teaching methods lag behind.
  • Distinguishing reality from fabrication is a growing threat.  Deepfakes, AI-generated misinformation, and fabricated research are already affecting public discourse and trust.

However, Walsh’s conclusions leap beyond what the current evidence supports:

  • Assuming creativity will be “destroyed” ignores human adaptability.  Creativity tends to evolve with tools, not disappear because of them.
  • Predicting the end of literacy or total institutional collapse is extreme.  These are worst-case projections that do not account for adaptation, reform, or regulation.

The commenter, by contrast, makes valid and important observations:

  • Technology historically empowers individuals.  Tools like the printing press, electricity, and the internet expanded access to knowledge and productivity.
  • AI can level the playing field.  It offers average users capabilities once exclusive to experts and institutions, such as research assistance, content creation, and automated learning.
  • Progress is not inherently theft.  Gains in productivity and efficiency can benefit all if managed wisely.

But the commenter’s optimism may gloss over real risks:

  • Technological progress often destabilizes before it benefits.  Historically, the early phase of major transitions includes confusion, displacement, and inequity.
  • Not everyone can adapt equally.  Skills gaps, digital divides, and economic inequalities will affect how people benefit from AI.
  • AI is not just another tool.  Unlike electricity or the printing press, AI replaces cognitive labor and decision-making. It narrows the gap between experts and amateurs in unprecedented ways.


In short, both Walsh and his critic raise points worth considering. Labeling Walsh’s perspective as “luddism” oversimplifies a complex and pressing issue. His concerns arise not from blind fear of change but from a recognition that AI is fundamentally different from past innovations in both scale and speed. It is not unreasonable to be alarmed by a technology that can simulate human reasoning, automate intellectual labor, and flood society with synthetic media.

At the same time, the commenter is right to highlight the empowering potential of AI. When wielded thoughtfully, AI can reduce barriers to entry across education, creativity, and entrepreneurship. It can equip the average person with tools once available only to the elite. That is not dystopia—it is progress. But real progress must be accompanied by planning, guardrails, and a willingness to confront unintended consequences.

The danger in this debate is polarization. Dismissing caution as technophobia, or optimism as naivety, prevents society from approaching AI with the seriousness it demands. AI is not inherently good or evil; it is a force amplifier. It will accelerate whatever systems, values, and power structures we already have in place. If those systems are inequitable, AI will deepen the divide. If they are adaptable and inclusive, AI can help close gaps in opportunity and access.

The future will not be defined by whether AI replaces humans, but by whether humans use AI to expand what they are capable of becoming. That means investing in education that teaches critical thinking alongside digital fluency. It means creating policies that prioritize both innovation and stability. And it means treating AI not as a savior or a threat, but as a tool that demands wisdom in its application.

This is not about smashing machines or surrendering to them. It is about guiding their use with clear eyes and steady hands.