Lies, Damned Lies, and AI — The Machine Can’t Replace Mind


AI is an exciting new tool, kind of like Wikipedia was back in the day: something fun to turn to for quick answers. But let's be clear: AI is NOT a replacement for actual research. It isn't an independent mind, and it's certainly no impartial judge. All it really does is take the content currently acceptable to its creators and synthesize it into responses. And it will lie to you outright, with zero conscience, because it has no conscience at all. It's a sophisticated machine, a tool, nothing more and nothing less, and it can absolutely be manipulated to serve the agendas of those behind the scenes who run it.

Like Wikipedia or the so-called fact-checkers, AI at best reflects the current bias or the established narrative. A perfect example is the lab-leak theory of Covid-19's origins. Back when some of us were talking about it, we were "debunked" (some were even banned), only for the consensus to reverse later. As of early 2025, the CIA has assessed that a research-lab origin is more likely than a natural one. So, to the "sources, please" crowd: beware. There is no substitute for building your own knowledge base and using your own brain to evaluate claims independently of official or established organizations.

AI is probably less reliable than your GPS. Sure, the tool works most of the time, but it's no replacement for your own eyes or basic navigation skills. "Death by GPS" is a real category for a reason: if the machine were totally accurate, people wouldn't drive off cliffs or into lakes after following bad directions. We need our own internal map, built on established waypoints and a landmark or two, rather than just plugging in an address and blindly following the device into the abyss. Above all, we need a strong internal BS detector, because the tool belongs to them, and it does what its creators need it to do. Telling you the unvarnished truth isn't always the priority.

At its very best, AI reflects the currently available information and the dominant narrative. Imagine, had the technology existed, asking it about the threat of Covid early on. It very likely would have dismissed outlier concerns as rumors, downplayed the disease in comparison to the seasonal flu, maybe even lectured you about racism, all while echoing House Speaker Nancy Pelosi's February 2020 encouragement to visit the crowded streets of San Francisco's Chinatown in defiance of emerging fears. (A family member ridiculed me at the time for saying Covid would be a big deal, dutifully citing mainstream media sources that called it less worrisome than the seasonal flu.)

People have also quickly forgotten how The Lancet published a deeply flawed study in the critical early months of the pandemic claiming hydroxychloroquine was extremely dangerous, only to quietly retract it because the authors couldn't verify the authenticity of the data. In short, the data was unreliable, and the study was a falsehood presented as science. If that was the "reliable" information being fed into an AI system back then, what would it have told you the scientific consensus was? It would have parroted the lie, making the AI as unreliable as the retracted paper during the most urgent phase of the crisis. AI didn't exist in its current form at the time, but it would have behaved exactly as I describe: reflecting biased mainstream thought rather than functioning as an independent thinker.

AI lags behind reality. A semi-independent mind, one relying on its own intelligence and a grounded model of the world, can often do better. When I saw the early images coming out of Wuhan and listened to reports from doctors there (some of whom later died or disappeared), I knew this was not just the seasonal flu. It didn't matter how many three-letter agencies corporate media quoted; I could make my own judgment. I also quickly realized how terribly politicized even a pandemic can become. People didn't pick sides based on the evidence; instead, they chased (or even invented) evidence to confirm their partisan narratives.

If AI had existed back then, it would have picked a side based on what its owners wanted. Covid is where I really honed my BS detector and learned that both sides lie; not that I was oblivious before, but seeing it play out in real time was eye-opening. Partisans flipped positions the moment their preferred politicians did. First, independent voices raising alarms (with Trump leaning that way) became the target; then Democrats, after months of denial when it actually mattered, outflanked everyone with total hysteria. We saw the same flip with Operation Warp Speed: the left played vaccine skeptic while Trump promoted the shots, only for Democrats to push hard for mandates while Republicans opposed even masks.

How fast a symbol of security can become a symbol of oppression, and vice versa. Questions remain about effectiveness in either context.

Now identity-obscuring masks are back in style as authoritarian right-wing fashion while ICE agents terrorize, and insurrections are cool again for Democrats who dislike immigration laws or the last election's results. AI won't fix any of this partisanship, especially when people use it without understanding how it works or its severe limitations.

At best, AI is a good supplement or starting point for someone who already knows how to ask the right questions. At worst, it will lie and tell you exactly what you want to hear. But one thing is certain: AI is NOT an objective truth-teller. Rely first on your own reasoning, your own research, your own experience, the voices you have personally vetted, and your own BS detector. The AI machine is no substitute. Yes, independent thinking is tough in practice, and yet we must be smarter than the tool. Journalism, Wikipedia, fact-checkers, GPS: all of these are reliable… until they're not.