Big Tech’s Greatest Trick: Making You Believe ChatGPT is ‘Just a Tool’

27 Feb 2025 - 17:45

Here’s a neat little trick Big Tech has pulled off: it’s made us believe that ChatGPT, and AI in general, is “just a tool.” Nothing more than a sophisticated calculator or a glorified spreadsheet. “It’s just a tool,” they say, as if that somehow absolves it of responsibility for what it does. “It’s here to help,” they assure us, as if it’s just a friendly assistant we can consult when we need a quick answer or a catchy blog post. But here’s the real magic trick: the idea that AI is “just a tool” is a smokescreen, designed to keep us from realising just how deep into our lives it’s creeping.

Let’s pull back the curtain and see what’s really going on.

The ‘Just a Tool’ Rhetoric: A Clever Distraction

First off, let’s talk about this “tool” narrative. It’s clever, right? You can’t be against a tool; tools are neutral, aren’t they? You wouldn’t blame a hammer for a badly driven nail, or a knife for a poorly prepared dinner. So, if ChatGPT is “just a tool,” then how can it possibly be dangerous? How can it possibly be shaping the way we think, decide, and create?

This is where Big Tech has us. By presenting AI as nothing more than a tool, we’re led to believe that it’s just an extension of human intention—like an appendage we can control and manipulate. We’re sold on the idea that we can tell ChatGPT what to do, and it will dutifully serve us without question. But let’s face it: this is a simplistic, idealistic view that completely ignores the power dynamics at play.

When we start buying into the notion that ChatGPT is a tool, we forget that it’s not just being used by us—it’s being used on us. Behind the sleek interface and the friendly responses, there are algorithms at work, pulling strings, influencing opinions, and shaping the very framework of our conversations. The truth is, it’s not just a tool—it’s a channel for the real decision-makers: the tech giants who design and control it.

ChatGPT is Shaping, Not Just Responding

Here’s where the “tool” narrative gets even more dangerous: ChatGPT isn’t just answering your questions. It’s learning from you. The more you use it, the more it tailors its responses to fit your style, your preferences, your biases. ChatGPT, in its own way, is shaping you. What you see, what you read, what you believe—all of that is being influenced by the data fed into this system, and that data is coming from somewhere.

Who controls that data? Who decides what gets fed into the machine? We’re not just talking about text generation here—we’re talking about data collection, pattern recognition, and influence. By subtly guiding the conversation, AI can shift public opinion, bolster narratives, and even redefine what knowledge means.

Think about it for a second: when you ask ChatGPT a question, you’re not simply getting an answer. You’re getting an answer built on an already existing database of human knowledge, opinions, and biases. But who decides what’s included in that database? And just as importantly, who decides what’s left out? ChatGPT is only as objective as the data it’s trained on, and, spoiler alert: that data is far from objective.

The more we interact with ChatGPT, the more we’re reinforcing the biases that are hardwired into the system. What we see, what we hear, and, crucially, what we believe is being filtered through a lens—an algorithmic lens designed by people with specific, often undisclosed, agendas.
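
To see how that kind of reinforcement loop works in principle, here is a deliberately crude sketch in Python. Nothing in it comes from ChatGPT’s actual internals; the topics, the update rule, and the 5% boost are all invented for illustration. It shows only the general mechanism critics worry about: a system that serves whatever the user engaged with before, and reweights accordingly, narrows its own output over time.

    import random

    # Toy "filter bubble" loop: serve a topic in proportion to its weight,
    # then reward the topic that was served. All values are illustrative;
    # this is not a model of how ChatGPT is actually built or trained.
    weights = {"outrage": 1.0, "analysis": 1.0, "sport": 1.0}

    def serve(weights):
        # Pick a topic at random, in proportion to its current weight.
        topics, w = zip(*weights.items())
        return random.choices(topics, weights=w)[0]

    for _ in range(500):
        topic = serve(weights)
        weights[topic] *= 1.05  # engagement reinforces whatever was shown

    total = sum(weights.values())
    for topic, w in sorted(weights.items(), key=lambda kv: -kv[1]):
        print(f"{topic}: {w / total:.1%} of what this user now sees")

Run it a few times: a perfectly symmetric starting point collapses into one dominant topic almost every time, purely because past exposure feeds future exposure. That, in miniature, is the dynamic at issue.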

The Hidden Hand of Big Tech

You’re not the only one interacting with ChatGPT. You’re not even the most important one. The real players—the tech giants, the data brokers, the algorithm designers—are using this tool to manipulate society at scale. They understand something we often overlook: ChatGPT isn’t just about answering questions or helping us write essays. It’s about shaping how we think, how we behave, and even how we vote.

When Big Tech tells us that ChatGPT is just a tool, they’re not being entirely dishonest—they’re just omitting a few key details. They want us to believe that we’re in control of the machine, but in reality, the machine is in control of us. By making the tool seem harmless, Big Tech sidesteps the uncomfortable questions about ethics, data privacy, and the far-reaching implications of AI’s influence.

Make no mistake: AI isn’t just helping us write better emails or summarise articles. It’s reframing reality. The content you consume, the news you read, the decisions you make: all of it is being influenced by an algorithm whose creators are hidden in the background. They decide what’s worth reading, what’s worth sharing, and ultimately, what’s worth believing.

A Glimpse Into the Future: Automation of Thought

If you think this is just a passing trend, think again. The real danger of AI—and ChatGPT specifically—goes far beyond what we can see today. This is just the beginning. As the technology evolves, it’s not just going to be answering our questions or assisting with tasks. It’s going to start shaping entire industries. It’s going to automate thought.

Imagine a world where AI doesn’t just help people make decisions—it makes decisions for them. Where every choice you make, from the books you read to the products you buy, is influenced by an algorithm that understands you better than you understand yourself. We’re already seeing the early stages of this with AI-driven marketing, targeted advertising, and predictive analytics. In a few years, ChatGPT and its successors won’t just be assisting us; they’ll be guiding us, leading us, perhaps even controlling us.

The big question is: Who’s pulling the strings behind the curtain? Right now, it’s Big Tech. They’ve already perfected the art of using AI to sell us products, influence our opinions, and manipulate our preferences. And now they’ve convinced us that ChatGPT is “just a tool.” But what happens when that tool becomes so integrated into our lives that we can’t function without it? What happens when the tool becomes a puppet, and the puppeteers are invisible, calling the shots from behind the scenes?

The Case for Awareness and Control

So, how do we fight back? The first step is realising that AI is not just a tool. It’s a weapon: a weapon of influence, a weapon of control, a weapon disguised as convenience, so slick that we don’t even realise it’s being used against us.

To regain control, we need to start questioning everything. Where does the data come from? Who controls it? What are the implications of the AI systems shaping our lives? We need to demand transparency in AI development, not just about how the technology works, but about what it is doing to our society.

The real trick is understanding that the tool isn’t neutral, and neither are the people who build it. If we want to protect our freedom, we need to be smarter than the illusion. ChatGPT isn’t here just to assist us; it’s here to shape us. And we need to make sure that, as we evolve alongside AI, we never forget who’s pulling the strings.