Is your AI co-pilot helping you think better or just making more noise?
When I was in kindergarten, one of my favorite stories to read was a short picture book about a farmer overwhelmed by the noise on his farm. The endless stream of moos, baas, clucks, woofs, and hee-haws was driving him crazy. The chaos got so out of control that he plugged his ears and screamed, “TOO MUCH NOISE!”
It was funny to a 5-year-old…
Almost as funny as opening an AI-addled LinkedIn feed 25 years later (okay 45 years…) that’s burdened by emojis, hyphens, and “Congrats [insert name]” … the moos, baas, and clucks of our time.
The Glaring Problem with AI for Content Generation
“Write me an article on re-shoring the supply chain! Great, how about one on accelerating inventory turns? Navigating unexpected import tariffs? Developing our next generation of manufacturing leaders?”
Yes, you can give your favorite Gen AI tool just about any prompt and it will spit out an article within a few minutes. Usually, the content is directionally accurate. It’s also usually incredibly useless and boring. It’s just noise.
We’re seeing a growing number of experts turn to AI to help develop their thinking. This is not necessarily a bad thing. We use ChatGPT, Claude, Perplexity, and a variety of other specialty AI tools every day at Rattleback, both in our own work and with our clients. And they can be incredibly useful.
Unfortunately, we’re seeing some experts using them the wrong way. When we ask them a question about the issue we’re exploring together, they’ll respond, “Let me ask my co-pilot!” A few minutes later they send us a copy-and-pasted AI response.
They’re asking AI the questions. Skimming the responses. Nodding their heads. “Yes, that sounds about right.” Then dropping it into an outline or article with little further thought.
But, just as a quick reminder, an LLM is essentially a giant prediction machine. It predicts the next word based on the words that preceded it. So, when we ask it questions, its answer is just a string of words assembled from everything it has ingested (whatever that might be).
In essence, when we ask it a question, the response it provides is a loose approximation of the collective wisdom on the topic. It is designed to summarize conventional thinking, which is the exact opposite of thought leadership.
A Better Way to Use AI in Content Generation
There is, of course, a better way. Like so many of our readers, we use AI virtually every day. We use it to do secondary research. Ideate topics. Discern conventional wisdom. Challenge our thinking. Identify holes in arguments. Find best-practice examples. Generate rough outlines. Shorten prose…
The one thing we don’t do is use it to replace critical thought.
We don’t ask it the questions. Rather, we tell it what we’re trying to do. Then, we ask AI … “what questions do you have of me?”
This approach inverts the relationship. It shifts AI from being an answer-engine to being a question-engine – making it an intellectual sparring partner. In essence, it helps us identify where everyone is zigging … so that we can zag … with a differentiated POV.
For an agency that makes a living helping clients develop their best thinking, AI functions like a helpful third partner in our relationships: the subject matter expert (our client), the argument shapers (our strategists and editors), and the AI companion (a rapid researcher and thought partner).
In short, used correctly, it makes everyone’s thinking sharper and smarter than it would’ve been without AI.
It helps us cut through the noise … instead of adding to it.
If there’s anything we can do to help your firm on its thought leadership journey, give us a shout.