AI’s Role in Telecom: Useless? Not Really, Just Misunderstood
*Image by geralt (Pixabay)*
When I first read the Light Reading article
“AI
looks increasingly useless in telecom and anywhere else,” I had
to pause. Not because the argument was new—I’ve seen plenty of skepticism
about AI—but because of the tone.
It doesn’t just question AI’s utility; it
paints a picture of a lobotomized society, drifting into an “AI
psychosis” where people see machines as sentient companions. Boy, it’s an
arresting way to start, but also, perhaps, too convenient a metaphor.
The author compares our intellectual
reliance on AI to muscles wasting away from disuse, citing early studies that
show people who lean too much on generative AI may grow less critical, less precise, and even a little sloppy. It’s a provocative analogy, but one that, in my
view, overreaches.
Yes, there are legitimate concerns:
copy-pasting AI outputs without scrutiny is a real problem, and treating
chatbots as friends, or worse, as oracles, can be dangerous. But equating this
with a medical condition, “AI psychosis,” strikes me as hyperbole
bordering on clickbait. What these studies show is that AI can lull us into
laziness—something not unusual according to a 2018
study—not that it’s turning our brains to mush.
With that said, the piece does hit a nerve
when it turns to GPT-5; the disappointment is palpable. The model still fumbles simple tasks,
circling vowels when asked for consonants. It’s easy to laugh at or despair over; after
all the hype about artificial general intelligence, the reality feels
underwhelming.
But here’s where the critique gets shaky.
To judge AI’s entire worth on its inability to distinguish vowels is like
judging a new plane design on whether the coffee tastes good at cruising
altitude. Large language models, at least today, are not “thinking machines”; they’re statistical pattern-matchers, not philosophers. Expecting AGI from them and then calling them “useless” when they don’t
deliver is a category error.
Still, the disappointment speaks to
something real: the AI industry itself has been complicit in setting impossible
expectations; Silicon Valley’s narrative of imminent machine sentience all but
guaranteed backlash once the limits of LLMs became clear, and nowhere is
this hype-to-reality gap more glaring than in telecom.
Here, the article makes one of its sharpest
points. Telcos have been splashing AI across their marketing decks for years,
promising network revolutions, service personalization, and zero-touch
operations. Yet when you zoom in, what has actually changed? Has AI improved
revenues, slashed costs, or reshaped customer experience in a way anyone can
feel? Not really.
What has grown instead is data-center traffic, along with the profits of the vendors selling the hardware and software. For
operators themselves, the AI promise has been more mirage than oasis.
Now, I’ll push back a little. There are
real AI deployments in telecom, though they don’t make headlines: predictive
maintenance, energy optimization, or smarter chatbots that reduce call center
load. These may not be revolutionary, but they are practical, and in an
industry where margins are razor-thin, even incremental gains matter. Light
Reading’s critique, in my view, is right to say the AI hype hasn’t matched reality but wrong to imply there’s no value at all.
I think the problem isn’t uselessness; it’s mismatch: AI is being sold as a silver bullet when in reality it’s just
another tool.
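To make the “unglamorous value” concrete: much of the predictive maintenance mentioned above boils down to simple anomaly detection on equipment telemetry. Here is a minimal sketch of that idea, flagging readings that drift far from their recent baseline. The function name, data, window size, and threshold are my own illustration, not anything from the article or any operator’s actual system.

```python
# Minimal sketch: flag telemetry readings that drift far from the recent norm,
# the kind of unglamorous anomaly detection behind "predictive maintenance".
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Return indices of readings that sit more than `threshold` standard
    deviations away from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Hypothetical amplifier temperatures (°C); the spike at the end is the "fault".
temps = [40.1, 40.3, 39.9, 40.2, 40.0, 40.1, 40.2, 55.0]
print(flag_anomalies(temps))  # -> [7]: the 55.0 reading is flagged
```

Nothing revolutionary, and that is the point: a few lines like these, pointed at the right telemetry stream, can spare a truck roll. That is what incremental value looks like.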
The article also links the AI debate to job
cuts, noting how companies like BT, Ericsson, and Nokia have shed thousands of
workers. Here again, the author resists the easy narrative: these layoffs
aren’t simply the fault of generative AI; they’re tied more broadly to automation, outsourcing, and cost-cutting. And that’s true.
But ignoring AI’s accelerating role in
automating network tasks feels shortsighted. The trajectory is clear:
whether it’s “agentic AI” or zero-touch networks, more telecom functions are
being handed over to software, so today, AI might not be the main reason for
pink slips, but tomorrow, it very well could be.
And then comes perhaps the most damning
claim: despite $30–40 billion poured into generative AI, 95% of organizations
report zero return on investment. Well, if accurate, it’s certainly brutal, but
here’s the thing: ROI in AI, as with other new tech, is tricky. The value
doesn’t always show up in quarterly reports. It shows up in long-term
efficiency, in new product development, and in the ability to scale without a
proportional increase in headcount. So, if executives expect instant profits, they’ll
be disappointed, but to call it “zero return” is to miss the subtler,
slower-burn gains AI can bring.
Interestingly, the article’s skepticism
resonates with voices outside the newsroom too. Some Redditors chime in: AI
isn’t useless, they argue, but it’s expensive and often deployed for the wrong
problems. In telecom especially, privacy concerns, costs, and unclear business
cases keep projects from maturing.
This grassroots perspective gives the
critique more weight: it’s not just analysts saying AI hasn’t delivered, but
practitioners too.
So where does that leave us? Well, somewhere
between hysteria and hype. AI is not the panacea we were once sold; it won’t
magically balance telcos’ books or create dazzling new services overnight, but
neither is it a useless toy that circles consonants instead of vowels. It’s a
tool, sometimes blunt, sometimes sharp, that works when pointed at specific,
well-defined problems.
So, perhaps the real issue isn’t AI itself
but the stories we spin around it. When we market it as AGI, disappointment is
inevitable.
When we claim it will revolutionize
industries in a single budget cycle, we set ourselves up for disillusionment, and
when we dismiss it as “useless,” we risk overlooking the incremental,
unglamorous value it quietly creates.
To me, the Light Reading article, and others
like it, reads less like a balanced critique and more like a warning flare, a
reminder that unchecked hype breeds backlash. That’s useful, but the truth is
messier.
AI isn’t useless; it’s misunderstood, misapplied, and yes, frequently mis-sold, and until the tech world at large gets more honest about that, we’ll keep swinging between mania and despair.