Copilots Gone Wild: Will Your AI Assistant Lead You Astray?
- By Winston Thomas
- July 28, 2024
AI copilots (or AI assistants) aren't just the future; they're already woven into the tech fabric.
A giant step beyond the rote-learning chatbots that spew screen grabs and FAQs, they are more than gentle IT whisperers. With their machine learning edge, they can handle strategic tasks, solve entirely new problems, automate tedious work and navigate an evolving crisis. Simply put, they democratize IT knowledge — all in plain English.
So, it's no surprise that vendors have rolled out copilots over the past six months faster than you can say "copilot." Hurray for efficiency! IT is now accessible to anyone.
But let's not get carried away by the hype. Beneath the shimmering veneer of efficiency and empowerment lurks a tangled web of problems that could very well unravel the delicate balance of IT ecosystems.
The copilot paradox
If you believed the current copilot marketing spiel, you would think that even your grandma can whip up complex scripts or troubleshoot a network hiccup with just a few plain English prompts. We know reality can be very different.
The problem is not the language itself. These AI assistants might be fluent in English, but they're not speaking the same language when it comes to technical jargon, specs, concepts or vendor vocabulary.
It's not inconceivable that every vendor will have its pet copilot, trained on proprietary data and optimized to favor its products. It's a Tower of Babel situation where standardization is a moving target: like trying to follow a conversation among a dozen people speaking different dialects of geek. It's similar to what is happening in cloud tech, just multiplied several times over.
But for IT professionals, it also creates unnecessary conundrums. Many copilots are marketed as the single AI assistant that removes the technology's pain and makes you more efficient. The problem is that today's environments span more than a few vendors, sometimes with competing products stacked next to each other.
Imagine a situation where a frazzled developer, juggling multiple projects and platforms, now has to appease a multitude of copilots with their unique quirks. This makes knowledge transfer (a major copilot promise) a challenge and forces that developer to play favorites, locking them into specific vendor ecosystems.
Calculating the human intelligence cost of AI efficiency
You may have heard the mantra: Copilots are supposed to augment human expertise, not replace it. That's true if you are already a human expert. What if you're just starting?
Over the next five years, there's an undeniable risk that these AI assistants will dumb down the IT expertise landscape. With instant answers and preemptive solutions at their fingertips, are we inadvertently depriving the next generation of IT pros of the invaluable experience of learning from mistakes?
I think of it like learning to ride a bike with training wheels. You'll stay upright, but you won't develop the balance and reflexes needed to tackle real-world challenges. Similarly, copilots might shield users from the messy, frustrating, but ultimately rewarding process of troubleshooting and problem-solving. You may end up with an expert prompter who is a lazy learner instead of a domain expert who immediately identifies the problem.
There is also the issue of overlooking potential problems or inefficiencies. Sure, copilots are improving daily as they are always learning, but they are not perfect. That's because the real world, use cases and data aren't.
So, who do you blame when there's a security breach caused by a misconfiguration that a security copilot overlooked (because it was not trained on it)? The AI? The user who blindly trusted it? The vendor who created it? The lines of accountability blur, leaving organizations vulnerable.
Navigating the copilot minefield
So what's the solution? First, don't throw the baby out with the bathwater. Copilots have real potential to revolutionize IT efficiency and adoption, and they are here to stay. But we must approach them with a healthy dose of skepticism and a proactive strategy.
Here's what I think needs to happen:
- Focus on interoperability: Vendors must stop building walled gardens and start working together. Open APIs are a first step, but ultimately, we must create a common language for copilots (maybe based on industry or domain expertise). This will make knowledge transfer seamless and prevent vendor lock-in. This is important as it is inevitable that every user (not just IT users) will soon be working in an ecosystem of AI copilots.
- Balance human-AI synergy: Copilots should be seen as tools to augment human expertise, not crutches to replace it. We must foster a continuous learning culture and encourage users to challenge the AI's recommendations. The copilot algorithm will thank you for the challenge as it learns faster. Yes, it's critical thinking 101, but for technology — it's vital if you do not want outliers or unlearned events blindsiding you.
- Make accountability non-negotiable: Organizations need to establish clear lines of responsibility for AI-assisted actions. This means creating robust governance frameworks and holding humans and AI accountable for their decisions. For many organizations I have talked to, this is only at the initial stages. Most legal teams have yet to dig into the real issues when working with multiple copilots.
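To make the interoperability point concrete, one plausible shape for a "common language" is a thin, vendor-neutral abstraction layer that each vendor's copilot adapts to, so tooling and prompts aren't locked to one ecosystem. This is a minimal sketch only; every class, method and vendor name below is hypothetical, not any real vendor's API.

```python
from abc import ABC, abstractmethod

class Copilot(ABC):
    """Hypothetical vendor-neutral copilot interface."""
    @abstractmethod
    def ask(self, prompt: str) -> str:
        """Send a plain-English prompt and return the copilot's answer."""

class VendorACopilot(Copilot):
    # In practice this adapter would wrap a real vendor SDK call.
    def ask(self, prompt: str) -> str:
        return f"[vendor-a] answer to: {prompt}"

class VendorBCopilot(Copilot):
    def ask(self, prompt: str) -> str:
        return f"[vendor-b] answer to: {prompt}"

def route(prompt: str, copilots: dict, domain: str) -> str:
    """Route a prompt to whichever copilot is registered for that domain."""
    return copilots[domain].ask(prompt)

# One registry, many vendors: the caller never touches vendor-specific code.
copilots = {"network": VendorACopilot(), "security": VendorBCopilot()}
print(route("why is VLAN 12 flapping?", copilots, "network"))
```

The point of the sketch is the seam: if vendors agreed on even this much of a shared interface, swapping or mixing copilots would stop forcing developers to play favorites.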
The bottom line
Copilots have the power to shape how knowledge transfers and how we work. But we need to acknowledge that they are not a silver bullet.
In the future, and with a more AI-savvy workforce, it's not inconceivable that users will want to sequence different copilots to carry out more complex tasks. There's already a call for creating libraries of reusable prompts to automate multiple copilots. These are developments that companies and vendors need to champion.
There are also other concerns. What happens to an employee's various copilots when they move to another employer? Keep in mind that these were trained on personal two-way interactions. And can these copilots, continually evolving with the employee's input, fall under privacy laws and be shut down because some interactions may point to personal information?
The answers are still murky. But what's clear is that we need to start navigating the pitfalls by asking the hard questions now to harness the copilot's full potential without sacrificing the human element that makes IT thrive.
Otherwise, we'll all live in a world filled with competing copilots and no pilot.
Winston Thomas
Winston Thomas is the editor-in-chief of CDOTrends. He likes to piece together the weird and wonderful tech puzzle for readers and identify groundbreaking business models led by tech while waiting for the singularity.