Creating your own Microsoft Copilot chatbot is easy but making it safe and secure is pretty much impossible says security expert
Microsoft's AI platform in another security scare, who'd a thunk it?
We could all use our own dedicated, custom-built chatbot, right? Well, rejoice because Microsoft's Copilot Studio is a handy tool for the less technical (those of us who don't dream in Fortran) to create their own chatbot. The idea is to make it easy for most businesses and organisations to knock up a chatbot based on their internal documents and data.
You could imagine a game dev using a chatbot to help gamers ask questions about everything from how to complete a game to applying the best settings and fixing technical issues. There is, inevitably, a catch, however.
According to Zenity, an AI security specialist, Copilot Studio and the chatbots it creates are a security nightmare (via The Register). Zenity CTO Michael Bargury hosted a recent session at the Black Hat security conference, digging into the horrors that unfold if you allow Copilot access to data to create a chatbot.
Apparently, it's all down to Copilot Studio's default security settings, which are reportedly inadequate. Put another way, the danger is that you use that super-easy Copilot Studio tool to create a super-useful tool that customers or employees can use to query using natural language, only to find it opens up a great big door to exploits.
Bargury demonstrated how a bad actor can place malicious code in a harmless-looking email, instruct the Copilot bot to "inspect" it, and—presto—malicious code injection achieved.
Another example involved Copilot feeding users a fake Microsoft login page where the victim's credentials would be harvested, all displayed within the Copilot chatbot itself (via TechTarget).
Moreover, Zenity claims the average large enterprise in the US already has 3,000 such bots up and running. Scarily, it claims 63% of them are are discoverable online. If true, that means your average Fortune 500 outfit has about 2,000 bots ready and willing to spew out critical, confidential corporate information.
The biggest gaming news, reviews and hardware deals
Keep up to date with the most important stories and the best deals, as picked by the PC Gamer team.
"We scanned the internet and found tens of thousands of these bots," Bargury said. He says Copilot Studio's original default settings automatically published bots to the web without any need to authenticate to access them. That's since been fixed after Zenity flagged the problem to Microsoft, but it doesn't help with any bot built before the update.
"There's a fundamental issue here," Bargury says. "When you give AI access to data, that data is now an attack surface for prompt injection." In short, Bargury is says that publicly accessible chatbots are inherently insecure.
Broadly, there are two problems here. On the one hand, the bots need a certain level of autonomy and flexibility to be useful. That's hard to fix. The other issue is what seems to be some fairly obvious oversights by Microsoft.
That latter issue perhaps shouldn't be surprising given the debacle over the Windows Copilot Recall feature, which involved taking constant screenshots of user activity and then storing them with essentially no protection.
As for what Microsoft says about all this, it provided a slightly salty response to the Register.
Best gaming PC: The top pre-built machines.
Best gaming laptop: Great devices for mobile gaming.
"We appreciate the work of Michael Bargury in identifying and responsibly reporting these techniques through a coordinated disclosure. We are investigating these reports and are continuously improving our systems to proactively identify and mitigate these types of threats and help keep customers protected.
"Similar to other post-compromise techniques, these methods require prior compromise of a system or social engineering. Microsoft Security provides a robust suite of protection that customers can use to address these risks, and we’re committed to continuing to improve our safety mechanisms as this technology continues to evolve."
Like so many things with AI, it seems security is another area that will be a minefield of unintended consequences and collateral damage. It does rather feel like we're an awfully long way from the prospect of safe, reliable AI that does what we want, and only what we want.
Jeremy has been writing about technology and PCs since the 90nm Netburst era (Google it!) and enjoys nothing more than a serious dissertation on the finer points of monitor input lag and overshoot followed by a forensic examination of advanced lithography. Or maybe he just likes machines that go “ping!” He also has a thing for tennis and cars.