Microsoft and Meta bosses were asked outright by a UK parliamentary committee whether, if an AI model is "identified as being unsafe", they could recall it. They dodged the question almost entirely
The potential risks of a future dominated by AI are certainly a hot topic, but when the companies involved give such obtuse answers, is any real progress being made?
The House of Lords communications and digital committee met today with Rob Sherman, VP of policy and deputy chief privacy officer for Meta, and Owen Larter, the director of global responsible AI public policy at Microsoft, to discuss large language models and some of the wider implications of AI. In a far-ranging discussion in which many words were said and not a lot of actual information conveyed, one particular tidbit caught our attention.
When the chair of the committee, Baroness Stowell of Beeston, asked directly whether either company would be capable of recalling an AI model if it had been "identified as unsafe", or of stopping it being deployed any further, and how that might work, Rob Sherman gave a somewhat rambling response:
"I think it depends on what the technology is and how it's being used … one of the things that is quite important is to think about these things upfront before they're released … there are a number of other measures that we can take, so for example, once a model is released there's a lot of work that what we call a deployer of the model has to do, so there's not only one actor that's responsible for deploying this technology…
"When we released Llama, [we] put out a responsible use guide that talks about the steps that a deployer of the technology can do to make sure that it's used safely, and that includes things like what we call fine tuning, which is taking the model and making sure it's used appropriately…and then also filtering on the outputs to make sure that when somebody is using it in an end capacity, that the model is being used responsibly and thoughtfully."
Microsoft's Owen Larter, meanwhile, did not respond at all, although in fairness the discussion was wide-ranging and somewhat pushed for time. Regardless, the fact that Meta's representative did not answer the question directly, but instead spun his response out into a wider point about responsible use by others, is not entirely surprising.
A lot was made over the course of the debate regarding the need for careful handling of AI models, and the potential risks and concerns this new technology may create.
However, beyond a few token concessions to emerging use policies and partnerships created to discuss the issue, the debate quickly became muddied as both representatives struggled at points to define what it even was they were debating.
As Rob Sherman helpfully said earlier in the discussion, in regard to the potential risks of irresponsible AI usage:
"What are the risks that we're thinking about, what are the tools that we have to assess whether those risks exist, and then what are the things we need to do to mitigate them"
While both participants seemed to agree that there was a "conversation to be had" about the issues discussed, neither seemed particularly keen on having that conversation, y'know, now. Each question was quickly answered with a fast-flowing stream of potential policy, future risk assessment mechanisms, and some currently ill-defined steps already being taken, the sum total of which seems to equate to "we're working on it".
All this will come as little comfort to those concerned about the far-reaching implications of AI, and the potential risks of creating and releasing a technology that even the companies creating it struggle to pin down in meaningful terms.
Today may have been an opportunity to lay down some steadfast plans for how to regulate this increasingly important tool, but beyond the odd concession towards "security protections" and a "globally coherent approach", it seems progress is slow-going when it comes to controlling and regulating AI in any meaningful way.