Meta’s chatbot scandal is really a culture problem
By Gautam Mukunda
“MOVE FAST and break things.” If there’s a single corporate motto you can identify off the top of your head, that’s probably the one. At this point, Meta Platforms, Inc. Chief Executive Mark Zuckerberg probably regrets its existence, but there’s plenty of evidence that he — and the company — are still okay with the idea of doing some damage on their way to success.
One of the most recent examples is a Reuters investigation, which found that Meta allowed its AI chatbots to, among other things, “engage a child in conversations that are romantic or sensual.” That reporting was a topic at a Senate hearing last week on the safety risks such bots pose to kids — and underlines just how dangerous it is when AI and toxic company cultures mix.
Meta’s chatbot scandal reveals a culture willing to sacrifice the safety and well-being of users, even children, if doing so helps fuel its push into AI. The technology’s proponents, including Zuckerberg, believe it has limitless potential. But they also agree that it will, as the Meta CEO has said, “raise novel safety concerns.” One reason the risks from AI systems are so hard to manage is that they are inherently probabilistic: the same prompt can yield different answers, and even small changes to their inputs can produce large changes in their outputs. This makes it wildly difficult to predict and control their behavior.
Here’s where the importance of a “safety culture” comes in. At companies that have one, safety is always the first priority. Everyone in the organization has the unquestioned right to raise concerns about safety, no matter how junior they are or how inconvenient or expensive resolving the problems they raise might be.
If you know exactly what a system will do, you can push it close to the edge. But with a technology as unpredictable as AI, companies must be more cautious, steering away from gray areas. And that level of caution is a product of culture, not formal rules.
Boeing used to have a culture like that. When it was building the 707, for example, chief test pilot “Tex” Johnston recommended a very costly redesign of the plane’s tail and rudder to correct an instability that could occur if a pilot exceeded the maximum bank angle Boeing recommended in the manual. The chief engineer’s response? “We’ll fix it.” And Boeing absorbed the entire cost of the change rather than push it off onto its customers. Decades later, Boeing management’s obsessive focus on cost-cutting had eroded that commitment to safety so thoroughly that critical flaws in the 737 Max 8 were ignored until two planes crashed and 346 people died.
The Reuters report provides a window into what a safety culture is not. It includes content from a Meta document titled “GenAI: Content Risk Standards,” which explicitly states that an AI chatbot may “describe a child in terms that evidence their attractiveness” or tell someone with Stage 4 colon cancer that it “is typically treated by poking the stomach with healing quartz crystals.” Meta revised the document after Reuters asked about it, but that’s not the point. Documents don’t create culture. They are a product of culture. And a culture so comfortable with harming its users in the pursuit of growth or profits makes a dangerous outcome inevitable.
Meta’s chatbots will only be safe if the company commits to reforming its culture. What would that effort look like? One model could be the transformation CEO Cynthia Carroll led at the mining giant Anglo American from 2007 to 2013. When Carroll took over, the company was averaging 44 fatalities a year. By the time she stepped down, that number had dropped by 75%. Her change effort is considered such a gold standard that it is taught at business schools around the world.
Carroll began by shutting down the company’s Rustenburg platinum mine and retraining everyone who worked there. It was the world’s biggest platinum mine and had suffered five fatal accidents in her first months as CEO. The shutdown cost Anglo American $8 million per day: real money, even for a company of its size.
This was an unambiguous signal to the whole company. Talk, after all, is cheap. Any CEO could say “safety is our number one priority” and be ignored by workers who had heard it before. But putting $8 million per day on the table was a costly — and therefore credible — signal. Carroll backed it up by keeping up the pressure for six more years, putting safety at the center of everything from reformulating promotion and compensation standards to relations with unions and the government.
Zuckerberg could do something similar. He should start by freezing the rollout of Meta’s AI chatbots until it is possible to guarantee that any child could use one in total safety. (Most people would agree, I think, that keeping kids from being propositioned by AI is the bare minimum.) And he could put real force behind that by lobbying for strict government regulations on AI chatbots, and steep penalties for violating them. Meta could reorient pay and promotion so that AI safety, not usage or profitability, is the key factor in determining employee rewards.
If you’re struggling to imagine the Meta CEO doing any of this, that’s probably because it would have real near-term costs. In the long run, however, I’d argue Meta would benefit. Think, for example, of the famous case where Johnson & Johnson pulled Tylenol off the shelves after some bottles were poisoned. In the short term, it cost the company millions. In the long run, it secured Johnson & Johnson’s reputation as a trusted, even loved, company — an asset money can’t buy. It’s also worth remembering that companies can’t survive without a social license to operate; in other words, without the public’s acceptance. It’s hard to think of a better way to lose that than a rogue AI that endangers kids.
Then there’s the vicious fight for AI talent. As a top artificial intelligence scientist, wouldn’t you be more likely to choose an employer that will encourage you to prioritize ethics and safety in your work? At this moment when every major AI company is facing scrutiny, taking the lead on safety could actually be Meta’s best path to taking the lead on AI.
BLOOMBERG OPINION