No real metamorphosis. On September 10, when Mark Zuckerberg took the stage at the Chase Center, an 18,000-seat venue in San Francisco, the audience was taken aback. The founder of Facebook and CEO of the Meta empire (Instagram, WhatsApp) appeared heavier, curly-haired and, frankly, relaxed, far from the robotic bearing that made him the embodiment of technology's misdeeds.
“Zuck,” who spoke during a recording of the financial podcast Acquired, wore a baggy T-shirt bearing a Greek expression: “Learning through suffering.” At 40, the billionaire believes he has endured a lot. Blamed for the polarization of society, the decline of democracy and the despair of teenagers, he feels he has flagellated himself enough. And he will no longer apologize, he announced.
Mark Zuckerberg, however, has never been seriously threatened. True, the Federal Trade Commission (FTC, the American competition authority) has initiated antitrust proceedings to dismantle Meta. But Congress has never succeeded in limiting the power of the platforms. Facebook and Instagram have never had more users, and Meta’s share price is at an all-time high. An “uninhibited Zuck” quietly raises cattle on his ranch in Kauai, Hawaii, a property with thirty bedrooms, thirty bathrooms and an underground bunker.
SB 1047
Unsurprisingly, the CEO opposes, like most of his peers, the state of California’s attempt to regulate the artificial intelligence (AI) sector. It is a fierce battle. After years of letting tech giants “outsource risks to the public while keeping the benefits for themselves,” according to Dan Hendrycks, director of the Center for AI Safety in San Francisco, the authorities are trying to regain the initiative. But the industry complains of interference with innovation.
In late August, the California Legislature overwhelmingly approved SB 1047, introduced by Democratic State Senator Scott Wiener. Dubbed the “law for safe innovation in pioneering AI models,” the text is the most ambitious proposed in the United States. It targets only the most powerful language models, but requires their developers to draw up safety plans. This marks a major break with the immunity enjoyed by the platforms: AI giants would have to answer for their actions in the event of a catastrophe causing mass casualties, or a cyberattack costing more than 500 million dollars (450 million euros) in damages. “We should not have to pay in human lives,” Dan Hendrycks insisted on September 12, during a debate organized by the California branch of the Carnegie Endowment for International Peace, discussing the risk of AI being used to develop biological weapons.