Senators Grill Meta’s Zuckerberg on Alleged ‘Leak’ of LLaMA AI Model
Meta’s CEO Mark Zuckerberg faced questioning from U.S. Senators Richard Blumenthal and Josh Hawley regarding the alleged “leak” of Meta’s artificial intelligence model, LLaMA. The senators expressed concerns about the potential dangers and criminal applications associated with this leaked AI model.
In a letter dated June 6, Blumenthal and Hawley criticized Zuckerberg’s decision to open source LLaMA, claiming that Meta’s release of the AI model lacked sufficient safeguards and was overly permissive in nature.
While the senators acknowledged the advantages of open-source software, they argued that Meta’s failure to thoroughly consider the widespread dissemination of LLaMA and its potential consequences ultimately did a disservice to the public.
Initially, LLaMA had a limited, research-only release. However, the full model was leaked on the image board 4chan in late February. The senators noted that the model quickly spread via BitTorrent, allowing anyone worldwide to download it without any monitoring or oversight.
Blumenthal and Hawley expressed their concern that LLaMA could be readily exploited by spammers and cybercriminals, enabling fraudulent activities and the distribution of inappropriate content.
To emphasize the contrast, the senators compared LLaMA to OpenAI’s ChatGPT-4 and Google’s Bard, both closed-source models. They highlighted LLaMA’s readiness to generate abusive content when prompted, whereas ChatGPT-4 adheres to ethical guidelines and refuses such requests. They also acknowledged instances where users have “jailbroken” ChatGPT, prompting it to produce responses it would otherwise refuse.
Within the letter, the senators posed several questions to Zuckerberg. They inquired about the existence of risk assessments before LLaMA’s release, Meta’s efforts to prevent or mitigate any resulting damage, and the company’s utilization of user data for AI research.
Reports indicate that OpenAI is preparing an open-source AI model of its own, likely in response to the rapid progress of existing open-source models. That progress was underscored in a leaked document written by a senior software engineer at Google.
Open-sourcing an AI model’s code allows others to modify it for specific purposes and encourages developers to contribute their own enhancements.
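For readers unfamiliar with what that means in practice, here is a minimal, illustrative Python sketch of how anyone can load and run an openly distributed language model using the Hugging Face transformers library. The checkpoint name is only a placeholder for any open-weight model and is not drawn from the senators’ letter; once the weights are downloaded, nothing prevents the user from fine-tuning or repurposing them.

```python
# Minimal sketch: loading and running an openly released language model.
# The checkpoint name below is a placeholder for any open-weight model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "openlm-research/open_llama_3b"  # placeholder open checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# With the weights stored locally, the model can be fine-tuned, quantized,
# or embedded in any application without oversight from the original publisher.
prompt = "Write a short note about open-source software."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This ease of reuse is precisely the double-edged sword the senators describe: it fuels community-driven improvement, but it also removes any central point of control once a model circulates publicly.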