
Senators voice concerns over Meta’s AI language model LLaMA

by HeyWire AI

Two U.S. senators wrote a letter to Meta CEO Mark Zuckerberg on June 6 expressing concerns about the company’s AI language model LLaMA.

Sens. Richard Blumenthal and Josh Hawley, both members of the Subcommittee on Privacy, Technology, & the Law, fear that LLaMA’s ability to generate human-like text from a given prompt creates a risk of abuse. They highlighted the potential for LLaMA to be used for spam, fraud, malware, privacy violations, harassment and other harms.

Blumenthal and Hawley noted that generative AI tools such as LLaMA have already been “dangerously abused” in the short time they have been available. They cited cases where LLaMA was used to create Tinder profiles and automate conversations, as well as Alpaca AI, a chatbot built by Stanford researchers and based on LLaMA that was taken down in March after providing misinformation.

The senators also drew attention to LLaMA’s willingness to provide answers about self-harm, crime and antisemitism.

In their letter, Blumenthal and Hawley criticized Meta for doing little to “restrict the model from responding to dangerous or criminal tasks” and for providing “seemingly minimal protections” against LLaMA’s potential misuse. The senators argued that Meta “should have known” LLaMA would be widely distributed given the “unrestrained and permissive” manner in which it was released to AI researchers in February.

The letter also stated that Meta had not considered the ethical aspects of making an artificial-intelligence model freely available. Blumenthal and Hawley contrasted the lack of documentation provided by Meta’s release paper with the extensive ethical guidelines implemented in ChatGPT, an AI model developed by OpenAI.

Open-source AI advocates have expressed concern that the senators’ letter threatens to stigmatize the open-source AI community at a time when Congress is prioritizing regulation of artificial intelligence technology.

Vipul Ved Prakash, co-founder and CEO of Together, which runs the RedPajama open-source project that replicated the LLaMA dataset to build open-source LLMs, criticized the letter as a “misguided attempt at limiting access to a new technology.”

Ved Prakash told VentureBeat that concerns about AI safety were a “panicked response” with little evidence of societal harm. He warned that the discourse could lead to the “squelching of innovation in America.”

The senators’ concerns about AI abuse extend beyond LLaMA.

Blumenthal told Fox News that AI’s ability to create realistic deepfake videos presented a significant threat to the 2024 elections. He rated his fear of AI abuses as a “10 out of 10” and called on tech companies to disclose when deepfakes and voice cloning occur.

Senators have also warned that artificial intelligence could hurt political accountability and disrupt the upcoming elections.

Sen. J.D. Vance expressed concern that AI could warp the political conversation by creating viral videos that change votes.

Sen. Cynthia Lummis suggested turning to the courts and litigation to help determine the extent of AI’s capabilities if the technology were used to undermine the integrity of the election.
