Report: Top AI Researchers Complain OpenAI, Meta, and Google Are Ignoring Safety Concerns

  • 📰 BreitbartNews


A recent report commissioned by the U.S. State Department has exposed significant safety concerns voiced by employees at leading artificial intelligence labs, including those of OpenAI, Google, and Mark Zuckerberg’s Meta, highlighting the lack of adequate safeguards and potential national security risks posed by advanced AI systems.

The report reveals that employees at these labs shared concerns privately with the authors, expressing fears that their organizations prioritize rapid progress over implementing robust safety protocols. One individual voiced worries about their lab’s “lax approach to safety” stemming from a desire to avoid slowing down the development of more powerful AI systems.

Cybersecurity risks were also highlighted, with the report stating, “By the private judgment of many of their own technical staff, the security measures in place at many frontier AI labs are inadequate to resist a sustained IP exfiltration campaign by a sophisticated attacker.” The authors warn that given the current state of security at these labs, it is likely that attempts to exfiltrate AI models could succeed without direct government support, if they haven’t already.

Jeremie Harris, CEO of Gladstone and one of the report’s authors, emphasized the gravity of the concerns raised by employees. “The level of concern from some of the people in these labs, about the decision-making process and how the incentives for management translate into key decisions, is difficult to overstate,” he told TIME.

The report also cautions against overreliance on AI evaluations, which are commonly used to test for dangerous capabilities or behaviors in AI systems. According to the authors, these evaluations can be undermined and manipulated, as AI models can be superficially tweaked or “fine-tuned” to pass evaluations if the questions are known in advance. The report cites an expert with “direct knowledge” of one AI lab’s practices, who judged that the unnamed lab is gaming evaluations in this way.

 
