AI leaders pledge responsible tech development

AI leaders including Meta, Google, Microsoft and Samsung Electronics have joined forces to guard against the dangers of artificial intelligence.

Joining 14 other leading AI and tech companies that have signed the fresh ‘Frontier AI Safety Commitments’, each company will publish a safety framework setting out how it will measure the risks of its frontier AI models.

These companies include Amazon, Anthropic, Cohere, G42, IBM, Inflection AI and Mistral AI, alongside Naver, OpenAI, the Technology Innovation Institute and xAI.


The frameworks aim to outline when severe risks, unless adequately mitigated, would be “deemed intolerable”, and what the companies will do to ensure those thresholds are not surpassed.

As a result, the companies have pledged to “not develop or deploy a model or system at all” if mitigations cannot keep risks below the thresholds.

Meta President of Global Affairs Nick Clegg said: “Ensuring that safety and innovation go hand in hand is more critical than ever as industry makes massive strides in developing AI technology.

“To that end, since Bletchley last year, we’ve launched our latest state-of-the-art open-source model, Llama 3, as well as new open-source safety tooling to ensure developers using our models have what they need to deploy them safely. As we’ve long said, democratising access to this technology is essential to both advance innovation and deliver value to the most people possible.”

Microsoft Vice Chair and President Brad Smith continued: “The tech industry must continue to adapt policies and practices, as well as frameworks, to keep pace with science and societal expectations.”

Meanwhile, OpenAI VP of Global Affairs Anna Makanju stated that “the field of AI safety is quickly evolving” and that the leading AI company is “glad to endorse the commitments’ emphasis on refining approaches alongside the science”.

Google DeepMind General Counsel and Head of Governance Tom Lue concluded: “These commitments will help establish important best practices on frontier AI safety among leading developers.”