Govts, tech firms vow to cooperate against AI risks at Seoul summit


At the conclusion of a global summit in Seoul on Wednesday, more than a dozen countries and major tech firms pledged to collaborate in addressing the potential risks posed by artificial intelligence (AI), including its capacity to evade human control.

AI safety took center stage during the two-day gathering. In a joint statement, over two dozen nations, including the United States and France, committed to joint efforts against emerging threats from advanced AI technologies, identifying “severe risks” such as the potential for AI systems to help non-state actors carry out activities related to chemical or biological weapons.

The statement also flagged AI models that could slip beyond human oversight by circumventing safeguards, manipulating or deceiving users, or autonomously replicating and adapting.

The ministers’ statement followed an earlier commitment by major AI companies, including OpenAI and Google DeepMind, to transparently share their risk assessment methodologies and to refrain from deploying systems whose risks exceed acceptable limits.

The Seoul summit, co-hosted by South Korea and Britain, aimed to build on the consensus established at the inaugural AI safety summit last year. Michelle Donelan, the UK technology secretary, said risk mitigation must keep pace with the accelerating development of AI and called for broader societal resilience to AI-related risks.

The summit also saw a consortium of tech companies, including Samsung Electronics and IBM, adopt the Seoul AI Business Pledge, committing to responsible AI development.

Christina Montgomery, IBM’s Chief Privacy and Trust Officer, underscored the importance of implementing safeguards to prevent AI misuse, highlighting the need for thoughtful consideration of AI’s societal implications.

While AI proponents tout its potential to transform a range of sectors, concerns persist about its misuse, including election manipulation and deepfake disinformation. Participants renewed calls for international standards to govern AI development and use, acknowledging the regulatory challenges posed by the technology's rapid advancement.

Experts at the summit also addressed concerns about inequality in who benefits from AI. Rumman Chowdhury, an AI ethics expert, noted that those benefits are unevenly distributed and argued that inclusive development is essential to equitable outcomes.

The summit closed with a shared view that governments, tech firms, and academic experts must work together to navigate the complex challenges AI poses while ensuring its responsible and inclusive development for the benefit of all.