In late September, policymakers, scientists, and leaders of non-governmental organizations met during the 80th session of the United Nations General Assembly to discuss a pressing contemporary issue: the implications of artificial intelligence (AI) for global safety and security.
This second meeting of AI Safety Connect (AISC) addressed the growing need for international and cross-sector collaboration to mitigate the risks associated with advanced AI systems. Keynote speeches from ACM A.M. Turing Award laureate Yoshua Bengio and University of California, Santa Cruz Professor of Physics Anthony Aguirre outlined the hazards of agentic AI, while panels on “red lines,” private sector approaches to risk, and the rise of nonprofit AI research institutes shone a light on the challenges of AI governance in an age of rapid innovation.
AISC was started by Nicolas Miailhe and Cyrus Hodes, co-founders of The Future Society. Miailhe and Hodes envision AISC as an event series to facilitate dialogue between nonprofit organizations, academic and industry laboratories, governments, and investors. AISC’s first meeting took place alongside the Artificial Intelligence Action Summit in Paris in early February 2025 and included speakers from a global array of universities, nonprofits, and tech companies.
The second meeting welcomed NGOs and governments in addition to academia, nonprofits, and the private sector, with panels featuring representatives from the United Nations Development Programme (UNDP) and the governments of Singapore, Canada, and Brazil. As Hodes told participants, AISC aims to foster “shared understanding” and establish “responsibility mechanisms” across a broad range of sectors.
Earning Trust Across the Globe
The event opened with remarks from political leaders and representatives from the UNDP and International Telecommunication Union. Speakers agreed that AI innovation need not come at the cost of critical oversight measures.
Elina Valtonen, Finland’s Minister of Foreign Affairs, said AI “must always be deployed in accordance with international human rights and humanitarian law.”
Minyoung Han, Korea’s Director General for Climate Change, Energy, Environment and Scientific Affairs, described Korea’s efforts to “ensure that AI drives prosperity for all,” primarily by emphasizing safety as a cornerstone of research and development.
Building on the theme of civic innovation, Josephine Teo, Singapore’s Minister for Digital Development and Information, declared that public trust “has to be earned” rather than assumed. “People need to be assured that the AI we’re letting them use is developed and deployed in an ethical manner,” Teo said.
S. Krishnan, India’s Secretary for the Ministry of Electronics and Information Technology, echoed this sentiment, asserting that “AI should serve as a force for democratization” within and across national borders. Krishnan’s vision for democratic AI emphasizes transparent and widely accessible resources, including databases, algorithms, and public education.
The Stakes of Unchecked Development
Following opening remarks, Bengio and Aguirre took to the stage to share their perspectives on the stakes of an AI-driven future.
Bengio outlined three major targets for safety research: market and power concentration; abuse by bad actors; and the potential for AI to advance beyond the threshold of human control. Each of these areas is associated with potentially catastrophic outcomes, he said. Research on the first seeks to prevent the “radically disproportionate” power discrepancies that could ensue from unregulated AI. The second addresses AI’s potential to be used in cybersecurity attacks, disinformation campaigns, and the construction of chemical and biological weapons. The third confronts the current limitations of scientific research; as Bengio pointed out, today’s researchers “don’t know how to design AI that won’t harm people.”
Aguirre’s keynote struck a similarly cautionary tone. “Certain systems should not be built or deployed,” he said, voicing his support for the implementation of “red lines,” or hard constraints on AI agent capabilities, which were also the subject of Nobel Peace Prize laureate Maria Ressa’s opening address to the General Assembly. According to Aguirre, scientists and politicians should see themselves as equal participants in the red lines agenda: scientists must commit to sharing their knowledge with politicians, while politicians must work toward agreements on acceptable boundaries. Nor is this a minority position, he said; the global call for red lines represents “a very strong consensus.”
While the prospect of out-of-control AI may seem frightening, Aguirre emphasized that open dialogue can lead to a future in which AI’s benefits don’t come with severe costs. “I think there’s a tendency to consider the development of AI like a train on a single track, and our only option is merely to decide how fast or slow we steam ahead,” he said. On the contrary, he said, the trajectory of AI offers “a proliferation of paths” from which to choose.
Red Lines, Model Oversight, and Safety Institutes
Aguirre continued the red lines discussion with Niki Iliadis, Director of Global AI Governance at The Future Society, and Charbel-Raphaël Segerie, Executive Director of France’s Center for AI Security. Iliadis posed the question of whether red lines would “stop innovation,” as many fear. Aguirre responded that “unregulated, unbridled AI development” is not equivalent to progress. “We can choose a safer, more responsible, pro-human and trustworthy way,” he argued, drawing comparisons to the history of nuclear technology, where global actors pulled together to prevent the direst outcomes. “International diplomacy has shown in the past that it can do wonders,” Aguirre said. “And I think this is an opportunity for us to do it again.”
The following panel, “Safety on the Frontier of AI,” looked at technical and institutional frameworks to promote safety in general-purpose models, such as the LLMs in wide use today. Chris Meserole, Executive Director of the Frontier Model Forum (FMF), described how the organization consults with its member firms—Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI—to identify thresholds of unacceptable risk, with an emphasis on biological and cybersecurity hazards. Coco Zhou, Senior Vice President and Partner of China’s UCloud, picked up on the theme of cybersecurity, saying, “The primary concern of our customers right now is the leakage of private data,” which is why UCloud designs its cloud-based supercomputing services to accommodate both privacy and open knowledge sharing. Natasha Crampton, Microsoft’s first Chief Responsible AI Officer and a director at the FMF, framed Meserole and Zhou’s participation in this conversation as increasingly necessary, describing forums like AISC and the UN’s Global Dialogue on AI Governance as “young mechanisms” that will require input in the future from “civil society, academia, industry, and government.”
The closing panel focused on the rise of nonprofit AI safety institutes, or AISIs. In response to a question about priority-setting, Deval Pandya, Vice President of AI Engineering at the Vector Institute and an advisor to the Canadian AI Safety Institute (CASI), said that CASI was focusing on synthetic content because the topic hadn’t received due attention from similar organizations. Pandya’s response pointed to the growing links between AISIs across the world: as new organizations emerge, they’re joining forces to identify areas of weakness in their respective agendas. On top of helping scientists account for blind spots, AISI networks allow organizations to find partners for external research review, a priority voiced by Wan Sie Lee, Director of Singapore’s Infocomm Media Development Authority, and Qian Xiao, Vice Dean of Tsinghua University’s Institute for AI International Governance.
While the risks associated with AI are serious, AISC represents a growing community of stakeholders committed to working together to address them. This year saw the publication of the first comprehensive overview of current AI safety literature, which included input from over 100 researchers from 33 countries.
Next year’s India AI Impact Summit, which will host AISC’s third event, aims to broaden the safety community’s international horizons. During his closing speech, Miailhe shared that he and AISC co-founder Hodes “felt a duty to exercise social responsibility” by “building bridges across communities” in this space.
Emma Stamm is a writer, researcher, and educator based in New York City. Her interests are in philosophy and the social study of science and technology.


