
April 20, 2026
Author: Lea Gimpel, Director of Policy and AI Lead, DPGA Secretariat
In a world characterised by uncertainty and the unravelling of established norms, safe and trustworthy AI emerges as a collective governance challenge that requires an inclusive approach. Diverse global actors must come together to balance innovation with ecological limits, promote power equity, and uphold the public interest. Against this backdrop, open technologies such as digital public goods (DPGs) play a key role in enabling global cooperation, agency, transparency, and, ultimately, trust. The DPGA Secretariat is pleased to share that we have submitted our response to the United Nations’ Call for Submissions to inform the first Global Dialogue on AI Governance, a platform established by the UN General Assembly to facilitate international cooperation, share best practices, and address AI governance challenges such as increasing fragmentation and the geopolitical question of who controls AI infrastructure.
The Global Digital Compact has formally recognised DPGs as the backbone of an equitable digital future. Under Objective 1, the GDC explicitly commits to:
"...Increas[ing] investment in and the development of digital public goods, including open-source software, open data, open artificial intelligence models, and open content, to promote inclusive digital transformation and achieve the Sustainable Development Goals."
The DPGA Secretariat understands the UN Global Dialogue not just as a forum for discussion, but also as a critical vehicle for implementing the GDC commitments. By connecting ongoing initiatives such as the DPG Standard, which is used to vet every DPG, and our work on open source software and open data for responsible AI development (DPG4AI) with pressing AI governance issues such as safety, security and bridging AI divides, we aim for the dialogue to turn these high-level objectives into practical action through a clear implementation roadmap.
A key precondition is that we move beyond the assumption that meaningful progress depends on matching the massive, energy-intensive computing scale of big tech. An alternative narrative is frugal AI: smaller, specialised, and highly efficient models that are trained on local data and optimised for lower-end hardware and limited computational infrastructure. This approach is also more easily compatible with the DPG Standard’s high openness requirements for datasets and models.
However, democratising AI development is not just about access to existing models; it is about reclaiming the agency to build technology that fits local economic, social and ecological realities. Digital public goods across the AI development lifecycle, along with open source AI models, are indispensable tools for achieving this goal, particularly because they are more cost-efficient than closed-source APIs when deployed at government scale.
By prioritising efficiency over sheer scale, countries can reduce their dependence on external hyperscalers, lower their carbon footprint, and work towards a vision of AI development and deployment under sovereign control. However, this approach needs to be complemented by rigorous trust and safety tooling.
A central theme of our recent reflections from the AI Impact Summit in India was that openness is not a silver bullet. While the DPGA champions open source software and models that help attain the SDGs, these must always be complemented by measures to avoid doing harm and to adhere to privacy and other applicable laws, as baked into the DPG Standard. Lastly, not every problem needs AI as a solution; on the contrary, treating AI as the “default” often creates more problems than it solves, because it entrenches existing structural inequalities.
The DPGA Secretariat’s submission highlights the role of DPGs in equitable AI development, but also cautions that openness must be paired with robust governance to address deeper, systemic issues.
Our submission to the Global AI Dialogue is rooted in the belief that AI governance must address these topics urgently and should build on existing work and initiatives—such as the DPG Standard—that ensure technology is safe, inclusive, and designed for public benefit. As we stated at the AI Impact Summit, pairing openness with deep public-sector-led investment in research, safety, and trusted data infrastructures can move us toward a global AI ecosystem that empowers everyone to build and own AI on their own terms.