If AI is a question of power, parliaments must be part of the answer
Opinion piece by Martin Chungong, IPU Secretary General
The India AI Impact Summit, held this week in New Delhi, is the first global summit on artificial intelligence (AI) of this scale hosted in the Global South. This signals that the debate about AI – how it is developed, who benefits from it and who governs it – can no longer be conducted among a narrow circle of wealthy nations and technology companies. It must be open to everyone.
Much of the public conversation about AI still centres on its extraordinary potential: breakthroughs in healthcare, gains in productivity, new tools for education and climate science, to name but a few. India itself offers compelling examples of what AI can deliver for development, from multilingual public services to precision agriculture. These opportunities are real and should be pursued.
But there is another dimension of AI that deserves equal attention: the question of power. Algorithmic systems are already shaping who receives public services, who qualifies for credit and who is flagged for surveillance. AI-generated content has featured in election campaigns on multiple continents. Deepfakes have been used to discredit political figures, disproportionately targeting women. Those who design, train and deploy these systems wield growing influence over the information environment of democracy itself.
And that power is concentrating fast. A handful of technology corporations now command market capitalizations exceeding the equity markets of major industrialized nations. Meanwhile, in the AI supply chain, millions of workers in the Global South are paid a pittance to annotate the data on which these systems are built. Too often, the benefits of AI are concentrated in a few hands, while many of the human and economic costs fall on those with the least power to shape the technology. This is the reality of the digital divide and it risks leaving much of the world as a consumer of AI systems over whose design and rules it has little say.
When the systems that increasingly govern people’s daily lives – their access to information, services and economic opportunities – are designed and controlled by a small number of actors without meaningful public oversight, the social contract is under strain. The choices being made today about how AI is developed, deployed and regulated are inherently political. They involve trade-offs between innovation and safety, efficiency and equity, profit and the public interest. In any functioning democracy, those trade-offs should be debated openly, decided transparently and subject to accountability.
Yet democratic governance is not keeping pace. There is a widening gap between the speed of AI development and the capacity of institutions and regulatory frameworks to shape it. International AI governance remains fragmented and short on binding commitments. Geopolitical competition risks fracturing it further, leaving many countries without a meaningful voice. No single nation can govern AI alone, but neither can governance be left to voluntary pledges and industry self-regulation.
This is where parliaments are essential. Parliaments make the laws that govern society and hold power to account, but beyond these foundational roles parliaments bring something distinctive to the AI debate: proximity to the people affected. Members of parliament hear directly from workers displaced by automation, communities subjected to algorithmic decision-making and parents navigating their children’s exposure to AI-driven platforms. This is what connects policy to lived experience and it is what has been missing from much of the AI governance conversation to date.
Last November, more than 200 participants at the first international parliamentary conference on responsible AI declared plainly: “We do not accept the concentration of power in the hands of a few actors.” They called for agreed red lines, an equal voice for the Global South and active parliamentary engagement in international AI governance. At national level, over 60 parliaments worldwide have taken some form of action on AI in the past two years, from passing legislation to oversight inquiries and public hearings.
These foundations need to be built on – faster and with greater coordination across borders. Parliaments must engage with the emerging body of international initiatives, ensure coherence between domestic law and evolving global standards and hold their governments to account for the commitments they make at summits like this one. The Inter-Parliamentary Union is supporting that work: tracking parliamentary AI initiatives across the world, developing practical tools and facilitating knowledge exchange between legislatures.
AI will be one of the defining forces of this century. Whether it strengthens democracy or erodes it depends on the governance choices we make now. If we get this right, AI can become a powerful instrument for growth and development, and for more inclusive and responsive governance. If we get it wrong, it risks entrenching the concentration of power, weakening accountability and deepening the divides – between nations and within them – that already strain the democratic fabric of our societies.
Events like the India AI Impact Summit set the direction of travel and signal political commitment, but their value will be measured by what follows. Parliaments – as the institutions closest to the people and most directly charged with safeguarding the public interest – must be central to that effort. The commitments made must be translated into national law, subjected to parliamentary scrutiny and opened to genuine public debate. Engaging parliaments as a standing feature of international AI governance is how we ensure that AI governance remains grounded in democratic accountability and responsive to the needs of people in all societies.