Build First, Safeguard Later: digital public infrastructure is in vogue, but do we have the right protections in place?

By Etienne Koeppel, Shruti Trikanad, and Caragh Aylett-Bullock

In the global rush to build digital public infrastructure (DPI), meaningful safeguards have too often been treated as an afterthought, reduced to weak and voluntary principles. This deployment-first approach is dangerous, exposing human rights and civic participation to significant and avoidable harms. 

Though a contested term, DPI typically refers to the foundational digital systems and platforms that enable the delivery of services, facilitate data exchange, and support governance. It includes elements such as digital identity systems, payment platforms, and data exchange protocols, designed to be accessible and interoperable for both public and private sectors. 

The current geopolitical juncture presents a fertile environment for the proliferation of DPI deployments. Globally, technological autonomy and industrial competitiveness are on the political agenda. Whether in the development community, industrial policy circles, or national governments, many experts have emphasised the successes of India’s Aadhaar digital ID system and Brazil’s Pix digital payment system, utilising these as examples to push DPI as the panacea for economic growth, digital sovereignty, and government efficiency. As a result, DPI deployment is expanding at great speed and shows no sign of slowing down. According to the DPI Map maintained by UCL’s Institute for Innovation and Public Purpose, as of 2025, at least 64 countries have digital ID systems, 97 have digital payment systems, and 103 have data exchange systems. The 50-in-5 initiative, led by prominent development actors and a coalition of civil society, industry, and government partners, promises to roll out comprehensive DPI systems in 50 new countries by 2028. 

Yet, DPI has been linked to human rights harms and discrimination around the world, particularly within marginalised communities. As we explore below, examples abound, from exclusionary digital identity systems in Kenya and India, to discriminatory automated social benefits in Serbia and Jordan.

Against this backdrop of contradictory outcomes, a concerning trend is emerging. While DPI deployments and their associated threats to privacy, equity, and freedom of expression are increasing exponentially, effective safeguarding policies and accountable governance responses are lagging, too often relegated to vague, voluntary, and normative commitments. This discrepancy needs urgent attention.

 

Shapeless concept: DPI is many things

To begin with, there is little clarity on what constitutes digital public infrastructure. There is no single way to define DPI, and as such, the term is frequently used to group together a wide range of initiatives, from identity systems and payment platforms to health and education databases. These differ significantly in their technical architecture, governance models, actors, uses, risks, and implications for public accountability. Each of these initiatives requires distinct safeguards and oversight mechanisms. For instance, India’s Aadhaar ID is a mandatory identity system linked to welfare benefits and subsidies, whereas its Unified Payments Interface (UPI) is a digital payment system that enables instant transactions between individual users, merchants, and banks. 

Even similar DPI initiatives manifest differently in different places. While Pix in Brazil and UPI in India are both digital payment systems, they have developed in contrasting ways. Pix managed to challenge the American duopoly of Visa and Mastercard in Brazil’s payments market, while UPI entrenched a duopoly of its own: today, 85% of all digital payments in India happen through either PhonePe, an application owned by Walmart, or Google Pay, owned by Alphabet. Evidently, they function as vastly different infrastructures that require specific governance solutions. Treating them as part of the same model suggests that the same safeguards could apply uniformly, when in fact different governance models and redress frameworks are needed. Effective regulation therefore requires principles grounded in localised, contextual nuance. 

 

Embedded Harms: exclusion, surveillance, and Big Tech dependency

Different DPI systems come with different human rights harms, most of which are embedded within the systems themselves. DPI systems tend to collect and merge large amounts of personal data from across government databases, increasing the risk of unlawful surveillance and undermining the right to privacy. Equally, merging these databases entrenches errors, which then become more difficult to identify and address. For example, in Serbia, errors in a World Bank-funded semi-automated welfare system prevented thousands of Roma people from obtaining welfare benefits.

Further, fully digitised services, or those that require a form of digital identity for basic access, exclude people who do not have ready access to the internet or never had a legal identity to begin with. This has a disproportionate impact on individuals who already experience discrimination, including elderly people, those living with a disability, and people who are unhoused or stateless. For these reasons, the rollout of the digital ID system in Kenya will likely exclude millions.

DPI can also further the concentration of Big Tech’s power over our digital environment. DPI systems are not being deployed in infrastructure vacuums, but within digital ecosystems increasingly dominated by large private infrastructure providers. For instance, IDEMIA, a leading global digital ID provider, uses an automated biometric identification system (ABIS), which runs on Amazon Web Services (AWS), to enrol or match individuals in its databases. The main open source alternative, the Modular Open Source Identity Platform (MOSIP), also tends to run on AWS. Additionally, the construction of massive data centre projects in countries rolling out DPI, such as Microsoft’s recent $1 billion investment in Kenya or AWS’s $5 billion data centre expansion in Indonesia, is a potential cause for concern. While it does not in itself suggest that DPI data is stored with Big Tech, it can shape the conditions under which public digital systems operate. Over time, the concentration of cloud infrastructure can influence procurement choices, interoperability standards, and governments’ bargaining power, raising legitimate questions about long-term control of public data. As for the applications built on top of digital ID frameworks, Big Tech has a first-mover advantage and shiny, easy-to-use products, as is the case with Google Pay and Google Wallet, which makes it harder for local alternatives to emerge and compete. Such reliance entrenches hyperscalers at every layer of the DPI stack – from infrastructure to applications – ironically limiting the public’s control over its own data and information flows, and increasing the global dominance of a small number of Big Tech companies.

 

Vague and voluntary: high-level principles are insufficient on their own

What emerges from this is a troubling contradiction: even as DPI systems expand rapidly, reshaping access to essential services and concentrating technical power in the hands of a few companies, the regulatory and accountability frameworks around them remain underdeveloped. Governments and multilateral bodies have leaned on voluntary, principle-based approaches that promise safeguards but rarely deliver them. Examples include the Freedom Online Coalition’s recent Rights-Respecting DPI Principles, the DPI Safeguards initiative by the United Nations Office for Digital and Emerging Technologies, the ‘Quad’ principles (from the governments of the United States, Australia, Japan, and India), the Organisation for Economic Co-operation and Development (OECD) Public Governance Policy Paper, and GovStack’s DPI Principles.

While these principles provide a normative starting point for policymaking, on their own, they fall short of meaningfully addressing the human rights and social harms of DPI. This is because they are reductive, high-level, and, crucially, voluntary. 

Voluntary frameworks enable governments and institutions to make aspirational commitments without being held to specific benchmarks, timelines, or mechanisms for redress. As such, they allow signatory states to commit to privacy and transparency in principle, while in practice subordinating such safeguards to the demands of data-intensive governance, surveillance, and efficiency. 

Without the necessary elements of context and enforceability, such principles fail to address the structural power asymmetries between governments, private actors, and individuals that are both shaped by and embedded within DPI. Worse yet, high-level principles may serve to legitimise and accelerate the deployment of DPI systems, despite the absence of clear evidence supporting their necessity or effectiveness. In doing so, such principles invite the commercial interests of private vendors into core layers of public service delivery, causing a dilution of state accountability, narrowing access for marginalised populations, undermining human rights, and fundamentally altering the relationship between the state and its people.  

 

Towards Rights-Respecting DPI: Context-Specific and Enforceable Safeguards 

To move beyond aspirational rhetoric, these principles must be turned into enforceable, technology- and context-sensitive safeguards that are adequately resourced and protected from political interference. Effective oversight should not equate to expanding unchecked state power. While enforceable safeguards are essential, the question of who enforces them is equally critical. Safeguards must be designed to constrain, not empower, state overreach. This requires involving the communities who are most at risk and who experience the harms of such technologies, and advocating, at a national level, for comprehensive regulation and independent oversight mechanisms. 

Concretely, national data protection authorities should be legally mandated to review DPI deployments, provided that such authorities operate with genuine functional independence, protected budgets, transparent appointments, and a defined legal mandate to oversee government systems. Their review processes should be supported by panels of civil society and academic experts with relevant human rights, social, and technical expertise. Where these conditions are lacking, equivalent independent oversight bodies must undertake the task. Legislative committees and judicial bodies can provide additional checks, requiring governments to justify DPI deployments through transparent reporting and rights-based scrutiny, and empowering civil society groups to conduct open auditing of vendors and algorithms. In Kenya, following petitions by grassroots organisations, the High Court suspended the implementation of the national digital ID system, which was found to be exclusionary and lacking adequate data protection. Such mechanisms could be complemented by mandatory human rights impact assessments before and after roll-out. At the local level, participatory oversight mechanisms, such as citizen assemblies, community audits, and public scorecards produced by civil society groups, can reveal harms invisible to central authorities and make avenues for redress more tangible.

Crucially, meaningful oversight must also extend beyond the national level. Intergovernmental organisations and development institutions such as the World Bank and the Gates Foundation have played a central role in promoting and financing DPI deployments, particularly in developing countries, where their technical and financial influence often shapes national policy choices. With this influence comes responsibility: these actors must ensure that any DPI system they enable is designed and implemented in a manner that is rights-respecting, sustainable, and safely governed long after their involvement ends. This requires integrating civil society’s meaningful participation, particularly from organisations within affected countries, into every stage of project design, procurement, implementation, and evaluation. Where such participation cannot be guaranteed, development actors should reconsider their engagement altogether.

To meet these responsibilities, development institutions must establish credible avenues for intervention and redress throughout the project lifecycle. Mechanisms such as the World Bank’s Inspection Panel offer one model, but only where such processes are genuinely accessible to communities and capable of providing meaningful remedies. 

Importantly, because DPI systems are deeply intertwined with domestic governance functions, long-term involvement by external entities is neither appropriate nor desirable. In these contexts, development actors should prioritise supporting local civil society and grassroots movements in pursuing necessary legal, regulatory, and institutional reform, monitoring deployments, raising concerns, and seeking redress.

Throughout the deployment and maintenance of DPI, human rights should remain a central priority for governments and development actors. At the very least, offline and low-tech alternatives must be preserved to limit the systematic exclusion from essential services and the targeted discrimination that DPI technologies enable. This recommendation is echoed throughout civil society and is well summarised in the Human Rights for ID Coalition’s Common Position on Mandatory Digital ID.

 

Conclusion

The global enthusiasm for DPI has far outpaced any serious reckoning with its documented harms. Vague, voluntary principles risk functioning as retroactive endorsements, offering the appearance of accountability without the substance. Rights-respecting DPI cannot be achieved through broad, voluntary, and normative commitments alone, and high-level principles that are unenforceable and technology- and context-agnostic risk ending up in a growing pile of documents that states and companies ignore. 

Without concrete measures such as clear definitions, independent oversight, meaningful public participation, and redress mechanisms, DPI will continue to replicate and exacerbate existing forms of inequality while introducing new forms of technological control and dependence. 

Yet this moment also presents an opportunity: before DPI becomes further entrenched, we can take these principles further and co-create systems that are genuinely safe, inclusive, and rooted in the lived experiences of those most affected. The answer lies in working with national civil society watchdogs, mobilising independent data protection and privacy regulators, seizing judicial and parliamentary oversight mechanisms, and actively tapping the expertise of local communities who are experiencing and resisting the harms of DPI. Together, we can codify responsive safeguards and build alternatives that truly serve the public interest and advance human dignity and fundamental rights.

About Etienne Koeppel

Etienne Koeppel is a political economist specialising in the intersection of technology, human rights, and social justice. Hailing from ARTICLE 19 and Freedom House, his interests lie in digital public infrastructure, artificial intelligence, and tech accountability. He holds a Master’s from the London School of Economics and Political Science, where he examined the effects of surveillance and polarised information ecosystems on ethnic minorities. When he’s not researching tech harms, he advises digital rights organisations on strategy and fundraising.

About Shruti Trikanad

Shruti Trikanad is a lawyer and digital researcher working at the intersection of digital identity, digital public infrastructure, and human rights. She spent four years at the Centre for Internet and Society, India, where she researched the design and governance of digital ID and digital government systems across Africa, Southeast Asia, and South Asia. Her work examines the political economy of digital ID, with a particular focus on the clashing incentives that emerge when external funders and development partners promote “digital ID for development” agendas.

About Caragh Aylett-Bullock

Caragh Aylett-Bullock is a researcher in the Digital Policy team at Demos, focusing on the relationship between technology and society. She was previously at Amnesty International, where she led research into AI legislation and policy in Latin America, contributed to advocacy on the EU AI Act, and built a toolkit for investigating government algorithms. She is currently a doctoral researcher at Goldsmiths, University of London, where she investigates the development and expansion of digital ID systems.