At the World Economic Forum 2026 in Davos, held from 19 to 23 January under the theme “A Spirit of Dialogue,” the most urgent conversations were not about what technology might achieve next, but about who holds the authority to shape it. As digital systems take on roles once reserved for governments and public institutions, questions of control, accountability, and legitimacy moved from the margins to the centre of the agenda.
Behind closed doors and on public stages alike, digital governance emerged as a defining issue for political and business leaders. Artificial intelligence, cloud infrastructure, and digital finance were discussed less as engines of growth and more as forms of power. These systems now influence how societies function, how public services operate, and how markets behave, making governance a matter of consequence rather than theory.
The tone marked a clear departure from earlier gatherings. Optimism gave way to scrutiny as leaders spoke directly about limits, responsibility, and public consent. Large-scale AI deployment, many agreed, now depends on whether societies believe the benefits justify the costs. Rising energy use and growing concentration of influence have sharpened public expectations and reduced tolerance for vague promises.
Microsoft chief executive Satya Nadella gave voice to this shift with an unambiguous warning. If AI fails to deliver clear and widely shared value, society may withdraw its support. Healthcare and education, he said, will be the decisive tests, as they reveal whether AI strengthens public trust or widens existing divides.

5 Key Takeaways
1. Digital governance as power and legitimacy: At Davos, digital governance was no longer treated as a future objective but as an immediate question of authority and trust. Discussions focused on who controls the digital systems underpinning economies and public services. Participants agreed that legitimacy now matters as much as innovation, particularly as AI concentrates influence and consumes significant energy resources.
2. Public acceptance as the key constraint on AI: Leaders acknowledged that technical progress alone cannot sustain AI adoption. Public support depends on visible and shared benefits. As Satya Nadella noted, confidence can fade quickly when citizens do not experience tangible improvements, especially in healthcare and education. Without clear value, even advanced systems risk losing social permission to operate.
3. Accountability overtaking experimentation: Organisations are moving beyond pilots toward operational governance. Enterprise-wide AI deployment has proven slower and more complex, requiring data readiness, workforce training, and defined human oversight. As AI shapes real-world outcomes, executives are focusing less on experimentation and more on accountability when decisions go wrong.
4. Sovereignty tested by infrastructure realities: Geopolitical pressures have pushed digital sovereignty from theory into practice. Governments seek control over data and compute, yet localisation mandates and subsidies have delivered mixed results. Cloud and AI markets remain highly concentrated, shifting the debate toward practical questions of where infrastructure resides and which legal regimes apply.
5. Momentum toward distributed AI and shared rules: As limits of centralised AI models become clearer, attention is turning to distributed systems that process data closer to its source. This approach improves resilience, latency, sustainability, and regulatory control. However, it also highlights the need for shared frameworks that enable cross-border infrastructure use without undermining national authority, as fragmentation remains a growing risk.
World Economic Forum 2026: From Experimentation to Accountability
Business leaders echoed this cautionary tone throughout the forum. Many said that rolling out AI across an entire company has proved slower and harder than early trials suggested. The challenge often lies in preparing data, training staff, and restructuring internal processes so humans review and approve AI-driven decisions. Technical capability alone rarely determines success.
As a result, companies now spend less time testing tools and more time defining who is accountable when those tools shape outcomes. For many executives, governance has become an operational issue rather than a policy aspiration.
Sovereignty, infrastructure, and the limits of control
At the same time, geopolitical realities have pushed digital sovereignty from theory into practice. Governments and firms alike now accept that they must retain meaningful control over the technologies supporting their prosperity. Yet the tools traditionally used to assert sovereignty (heavy compliance regimes, “buy-local” mandates, and large subsidies) have often delivered mixed results. In Europe’s cloud market, for example, the combined share of local providers has continued to shrink, while a small number of US-based hyperscalers dominate demand. Similar concentration is evident in AI, where the largest investments still flow to massive data centres owned by a handful of global players, primarily headquartered in the United States or China.
This tension between sovereignty and competitiveness surfaced repeatedly in Davos. In a session on “Digital Embassies for Sovereign AI,” participants framed sovereignty as both a legal and an engineering puzzle: where data and compute physically reside, and under whose rules they operate. Proposals ranged from national controls over sensitive datasets to more ambitious calls for a standardised international framework, likened to a “Vienna Convention” for data, that would allow countries to use overseas infrastructure without surrendering authority.
The language of “digital embassies” itself drew scrutiny. As Diplo executive director Jovan Kurbalija has argued, many initiatives labelled this way are better understood as resilience infrastructure, designed to preserve critical data and continuity of government in crises, rather than diplomatic outposts in the traditional sense. The debate highlighted a broader challenge: how to adapt familiar concepts of sovereignty and jurisdiction to a digital world built on distributed systems and cross-border flows.
World Economic Forum 2026: A shift toward distributed AI and shared rules
Some speakers at World Economic Forum 2026 supported shared rules that would allow countries to use data centres abroad while keeping control over sensitive information. Others stressed that clear legal boundaries matter more than new labels.
At the same time, the structure of AI itself came under scrutiny. The first wave of generative AI relied on ever larger models trained in centralised facilities, but surveys show that only a small share of AI pilots creates clear business value. As a result, attention has shifted toward more distributed AI systems. In this model, training may still happen at scale, but fine-tuning and daily use move closer to where data is created, such as hospitals, factories, and transport networks.
This shift has significant implications for sovereignty. Processing data locally reduces dependency on distant infrastructure, improves resilience and allows organisations to apply their own security and compliance standards. It also addresses practical constraints. Latency matters for autonomous systems, while energy and sustainability concerns are forcing a rethink of the “bigger is better” philosophy. Smaller, regional facilities powered by low-carbon energy, and capable of reusing waste heat, can lower emissions and operating costs alike.
Governance gaps and the cost of failure
This technical shift carries political weight. When countries and companies contribute at different layers of AI development, they can play to their strengths without duplicating the entire stack.
Discussions on digital finance and online harms reinforced similar themes, highlighting how innovation can outstrip safeguards. Sessions ranged from scam ecosystems that blend cybercrime with human trafficking to debates over tokenisation and new payment rails, which revived familiar trade-offs between speed, control, consumer protection, and systemic risk.
Dialogue, fragmentation, and the road ahead
Taken together, the 2026 WEF in Davos mapped a clear shift in emphasis: away from abstract principles and toward practical control over infrastructure, accountability and cross-border rules. Trust in AI, participants broadly agreed, will depend on demonstrating real-world benefits, embedding human oversight into systems, and resolving where data and compute reside, legally as well as physically. Yet the forum also revealed a growing danger of fragmentation, as nations pursue divergent regulatory paths.
The countervailing force is a renewed push for cooperation: harmonised frameworks, shared standards, and multistakeholder forums designed to ensure that security, rights, and resilience do not lag behind deployment. In that sense, Davos’ “spirit of dialogue” was more than a slogan: it was a recognition that in the digital age, control without cooperation is unlikely to endure.