We’re in the middle of an international AI gold rush. As national leaders wake up to the potentially vast implications of AI tools for their economies and societies, they’re racing to grab their share of its potential value – for much as globalization displaced manufacturing from the West to the East, AI promises to replace jobs in some parts of the world while creating new wealth in others.

In this new gold rush, countries are choosing very different paths: some are letting their digital prospectors run wild, while others seek to control their activities in the interests of human rights – or of state power.

The United States, for example, has embraced laissez-faire policies, with Donald Trump pursuing an AI Action Plan that weakens safeguards against discrimination and disinformation. At the other end of the scale, China uses facial recognition technology to monitor and control its people, closely policing both online activity and public spaces.

Tom Winstanley, CTO and Head of New Ventures at NTT DATA UK&I.

The European Union has sought to limit the power of both private and public sectors, putting in place an AI Act that tightly controls how data and AI technologies can be used. This protects people’s privacy and their ownership of data, but at the cost of constraining innovation: Siemens and SAP, for example, have criticized what they call “overly stifling” legislation.

The problem here is that, if jobs are to be automated with AI technologies, the work can often be undertaken anywhere: authorities that place overly onerous controls on the use of AI within their borders risk seeing AI providers in less tightly regulated jurisdictions hoover up swathes of their job markets.

There is an alternative to the polar options of permitting untrammeled AI at any cost, or slowing innovation to a crawl with wide-reaching regulation. Countries could follow a third path – seeking to protect citizens’ rights, avert discriminatory outcomes and constrain disinformation, while preserving as great a space as possible for the responsible, ethical use of AI in the interests of economic growth and social progress.

A shared vision for human-centric AI

With the US, China and the EU pulling in very different directions, few individual countries have the AI capabilities to blaze this alternative path. There are, however, two highly advanced economies with both strong AI and tech sectors, and a similar mindset on the regulation of technology: the United Kingdom and Japan.

Sharing a natural affinity for balanced technological governance, both are mercantile, globally trading nations with a strong heritage in technology and innovation. That alignment was most recently affirmed through the UK-Japan Digital Partnership: a pact emphasizing a shared commitment to knowledge exchange on AI safety and international AI governance, and highlighting the human-first approach at the core of each nation’s AI adoption strategy.

This philosophy is perhaps best encapsulated by Japan’s “Society 5.0” vision, defined by the Japanese Cabinet Office as “a human-centered society in which economic development and the resolution of social issues are compatible with each other.”

This view stands apart from other national digital transformation strategies, rejecting the notion that growth and social responsibility must be in conflict; rather, it holds that technological progress can serve both goals in tandem.

Eastern and Western approaches to business can form a powerful cocktail within the technology sector, and this principle of responsible innovation – one that prioritizes ethical AI development while advancing technical excellence – is a compelling foundation for the middle path that the UK and Japan are uniquely poised to champion.

Walking the regulatory tightrope

Yet while the UK and Japan share common values when it comes to AI ethics and innovation, the countries’ offers are not yet distinctive, clear, or aligned enough to win over other nations.

In the UK, rather than creating a single, sweeping piece of legislation, the government has built a strong foundation through the transposition of existing law (such as the UK GDPR) and is supplementing this with additional guidance and non-statutory frameworks.

The Department for Science, Innovation and Technology’s guidance for regulators, for example, sets out initial advice on how independent bodies can interpret the UK’s five pro-innovation principles in their own sectors.

It encourages them to develop tools and advice on issues such as safety, fairness, transparency and accountability, while leaving room for industry-specific nuance. By offering guidance rather than imposing a single regulation, the UK allows its standards bodies to signpost the way forward without overly restricting organizations’ ability to innovate.

Within that framework, regulators can add to these measures by producing rules and guidance to fit their sector’s specific requirements and risks. Regulators can also assist the development and testing of AI by helping to create AI “sandboxes”: controlled environments where new technologies can be tested with fewer regulatory strictures.

These sandboxes give companies the freedom to develop new products or services within an approved, safe environment, and provide regulators with an opportunity to understand how AI technologies work in practice – informing the design and evolution of regulation.

And the UK has been an early mover in this space: in 2023, it announced a multi-regulator AI sandbox to allow developers to trial systems under enhanced regulatory supervision without the risk of fines or liability.

By enabling controlled experimentation and promoting transparency, sandboxes are designed to serve as trust-building tools between businesses and regulators. The insights generated through trials will help regulators determine where guardrails are needed, making them a useful tool for walking the fine line between under-regulation, which invites risk, and over-regulation, which can stifle progress.

Closing the gap between policy and progress

Striking the balance on regulation is just one challenge for the UK; equally significant are the UK’s structural constraints. Limitations around planning, funding, and IT infrastructure – such as delays to data center expansion and high energy costs – remain unresolved and risk undermining otherwise sound strategies.

These issues cut across industrial strategy, planning and investment; and if the UK is to remain globally competitive, they must be viewed and addressed as cross-governmental priorities. The AI Opportunities Action Plan and AI Growth Zones represent positive steps toward addressing these challenges, but questions remain about their capacity to compete with massive initiatives such as the US’s privately led Stargate Project or the EU’s publicly led AI gigafactories.

Relative to these behemoths, the UK’s investment frameworks still suffer from a credibility gap: stakeholders see ambition, but not yet the funding, infrastructure, or policy cohesion needed to catalyze real change.

The AI Opportunities Action Plan, for instance, is widely seen as a constructive foundation, but with limited financial backing and no clear mechanism for driving substantial private sector co-investment, faith in its long-term impact remains muted.

Whether by directly investing or by creating conditions that unlock large-scale private capital, the government must convince the market that its ambitions will be matched by execution.

In Japan, the situation is very different. Here, we see investment flowing in from both government and private actors, with Prime Minister Shigeru Ishiba announcing a US$65 billion investment plan in November 2024 focused on subsidies and financial stimulus for the country’s semiconductor and AI sectors. To give just one example of ambitious investment, SoftBank has partnered with OpenAI to build a brand-new data center in Osaka worth over US$677 million.

Rethinking the global AI leadership standard

The UK-Japan Digital Partnership was meant to translate the two countries’ shared vision into joint action, but to date it has remained largely a forum for dialogue rather than a platform for co-investment. There are, however, signs of momentum. In January 2025, ministers renewed commitments across AI, computing, semiconductors, and digital regulation and standards, while the two countries’ AI Safety Institutes ran the first global joint safety testing exercise in November 2024.

Recent developments in global AI governance highlight the importance of balancing AI ethics with innovation. When regulatory approaches are perceived as overly restrictive, key actors disengage: Meta’s refusal to sign the EU’s AI Code of Practice is a case in point. And when there is insufficient oversight, we risk infringing people’s rights or privacy, supercharging disinformation or aggravating discrimination – and ultimately undermining public trust in ways that hamper our ability to realize the potential of AI.

The task facing Britain is to secure a good share of the AI gold rush, while protecting the public interest. That demands a stable environment favoring the substantial private sector investments required to establish a leadership position in AI, while retaining the trust and confidence of the public. And that in turn means avoiding both the regulatory straitjackets that characterize some approaches, and the ethical blind spots that accompany others.

In the race for AI supremacy, the ultimate winners may not be those with the largest budgets or those who move the fastest, but those who master the delicate balancing act between regulatory over-caution and the unconstrained pursuit of AI revenues at any cost.

For the UK and Japan, this middle ground promises to deliver not just economic returns, but AI systems that are aligned with responsible use and democratic values. In our view, this is the standard by which enduring leadership in AI will be earned.

Individually, the UK and Japan have great potential to become AI centers of excellence. Together, they could form a new global axis – showing the world how economic growth can be combined with equity, inclusion and the protection of human rights.
