
The World Is Finally Beginning to Regulate AI


World leaders have agreed to a first-of-its-kind international code of conduct for the development of advanced artificial intelligence systems. Representatives from 28 countries gathered at historic Bletchley Park in the UK alongside technology industry leaders for an AI safety summit.

The so-called ‘Bletchley Declaration on AI Safety’ was signed by all countries in attendance – including Australia, the UK, the US, and China – as a means of jointly managing and mitigating the risks of AI while ensuring safe and responsible use.

“This is a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI – helping ensure the long-term future of our children and grandchildren,” UK Prime Minister Rishi Sunak said.

It comes just after heads of state from the Group of Seven finalised a similar AI code of conduct, the product of a process that has been ongoing for months. The G7, made up of Canada, France, Germany, Italy, Japan, Britain, the US, and the EU, set out an 11-point code that aims to “promote safe, secure, and trustworthy AI worldwide.”

At the same time, the US has just made a historic declaration on the use of AI within its own borders, while other nation-states are gearing up for greater changes. With so much change underway around the world, it appears the era of AI free rein is over and the era of AI regulation is about to begin.

What Does the Bletchley Declaration Say?

“Artificial Intelligence (AI) presents enormous global opportunities,” the declaration, which was published by the UK Government, reads. “It has the potential to transform and enhance human wellbeing, peace and prosperity”.

“To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible,” it continues.

The agreement is designed to lay the foundations for the responsible use of AI and the avoidance of “serious, even catastrophic” harm. However, it stops short of offering practical guidance and is more a statement of principles. It’s up to the signatory countries to put those principles into action.

“Many risks arising from AI are inherently international in nature,” the declaration states. Countries have agreed, in signing it, to share information on the development of AI and to meet regularly to work in collaboration to guide its use.

What Does the G7 AI Code Say?

Separately, the G7 have agreed to an AI code of conduct called the “Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems.” This is slightly more prescriptive than the general Bletchley Declaration.

It is a ‘risk-based’ code of conduct that is “meant to help seize the benefits and address the risks and challenges brought by these technologies,” the document reads.

The code urges companies to take appropriate measures to identify, evaluate, and mitigate risks across the lifecycle of AI. It also encourages them to tackle incidents and possible abuses before their products become a problem.

Companies ought to publicly report on the capabilities, limitations, and the use and misuse potential of their products, the document says. There should also be sufficient investment in security controls.

What Other AI Regulation is There?

Virtually all governments around the world have taken an interest — usually a sceptical one — in the growing power of AI. However, none seem confident in what guardrails should be placed around its development.

The EU has been planning its own ambitious AI regulation act for a while now, which it hopes to finalise by the end of the year.

The UK’s AI Safety Summit is the largest and most multilateral gathering yet held on the topic, and it produced a number of useful agreements. Its outcome, however, is less about concrete AI governance and more about conceptualising what role governments can or should play in AI regulation. New AI safety rules and the cooperation of the tech industry are crucial goals of these discussions, although nothing has been set in stone.

Elsewhere, China, for its part, announced a ‘Global AI Governance Initiative’ on 20 October, which suggests a number of global principles for the development of AI. However, they’re incredibly vague.

In the US, President Joe Biden has ordered federal agencies to begin monitoring the risks of AI and to consider how it might be used to solve problems. The executive order, however, is not underpinned by legislation.

As it stands, there has been little sweeping legal foray into AI. But that doesn’t mean changes aren’t coming as governments circle the issue. The AI Index, compiled by researchers at Stanford University, noted that in 2016, just one law using the phrase ‘artificial intelligence’ was passed across the 127 countries surveyed. In 2022, there were 37 such laws.

What Is Australia Doing About AI Regulation?

Australia is doing just about the same as everyone else — thinking a lot about AI without actually passing any proper reforms.

We are already a founding partner in the Global Partnership on Artificial Intelligence (GPAI), a group that aims to facilitate international cooperation on AI technology and responsible adoption.

At Bletchley, we signed the AI Safety Declaration, with Deputy Prime Minister Richard Marles and Minister for Industry and Science Ed Husic as our representatives.

“There are real and understandable concerns with how this technology could impact our world,” Husic said.

“We need to act now to make sure safety and ethics are in-built. Not a bolt-on feature down the track.”

Like other countries, Australia already has an AI Ethical Framework in place. Released by the Department of Industry, Science, and Resources, it was developed in partnership with the CSIRO as a voluntary, best-practice framework that aims to “achieve safer, more reliable and fairer outcomes for all Australians.” However, like other voluntary frameworks, it’s considered something of a paper tiger.

While in the UK, Australia also signed America’s Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. This is a set of principles for how AI might be used by the military, calling on states to ensure the technology is consistent with international and humanitarian laws.

“On the home front, we have asked Australians how they think we should ensure AI is safe and properly regulated in this country,” Husic said.

“We will continue to engage with Australians on this over the coming months.”

Is Regulation of AI a Good Thing?

The era of AI regulation and corporate control is approaching — but those in the industry claim that regulation at this stage is premature and likely to kill off any potential benefits before they’ve even had a chance to develop.

“I do think in the long run (ie decades) AI is an existential risk for people. That said, at this point, regulating AI will only send it overseas and federate and fragment the cutting edge of it to outside US jurisdiction,” tech investor Elad Gil wrote in a blog in September.

It sums up the sentiment that many in the field feel about what will happen with heavy-handed regulation — innovation will go elsewhere, and countries will be left behind.

Of course, the people calling for free rein in the AI sector are those who stand to benefit most from it: Silicon Valley venture capitalists and the like. Others argue that regulation is essential to fix problems that have already begun to emerge with the technology.

Bias, discrimination, the loss of private data, copyright infringement, and job losses are just some of the issues that AI regulation could step in to solve. The Australian Human Rights Commission has written that proper regulation of AI requires specialist expertise and better use of the laws we already have.

“Ethical AI is essential to protecting human rights and improving trust in AI,” it writes.

“Ultimately, if Australia is to reap the benefits of AI while mitigating the profound human rights harms … [it] needs to modernise its approach.

“Although [this] will not be easy, it must be done”.
