The Manila Times


Additional reporting by Richard Waters

In May, hundreds of leading figures in artificial intelligence issued a joint statement describing the existential threat the technology they helped to create poses to humanity.

“Mitigating the risk of extinction from AI should be a global priority,” it said, “alongside other societal-scale risks such as pandemics and nuclear war.”

That single sentence invoking the threat of human eradication, signed by hundreds of chief executives and scientists from companies including OpenAI, Google’s DeepMind, Anthropic and Microsoft, made global headlines.

Driving all of these experts to speak up was the promise, but also the risk, of generative AI, a branch of the technology that can process and generate vast amounts of data.

The release of ChatGPT by OpenAI in November spurred a rush of feverish excitement, as it demonstrated the ability of large language models, the underlying technology behind the chatbot, to conjure up convincing passages of text, whether writing an essay or improving your emails.

It created a race between companies in the sector to launch their own generative AI tools for consumers that could generate text and realistic imagery.

The hype around the technology has also led to an increased awareness of its dangers: the potential to create and spread misinformation as democratic elections approach; its ability to replace or transform jobs, especially in the creative industries; and the less immediate risk of it becoming more intelligent than humans and superseding them.

Regulators and tech companies have been loud in voicing the need for AI to be controlled, but ideas on how to regulate the models and their creators have diverged widely by region.

The EU has drafted tough measures over the use of AI that would put the onus on tech companies to ensure their models do not break rules. It has moved far more swiftly than the US, where lawmakers are preparing a broad review of AI to first determine what elements of the technology might need to be subject to new regulation and what can be covered by existing laws.

The UK, meanwhile, is attempting to use its new position outside the EU to fashion its own more flexible regime that would regulate the applications of AI by sector rather than the software underlying them. Both the American and British approaches are expected to be more pro-industry than the Brussels law, which has been fiercely criticised by the tech industry.

The most stringent restrictions on AI creators, however, might be introduced by China, as it seeks to balance controlling the information put out by generative models against competing with the US in the technology race.

These wildly divergent approaches risk tying the AI industry up in red tape, since local regimes will need to be aligned with those of other countries if the technology — which is not limited by borders — is to be fully controlled.

Some are attempting to co-ordinate a common approach. In May, the leaders of the G7 nations commissioned a working group, dubbed the Hiroshima AI Process, to harmonise regulatory regimes and ensure legislation is interoperable between member countries. The UK, meanwhile, is hosting a global AI summit in November to discuss how international co-ordination on regulation can mitigate risk.

But each region has its own fixed ideas about how best to regulate AI — and experts warn that, as the technology spreads rapidly into common use, the time to fashion a consensus is already running out.

In July, the OECD warned that the occupations at highest risk of displacement by AI would be highly skilled, white-collar jobs, accounting for about 27 per cent of employment across member economies. Its report stressed an “urgent need to act” and co-ordinate responses to “avoid a race to the bottom”.

“We are at a point now where [regulation] is not a luxury,” says Professor David Leslie of The Alan Turing Institute, the UK’s national institute for data science and AI. “It is a need to have more concerted international action here because the consequences of the spread of generative AI are not national, they are global.”

The Brussels effect

The EU has been characteristically first to jump with its AI Act, expected to be fully approved by the end of the year.

The move can be seen as an attempt to set a template for other countries to emulate, in the style of its General Data Protection Regulation, which has provided a framework for data protection laws around the world.

Work on the AI legislation began several years ago, when policymakers were keen to curb reckless uses of the technology in applications such as facial recognition. “We . . . had the foresight to see that [AI] was ripe for regulation,” says Dragoş Tudorache, an MEP who led the development of the proposals.

“Then we figured out that targeting the risks instead of the technology was the best approach to avoid unnecessary barriers to innovation.”

After years of consultation, however, generative AI came along and transformed their approach. In response, MEPs proposed a raft of amendments to the legislation applying to so-called foundation models, the underlying technology behind generative AI products.

The proposals would make creators of such models liable for how their technology is used, even when another party has embedded it in a different system.

For example, if another company or developer were to license a model, the original maker would still be responsible for any breaches of the law.

“You wouldn’t expect the maker of a typewriter to be responsible for something libellous. We have to figure out a reasonable line there, and for most legal systems, that line is where you have a foreseeable risk of harm,” says Kent Walker, president of global affairs at Google.

Under the amendments, makers of models would also be compelled to identify and disclose the data their systems have been trained on, to ensure creators of content such as text or imagery are compensated.

The proposals prompted more than 150 businesses to sign a letter to the European Commission, the European Parliament and member states in June, warning they could “jeopardise European competitiveness”.

The companies — which ranged from carmaker Renault to brewer Heineken — argued that the changes would create disproportionate compliance costs for companies developing and implementing the technology.

“We will try to comply, but if we can’t comply, we will cease operating,” Sam Altman, chief executive of OpenAI, separately told reporters in May in response to the amendments. He later backtracked, tweeting that the company had no plans to leave Europe.

Peter Schwartz, senior vice-president of strategic planning at software company Salesforce, speaking in a personal capacity, has also warned that the approach could have an impact on how some other US companies operate in the region.

“[Regulating models] would tend to benefit those already in the market . . . It would close out new entrants and more or less cripple the open-source community,” says Chris Padilla, vice-president of government and regulatory affairs at IBM.

Padilla says policing models could amount to “regulatory over-reach” with “a real risk of collateral damage or unintended consequences,” where smaller companies cannot comply and scale.

By contrast, the UK has outlined what it calls a “pro-innovation” framework for AI regulation in a long-awaited white paper published in March.

It has now invited stakeholders to share views on its proposals, which would see the government regulating how AI systems are used, rather than policing the technology itself. The UK aims to give existing regulators the powers to enforce the framework, in the hope that this regime will be more flexible and quicker to implement than alternatives.

But the government has yet to respond to the consultation or issue implementation guidance to the different sector regulators, so it could be years before any regulation actually comes into force.

China vs the US

Despite the fears over legislation in Europe, some say the largest players in the industry are paying more attention to what the world’s rival superpowers are doing.

“The companies that are doing this, the arms race is between the US and China,” says Dame Wendy Hall, co-chair of the government’s AI review in 2017 and regius professor of computer science at the University of Southampton. “Europe, whether you’re talking EU or the UK, has no control over those companies other than if they want to trade in Europe. We are very reliant on what the Chinese or the US governments do in terms of regulating the companies overall.”

China has introduced targeted regulations for various new technologies, including recommendation algorithms and generative AI, and is preparing to draft a broader national AI law in the coming years.

Its priority in regulating AI is controlling information, reflected in the latest generative AI rules, which require adherence to the “core values of socialism”.

Meanwhile, generative AI providers whose products can “impact public opinion” have to submit them for security reviews, according to the regulation that came into effect in August. A handful of Chinese tech companies, including Baidu and ByteDance, received approval and launched their generative AI products to the public two weeks ago.

Such restrictions would also apply to foreign companies, making it challenging to offer content-generating AI services to consumers in China.

The US, meanwhile, has so far let the industry self-regulate, with Microsoft, OpenAI, Google, Amazon and Meta signing a set of voluntary commitments at the White House in July.

The commitments include internal and external testing of AI systems before they are released to the public, helping people identify AI-generated content and increased transparency on systems’ capabilities and limitations.

“The very nature of the fact that they are voluntary on the part of the companies [means] they’re not inhibiting the ability to innovate in this important new technology area,” says Nathaniel Fick, the US state department’s ambassador at large for cyber space and digital policy. “Voluntary means fast. We don’t have a decade to put in place a governance structure here, given the pace of technological change. So these commitments are a first step . . . They’re not the last step.”

Congress has signalled it will take a considered yet cautious approach to crafting legislation. In June, Senate majority leader Chuck Schumer unveiled a framework for the regulation of AI that would begin with so-called “insight forums” for legislators to learn about the technology from industry executives, experts and activists.

The administration of President Joe Biden has indicated it is working on an executive order to promote “responsible innovation”, but it is unclear when it will be signed and what measures it will include. However, it is likely to be focused as much on limiting China’s ability to buy AI programs as on setting guardrails for US companies.

Geopolitical tensions are also playing into the UK’s summit in November, as the government has said it will invite “like-minded countries” to participate. A report by Sifted recently claimed that China has been invited, but only six of the EU’s 27 member states. The government declined to comment.

“We need to strike a balance here between national approaches and international harmonisation,” says Fick. “I think that’s always a tension point in these global technologies.”

What companies might do

It will be some time before the AI industry is subject to significant levels of scrutiny. Even the EU’s AI Act, which is closest to being finalised, includes a grace period of about two years after becoming law for companies to comply.

But figuring out compliance between regions will be difficult given the lack of common regulatory ground. Companies will need to examine carefully how to operate in specific markets and whether it will require them to design different models or offer different services to comply in a particular region.

Microsoft and Google would not speculate on whether they would change models in this instance but said they would endeavour to comply with local laws.

Google offered a comparison with how it has previously pulled some services from countries. It only reopened its News offering in Spain last year after shutting down the service nearly a decade ago over legislation that would force the company and other news aggregators to compensate publishers for small snippets of content.

This year, the company postponed the launch of its AI chatbot Bard in the EU until July, after an initial delay caused by the privacy regulator voicing concerns over how it protected user data. It launched in the UK and the US in March. The company made changes to appease the regulator’s concerns.

Until substantive legislation begins to bite, tech companies will continue to largely police themselves. To them, this might seem like the proper order of things — that they are in the best position to agree on new standards for the technology as it emerges and grows, then regulators can codify them when and if it is necessary.

Four of the largest and most influential companies in AI — Anthropic, Google, Microsoft and OpenAI — joined together in July to establish the Frontier Model Forum, which will work on how to advance the technology responsibly.

But activists point to how that approach failed during the last big technological revolution, with the emergence of social media.

Legislation governing the likes of Facebook, Instagram and TikTok is still in the process of materialising; the EU’s Digital Services Act is only starting to come into force now, the UK’s online safety bill is still not finalised after six years, and US regulation of the sector has primarily been at state level. In the near absence of regulatory scrutiny, misinformation and harmful content have flourished on the most popular platforms, with few consequences for their owners.

“Clearly, self-regulation has not worked,” says Leslie, of the Alan Turing Institute. “So much of our political and social lives have been shaped by some of the ‘move fast and break things’ attitude of Silicon Valley, which was all for self-regulation. We can’t keep making the same mistakes.”

Financial Times
