
AI Regulations- The First Meet In New York

19 Jul

Blog Credit: Trupti Thakur

Image Courtesy: Google


On 18 July 2023, the UN Security Council will gather in New York for its first formal discussion on the topic of AI.

LONDON (Reuters) – The United Nations Security Council will hold its first formal discussion on artificial intelligence (AI) this week in New York, with Britain to call for an international dialogue about its impact on global peace and security.

Governments around the world are considering how to mitigate the dangers of emerging AI technology, which could reshape the global economy and change the international security landscape.

Britain holds the rotating presidency of the UN Security Council this month and has been seeking a global leadership role in AI regulation.

British Foreign Secretary James Cleverly will chair the discussion on Tuesday.

In June, U.N. Secretary-General Antonio Guterres backed a proposal by some artificial intelligence executives for the creation of an international AI watchdog body like the International Atomic Energy Agency (IAEA).

 

The UN Security Council’s upcoming meeting is part of a United Kingdom-led initiative to assess AI’s impact on global peace and security. The UK aims to establish itself as a global leader in AI regulation, to prioritize multilateral dialogue on the potential risks associated with AI, and to seek potential solutions. James Cleverly, the British Foreign Secretary, will chair the meeting.

Some of the expected topics and potential risks that will be discussed include:

  • Use of AI in autonomous weapons
  • The potential use of AI in the control and management of nuclear weapons
  • Ethical implications
  • Economic impact
  • International cooperation and regulation

The meeting will also feature briefings by leading AI experts and remarks from Secretary-General Antonio Guterres, who has been vocal about the risks associated with advanced AI technology in recent months. Guterres has described the concerns raised by scientists and experts as ‘deafening’ and has compared the threat of AI to the risk of nuclear weapons.

In September, he intends to establish an advisory board on AI to formulate UN initiatives. Guterres has also expressed openness to the possibility of a UN agency on AI, drawing inspiration from the International Atomic Energy Agency’s knowledge-focused approach and regulatory capabilities.

Barbara Woodward, Permanent Representative of the United Kingdom to the UN, has announced the UK’s plans to hold a landmark Security Council meeting on 18 July addressing the potential threats posed by AI to global peace and security. The initiative comes in response to concerns about AI’s possible misuse, such as its application in autonomous weapons systems or its potential role in nuclear weapons control, and aims to encourage a multilateral approach to managing AI’s implications.

While acknowledging the substantial benefits of AI, such as its potential to enhance UN development programs, improve humanitarian aid operations, and support conflict prevention through data analysis, Ambassador Woodward underscored the need to address the significant security questions raised by AI. The UK aims to foster a comprehensive dialogue among the 15 Security Council members to examine the implications of AI and seek potential solutions.

The meeting will include presentations from global experts in AI and from Secretary-General Antonio Guterres, who has been outspoken about the dangers associated with advanced AI technology. Guterres has described the concerns raised by scientists and experts as ‘deafening’ and has compared the existential threat of AI to the risk of nuclear war. He plans to establish an advisory board on AI in September to develop initiatives that the UN can undertake. Guterres has also expressed openness to the idea of a new UN agency on AI, citing the International Atomic Energy Agency as a potential model due to its knowledge-based approach and regulatory powers.


Woodward said the UK wants to encourage “a multilateral approach to managing both the huge opportunities and the risks that artificial intelligence holds for all of us,” stressing that “this is going to take a global effort.”

She stressed that the benefits side is huge, citing AI’s potential to help U.N. development programs, improve humanitarian aid operations, assist peacekeeping operations, and support conflict prevention, including by collecting and analyzing data. “It could potentially help us close the gap between developing countries and developed countries,” she added.

But the risk side raises serious security questions that must also be addressed, Woodward said.

Europe has led the world in efforts to regulate artificial intelligence, which gained urgency with the rise of a new breed of artificial intelligence that gives AI chatbots like ChatGPT the power to generate text, images, video, and audio that resemble human work. On June 14, EU lawmakers signed off on the world’s first set of comprehensive rules for artificial intelligence, clearing a key hurdle as authorities across the globe race to rein in AI.

In May, the head of the artificial intelligence company that makes ChatGPT told a U.S. Senate hearing that government intervention will be critical to mitigating the risks of increasingly powerful AI systems, saying as this technology advances people are concerned about how it could change their lives, and “we are too.”

OpenAI CEO Sam Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”

Woodward said the Security Council meeting, to be chaired by UK Foreign Secretary James Cleverly, will provide an opportunity to listen to expert views on AI, which is a very new technology that is developing very fast, and start a discussion among the 15 council members on its implications.

Britain’s Prime Minister Rishi Sunak has announced that the UK will host a summit on AI later this year, “where we’ll be able to have a truly global multilateral discussion,” Woodward said.

 

Efforts to Regulate Artificial Intelligence

 

LONDON (AP) — The breathtaking development of artificial intelligence has dazzled users by composing music, creating images and writing essays, while also raising fears about its implications. Even European Union officials working on groundbreaking rules to govern the emerging technology were caught off guard by AI’s rapid rise.

The 27-nation bloc proposed the Western world’s first AI rules two years ago, focusing on reining in risky but narrowly focused applications. General purpose AI systems like chatbots were barely mentioned. Lawmakers working on the AI Act considered whether to include them but weren’t sure how, or even if it was necessary.

“Then ChatGPT kind of boom, exploded,” said Dragos Tudorache, a Romanian member of the European Parliament co-leading the measure. “If there was still some that doubted as to whether we need something at all, I think the doubt was quickly vanished.”

The release of ChatGPT last year captured the world’s attention because of its ability to generate human-like responses based on what it has learned from scanning vast amounts of online material. With concerns emerging, European lawmakers moved swiftly in recent weeks to add language on general AI systems as they put the finishing touches on the legislation.

The EU’s AI Act could become the de facto global standard for artificial intelligence, with companies and organizations potentially deciding that the sheer size of the bloc’s single market would make it easier to comply than develop different products for different regions.

“Europe is the first regional bloc to significantly attempt to regulate AI, which is a huge challenge considering the wide range of systems that the broad term ‘AI’ can cover,” said Sarah Chander, senior policy adviser at digital rights group EDRi.

Authorities worldwide are scrambling to figure out how to control the rapidly evolving technology to ensure that it improves people’s lives without threatening their rights or safety. Regulators are concerned about new ethical and societal risks posed by ChatGPT and other general purpose AI systems, which could transform daily life, from jobs and education to copyright and privacy.

The White House recently brought in the heads of tech companies working on AI including Microsoft, Google and ChatGPT creator OpenAI to discuss the risks, while the Federal Trade Commission has warned that it wouldn’t hesitate to crack down.

China has issued draft regulations mandating security assessments for any products using generative AI systems like ChatGPT. Britain’s competition watchdog has opened a review of the AI market, while Italy briefly banned ChatGPT over a privacy breach.

The EU’s sweeping regulations — covering any provider of AI services or products — are expected to be approved by a European Parliament committee Thursday, then head into negotiations between the 27 member countries, Parliament and the EU’s executive Commission.

European rules influencing the rest of the world — the so-called Brussels effect — previously played out after the EU tightened data privacy and mandated common phone-charging cables, though such efforts have been criticized for stifling innovation.

Attitudes could be different this time. Tech leaders including Elon Musk and Apple co-founder Steve Wozniak have called for a six-month pause to consider the risks.

Geoffrey Hinton, a computer scientist known as the “Godfather of AI,” and fellow AI pioneer Yoshua Bengio voiced their concerns last week about unchecked AI development.

Tudorache said such warnings show the EU’s move to start drawing up AI rules in 2021 was “the right call.”

Google, which responded to ChatGPT with its own Bard chatbot and is rolling out AI tools, declined to comment. The company has told the EU that “AI is too important not to regulate.”

Microsoft, a backer of OpenAI, did not respond to a request for comment. It has welcomed the EU effort as an important step “toward making trustworthy AI the norm in Europe and around the world.”

Mira Murati, chief technology officer at OpenAI, said in an interview last month that she believed governments should be involved in regulating AI technology.

But asked if some of OpenAI’s tools should be classified as posing a higher risk, in the context of proposed European rules, she said it’s “very nuanced.”

“It kind of depends where you apply the technology,” she said, citing as an example a “very high-risk medical use case or legal use case” versus an accounting or advertising application.

OpenAI CEO Sam Altman plans stops in Brussels and other European cities this month in a world tour to talk about the technology with users and developers.

Recently added provisions to the EU’s AI Act would require “foundation” AI models to disclose copyright material used to train the systems, according to a recent partial draft of the legislation obtained by The Associated Press.

Foundation models, also known as large language models, are a subcategory of general purpose AI that includes systems like ChatGPT. Their algorithms are trained on vast pools of online information, like blog posts, digital books, scientific articles and pop songs.

“You have to make a significant effort to document the copyrighted material that you use in the training of the algorithm,” paving the way for artists, writers and other content creators to seek redress, Tudorache said.

Officials drawing up AI regulations have to balance risks that the technology poses with the transformative benefits that it promises.

Big tech companies developing AI systems and European national ministries looking to deploy them “are seeking to limit the reach of regulators,” while civil society groups are pushing for more accountability, said EDRi’s Chander.

“We want more information as to how these systems are developed — the levels of environmental and economic resources put into them — but also how and where these systems are used so we can effectively challenge them,” she said.

Under the EU’s risk-based approach, AI uses that threaten people’s safety or rights face strict controls.

Remote facial recognition is expected to be banned. So are government “social scoring” systems that judge people based on their behavior. Indiscriminate “scraping” of photos from the internet for biometric matching and facial recognition is also a no-no.

Predictive policing and emotion recognition technology, aside from therapeutic or medical uses, are also out.

Violations could result in fines of up to 6% of a company’s global annual revenue.

Even after getting final approval, expected by the end of the year or early 2024 at the latest, the AI Act won’t take immediate effect. There will be a grace period for companies and organizations to figure out how to adopt the new rules.

It’s possible that industry will push for more time by arguing that the AI Act’s final version goes farther than the original proposal, said Frederico Oliveira Da Silva, senior legal officer at European consumer group BEUC.

They could argue that “instead of one and a half to two years, we need two to three,” he said.

He noted that ChatGPT only launched six months ago, and it has already thrown up a host of problems and benefits in that time.

If the AI Act doesn’t fully take effect for years, “what will happen in these four years?” Da Silva said. “That’s really our concern, and that’s why we’re asking authorities to be on top of it, just to really focus on this technology.”

 

Blog By: Trupti Thakur
