Blog Credit: Trupti Thakur
Image Courtesy: Google
Hiroshima AI Process (HAP)
Recently, the annual G7 Summit held in Hiroshima, Japan, initiated the Hiroshima AI Process (HAP), which is likely to conclude by December 2023, signaling a significant step towards regulating Artificial Intelligence (AI).
- The G7 Leaders’ Communiqué recognized the importance of inclusive AI governance and set forth a vision of trustworthy AI aligned with shared democratic values.
What is the Hiroshima AI Process?
- About:
- The HAP aims to facilitate international discussions on inclusive AI governance and interoperability to achieve a common vision and goal of trustworthy AI.
- It recognizes the growing prominence of Generative AI (GAI) across countries and sectors and emphasizes the need to address the opportunities and challenges associated with it.
- Working:
- The HAP will operate in cooperation with international organizations such as the Organization for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI).
- Objectives:
- The HAP aims to govern AI in a way that upholds democratic values, ensures fairness and accountability, promotes transparency, and prioritizes the safety of AI technologies.
- It seeks to establish procedures that encourage openness, inclusivity, and fairness in AI-related discussions and decision-making processes.
What are the Potential Challenges and Outcomes?
- The HAP faces challenges due to differing approaches among G7 countries in regulating AI risks. However, it aims to facilitate a common understanding of important regulatory issues while preventing complete discord.
- By involving multiple stakeholders, the HAP strives to find a balanced approach to AI governance that considers diverse perspectives and maintains harmony among G7 countries.
- For now, there are three ways in which the HAP can play out:
- It may enable the G7 countries to converge on shared norms, principles, and guiding values, even if their specific regulations diverge.
- It may become overwhelmed by divergent views among the G7 countries and fail to deliver any meaningful solution.
- It may deliver a mixed outcome, with some convergence on solutions to certain issues but no common ground on many others.
How can the HAP Resolve the issue of IPR in relation to GAI?
- Currently, there is ambiguity regarding the relationship between AI and IPR (Intellectual Property Rights), leading to conflicting interpretations and legal decisions in different jurisdictions.
- The HAP can contribute by establishing clear rules and principles regarding AI and IPR, helping the G7 countries reach a consensus on this matter.
- One specific area that can be addressed is the application of the “Fair Use” doctrine, which permits certain activities such as teaching, research, and criticism without seeking permission from the copyright owner.
- However, whether using copyrighted material in machine learning qualifies as fair use is a subject of debate.
- By developing a common guideline for G7 countries, the HAP can provide clarity on the permissible use of copyrighted materials in machine learning datasets as fair use, with certain conditions. Additionally, it can distinguish between the use of copyrighted materials for machine learning specifically and other AI-related uses.
- Such efforts can significantly impact the global discourse and practices surrounding the intersection of AI and intellectual property rights.
How is Global AI currently Governed?
- India:
- NITI Aayog has issued guiding documents on AI issues, such as the National Strategy for Artificial Intelligence and the Responsible AI for All report.
- These emphasize social and economic inclusion, innovation, and trustworthiness.
- US:
- The US released a Blueprint for an AI Bill of Rights (AI BoR) in 2022, outlining the harms of AI to economic and civil rights and laying down five principles for mitigating these harms.
- The Blueprint, instead of a horizontal approach like the EU's, endorses a sector-specific approach to AI governance, with policy interventions for individual sectors such as health, labor, and education, leaving it to sectoral federal agencies to come out with their own plans.
- China:
- In 2022, China came out with some of the world’s first nationally binding regulations targeting specific types of algorithms and AI.
- It enacted a law to regulate recommendation algorithms with a focus on how they disseminate information.
- EU:
- In May 2023, the European Parliament reached a Preliminary Agreement on a new draft of the Artificial Intelligence Act, which aims to regulate systems like OpenAI’s ChatGPT.
- The legislation was drafted in 2021 with the aim of bringing transparency, trust, and accountability to AI and creating a framework to mitigate risks to the safety, health, Fundamental Rights, and democratic values of the EU.
Way Forward
- Non-G7 countries also have the opportunity to launch similar processes to influence global AI governance. This shows that AI governance has become a global issue, with more complexity and debates expected in the future.
- In this context, the Indian government should take proactive steps by creating an open-source AI risk profile, setting up controlled research environments for testing high-risk AI models, promoting explainable AI, defining intervention scenarios, and maintaining vigilance.
- It is important to establish a simple regulatory framework that defines AI capabilities and identifies areas prone to misuse. Prioritizing data privacy, integrity, and security while ensuring data access for businesses is crucial.
- Enforcing mandatory explainability in AI systems will enhance transparency and help businesses understand the reasoning behind decisions (a minimal illustration of one explainability technique follows this list).
- Policymakers should strive to strike a balance between the scope of regulation and the language used, seeking input from various stakeholders, including industry experts and businesses. This way forward will contribute to effective AI regulations that address concerns and promote responsible AI deployment.
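To make "explainability" concrete, here is a minimal sketch of one common technique, permutation importance, using Python and scikit-learn. The toy dataset, model, and library choice are illustrative assumptions on our part, not anything prescribed by the HAP or the regulators discussed above. The idea is to score how much each input feature drives a model's predictions, which is one basic form of the reasoning a mandatory-explainability rule would require developers to surface.

```python
# Illustrative sketch only: dataset, model, and library are our assumptions,
# not a method mandated by the HAP or any regulator discussed in this post.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a simple classifier on a toy dataset.
data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance shuffles one feature at a time and measures how
# much the model's score drops -- a basic, model-agnostic explanation of
# which inputs the system actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(data.feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Because it treats the model as a black box, this style of explanation is often suggested where a single rule must cover many different kinds of AI systems.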
The G7 leaders have welcomed international guiding principles on artificial intelligence and a voluntary Code of Conduct for AI developers.
The Hiroshima AI Process Comprehensive Policy Framework consists of four pillars:
- analysis of priority risks, challenges and opportunities of generative AI,
- the Hiroshima Process International Guiding Principles for all AI actors in the AI ecosystem,
- the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems,
- project-based cooperation in support of the development of responsible AI tools and best practices.
Blog By: Trupti Thakur