Blog Credit: Trupti Thakur
Image Courtesy: Google
Public Working Group By NIST
US NIST Launches Public Working Group for AI Technologies
The United States National Institute of Standards and Technology (NIST) has launched a public working group dedicated to addressing the challenges and risks associated with generative artificial intelligence (AI).
Addressing Risks in Generative AI
NIST’s working group aims to develop guidance and strategies for addressing the risks linked to generative AI technologies. As AI advances rapidly, it has become imperative to understand the potential risks of generative AI and to take appropriate measures to mitigate them.
Inclusive Participation of Qualified Individuals
Qualified members of the public are invited to participate in this crucial endeavor. NIST recognizes the importance of diverse perspectives and expertise in formulating effective guidelines and frameworks for managing risks in generative AI.
Focusing on Generative AI
The working group’s specific area of focus is generative AI, particularly the intricate aspects of generating diverse types of content, including code, text, images, videos, and music. By delving into the nuances of generative AI, the group aims to gain comprehensive insights into potential risks and challenges.
Volunteers’ Role in Testing and Evaluation
Volunteers from both public and private sectors will play an essential role in the working group. They will contribute their expertise by assisting in testing, evaluation, and measurement activities related to generative AI. Their involvement will help ensure a comprehensive understanding of the risks and potential impact of generative AI technologies.
Exploring Opportunities for Generative AI
In addition to addressing risks, the working group will explore the vast opportunities presented by generative AI. It will examine how the technology can be harnessed to address challenges in critical areas such as health, the environment, and climate change. By exploring these opportunities, the group aims to unlock the transformative potential of generative AI for societal benefit.
NIST’s Motivation for Establishing the Group
NIST considers the working group necessary due to the unprecedented speed, scale, and potential impact of generative AI on various industries and society at large. By proactively addressing risks and challenges, NIST aims to ensure that generative AI can be effectively integrated into diverse sectors while safeguarding the public interest.
Development of Tools for Risk Management
Through the collaborative efforts of the working group, NIST seeks to develop tools that enhance the understanding and management of risks associated with generative AI. These tools will provide valuable insights, guidelines, and frameworks to enable stakeholders to navigate the complex landscape of generative AI technologies securely.
More About the Public Working Group (NIST Press Release)
WASHINGTON — U.S. Secretary of Commerce Gina Raimondo announced that the National Institute of Standards and Technology (NIST) is launching a new public working group on artificial intelligence (AI) that will build on the success of the NIST AI Risk Management Framework to address this rapidly advancing technology. The Public Working Group on Generative AI will help address the opportunities and challenges associated with AI that can generate content, such as code, text, images, videos and music. The working group will also help NIST develop key guidance to help organizations address the special risks associated with generative AI technologies. The announcement comes on the heels of a meeting President Biden convened earlier this week with leading AI experts and researchers in San Francisco, as part of the Biden-Harris administration’s commitment to seizing the opportunities and managing the risks posed by AI.
“President Biden has been clear that we must work to harness the enormous potential while managing the risks posed by AI to our economy, national security and society,” Secretary Raimondo said. “The recently released NIST AI Risk Management Framework can help minimize the potential for harm from generative AI technologies. Building on the framework, this new public working group will help provide essential guidance for those organizations that are developing, deploying and using generative AI, and who have a responsibility to ensure its trustworthiness.”
The public working group will draw on volunteers with technical expertise from the private and public sectors, and will focus on risks related to this class of AI, which is driving fast-paced changes in technologies and marketplace offerings.
“This new group is especially timely considering the unprecedented speed, scale and potential impact of generative AI and its potential to revolutionize many industries and society more broadly,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “We want to identify and develop tools to better understand and manage those risks, and we hope to attract broad participation in this new group.”
NIST has laid out short-term, midterm and long-term goals for the working group. Initially, it will serve as a vehicle for gathering input on guidance that describes how the NIST AI Risk Management Framework (AI RMF) may be used to support the development of generative AI technologies. This type of guidance, called a profile, will support and encourage use of the AI RMF in addressing related risks.
In the midterm, the working group will support NIST’s work on testing, evaluation and measurement related to generative AI. This will include support of NIST’s participation in the AI Village at the 2023 DEF CON, the longest-running and largest computer security and hacking conference.
Longer term, the group will explore specific opportunities to increase the likelihood that powerful generative AI technologies are productively used to address top challenges of our time in areas such as health, the environment and climate change. The group can help ensure that risks are addressed and managed before, during and after AI applications are developed and used.
Those interested in joining the NIST Generative AI Public Working Group, which will be facilitated via a collaborative online workspace, should complete this form no later than July 9. Participants will have the opportunity to choose to help develop the generative AI profile for the AI RMF as part of their contributions to the group.
Generative AI is also the subject of the first two in a new series of NIST video interviews with leaders in AI to explore issues critical to improving the trustworthiness of fast-paced AI technologies.
Part 1 features Jack Clark, co-founder of Anthropic, and Navrina Singh, founder and CEO of Credo AI, who are interviewed by Elham Tabassi, associate director for emerging technologies in NIST’s Information Technology Laboratory.
In Part 2, Rishi Bommasani, a Ph.D. student at Stanford University, and Irene Solaiman, policy director at Hugging Face, are interviewed by Reva Schwartz, principal investigator, AI bias, at NIST. All videos in the “NIST Conversations on AI” series will be available on the NIST website.
Additionally, the National Artificial Intelligence Advisory Committee today delivered its first report to the president, identifying areas of focus for the committee for the next two years. The full report, including all of its recommendations, is available on the AI.gov website.
Blog By: Trupti Thakur