NIST Develops Cyber AI Profile to Enhance Security in AI Systems
NIST is creating a 'cyber AI profile' to enhance AI security in alignment with existing cybersecurity frameworks.
Key Points
- NIST is developing a 'cyber AI profile' to address AI security risks.
- The profile will build on established cybersecurity frameworks.
- It covers securing AI systems, countering adversarial uses of AI, and applying AI to strengthen cybersecurity.
- The bipartisan VET AI Act would expand NIST's role in setting AI standards.
The National Institute of Standards and Technology (NIST) is developing a 'cyber AI profile' to address the growing security challenges posed by artificial intelligence (AI). NIST officials have emphasized that the resulting guidance must integrate with existing cybersecurity frameworks rather than duplicate them. Katerina Megas, who leads NIST's cybersecurity program for the Internet of Things, underscored the need for clarity about AI's role in cybersecurity during a recent conference.
The forthcoming guidance targets three aspects of AI's intersection with cybersecurity: securing AI systems themselves, understanding adversarial uses of AI in cyber threats, and using AI to strengthen overall cybersecurity. NIST aims to balance the need for comprehensive security guidance against the compliance burden on Chief Information Security Officers (CISOs), who have voiced concerns about taking on additional requirements.
Beyond the cyber AI profile, NIST's role in AI standards could be reinforced by a bipartisan bill, the Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act. Sponsored by Senators John Hickenlooper and Shelley Moore Capito, the bill would position NIST as a key player in formulating voluntary guidelines for the internal assessment and external validation of AI systems, underscoring the push for responsible AI development.
Hickenlooper stressed the need for the U.S. to establish robust guidelines so that the benefits of AI technologies can be harnessed safely and effectively. Michael Kratsios of the Office of Science and Technology Policy echoed that view, calling for reliable standards to measure AI performance as a way to build trust in AI applications across sectors.
Following a series of workshops to gather stakeholder input, NIST expects to release a preliminary draft of the cyber AI profile for public comment and further refinement. The profile marks a significant step toward closing the gap between AI innovation and cybersecurity practice in a rapidly evolving technological landscape.