Agencies in the United States, New Zealand, Australia, Canada and the United Kingdom have issued joint cybersecurity guidance to protect artificial intelligence systems from cyberthreats.
The document describes how cyber actors often combine AI and traditional IT infiltration techniques to enable complex cyberattacks that bypass layers of defenses. It also provides best practices, based on the U.S. government’s Cybersecurity Performance Goals, for securing AI deployment environments and AI systems.
Such practices include making the same person responsible and accountable for both the AI system and the organization’s overall cybersecurity, ensuring that an AI system operates within the organization’s risk tolerance, requiring the primary AI system developer to create a threat model, and protecting all data sources used to train or enhance AI models.
The agencies that authored the report acknowledged that the best practices may not be applicable to all organizations or environments. Attack methods vary by cyber actor, so organizations should consider their own use cases and threat profiles when applying the recommendations.
The guidance authors are the U.S. Cybersecurity and Infrastructure Security Agency, National Security Agency and FBI, the Australian Cyber Security Centre, the Canadian Centre for Cyber Security, the United Kingdom’s National Cyber Security Centre and New Zealand’s National Cyber Security Centre.