Security should be a core requirement in developing artificial intelligence capabilities, according to Lindy Cameron, CEO of the National Cyber Security Centre in the U.K.
Appearing as a keynote speaker at the recent Chatham House Cyber 2023 conference, Cameron called on developers to anticipate security risks and embed mitigating measures when designing AI tools, the UK Defence Journal reported.
She warned that treating risks as an afterthought could lead to the development of vulnerable AI systems.
The NCSC chief said adopting a “secure by design” approach to the new technology would eliminate the need to retrofit security into AI systems later.
The strategy aligns with the Five Eyes intelligence alliance’s call for technology companies to integrate cybersecurity into their products and services from the start. The alliance comprises the United States, the United Kingdom, Canada, Australia and New Zealand.
During her speech, Cameron noted NCSC’s expertise in helping organizations better understand the cyber risks linked to AI. The center also supports partners’ efforts to deploy the technology for stronger cyber protection and to anticipate how hostile states and cybercriminals plan to exploit AI, the CEO said.
She stressed that adversaries are expected to use the emerging technology to advance their own tradecraft.