Adopting a secure-by-design approach for AI systems can be challenging, as it requires specialized skills and may involve significant costs.
Singapore has rolled out new cybersecurity measures to safeguard AI systems against traditional threats like supply chain attacks and emerging risks such as adversarial machine learning, including data poisoning and evasion attacks.
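To make the evasion-attack risk concrete, the toy sketch below (not drawn from the CSA guide; the classifier, weights, and perturbation size are all hypothetical) shows how a small, deliberate shift to an input's features, guided by the model's own parameters, can flip a linear classifier's decision, in the spirit of gradient-based evasion techniques:

```python
# Toy illustration of an evasion attack: perturb an input against the
# gradient of a linear classifier's score to push it across the
# decision boundary. All values here are hypothetical.

def classify(x, w, b):
    """Linear classifier: returns 1 if w.x + b > 0, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def evade(x, w, eps):
    """Shift each feature by eps in the direction that lowers the
    score (the sign of the corresponding weight), nudging the sample
    toward the opposite class."""
    return [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

w, b = [0.6, -0.4], 0.0            # hypothetical model weights
x = [0.5, 0.2]                     # benign input, classified as 1
x_adv = evade(x, w, eps=0.5)       # small adversarial perturbation

print(classify(x, w, b))           # 1 (original prediction)
print(classify(x_adv, w, b))       # 0 (prediction flipped)
```

Real-world evasion attacks target far more complex models, but the principle is the same: the attacker exploits knowledge of (or queries against) the model to craft inputs that are misclassified while appearing benign.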
In its Guidelines and Companion Guide for Securing AI Systems, Singapore’s Cyber Security Agency (CSA) stressed that AI systems must be secure by design and secure by default, like other digital systems.
This approach aims to help system owners manage security risks from the start, the agency added.
“To reap the benefits of AI, users must have confidence that the AI will behave as designed, and outcomes are safe and secure,” the CSA said in the guide. “However, in addition to safety risks, AI systems can be vulnerable to adversarial attacks, where malicious actors intentionally manipulate or deceive the AI …