Launch of a new standard for AI security in Singapore

The adoption of artificial intelligence (AI) in various applications, from autonomous vehicles to AI-assisted medical diagnoses, has accelerated in recent years. From 2018 to 2020, there was a five-fold increase globally in the percentage of organisations deploying AI.

While the adoption of AI brings numerous benefits, cybersecurity threats such as hacking pose significant risks to AI systems, especially in applications where attackers may gain access to confidential information or cause automated systems to malfunction.

Answering the call to protect the integrity of AI programmes and create trust in AI solutions, a team of NTU researchers and AI leaders has launched a new standard on AI security.

Unveiled on 16 March 2022 at the event “AI Security Standard Launch Singapore TR 99:2021 | Growth opportunities for government & industry adopting trustworthy AI”, and published by Enterprise Singapore’s Standards Consortium, the standard was developed from research led by NTU scientists Prof Liu Yang of NTU’s School of Computer Science and Engineering, former research fellow Dr Xiaofei Xie and PhD candidate Mr David Berend.

Spearheading the standard together with NTU are Mr Feng-Yuan Liu, Vice President of Aicadium, a global technology company founded by Temasek; Mr Soon Chia Lim, Director of the Cybersecurity Engineering Centre at the Cyber Security Agency of Singapore (CSA); Mr Laurence Liew, Director of Innovations at AI Singapore; Dr Aik Beng Ng, Regional Manager at NVIDIA AI Technology Centre; Dr Jianshu Weng, Head of Data Science at Chubb; and Mr Gerry Chng, Executive Director at Deloitte.

A milestone that places Singapore amongst the first countries in the world to steer advances in AI security, the standard will be used to guide global standardisation strategies in this area through the International Organization for Standardization (ISO).

From left to right: Mr Lim Soon Chia, Director of the Cybersecurity Engineering Centre at the Cyber Security Agency of Singapore; Mr Feng-Yuan Liu, Vice President of Aicadium; Mr Laurence Liew, Director of Innovations at AI Singapore; Prof Liu Yang; Mr David Berend; and Dr Aik Beng Ng, Regional Manager at NVIDIA AI Technology Centre.

Protecting AI systems from security threats

Developed over one year, with input from 30 AI and security professionals from industry, academia and government, the new standard explains the various threats that AI systems may encounter, the assessment measures for evaluating the security of an AI algorithm and the approaches that AI practitioners can take to mitigate such attacks.
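
To give a concrete flavour of the kind of assessment measure the standard describes, the sketch below estimates a classifier’s robustness to adversarial perturbations using the fast gradient sign method (FGSM), a common attack in the AI security literature. This example is illustrative only and is not drawn from TR 99:2021; the model and data loader are hypothetical placeholders.

```python
# Illustrative sketch (not from TR 99:2021): estimate how often a classifier
# withstands small adversarial perturbations crafted with FGSM.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Perturb inputs x in the direction that maximises the loss (FGSM)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Take a step of size epsilon along the sign of the input gradient,
    # then clamp back to the valid pixel range [0, 1].
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_accuracy(model, data_loader, epsilon=0.03):
    """Fraction of examples still classified correctly after the attack."""
    model.eval()
    correct, total = 0, 0
    for x, y in data_loader:
        x_adv = fgsm_attack(model, x, y, epsilon)
        with torch.no_grad():
            preds = model(x_adv).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.size(0)
    return correct / total

# Hypothetical usage:
# robustness = adversarial_accuracy(my_model, test_loader, epsilon=0.03)
# print(f"Accuracy under FGSM attack: {robustness:.1%}")
```

A low accuracy under such an attack would flag a model as needing mitigations of the kind the standard discusses before deployment.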

To illustrate the importance of secure AI systems, the standard highlights four case studies where security breaches could have disastrous consequences: content filters on social media platforms that flag offensive content; credit scoring systems that protect individuals and credit institutions; AI-enabled disease diagnosis systems; and systems that detect and protect computers from malicious software.

If these AI systems fail, there may be severe repercussions on the lives of individuals. For example, users may be exposed to extremist messages on social media platforms, receive a wrong diagnosis or be given an inaccurate credit score.

“By providing advice on the necessary defences and assessments to make AI applications more secure, we aim to create trust in AI for AI practitioners,” said Prof Liu, who initiated the working group and, together with his team of researchers, established the foundations that enabled this standard.

“At the same time, we hope that consumers will feel more confident in using AI solutions that have been certified with the standard,” said Mr Berend, who led the working group with Prof Liu.

The team now aims to validate and put the standard into operation in Singapore and around the world.




Nanyang Technological University, Singapore



