AIhub.org
 

Launch of a new standard for AI security in Singapore

The adoption of artificial intelligence (AI) in applications ranging from autonomous vehicles to AI-assisted medical diagnosis has accelerated in recent years. From 2018 to 2020, the percentage of organisations deploying AI increased five-fold globally.

While the adoption of AI brings numerous benefits, cybersecurity attacks such as hacking pose a significant risk to AI systems, especially in applications where attackers could gain access to confidential information or cause automated systems to malfunction.

Answering the call to protect the integrity of AI programmes and create trust in AI solutions, a team of NTU researchers and AI leaders has launched a new standard on AI security.

Unveiled on 16 March 2022 at the event "AI Security Standard Launch Singapore TR 99:2021 | Growth opportunities for government & industry adopting trustworthy AI", and published by Enterprise Singapore's Standards Consortium, the standard was developed from research led by NTU scientists Prof Liu Yang of NTU's School of Computer Science and Engineering, former research fellow Dr Xiaofei Xie and PhD candidate Mr David Berend.

Spearheading the standard together with NTU are Mr Feng-Yuan Liu, Vice President of Aicadium, a global technology company founded by Temasek, Mr Soon Chia Lim, Director of the Cybersecurity Engineering Centre at the Cyber Security Agency of Singapore (CSA), Mr Laurence Liew, Director of Innovations at AI Singapore, Dr Aik Beng Ng, Regional Manager at NVIDIA AI Technology Centre, Dr Jianshu Weng, Head of Data Science at Chubb and Mr Gerry Chng, Executive Director at Deloitte.

A milestone that places Singapore amongst the first countries in the world to steer advances in AI security, the standard will be used to guide global standardisation strategies in this area through the International Organization for Standardization (ISO).

From left to right: Mr Lim Soon Chia, Director of the Cybersecurity Engineering Centre at the Cyber Security Agency of Singapore, Mr Feng-Yuan Liu, Vice President of Aicadium, Mr Laurence Liew, Director of Innovations at AI Singapore, Prof Liu Yang, Mr David Berend and Dr Aik Beng Ng, Regional Manager at NVIDIA AI Technology Centre.

Protecting AI systems from security threats

Developed over one year, with input from 30 AI and security professionals from industry, academia and government, the new standard explains the various threats that AI systems may encounter, the assessment measures for evaluating the security of an AI algorithm and the approaches that AI practitioners can take to mitigate such attacks.

To illustrate the importance of secure AI systems, the standard highlights four case studies where security breaches could have disastrous consequences: content filters on social media platforms that flag offensive content; credit scoring systems that protect individuals and credit institutions; AI-enabled disease diagnosis systems; and systems that detect and protect computers from malicious software.

If these AI systems fail, there may be severe repercussions on the lives of individuals. For example, users may be exposed to extremist messages on social media platforms, receive a wrong diagnosis or be given an inaccurate credit score.

“By providing advice on the necessary defences and assessments to make AI applications more secure, we aim to create trust in AI for AI practitioners,” said Prof Liu, who, together with his team of researchers, initiated the working group and laid the foundations for the standard.

“At the same time, we hope that consumers will feel more confident in using AI solutions that have been certified with the standard,” said Mr Berend, who led the working group with Prof Liu.

The team now aims to validate and put the standard into operation in Singapore and around the world.




Nanyang Technological University, Singapore





©2024 - Association for the Understanding of Artificial Intelligence