IBM Launches Open-source Toolkit to Safeguard AI Systems

On April 17, 2018, IBM Security announced the launch of an open-source toolkit at the RSA Conference in San Francisco. The library consists of framework-agnostic software containing attacks, defenses, and benchmarks for safeguarding artificial intelligence systems. Known as the Adversarial Robustness Toolbox, it is designed to help the open-source community protect artificial intelligence (AI) systems and deep neural networks (DNNs).

The toolkit will enable the cyber-security community and developers to test AI-based security defenses against strong and complex attacks, helping to build resilience and dependability into their systems.

Weaknesses in AI systems can be exploited through very subtle means. According to the tech giant, the alterations used to achieve this are often simple, small, and virtually undetectable changes to content such as videos, images, and audio recordings, crafted specifically to confuse an AI system. Even small changes of this kind can cause major security problems and degrade the performance of an AI system.

A good example: if an AI system's main task is to control traffic, an attacker could subtly alter a stop sign so that the system misreads it as a 70 mph speed-limit sign. The alteration could be made digitally, for instance inside a mapping application, or physically on the sign itself.
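
To make the idea of a small, almost invisible alteration concrete, the snippet below is a minimal, self-contained sketch in plain NumPy (not the IBM toolbox itself) of the fast-gradient-sign trick that many such attacks rely on: a perturbation capped at a tiny per-feature budget is enough to push an input across a toy classifier's decision boundary. The weights and numbers are invented purely for illustration.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy logistic classifier with fixed, made-up weights: class 0 vs. class 1.
    w = np.array([1.2, -0.8, 0.5])
    b = -0.1

    x = np.array([0.30, 0.55, 0.20])   # clean input, sits just on the class-0 side
    p = sigmoid(w @ x + b)
    print("clean score:", p, "-> class", int(p > 0.5))   # ~0.48 -> class 0

    # For true label 0 the loss gradient w.r.t. x is p * w, so the fast-gradient-sign
    # step is eps * sign(w): tiny per feature, but aimed straight at the boundary.
    eps = 0.12
    x_adv = x + eps * np.sign(w)
    p_adv = sigmoid(w @ x_adv + b)
    print("perturbed score:", p_adv, "-> class", int(p_adv > 0.5))   # ~0.55 -> class 1
    print("largest change to any feature:", np.max(np.abs(x_adv - x)))   # 0.12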

The open-source community can now use this toolbox as a repository and a source of information on how to fight threats against current and future AI systems. The Adversarial Robustness Toolbox focuses on combating adversarial AI by recording threat data and helping developers create, benchmark, and deploy practical defenses for real-world AI systems.

Features of the toolbox include the following; a short usage sketch follows the list:

  • A library of adversarial attacks and practical defenses, implemented in Python
  • Interfaces to popular deep-learning frameworks such as TensorFlow and Keras
  • Metrics for benchmarking the robustness of AI models
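
As a rough illustration of how those pieces fit together, here is a minimal sketch of wrapping a model with the toolbox and running one of its attacks against it. The module paths and class names (PyTorchClassifier, FastGradientMethod) follow the ART 1.x layout as we understand it, and the model and data are random placeholders, so treat this as an outline rather than the project's official example; the GitHub repository carries the authoritative usage.

    # pip install adversarial-robustness-toolbox torch   (assumed package names)
    import numpy as np
    import torch.nn as nn

    from art.estimators.classification import PyTorchClassifier   # ART 1.x layout (assumption)
    from art.attacks.evasion import FastGradientMethod

    # A tiny stand-in network; in practice you would load your trained model.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    classifier = PyTorchClassifier(
        model=model,
        loss=nn.CrossEntropyLoss(),
        input_shape=(1, 28, 28),
        nb_classes=10,
        clip_values=(0.0, 1.0),
    )

    # Random batch standing in for real test data (e.g. MNIST images and labels).
    x_test = np.random.rand(16, 1, 28, 28).astype(np.float32)
    y_test = np.eye(10)[np.random.randint(0, 10, 16)]

    # Craft adversarial examples with a small L-infinity budget and benchmark the drop.
    attack = FastGradientMethod(estimator=classifier, eps=0.1)
    x_adv = attack.generate(x=x_test)

    clean_acc = np.mean(np.argmax(classifier.predict(x_test), 1) == np.argmax(y_test, 1))
    adv_acc = np.mean(np.argmax(classifier.predict(x_adv), 1) == np.argmax(y_test, 1))
    print(f"accuracy on clean inputs: {clean_acc:.2f}, on adversarial inputs: {adv_acc:.2f}")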

Introducing the toolkit to the open-source community may inspire others to create solutions before adversarial AI becomes a true threat. IBM's researchers were motivated to develop the library largely by the lack of adequate defenses for AI systems; existing tools did not offer the security needed to protect them. The Adversarial Robustness Toolbox is therefore the first, and so far only, AI library that brings together attacks, defenses, and benchmarks to deliver improved security.

Other announcements made by IBM this week include:

  1. The introduction of machine-learning and AI orchestration capabilities for the IBM Resilient incident response platform.
  2. The launch of IBM X-Force Threat Management Services, which harness the same technologies to analyze and detect cyber-security threats across enterprise networks.

The toolbox is now available for download on GitHub. Based on our research so far, libraries that have tested their AI systems have mostly managed to collect attacks; effective and appropriate defenses still need to be applied in order to actually improve those systems.

How effective is the toolbox?

  • It runs multiple attacks against an AI system, and the security team tasked with hardening the system chooses the most effective defense against each attack.
  • It works by trying to trick the AI with intentionally modified external data. The modified data sent against the AI looks only slightly “fuzzy”, yet it causes the AI to misclassify it (see the sketch after this list).
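
Continuing the hedged sketch shown after the feature list (it reuses the model, x_adv, and y_test objects defined there), the snippet below outlines how one of the library's defenses could be attached to the same model so the adversarial batch can be re-scored and the accuracy compared with and without the defense. SpatialSmoothing and the preprocessing_defences argument are taken from the ART 1.x API as we understand it; the exact names and signatures should be checked against the repository.

    from art.defences.preprocessor import SpatialSmoothing   # ART 1.x layout (assumption)

    # Same stand-in model as before, with a median-filter preprocessing defence attached.
    defended = PyTorchClassifier(
        model=model,
        loss=nn.CrossEntropyLoss(),
        input_shape=(1, 28, 28),
        nb_classes=10,
        clip_values=(0.0, 1.0),
        preprocessing_defences=[SpatialSmoothing(window_size=3, channels_first=True)],
    )

    # Re-score the adversarial batch crafted earlier; a smaller accuracy drop here
    # than on the undefended classifier is the signal that the defence helps.
    def_acc = np.mean(np.argmax(defended.predict(x_adv), 1) == np.argmax(y_test, 1))
    print(f"accuracy on adversarial inputs with smoothing defence: {def_acc:.2f}")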

Verdict

This open-source toolkit is essential, and the cyber-security industry must work as a team: collaborative defense is currently the only available way for security teams and developers to stay ahead of adversarial AI threats.
