
PingWest Tech Innovators Conference: AI attacks call for AI shields

Zijing Fu

Posted on December 10, 2021, 6:18 pm. Editor: Wang Boyuan

If you’ve ever seen the movie “Escape Plan,” you know that every security measure has its loopholes. At PingWest's annual Tech Innovators Conference, cybersecurity expert Jiayu Tang from RealAI addressed this crucial topic.

In the digital age, that statement applies to AI technologies such as image recognition, which have been widely adopted across industries and in many high-stakes scenarios, including mobile payments, security cameras, and autonomous driving. Just like human security guards, image recognition models can be deceived, tricked into neglecting threats, and even into opening backdoors for illegal activities.

Every minute, $2,900,000 is lost to cybercrime, and top companies pay $25 per minute due to cybersecurity breaches, according to RiskIQ’s 2019 research findings.

According to RealAI, an AI tech company providing services such as smart risk control and management, the company can simulate attacks on image recognition systems with targeted adversarial perturbations, making certain objects invisible to the model or appear as something completely different. For example, by adding specific graphics on top of an image, RealAI can activate a backdoor in an image recognition system, making it classify guns and bullets as normal, non-threatening items. Yet the image of the guns and bullets is barely altered; to the human eye, the objects remain clearly recognizable.

With a yellow rectangle overlaid on the photo of a gun, the AI judges the picture on the right to be harmless.
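RealAI has not published the details of its attack, but the core idea of a patch-based backdoor can be illustrated with a toy sketch. Everything below is hypothetical: `toy_classifier` is a stand-in for a real image recognition model whose backdoor trigger is a small yellow region, not an actual RealAI or production system.

```python
import numpy as np

def overlay_patch(image, patch, top, left):
    """Paste a small patch onto a copy of the image (pixel values in [0, 1])."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

def toy_classifier(image):
    """Hypothetical backdoored model: it flags the image as a threat
    unless a bright yellow region (the trigger) covers >1% of the pixels."""
    # "Yellow" pixels: high red and green channels, low blue channel.
    yellow = (image[..., 0] > 0.9) & (image[..., 1] > 0.9) & (image[..., 2] < 0.1)
    return "harmless" if yellow.mean() > 0.01 else "threat"

# A dark 64x64 RGB stand-in for a photo of a gun.
gun_photo = np.full((64, 64, 3), 0.2)
print(toy_classifier(gun_photo))      # -> threat

# A solid yellow 8x8 patch, covering only ~1.6% of the image.
patch = np.zeros((8, 8, 3))
patch[..., :2] = 1.0                  # red + green = yellow
patched = overlay_patch(gun_photo, patch, 0, 0)
print(toy_classifier(patched))        # -> harmless
```

The point of the sketch is that the patch changes under 2% of the pixels, so a human still plainly sees the gun, while the model's output flips entirely. Real backdoors are buried in learned weights rather than an explicit rule, but the attacker's leverage is the same.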

When cybersecurity problems are rooted in AI, they need to be fixed by AI. According to the International Peace Institute, AI has the potential to expand the reach of spotting and defending against cyberattacks. For example, Google’s machine learning algorithms are capable of blocking 99.9 percent of spam and phishing messages on Gmail.
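Google has not disclosed Gmail's internals, but the kind of machine learning filter the paragraph describes can be sketched with a tiny naive Bayes classifier. The training messages below are invented examples; a real system would train on billions of labeled emails and far richer features.

```python
from collections import Counter
import math

# Hypothetical labeled corpus (real filters train on vastly more data).
SPAM = ["win free money now", "free prize claim now", "claim free money"]
HAM = ["meeting at noon", "project update attached", "lunch at noon today"]

def word_counts(messages):
    counts = Counter()
    for m in messages:
        counts.update(m.split())
    return counts

spam_counts, ham_counts = word_counts(SPAM), word_counts(HAM)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(message, counts):
    total = sum(counts.values())
    # Laplace smoothing so unseen words don't zero out the probability.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in message.split())

def classify(message):
    spam_score = log_likelihood(message, spam_counts)
    ham_score = log_likelihood(message, ham_counts)
    return "spam" if spam_score > ham_score else "ham"

print(classify("claim your free prize"))   # -> spam
print(classify("noon meeting update"))     # -> ham
```

Even this toy version captures the mechanism: each word shifts the message's score toward whichever class used that word more often in training, and the higher-scoring class wins.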

Similarly, to tackle the loopholes found by its simulated attacks, RealAI has developed an artificial intelligence safety platform, RealSafe, which provides analysis of AI products, safety enhancement solutions, and risk assessments.

In addition, RealAI has developed a facial recognition AI firewall, enhancing security in scenarios such as unlocking smart devices, verifying personal IDs such as passports, and face scanning at the entrances of public establishments. “The firewall is the industry's first product dedicated to comprehensive reinforcement of face recognition systems,” said Jiayu Tang, Vice President of RealAI.

RealAI holds over 10 patents and has collaborated with government departments such as the Ministry of Industry and Information Technology, the Ministry of Science and Technology, the Ministry of Public Security, and the State Administration of Radio, Film and Television. RealAI has also provided services to tech companies such as Ant Financial and Huawei, according to Tang.

According to a report by Brandessence Market Research and Consulting, the global cybersecurity market is forecast to be valued at $403 billion by 2027.

The value of artificial intelligence in cybersecurity will surpass $100 billion by 2030, with the artificial intelligence cybersecurity market growing at a compound annual growth rate of 25.7% during 2020-2030, according to a Research And Markets report mentioned by Tong Chao, Head of Product and External Cooperation at 360 AI Research Institute.

As criminals have drawn on AI to commit cybercrimes, such as mimicking a CEO’s voice to scam $243,000 from a company, firms like Qihoo 360, one of China's largest cybersecurity companies, have been developing adversarial and defensive machine learning models. Qihoo 360 does so with the aid of its three machine learning platforms, “XLearning,” “Prophet,” and “Perception.”

In research conducted by the 360 AI Lab, multiple apps and facial recognition devices were found to carry model data leakage security risks. Accordingly, Qihoo 360’s AI technology has been applied to malicious code detection, malicious traffic analysis, vulnerability mining, botnet detection, black and gray market activity detection, and other fields.

"We hope to help more companies in the AI security ecology, build it up into a healthier and faster growing market," said Tang at the conference.