AI in cybersecurity: 3 contrasting views


I wrote the final section of this article, on how AI can be used by the bad guys against cybersecurity. It contrasts with two more positive views from my co-authors on how AI is helping us as cybersecurity practitioners.

And in a less squint-your-eyes format, here is my bit…

You have now read about all the remarkable pioneering implementations of Artificial Intelligence (AI) in cybersecurity for "The Good" and "The Bad". So let me now introduce you to the downright ugly face of artificial intelligence in our beloved field. We have all heard the cliché before, "cybersecurity is an arms race"; well, when it comes to AI, it really is. Our nemesis is moving quickly to weaponise AI against us, and here are just a few examples of how they are doing it.

Detecting vulnerabilities in source code: Open source code has always been perceived as a double-edged sword from a cybersecurity perspective. On the one hand, its transparent nature allows robust security checking by an extensive collection of open-source advocates, all keen to help ensure the application is secure. On the other hand, the bad guys can see it too, and if they spot a vulnerability in the code, they will keep quiet and compromise it: the so-called zero-day attack. With machine learning, they now have the means to do this more quickly and easily. A recent academic paper by Xin Li et al. at Beijing University [1] proposes a machine learning method that teaches a system safe programming patterns by subjecting it to many instances of known, mature, safe code. This learning process then creates rules for determining whether code is secure. If new code is subjected to these rules and fails, it is very likely to be vulnerable. Imagine the bad guys feeding masses of code snippets from GitHub through these algorithms; not a pleasant thought, is it?
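To make the idea concrete, here is a toy sketch of this kind of learned code triage. It is not the paper's actual minimum-intermediate-representation method, just a bag-of-tokens naive Bayes classifier, and every snippet and label below is invented for illustration:

```python
import math
import re
from collections import Counter

def tokenize(code):
    """Split a code snippet into crude lexical tokens."""
    return re.findall(r"[A-Za-z_]\w*|[^\sA-Za-z_]", code)

class SnippetClassifier:
    """Naive Bayes over token counts: 'safe' vs 'vuln'."""

    def __init__(self):
        self.counts = {"safe": Counter(), "vuln": Counter()}
        self.totals = {"safe": 0, "vuln": 0}

    def train(self, code, label):
        tokens = tokenize(code)
        self.counts[label].update(tokens)
        self.totals[label] += len(tokens)

    def _log_likelihood(self, code, label):
        # Add-one smoothing so unseen tokens do not zero out the score.
        vocab = len(set(self.counts["safe"]) | set(self.counts["vuln"]))
        return sum(
            math.log((self.counts[label][t] + 1) / (self.totals[label] + vocab))
            for t in tokenize(code)
        )

    def predict(self, code):
        return max(("safe", "vuln"),
                   key=lambda lbl: self._log_likelihood(code, lbl))

clf = SnippetClassifier()
# Mature, bounds-checked patterns labelled safe ...
clf.train("strncpy(dst, src, sizeof(dst) - 1);", "safe")
clf.train('snprintf(buf, sizeof(buf), "%s", name);', "safe")
# ... and classic unbounded-copy patterns labelled vulnerable.
clf.train("strcpy(dst, src);", "vuln")
clf.train('sprintf(buf, "%s", name);', "vuln")

print(clf.predict("strcpy(out, user_input);"))  # -> vuln
```

With two examples per class this is a parlour trick, but scale the training set to millions of labelled snippets and the same scheme starts flagging risky code automatically, for attacker and defender alike.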

Kamikaze malware: One of our principal weapons against malware is the ability to reverse engineer it and figure out precisely what it is doing. The process relies on specialised tools, including disassemblers, network analysers, debuggers and memory analysis tools. Naturally, though, nobody wants to execute malware in a production environment, so the analysis tools are usually bundled together into malware analysis sandboxes, isolating the malware from the engineer's operating system. In retaliation, malware developers include tests to see if the malware is operating in a sandbox environment; if it is, it modifies its intended operation or deletes itself to keep us all guessing how it works. Sneaky, eh? However, the researchers know these tricks and hook into the malware, fooling it into thinking it is on a real system; touché, bad guys! Now, though, the bad guys have AI and can train the malware to recognise the patterns of virtualised environments, and when it detects it is running in one, it will shut up shop. Checkmate, hackers win.
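The environment checks themselves are worth a sketch. Before ML, sandbox detection was a hand-weighted checklist of tells; a trained model essentially replaces the hand-picked weights below with learned ones. Every feature name and weight here is invented for illustration:

```python
# Hypothetical sandbox "tells" with hand-picked weights (in tenths);
# an ML-equipped sample would learn weights like these from data instead.
WEIGHTS = {
    "cpu_count_low":    3,  # 1-2 cores is rare on real modern hardware
    "ram_gb_low":       2,  # analysis VMs are often given very little RAM
    "uptime_short":     2,  # freshly booted snapshot
    "vm_vendor_string": 3,  # "VirtualBox"/"VMware"/"QEMU" in hw metadata
}

def sandbox_score(features):
    """Crude 0.0-1.0 likelihood that the current host is an analysis sandbox."""
    return sum(w for name, w in WEIGHTS.items() if features.get(name)) / 10

# A profile resembling a bare-bones analysis VM ...
vm_like = dict.fromkeys(WEIGHTS, True)
# ... versus a typical user workstation.
host_like = dict.fromkeys(WEIGHTS, False)

print(sandbox_score(vm_like))    # 1.0 -> play dead or self-delete
print(sandbox_score(host_like))  # 0.0 -> detonate
```

The arms-race point is that researchers can spoof any fixed checklist, but a model trained on thousands of real and virtual machine profiles is far harder to fool with a handful of hooks.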

IBM's DeepLocker: This is proof-of-concept AI malware designed by IBM, first showcased at Black Hat 2018 [2]. The malware is combined with benign software, such as an audio application, to avoid detection by security analysis and antivirus. It is also fused with target attributes: when these attributes are recognised, the malware is unlocked and the payload activated. The target recognition uses an AI neural net trained to detect traits of the target. These traits could be any combination of user activity, location, software environment, physical environment, or audio and visual identifiers, including face or voice recognition. With the target identified, the malicious payload, e.g. ransomware, is released. It brings to mind images of precision-guided smart missiles hitting their targets. The million-dollar question is, are such techniques in the wild now? The truth is, we don't know for certain, but one thing is for sure: they are most definitely coming for us.
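The "AI locksmithing" idea can be sketched in a few lines: the payload is encrypted under a key derived from the target's traits, so on any other machine there is simply no key to find. In this toy sketch a SHA-256 hash and an XOR stream stand in for DeepLocker's neural network and real encryption, and all the attribute strings are invented:

```python
import hashlib

def derive_key(attributes):
    """Derive a decryption key from observed target attributes.
    In DeepLocker the derivation is effectively a neural net: only the
    true target's traits produce the unlocking output."""
    return hashlib.sha256("|".join(sorted(attributes)).encode()).digest()

def xor_bytes(data, key):
    # Toy stream cipher for illustration only -- not real cryptography.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Attacker side: lock the payload under the intended victim's traits.
target = ["geo:51.5072,-0.1276", "user:alice", "face:embedding-0x3f2a"]
payload = b"malicious payload"
locked = xor_bytes(payload, derive_key(target))

# On a non-target machine the observed traits differ, so unlocking fails.
observed = ["geo:48.8566,2.3522", "user:bob", "face:embedding-0x91cc"]
assert xor_bytes(locked, derive_key(observed)) != payload

# Only when the observed traits match the target does the payload decrypt.
print(xor_bytes(locked, derive_key(target)))  # b'malicious payload'
```

This is also what makes DeepLocker-style malware so hard to analyse: a sandbox that never presents the target's traits only ever sees an opaque encrypted blob.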

[1] Li, X., Wang, L., Xin, Y., Yang, Y. and Chen, Y., 2020. Automated Vulnerability Detection in Source Code Using Minimum Intermediate Representation Learning. Applied Sciences, 10(5), p.1692.

[2] Stoecklin, M., Jang, J. and Kirat, D., 2020. DeepLocker: The Next Generation of Malware Using AI Locksmithing.
