How AI-powered malware uses facial recognition technology

TechRepublic’s Dan Patterson spoke with Marc Ph. Stoecklin of IBM Research about a proof of concept for AI-powered malware that uses facial recognition to unlock its payload only for a specific target.

Marc Ph. Stoecklin: What we show in this proof of concept is AI-powered malware delivered through a distribution channel that uses an unsuspicious, innocent-looking application. For this purpose we use a videoconferencing application that we call Talk. We download this application, the user opens it from his downloads, and it runs.

It’s behaving normally. We have the sign-in screen. Now, the application can be used as if it were a normal application. Indeed, it is a normal application; it is fully usable at that point. However, what we’re going to see now is that if we move the laptop to look at Dan’s face, the behavior will suddenly change.

SEE: IT leader’s guide to the threat of fileless malware (Tech Pro Research)

What happened now is that the AI model picked up on Dan’s face and derived a key from it. It used Dan’s face as a key, basically, to determine how and when to unlock that malware. That makes it very evasive and very targeted: only Dan, using this application, is shown the malicious behavior.
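Conceptually, the “face as a key” step can be illustrated with a short sketch. This is an assumption-laden illustration, not the mechanism of the actual proof of concept: the face_recognition library, the coarse quantization of the embedding, the Fernet encryption, the file name target_reference_photo.jpg, and the harmless demo payload are all stand-ins chosen for clarity.

```python
import base64
import hashlib

import numpy as np
import face_recognition
from cryptography.fernet import Fernet


def key_from_encoding(encoding: np.ndarray) -> bytes:
    """Map a 128-d face embedding to a symmetric (Fernet) key."""
    # Coarse quantization so small variations between captures of the same
    # person still collapse to the same byte string before hashing.
    quantised = np.round(encoding * 4).astype(np.int8).tobytes()
    return base64.urlsafe_b64encode(hashlib.sha256(quantised).digest())


# At "build" time the author derives a key from a reference photo of the
# target and encrypts a (here harmless) payload under it. Neither the photo
# nor the key needs to ship with the application -- only the opaque blob.
reference = face_recognition.load_image_file("target_reference_photo.jpg")
target_encoding = face_recognition.face_encodings(reference)[0]
locked_blob = Fernet(key_from_encoding(target_encoding)).encrypt(
    b"demo payload: only unlocked for the target face"
)
```

In this sketch, only the encrypted blob travels with the application, so inspecting the application reveals neither the payload nor whose face would unlock it.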

The AI inspects what the webcam sees and derives a key to unlock the malicious payload. Only if the specific person the AI has been trained to recognize appears in front of the webcam can the key be derived, and only then does the malicious behavior show up.
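A rough sketch of that webcam-gated unlocking loop might look as follows, building on key_from_encoding and locked_blob from the sketch above. OpenCV’s VideoCapture, the one-second polling interval, and the rest of the scaffolding are again assumptions made for illustration, not the behavior of the actual proof of concept.

```python
import time

import cv2
import face_recognition
from cryptography.fernet import Fernet, InvalidToken


def watch_webcam(locked_blob: bytes, poll_seconds: float = 1.0) -> bytes:
    """Poll the default webcam until a frame yields a key that opens the blob."""
    cap = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cap.read()
            if ok:
                # face_recognition expects RGB images; OpenCV delivers BGR frames.
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                for encoding in face_recognition.face_encodings(rgb):
                    key = key_from_encoding(encoding)  # derivation from the sketch above
                    try:
                        # Wrong face -> wrong key -> InvalidToken; the application
                        # keeps behaving like an ordinary videoconferencing client.
                        return Fernet(key).decrypt(locked_blob)
                    except InvalidToken:
                        pass
            time.sleep(poll_seconds)
    finally:
        cap.release()
```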

Image: Dan Patterson
