Charles Leaver – Be Prepared For These Consequences When Machine Learning Takes Hold

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver

History offers many examples of severe unintended consequences when a new technology is introduced. It often surprises people that new technologies can serve malicious purposes as well as the beneficial ones for which they were brought to market, but it happens regularly.

Consider train robbers using dynamite (“You think you used enough dynamite there, Butch?”) or spammers using email. More recently, the use of SSL to hide malware from security controls has become more common precisely because the legitimate use of SSL has made the technique more practical.

Since new technology is routinely appropriated by bad actors, we have no reason to believe the same will not be true of the new generation of machine learning tools that have reached the market.

How might these tools be misused? There are probably a few ways attackers can turn machine learning to their advantage. At a minimum, malware authors will test their new malware against the new class of advanced threat protection products, tweaking their code until it is less likely to be flagged as malicious. The effectiveness of protective security controls always has a half-life because of this kind of adversarial learning.

An understanding of machine learning defenses will also help attackers degrade those defenses proactively. For example, an attacker might flood a network with fake traffic in the hope of “poisoning” the machine learning model being built from that traffic. The attacker’s goal would be to trick the defender’s tool into misclassifying traffic, or to generate so many false positives that the defenders dial back the sensitivity of their alerts.
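To make the poisoning idea concrete, here is a minimal sketch in Python. It assumes a toy setup: synthetic four-feature “traffic” vectors and an off-the-shelf scikit-learn classifier standing in for a real detection model. The feature counts, data volumes, and poison ratio are all illustrative, not drawn from any real product or attack.

```python
# Toy illustration of training-data poisoning. Everything here is a
# simplified assumption: synthetic features, a linear model, and an
# attacker who can inject benign-labeled records into the training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training data: benign traffic clusters near 0, malicious near 1.
benign = rng.normal(0.0, 0.3, size=(500, 4))
malicious = rng.normal(1.0, 0.3, size=(500, 4))
X_clean = np.vstack([benign, malicious])
y_clean = np.array([0] * 500 + [1] * 500)

# The attacker floods the collection pipeline with crafted flows that sit
# in the "malicious" region of feature space but get recorded as benign.
poison = rng.normal(1.0, 0.3, size=(700, 4))
X_poisoned = np.vstack([X_clean, poison])
y_poisoned = np.concatenate([y_clean, np.zeros(700, dtype=int)])

# Held-out malicious samples to measure each model's detection rate.
X_test = rng.normal(1.0, 0.3, size=(200, 4))

clean_model = LogisticRegression().fit(X_clean, y_clean)
poisoned_model = LogisticRegression().fit(X_poisoned, y_poisoned)

print("detection rate, clean model:   ", clean_model.predict(X_test).mean())
print("detection rate, poisoned model:", poisoned_model.predict(X_test).mean())
```

In this toy setup the poisoned model’s detection rate collapses because the benign-labeled flood outnumbers the true malicious samples in the same region of feature space. Real models and data pipelines are harder to sway, but the mechanism is the same.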

Machine learning will likely also be used by attackers as an offensive tool. For example, some researchers predict that attackers will use machine learning techniques to refine their social engineering attacks (e.g., spear phishing). Automating the effort required to tailor a social engineering attack to its target is especially troubling given how effective spear phishing already is. The ability to mass-customize these attacks gives attackers a powerful economic incentive to adopt the techniques.

Expect the kind of breaches that deliver ransomware payloads to increase sharply in 2017.

The need to automate tasks is a significant driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and to increase operational tempo. While the technology will increasingly become a standard element of defense-in-depth strategies, it is not a silver bullet. It should be understood that attackers are actively working on evasion techniques against machine learning based detection products while also using machine learning for their own offensive purposes. This arms race will require defenders to achieve incident response at machine speed, further sharpening the need for automated incident response capabilities.
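Evasion can also be sketched in a few lines. The example below reuses the toy traffic setup from the poisoning sketch and assumes the attacker can query the trained model and freely perturb feature values, which is a strong simplification: real features carry functional constraints, and real detectors are not simple linear models.

```python
# Toy illustration of classifier evasion. Assumes query access to the
# model and unconstrained feature perturbation (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Same toy setup: benign traffic near 0, malicious near 1 in feature space.
X = np.vstack([rng.normal(0.0, 0.3, (500, 4)),
               rng.normal(1.0, 0.3, (500, 4))])
y = np.array([0] * 500 + [1] * 500)
model = LogisticRegression().fit(X, y)

# Start from a malicious sample that the model correctly flags.
sample = rng.normal(1.0, 0.3, (1, 4))

# For a linear model, moving against the weight vector lowers the
# malicious score fastest; take small steps until the label flips.
direction = -model.coef_ / np.linalg.norm(model.coef_)
steps = 0
while model.predict(sample)[0] == 1 and steps < 1000:
    sample += 0.05 * direction
    steps += 1

print(f"evaded after {steps} steps; total perturbation: {0.05 * steps:.2f}")
```

Because the model here is linear, its weight vector directly reveals the cheapest direction to move. Against more complex models, attackers approximate the same idea with repeated queries or surrogate models, which is why defenders cannot treat any single trained model as a durable control.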