Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the machine-learning startup PatternEx have demonstrated an artificial intelligence platform that predicts cyber-attacks with 85% accuracy. Named AI2, the platform is roughly three times better than present security systems, and also reduces the number of false positives by a factor of five.
Drawbacks of present security systems
Present security systems are either human- or machine-centric. Human-based security systems rely on rules created by human experts and therefore miss any attacks that don’t match those rules. Machine-reliant systems, by contrast, rely on “anomaly detection,” which tends to trigger false positives that both create distrust of the system and end up having to be investigated by humans anyway.
However, AI2 is a platform that makes use of human intelligence and machine capabilities simultaneously. It predicts cyber-attacks significantly better than existing systems by continuously incorporating inputs from human experts. (The name comes from merging artificial intelligence with what the researchers call “analyst intuition.”)
How does AI2 predict cyberattacks?
AI2 combs through the available data and detects suspicious activity by clustering it into meaningful patterns using unsupervised machine learning. These patterns are then presented to security experts, who confirm which events are actual attacks; that feedback is incorporated into AI2’s models for the next set of data.
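The first, unsupervised step can be illustrated with a toy sketch. This is not PatternEx’s actual pipeline; the feature names are hypothetical, and a simple per-feature z-score stands in for the paper’s clustering methods. The idea is the same: rank events by how far they sit from the bulk of the data, then surface the top-ranked ones for analyst review.

```python
import numpy as np

# Toy event log (hypothetical features): [logins/hr, MB out, failed auths]
events = np.array([
    [5, 100, 1], [6, 95, 0], [4, 110, 2], [5, 102, 1],
    [6, 98, 0], [5, 105, 1], [40, 900, 25],  # injected anomaly
    [4, 99, 1], [6, 101, 0], [35, 850, 30],  # injected anomaly
], dtype=float)

# Score each event by its largest per-feature z-score
mu, sigma = events.mean(axis=0), events.std(axis=0)
scores = np.abs((events - mu) / sigma).max(axis=1)

# Surface the top-k highest-scoring events for analyst review
k = 2
top = np.argsort(scores)[-k:]
print(sorted(top.tolist()))  # -> [6, 9], the two injected anomalies
```

In a real deployment the events would be feature vectors extracted from raw logs, and the scoring would come from the system’s ensemble of unsupervised methods rather than a single z-score.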
“You can think about the system as a virtual analyst,” says CSAIL research scientist Kalyan Veeramachaneni, who developed AI2 with Ignacio Arnaldo, a chief data scientist at PatternEx and a former CSAIL postdoc. “It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly.”
AI2 is significantly better than prevailing artificial intelligence security systems because it improves itself without requiring much human involvement. Combining three different unsupervised-learning methods, AI2 shows the top events to analysts for them to label. It then builds a supervised model that it constantly refines through what the researchers call a “continuous active learning system.”
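The loop described above can be sketched in a few lines. This is a minimal illustration, not AI2’s actual models: the event features are invented, the unsupervised pass is a simple z-score ranking, and a tiny logistic regression stands in for the supervised model that the analyst feedback refines.

```python
import numpy as np

# Toy event features (hypothetical): [failed logins, MB transferred out]
X = np.array([[1, 10], [2, 12], [1, 11], [2, 9],
              [20, 300], [1, 10], [22, 310], [2, 11]], dtype=float)
truth = {4: 1, 6: 1}  # the analyst's ground truth: events 4 and 6 are attacks

# Standardize so one feature doesn't dominate scores or gradients
Z = (X - X.mean(axis=0)) / X.std(axis=0)

def sigmoid(v):
    return 1 / (1 + np.exp(-v))

# 1) Unsupervised pass: rank events by how far they sit from the rest
scores = np.abs(Z).max(axis=1)

# 2) Analyst labels the top-ranked events (the "analyst intuition" step)
top = np.argsort(scores)[::-1][:4]
y = np.array([truth.get(int(i), 0) for i in top], dtype=float)

# 3) Fit a tiny logistic-regression model on the labeled events
Zb = np.hstack([Z, np.ones((len(Z), 1))])
w = np.zeros(3)
for _ in range(500):
    p = sigmoid(Zb[top] @ w)
    w -= 0.5 * Zb[top].T @ (p - y) / len(y)

# 4) Re-score everything with the refined supervised model; the next
#    day's top events would go back to the analyst, closing the loop
scores = sigmoid(Zb @ w)
print(scores.round(2))
```

Repeating steps 2–4 on each new batch of data is the “continuous active learning” shape: every round of analyst labels sharpens the supervised model, which in turn picks better events to label next.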
Researchers say that AI2 will become more accurate each time it detects an attack. New attacks lead to more analyst feedback, which in turn improves the accuracy of future predictions.
Professors from the University of Notre Dame see AI2 as a security system that can prevent attacks such as fraud, service abuse, and account takeover, which are major challenges faced by consumer-facing systems.