Microsoft has released an open framework to help defend machine learning (ML) systems against security threats. In an accompanying blog post, the company explains that cyberattacks against machine learning systems are more common than you might think.
Microsoft on attacks on machine learning systems
Even though machine learning systems see widespread adoption in critical areas such as finance, healthcare, and defense, businesses have not paid close attention to their security, something Microsoft is now looking to address.
Over the last few years, Microsoft has witnessed a steady growth in attacks on commercial ML systems. A Gartner study predicts that by 2022, 30 percent of all AI cyberattacks will “leverage training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems.” Microsoft’s survey of 28 businesses found that most of them lack the right tools to safeguard their ML systems.
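To make training-data poisoning, one of the attack classes the Gartner quote names, concrete: an attacker who can tamper with labels in a training set can shift a model's decision boundary. The toy sketch below (all data and the simple gradient-descent classifier are invented for illustration, not taken from Microsoft's post) trains a 1-D logistic classifier twice, once on clean labels and once after the attacker flips the labels of the two class-1 points nearest the boundary:

```python
import numpy as np

# Hypothetical data-poisoning illustration: a 1-D toy dataset with a
# clean class split at x = 0. Everything here is made up for the sketch.
X = np.array([-3.0, -2.0, -1.0, 1.0, 2.0, 3.0])
y_true = np.array([0, 0, 0, 1, 1, 1])

def train(X, y, lr=0.5, steps=5000):
    """Plain gradient-descent logistic regression on 1-D inputs."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * X + b)))
        w -= lr * np.mean((p - y) * X)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return float(np.mean((w * X + b > 0).astype(int) == y))

# Clean training: the model separates the two classes.
w, b = train(X, y_true)
clean_acc = accuracy(w, b, X, y_true)

# Poisoned training: the attacker relabels the two class-1 points
# closest to the boundary, dragging the learned boundary into
# class 1's region, so the model misclassifies those points.
y_poisoned = y_true.copy()
y_poisoned[3] = 0   # x = 1.0
y_poisoned[4] = 0   # x = 2.0
wp, bp = train(X, y_poisoned)
poisoned_acc = accuracy(wp, bp, X, y_true)
```

The point of the sketch is that the attacker never touches the model or the inputs at inference time; corrupting a handful of training labels is enough to degrade accuracy on clean data.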
“Our survey pointed to marked cognitive dissonance especially among security analysts who generally believe that risk to ML systems is a futuristic concern. This is a problem because cyber attacks on ML systems are now on the uptick,” Microsoft said in its blog post.
Microsoft has partnered with MITRE on the Adversarial ML Threat Matrix, a framework meant to help security teams defend against the growing number of cyberattacks on ML systems. According to Microsoft, adversarial ML is already a significant area of academic research; the matrix is an attempt to collect known adversary techniques against ML systems, which differ from traditional threats to corporate networks.
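One well-known adversary technique of the kind the matrix catalogs is the adversarial example: a small, deliberate perturbation of an input that flips a model's prediction. The sketch below illustrates the idea in the spirit of the fast gradient sign method (FGSM) against a toy linear classifier; the weights and input are invented for illustration and are not from Microsoft's or MITRE's materials:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy "trained" model: predict class 1 when sigmoid(w.x + b) > 0.5.
# Weights, bias, and the input below are hypothetical.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

x = np.array([0.4, -0.3, 0.8])   # benign input, classified as class 1
y = 1.0                          # its true label

# Gradient of the logistic loss with respect to the input
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM step: nudge every feature in the direction that raises the loss
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print(sigmoid(w @ x + b))      # > 0.5: class 1 on the clean input
print(sigmoid(w @ x_adv + b))  # < 0.5: the perturbation flips the label
```

Against a real deployed model the attacker would estimate the gradient through queries rather than read it directly, but the mechanics are the same: tiny input changes, invisible to traditional network monitoring, that change the model's output.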
“We hope that the security community can use the tabulated tactics and techniques to bolster their monitoring strategies around their organization’s mission critical ML systems,” Microsoft added.
Microsoft also wants to develop and deploy its own ML systems securely, and the open framework will help it do so. The process will involve the Azure Trustworthy Machine Learning team, which routinely assesses the security posture of critical ML systems, along with product teams and front-line defenders from the Microsoft Security Response Center (MSRC).