Microsoft has released Counterfit, an open-source tool that helps developers test the security of artificial intelligence (AI) systems.
Microsoft has published the Counterfit project on GitHub, following an earlier study which found that most organizations lack the tools to deal with adversarial machine learning.
“This tool was born out of our own need to assess Microsoft’s AI systems for vulnerabilities with the goal of proactively securing AI services, in accordance with Microsoft’s responsible AI principles and Responsible AI Strategy in Engineering (RAISE) initiative,” Microsoft said in a blog post.
SEE: Building the bionic brain (free PDF) (TechRepublic)
Microsoft describes the command-line tool as a generic automation layer for attacking multiple AI systems at scale. Microsoft’s own red team operations use it to test the company’s AI models, and Microsoft is also exploring the use of Counterfit during the AI development phase.
The tool can be run from a browser via Azure Cloud Shell, or installed locally in an Anaconda Python environment.
Microsoft says the tool can assess models hosted in any cloud environment, on-premises, or on edge networks. Counterfit also aims to be model-agnostic and data-agnostic, applicable to models that use text, images, or generic input.
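That model-agnostic design can be illustrated with a minimal black-box attack loop that needs nothing but a `predict()` callable, never the model internals. Everything below (the `random_evasion` helper, the toy threshold model) is an illustrative sketch, not Counterfit’s actual API:

```python
import random

def random_evasion(predict, x, target_label, step=0.1, tries=200, seed=0):
    """Black-box evasion sketch: randomly nudge the input until the
    model's label flips away from target_label, or give up after
    `tries` attempts. Only predict()'s output label is observed."""
    rng = random.Random(seed)
    current = list(x)
    for _ in range(tries):
        candidate = [v + rng.uniform(-step, step) for v in current]
        if predict(candidate) != target_label:
            return candidate  # adversarial example found
        current = candidate
    return None  # attack failed within the budget

# Toy "hosted" model: the attacker treats this as an opaque endpoint.
model = lambda features: 1 if sum(features) > 1.0 else 0

x = [0.51, 0.51]                 # clean input, classified as 1
adv = random_evasion(model, x, target_label=1)
```

Because the loop only consumes input vectors and output labels, the same code would work whether the model runs in the cloud, on-premises, or on an edge device, which is the property Counterfit is aiming for.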
“Our tool makes published attack algorithms accessible to the security community and provides an extensible interface for building, managing, and launching attacks on AI models,” Microsoft said.
The tool can help guard against adversarial machine learning, where an attacker tricks a machine learning model with manipulated input data. One example is McAfee’s hack of older Tesla vehicles fitted with Mobileye cameras, in which researchers stuck black tape on a speed-limit sign so the system misread the speed limit. Another was Microsoft’s Tay chatbot fiasco, where the bot was manipulated into tweeting racist comments.
Its workflow is modeled on popular cybersecurity frameworks such as Metasploit and PowerShell Empire.
“The tool comes preloaded with published attack algorithms that can be used to bootstrap red-team operations to evade and steal AI models,” Microsoft explains.
The tool can also scan AI systems for vulnerabilities and create logs that record attacks against a target model.
SEE: Facial recognition: Don’t use it to snoop on how staff are feeling, says watchdog
Microsoft has tested Counterfit with several customers, including aerospace giant Airbus, which is developing an AI platform on Azure AI services.
“AI is increasingly used in industry; it is vital to look ahead to securing this technology, particularly to understand where feature-space attacks can be realized in the problem space,” said Matilda Rhode, senior cybersecurity researcher at Airbus, in a statement.
“The release of open-source tools from an organization such as Microsoft for security practitioners to evaluate the security of AI systems is both welcome and a clear indication that the industry is taking this problem seriously.”
Microsoft’s new open source tools can prevent AI from being hacked