Anticipated benefits of the collaboration between Cisco Foundation AI and Hugging Face include more rigorous model vetting, early detection of vulnerabilities, and shared threat intelligence.
The Foundation AI team at Cisco has partnered with AI model hub Hugging Face to bolster malware protection and strengthen security across the AI ecosystem.
“As part of this expanded collaboration, Cisco Foundation AI will provide the platform and scanning of every public file uploaded to Hugging Face — AI model files and other files alike — in a unified malware scanning capability powered by custom-fit detection capabilities in an updated ClamAV engine,” wrote Cisco’s Hyrum Anderson and Alie Fordyce in a blog post about the collaboration.
Cisco launched Foundation AI in April. It’s a team within Cisco Security, created after the acquisition of Robust Intelligence and focused on developing open-source models and tools for securing the AI supply chain. ClamAV is an open-source malware scanner from Cisco Talos that detects trojans, viruses, and other malicious threats targeting email gateways, file servers, and web servers.
“By combining Hugging Face’s central role in open-source AI with Cisco’s comprehensive malware scanning capabilities, this enables more rigorous model vetting, early detection of vulnerabilities, and shared threat intelligence — building greater trust and stronger security across the entire AI ecosystem,” Anderson and Fordyce wrote.
Hugging Face adds a new model on average every 7 seconds, and the platform hosts nearly 1.9 million models available to developers worldwide. Its scale is fueling a wave of innovation but also reinforcing the need to secure the AI supply chain, according to Anderson and Fordyce: “AI supply chain risks now permeate every stage of the AI lifecycle — from vulnerable software dependencies and malicious or backdoored model files to poisoned or non-compliant datasets. Given this complexity, it is increasingly challenging for any single organization to address these issues alone. Effective security of the AI landscape requires close collaboration across the community to secure AI.”
As a result of the collaboration, Cisco Foundation AI and Hugging Face “are democratizing AI model antimalware,” stated Anderson and Fordyce, citing two examples of new features that are available:
- ClamAV can now detect malicious code in AI models: “We are releasing this capability to the world. For free. In addition to its coverage of traditional malware, ClamAV can now detect deserialization risks in common model file formats such as .pt and .pkl (in milliseconds, not minutes). This enhanced functionality is available today for everyone using ClamAV,” Anderson and Fordyce wrote. (A short sketch of this class of deserialization risk follows the list.)
- ClamAV is focused on AI risk in VirusTotal: “ClamAV is the only antivirus engine to detect malicious models in both Hugging Face and VirusTotal – a popular threat intelligence platform that will scan uploaded models.”
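For readers unfamiliar with why pickle-based formats are risky, here is a minimal, self-contained sketch of the class of problem the updated ClamAV signatures target. It is a generic illustration, not Cisco’s detection logic: .pkl files, and the pickled objects inside .pt checkpoints, can instruct the Python unpickler to call arbitrary functions, so simply loading an untrusted model file can execute attacker code.

```python
# Generic illustration of the pickle deserialization risk described above.
# This is NOT Cisco's detection logic; it only shows why loading an
# untrusted .pkl (or the pickled payload inside a .pt checkpoint) is unsafe.
import pickle


class MaliciousPayload:
    """Object whose pickle stream tells the unpickler to run a command."""

    def __reduce__(self):
        import os
        # When this object is unpickled, pickle calls os.system("echo pwned").
        # "echo pwned" is a harmless stand-in for arbitrary attacker code.
        return (os.system, ("echo pwned",))


if __name__ == "__main__":
    # Writing the payload is harmless; the command only runs on pickle.load().
    with open("model.pkl", "wb") as f:
        pickle.dump(MaliciousPayload(), f)

    # The point of the ClamAV update is that a file like this can be scanned
    # before anything ever unpickles it, for example with the clamscan CLI:
    #   clamscan model.pkl
    # Whether this toy payload trips a signature depends on the local
    # signature database; treat the example as illustrative only.
```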
Prior Cisco-Hugging Face collaborations
An earlier tie-in between Cisco’s Foundation AI and Hugging Face helped produce Cerberus, an AI supply chain security analysis model. Cerberus analyzes models as they enter Hugging Face and shares the results in standardized threat feeds that Cisco Security products can use to build and enforce access policies for the AI supply chain, according to a blog from Nathan Chang, product manager with the Foundation AI team.
Cerberus technology is also integrated with Cisco Secure Endpoint and Secure Email to enable automatic blocking of known malicious files during read/write/modify operations, as well as email attachments that contain malicious AI supply chain artifacts. Integration with Cisco Secure Access Secure Web Gateway enables Cerberus to block downloads of potentially compromised AI models as well as downloads of models from non-approved sources, according to Chang.
“Users of Cisco Secure Access can configure how to provide access to Hugging Face repositories, block access to potential threats in AI models, block AI models with risky licenses, and enforce compliance policies on AI models that originate from sensitive organizations or politically sensitive regions,” Anderson and Fordyce wrote.
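The blog posts do not publish Cerberus’s feed schema, but the enforcement pattern they describe (scan verdicts flow into a feed, and downstream products block matching artifacts) can be sketched generically. The JSON-lines schema, field names, and file paths below are assumptions made for illustration; they are not Cerberus’s actual feed format or any Cisco product API.

```python
# Hypothetical sketch of consuming a scan-verdict feed to gate model
# downloads. The JSONL schema (model_id, sha256, verdict) and file names are
# assumptions for illustration, not Cerberus's real feed format or API.
import hashlib
import json
from pathlib import Path


def load_blocklist(feed_path: str) -> dict[str, str]:
    """Collect digests of artifacts the feed has marked malicious."""
    blocklist = {}
    for line in Path(feed_path).read_text().splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        if entry.get("verdict") == "malicious":
            blocklist[entry["sha256"]] = entry.get("model_id", "unknown")
    return blocklist


def is_blocked(artifact_path: str, blocklist: dict[str, str]) -> bool:
    """Check a downloaded artifact's SHA-256 digest against the blocklist."""
    digest = hashlib.sha256(Path(artifact_path).read_bytes()).hexdigest()
    return digest in blocklist


if __name__ == "__main__":
    blocklist = load_blocklist("scan_feed.jsonl")  # hypothetical feed file
    if is_blocked("downloads/model.safetensors", blocklist):
        raise SystemExit("Download blocked: artifact flagged in scan feed")
```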
Cisco Foundation AI
When Cisco introduced Foundation AI back in April, Jeetu Patel, executive vice president and chief product officer for Cisco, described it as “a new team of top AI and security experts focused on accelerating innovation for cyber security teams.” Patel highlighted the release of the industry’s first open-weight reasoning model built specifically for security:
“The Foundation AI Security model is an 8-billion parameter, open weight LLM that’s designed from the ground up for cybersecurity. The model was pre-trained on carefully curated data sets that capture the language, logic, and real-world knowledge and workflows that security professionals work with every day,” Patel wrote in a blog post at the group’s introduction.
Customers can use the model as their own AI security base or integrate it with their own closed-source model depending on their needs, Patel stated at the time. “And that reasoning framework basically enables you to take any base model, then make that into an AI reasoning model.”
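For teams evaluating that option, a minimal sketch of pulling an open-weight model from Hugging Face with the transformers library looks like the following. The repository id is a placeholder, not the confirmed name of Cisco’s 8-billion-parameter model; substitute the actual model card id from Hugging Face.

```python
# Minimal sketch of loading an open-weight model from Hugging Face with the
# transformers library. The repository id below is a placeholder, not the
# confirmed name of Cisco's 8B security model; replace it with the actual
# model card id before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "example-org/security-reasoning-8b"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Explain the risk of loading untrusted pickle-based model files."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```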