
Microsoft researchers say they've developed a hyper-efficient AI model that can run on CPUs | TechCrunch

2025-04-16 15:48:27

By Kyle Wiggers

Microsoft researchers claim they’ve developed the largest-scale 1-bit AI model, also known as a “bitnet,” to date. Called BitNet b1.58 2B4T, it’s openly available under an MIT license and can run on CPUs, including Apple’s M2.

Bitnets are essentially compressed models designed to run on lightweight hardware. In standard models, the weights (the values that define a model's internal structure) are often quantized so the models perform well on a wide range of machines. Quantizing the weights lowers the number of bits (the smallest units a computer can process) needed to represent them, enabling models to run faster on chips with less memory.
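For a concrete picture of what quantization does, here is a minimal sketch of one common scheme, absmax int8 quantization, which stores each weight in 8 bits plus a single shared scale factor. The scheme and function name are illustrative; the article doesn't say which scheme any particular model uses.

```python
import numpy as np

def absmax_int8_quantize(w: np.ndarray):
    """Map float32 weights to 8-bit integers plus one shared scale,
    cutting storage per weight from 32 bits to 8. Illustrative sketch."""
    scale = np.abs(w).max() / 127.0            # largest weight maps to +/-127
    w_q = np.round(w / scale).astype(np.int8)
    return w_q, scale

w = np.random.randn(3, 3).astype(np.float32)
w_q, scale = absmax_int8_quantize(w)
print(w_q)            # integers in [-127, 127]
print(w_q * scale)    # approximate reconstruction of the original weights
```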

Bitnets quantize weights into just three values: -1, 0, and 1. In theory, that makes them far more memory- and computing-efficient than most models today.
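The BitNet b1.58 paper describes an "absmean" scheme for this: scale each weight matrix by its mean absolute value, then round and clip every entry to {-1, 0, 1}. A minimal sketch, with the function name my own:

```python
import numpy as np

def absmean_ternary_quantize(w: np.ndarray, eps: float = 1e-8):
    """Quantize weights to {-1, 0, 1} with one per-tensor scale,
    following the absmean scheme from the BitNet b1.58 paper."""
    gamma = np.abs(w).mean()                   # per-tensor scale
    w_q = np.clip(np.round(w / (gamma + eps)), -1, 1).astype(np.int8)
    return w_q, gamma

w = np.random.randn(4, 4).astype(np.float32)
w_q, gamma = absmean_ternary_quantize(w)
print(w_q)             # entries are only -1, 0, or 1
print(w_q * gamma)     # coarse reconstruction of the original weights
```

Three values carry log2(3) ≈ 1.58 bits of information per weight, which is where the "1.58" in the model's name comes from.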

The Microsoft researchers say that BitNet b1.58 2B4T is the first bitnet with 2 billion parameters, “parameters” being largely synonymous with “weights.” Trained on a dataset of 4 trillion tokens — equivalent to about 33 million books, by one estimate — BitNet b1.58 2B4T outperforms traditional models of similar sizes, the researchers claim.
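The book comparison is easy to sanity-check. Assuming a common rule of thumb of roughly 0.75 English words per token (my assumption, not the article's):

```python
tokens = 4_000_000_000_000     # 4 trillion training tokens
books = 33_000_000             # the article's book-count estimate

tokens_per_book = tokens / books
words_per_book = tokens_per_book * 0.75   # ~0.75 words per token (rule of thumb)

print(f"{tokens_per_book:,.0f} tokens per book")   # ~121,212
print(f"{words_per_book:,.0f} words per book")     # ~90,909, a typical novel's length
```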

BitNet b1.58 2B4T doesn’t sweep the floor with rival 2 billion-parameter models, to be clear, but it seemingly holds its own. According to the researchers’ testing, the model surpasses Meta’s Llama 3.2 1B, Google’s Gemma 3 1B, and Alibaba’s Qwen 2.5 1.5B on benchmarks including GSM8K (a collection of grade-school-level math problems) and PIQA (which tests physical commonsense reasoning skills).

Perhaps more impressively, BitNet b1.58 2B4T is speedier than other models of its size — in some cases, twice the speed — while using a fraction of the memory.
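The memory savings follow directly from the weight format. A quick calculation of weight storage alone for a 2-billion-parameter model, comparing 16-bit floats against ternary weights at their information-theoretic 1.58 bits each (this ignores activations, the embedding table, and packing overhead):

```python
params = 2_000_000_000                 # 2 billion parameters

fp16_gb = params * 16 / 8 / 1e9        # 16 bits per weight
ternary_gb = params * 1.58 / 8 / 1e9   # log2(3) ~= 1.58 bits per weight

print(f"fp16 weights:    {fp16_gb:.2f} GB")    # 4.00 GB
print(f"ternary weights: {ternary_gb:.2f} GB") # ~0.40 GB, about a tenth
```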

There is a catch, however.

Achieving that performance requires using Microsoft’s custom framework, bitnet.cpp, which only works with certain hardware at the moment. Absent from the list of supported chips are GPUs, which dominate the AI infrastructure landscape.

That’s all to say that bitnets may hold promise, particularly for resource-constrained devices. But compatibility is — and will likely remain — a big sticking point.

Kyle Wiggers is TechCrunch’s AI Editor. His writing has appeared in VentureBeat and Digital Trends, as well as a range of gadget blogs including Android Police, Android Authority, Droid-Life, and XDA-Developers. He lives in Manhattan with his partner, a music therapist.

Summary

Microsoft researchers have developed BitNet b1.58 2B4T, which they describe as the largest-scale 1-bit AI model to date. Released under an MIT license, it can run on CPUs, including Apple's M2. As a bitnet, it compresses weights into three values (-1, 0, and 1) to save memory and compute on lightweight hardware. Trained on 4 trillion tokens, it outperforms similar-sized models such as Meta's Llama 3.2 1B and Google's Gemma 3 1B on benchmarks including GSM8K and PIQA, according to the researchers, while running faster (in some cases twice as fast) and using a fraction of the memory. However, that performance depends on Microsoft's bitnet.cpp framework, which currently supports only certain hardware and, notably, no GPUs.
