this post was submitted on 27 Jun 2024
31 points (94.3% liked)

Machine Learning

[–] [email protected] 15 points 4 months ago* (last edited 4 months ago) (3 children)

The technique has not yet been peer-reviewed

Let's see, then.

[–] SatouKazuma 3 points 4 months ago

Yeah I'm not exactly holding my breath.

[–] [email protected] 3 points 4 months ago

They claim it's more efficient, which presumably means it takes less electricity to run the same process. So if this is legit, the companies doing AI should adopt it on their own to shave costs, assuming they have any brains whatsoever.

[–] [email protected] 3 points 4 months ago

The peer review process is way too noisy to be meaningful. Better to just read it and judge the work for yourself.

https://arxiv.org/abs/2406.02528

Or wait for follow up work that can corroborate their claims.

[–] ericjmorey 4 points 4 months ago

The author mentioned limitations only vaguely at the end of the article.

[–] [email protected] 4 points 4 months ago (1 children)

This is the best summary I could come up with:


Researchers claim to have developed a new way to run AI language models more efficiently by eliminating matrix multiplication from the process.
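To see why this is plausible, here is a minimal sketch (my own illustration, not the paper's implementation): when weights are constrained to {-1, 0, +1}, a "matrix multiplication" y = Wx needs no true multiplications at all, since each weight only selects +x, -x, or nothing.

```python
import numpy as np

def ternary_matvec(W, x):
    """Compute y = W @ x for ternary W using only additions and subtractions:
    accumulate +x[j] where W[i, j] == 1 and -x[j] where W[i, j] == -1."""
    y = np.zeros(W.shape[0], dtype=x.dtype)
    for i in range(W.shape[0]):
        y[i] = x[W[i] == 1].sum() - x[W[i] == -1].sum()
    return y

W = np.array([[1, 0, -1],
              [0, 1, 1]])        # hypothetical ternary weight matrix
x = np.array([2.0, 3.0, 5.0])
print(ternary_matvec(W, x))     # matches W @ x
```

A real implementation would vectorize this (or push it into custom hardware, as the paper does with an FPGA), but the point stands: the multiply units disappear.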

The technique has not yet been peer-reviewed, but the researchers—Rui-Jie Zhu, Yu Zhang, Ethan Sifferman, Tyler Sheaves, Yiqiao Wang, Dustin Richmond, Peng Zhou, and Jason Eshraghian—claim that their work challenges the prevailing paradigm that matrix multiplication operations are indispensable for building high-performing language models.

They argue that their approach could make large language models more accessible, efficient, and sustainable, particularly for deployment on resource-constrained hardware like smartphones.

In the paper, the researchers mention BitNet (the so-called "1-bit" transformer technique that made the rounds as a preprint in October) as an important precursor to their work.

According to the authors, BitNet demonstrated the viability of using binary and ternary weights in language models, successfully scaling up to 3 billion parameters while maintaining competitive performance.
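For intuition, ternary quantization in the spirit of BitNet's "1.58-bit" follow-up can be sketched like this; the absmean scaling and rounding details here are my assumptions for illustration, not the authors' exact scheme:

```python
import numpy as np

def quantize_ternary(W, eps=1e-8):
    """Snap full-precision weights to {-1, 0, +1} with a shared scale.
    Absmean-style scaling is an assumption for this sketch."""
    scale = np.abs(W).mean() + eps             # one scale per weight matrix
    Wq = np.clip(np.round(W / scale), -1, 1)   # each entry becomes -1, 0, or +1
    return Wq.astype(np.int8), scale

W = np.array([[0.9, -0.05, -1.2],
              [0.02, 0.7, -0.6]])   # hypothetical full-precision weights
Wq, scale = quantize_ternary(W)
# Wq holds only -1, 0, +1; scale * Wq approximates the original W
```

Storing `Wq` as int8 (or packed tighter) plus one float scale is what makes the memory and energy savings possible.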

BitNet's limitations served as motivation for the current study, pushing the researchers to develop a completely "MatMul-free" architecture that maintains performance while eliminating matrix multiplications even from the attention mechanism.


The original article contains 412 words, the summary contains 177 words. Saved 57%. I'm a bot and I'm open source!

[–] spikespaz 1 points 4 months ago

What limitations of BitNet did they overcome, and what unresolved issues are mentioned in the preprint?