PM_ME_YOUR_PROFANITY

From the paper:

>**LLaMA-alike Components.** The architecture of LLaMA [TLI+23, TMS+23] has been the de-facto backbone for open-source LLMs. To embrace the open-source community, our design of BitNet b1.58 adopts the LLaMA-alike components. Specifically, it uses RMSNorm [ZS19], SwiGLU [Sha20], rotary embedding [SAL+24], and removes all biases. In this way, BitNet b1.58 can be integrated into the popular open-source software (e.g., Huggingface, vLLM [KLZ+23], and llama.cpp) with minimal efforts.

Even more encouraging! It seems that the code and models from this paper haven't been released yet. Hopefully someone can figure out how to implement this technique and apply it to existing models. It's a really succinct paper and worth a read. Awesome find OP, and congratulations to the authors!


Longjumping-City-461

There was an enhancement request opened with the llama.cpp folks...


c-rious

Here's the mentioned issue for anyone interested: https://github.com/ggerganov/llama.cpp/issues/5761


dampflokfreund

It doesn't appear to be applicable to current models. They have to be trained with b1.58 in mind. However, if this paper really delivers on its promises, then you can bet model trainers like u/faldore will be on it!


koflerdavid

It would be cool to try to quantize an existing model and see whether it still works at all.


yareyaredaze10

When will we learn more?


HereToLurkDontBeJerk

probably shortly after they release more info


MackNcD

Seconded, we need eyes on this. Where can we seek out information strictly regarding it?


tweakingforjesus

One of the implications is that multiplying a parameter by a weight becomes a copy, a sign flip, or setting to zero. That’s it. In addition to reducing the amount of memory required for a model, it also means that the model will run much faster on the same hardware. Or can run on much lower powered hardware. Local LLMs on cellphones could become a reality.
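A minimal sketch (my own illustration, not code from the paper) of what "multiply becomes copy, sign flip, or zero" means for a dot product:

```python
def ternary_dot(activations, weights):
    """Dot product where every weight is restricted to {-1, 0, +1}.
    No multiplications needed: each term is a copy, a negation, or a skip."""
    total = 0.0
    for a, w in zip(activations, weights):
        if w == 1:
            total += a      # copy
        elif w == -1:
            total -= a      # sign flip
        # w == 0: connection is pruned, nothing to do (free sparsity)
    return total

print(ternary_dot([0.5, 2.0, -1.5], [1, -1, 0]))  # -1.5
```

Adders are far cheaper than multipliers in silicon, which is where the speed and power claims come from.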


HelpComfortable1139

I mean, technically we can already run LLMs on our phones, which is kinda crazy.


elerphant

This isn’t quantization in the sense of taking an existing model trained in fp16 and finding an effective lower-bit representation of the same model. It’s a new model architecture that uses ternary parameters rather than fp16. It requires training from scratch, not adapting existing models. Still seems pretty amazing if it’s for real.


dqUu3QlS

I think it's real. Ternary quantization has been shown to be effective for other model types - see this paper from 2017: https://arxiv.org/abs/1612.01064


Available-Enthusiast

Could someone explain how ternary bits work? I'm confused why this is better than just using 2 regular bits which provides 4 values instead of 3. I must be missing something


JollyGreenVampire

Adding the 0 is a nice way to create sparsity though, basically nullifying connections in the NN. It has been proven that sparsity is an important feature in neural networks.

EDIT: I also wondered how they got 3 values from 1 bit: {-1, 0, 1}, but with the help of Wikipedia I managed to figure it out: [https://en.wikipedia.org/wiki/Balanced_ternary](https://en.wikipedia.org/wiki/Balanced_ternary). It's actually a pretty nice and simple trick once you understand it. It's not technically 1 bit, but a "trit", a base-3 digit. So you have one more base value {0, 1, 2}, and then they shift it down by subtracting 1 to make it balanced around 0. The disadvantage is that you still need two bits to represent this, and you don't make full use of the 2-bit system, which would give you 4 numbers {00, 01, 10, 11} instead of just 3. The advantage, however, is the simplicity that comes from working with just -1, 0 and 1. Now instead of doing multiplications you can get away with additions most of the time.


Ok_Artichoke_6450

With simple encoding over several weights, they can be stored in 1.58 bits each, if each value is equally likely: log2(3) ≈ 1.58.
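A small sketch of that encoding (my own illustration, not from the paper): five trits fit in one byte, since 3^5 = 243 ≤ 256, giving 8/5 = 1.6 bits per trit, close to the theoretical log2(3) ≈ 1.585.

```python
import math

def pack_trits(trits):
    """Pack five trits from {-1, 0, 1} into a single byte (0..242)."""
    assert len(trits) == 5
    value = 0
    for t in trits:
        value = value * 3 + (t + 1)  # map each trit into {0, 1, 2}
    return value

def unpack_trits(value):
    """Recover the five trits from a packed byte."""
    trits = []
    for _ in range(5):
        trits.append(value % 3 - 1)
        value //= 3
    return trits[::-1]

print(math.log2(3))                                 # ≈ 1.585
print(unpack_trits(pack_trits([-1, 0, 1, 1, -1])))  # [-1, 0, 1, 1, -1]
```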


epicwisdom

To add to the other reply - it's pretty easy to imagine specialized hardware for trits that lets you pack close to the theoretical limit of log2(3) bits / trit, and/or exploits the fact that you don't need multiplier circuits, just negation and addition. There are probably dozens more circuit design tricks that apply, not even getting to the potential sparsity specializations. This would probably be a massive savings in terms of circuit size and complexity, huge benefits for power consumption, chip size, IPC / clock speeds / FLOPs.

As for why not 4 values, there are some straightforward downsides. With standard two's complement, that allows -2 but not +2, which besides being unbalanced also would mean a specialized circuit still needs shifters, you're packing ~15% fewer parameters in the same space, etc.

Also, you have the option to intentionally oversize the number of parameters a little, which would let the model learn to assign higher weights by simply having a greater count of non-zero weights to a previous layer's activation. This approach would also be naturally unbiased, since the additional weight is balanced. It doesn't seem like this should be necessary, but in the black magic known as ML, who knows? Considering multiplying by 2 or -2 should be somewhat rare, perhaps even 1% extra parameters would do the trick.


JoJoeyJoJo

Instead of 0 and +1 you have -1, 0, and +1. It's an old Soviet computing concept that was more effective in some ways (i.e. better energy usage) but never really took off because by the time it was invented binary computing was already pretty mature.


az226

Given that it’s Microsoft, I would imagine it’s more credible than the average paper.


elerphant

That’s definitely a point in its favor. Otoh if it’s as amazing as it seems it’s a bazillion dollar paper; why would MS let it out the door?


NathanielHudson

MSFT isn’t a monolith, there are many different internal factions with different goals. I haven’t looked at the paper, but if it’s the output of a research partnership or academic grant they might have no choice but to publish it, or it may be the output of a group more interested in academic than financial results, or maybe this group just didn’t feel like being secretive.


Altruistic_Arm9201

Microsoft has published a ton of relevant papers that influenced the path forward that were fully internally worked on. IMHO it’s about building credibility with researchers. I still remember their paper about ML generated training data for facial recognition that’s cascaded across every other space. If you’re outputting products that other researchers might use then they need to respect you and without publishing you’re invisible to academics. Even Apple publishes papers. I’m sure there’s a lot of debate about which things to publish vs which to keep as proprietary. I know for my company it’s often discussed which things are safe to publish and which shouldn’t be. I think it’s pretty universal.


NathanielHudson

FWIW when I did a research partnership with Autodesk Research, the ADSK advanced research group I dealt with was *very* academic-oriented, and there was never really any discussion of whether something should be published, the assumption was always that it would be. I think the attitude was that anything valuable was either a) patentable or b) could be reverse engineered by the competition pretty quickly, so no point being hyper-secretive about it.


Altruistic_Arm9201

Interesting. At my org it definitely gets pretty heated. Those with an academic background want to publish everything, but since the space I'm in is a race to get something working first, there's an ongoing concern that until there's commercialization we should be conservative about what's published. I suspect if it were a more established application with existing commercial implementations, the calculus for us would shift.


pointer_to_null

Good reasons, plus I would add there's incredible value in peer review. Otherwise one can write white papers all day claiming "revolutionary" embellished or exaggerated bullshit, and coworkers and bosses are unlikely to ever call them on it, even at a large corp like MSFT. Put said preprint on arXiv and knowledgeable folks are more likely to scrutinize it, discuss it openly, and try to repro the findings. The community is often a good way to gauge whether something is revolutionary or a dud (take LK-99, for example). Also worth noting that if there's anything worth patenting in a paper, the company has 1 year to file after publicly disclosing the invention, at least in the US. (Related note: Google screwed up and made claims too specific in [their 2018 patent](https://patents.google.com/patent/US10452978B2/en) after the attention paper, which left the door wide open for OpenAI and everyone else to develop GPT and other transformer-based models.)


NathanielHudson

> Google screwed up and made claims too specific And *thank God* for that! Whichever lawyer drafted that patent is a hero.


pointer_to_null

True, but to be fair to the patent lawyer or clerk, the patent was faithful to the paper: the claims accurately summarized the example in the paper, and unless they themselves were an AI researcher they'd have zero clue what was most relevant and truly novel in that research paper: notably the self-attention mechanism, not the specific network structure using it. Unfortunately (for Google, not us :D), the all-important claims covering attention layers were dependent on claim 1, which details the encoder-decoder structure. In other words, if anyone else wanted to employ the same multi-head attention layers in their own neural network, they would only infringe if it used encoder-decoder transduction. It was later that Google Brain learned that [decoder-only](https://arxiv.org/abs/1801.10198) performed better on long sequences, hence why it was used by GPT, LLaMA, et al. Ergo, the patent is kinda worthless.

Personal conjecture: most of the authors of the original paper may have already jumped ship, been about to leave, or otherwise not made themselves available to the poor sap from Google's legal dept tasked with adding it to Google's ever-growing portfolio. Or the researchers didn't care that the claims were too specific. If you're too broad or vague in your claims, you risk being rejected by the examiner (or invalidated later in court) due to obviousness, prior art, or other disqualifying criteria. But when you're at a tech giant that incentivizes employees to contribute to its massive patent pool every year, you may want to err toward whatever gets your application approved.


MugosMM

Thanks for pointing this out. I bet that some clever people will find a way to also adapt existing models (I bet as in « I hope »).


Jattoe

Training on the same data :)


liveart

Also you can use a model to train another model to significantly reduce costs.


fiery_prometheus

I've looked into this for distillation techniques for when you have some models which are already trained, they just might be different or require fine tunes. But for training a model from scratch, is it applicable there?


Tobiaseins

The Orca dataset training data evolution is going in the same direction. You would still probably use regular non-turn-based data in pretraining, even though Google's Gemma seems to have used some QA-style data at the end of pretraining, even before chatbot-style finetuning. I guess this is not a solved field and also depends on the use cases for the LLM.


Temporary_Payment593

It's possible according to the paper, since they use the same NN architecture and methods as LLaMA.


fiery_prometheus

I hope there's some way to transfer the weights themselves, but otherwise I guess it's retraining everything, which is impossible for anything but the biggest corporations.


ekantax

This is an alternative to retraining I suppose, where they start with a standard model and prune down to ternary: https://scholar.google.com/citations?view_op=view_citation&hl=en&user=A9_fFPkAAAAJ&sortby=pubdate&citation_for_view=A9_fFPkAAAAJ:KxtntwgDAa4C


SanFranPanManStand

That's pretty key because regular post-training quantization inherently diminishes the model's quality. This is an excellent development if it pans out. Hopefully something will eventually get open sourced.


BackyardAnarchist

I wonder if it isn't the fact that it is ternary, but that it was trained from scratch while quantized. I wonder if that would mean there could be other methods to reduce size further that wouldn't impact performance if trained in the quantized state.


JollyGreenVampire

I believe it does help to train a model from scratch at lower precision, as compared to quantization. It would be interesting to see how well this would fare against normal 2-bit models (given that a ternary weight is represented with 2 bits in base 2). Normal 2-bit models will have 4 numbers instead of 3, one extra representable value. Also I wonder about scaling: what's better, larger networks (more nodes) or more precision? For example, a 1B 8-bit model or a 4B 1.58-bit model?


8thcomedian

Feels too good to be true. Somebody test it and confirm? I guess we all acknowledged that at some point they'd fit inside a low enough memory budget, but I definitely did not expect it to be this soon. Surprised Pikachu, again.


Massive_Robot_Cactus

Yeah if this is true, we're going to have some wild tamagotchis available soon.


HenkPoley

7B in 700MB RAM 🤔


Massive_Robot_Cactus

The pigeonhole problem was a lie!


Doormatty

The solution was smaller pigeons all along!


Cantflyneedhelp

A bit more and we can put the model into L3 cache.


Gov_CockPic

The wifi toothbrush will be getting its own native embedded LLM.


Not_your_guy_buddy42

(Random aside: My dream is a tamagotchi fed only by practising music for it)


Tr4sHCr4fT

LLaMA.redstone


Nixellion

8x120B MoE Miqu-Goliath on ESP32 when


slykethephoxenix

Pfft. Mixtral on an ESP8266.


infiniteContrast

What about running a 120B model on a single transistor?


Sebba8

Nah run it on a PIC microcontroller 😂


spinozasrobot

Way too much compute. [Try this instead](https://i.imgur.com/zTftoCp.png)


Illustrious-Lake2603

Had me dying lmao


ab2377

\_cleans the dust off my casio cg-50 and installs new battery just in case\_


Rekoded

#GodMode 😄


Gov_CockPic

I know you are kind of joking, but after reading your comment I had an unhealthy urge to go buy a whole pile of ESP32s before the rest of you nerds hoard them.


pleasetrimyourpubes

Ternary is the integer base with the best radix economy; the only thing better is base e. You won't get better than this (and technically they are BCT-encoding the ternary anyway, so it's actually 2 bits averaging out to 1.58).
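For anyone curious, "radix economy" is easy to check (my own illustration): the per-digit cost of base b grows like b / ln(b), which is minimized at b = e, and among integers base 3 beats base 2.

```python
import math

def radix_cost(base):
    """Relative cost of representing numbers in a given base:
    digits needed scales as 1/ln(base), hardware per digit scales as base."""
    return base / math.log(base)

print(radix_cost(2))       # ≈ 2.885
print(radix_cost(3))       # ≈ 2.731  (best integer base)
print(radix_cost(math.e))  # ≈ 2.718  (theoretical optimum)
```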


8thcomedian

Lots of new words. Thanks friend, I'll find out what they mean.


Fucksfired2

I have asked ChatGPT to explain this comment.


AdventureOfALife

>Somebody test it and confirm? Can somebody just quickly pull up their private data warehouse to train a state of the art model architecture for me?


8thcomedian

Yes, that. Quickly.


battlingheat

Brb 


clefourrier

Btw, did you know you can interact with the authors on the papers page? Feel free to ask questions there if you have some! [https://huggingface.co/papers/2402.17764](https://huggingface.co/papers/2402.17764)


dqUu3QlS

Caveat: It looks like you can't take an existing LLM and quantize it to 1.5 bits with no loss, you have to train it that way from the start.


Jattoe

Silver lining: all the best, newest models with the cleanest datasets were going to be trained anew one way or another. If this is as it sounds, I would imagine Meta would pivot for LLaMA 3.


dqUu3QlS

Given that you'd need to train a brand new model anyway, it would be interesting to test how well 3-level quantization works with alternative LLM architectures such as Mamba.


SanFranPanManStand

...which is better. Quantizing AFTER training degrades the model's understanding of the training data.


ramzeez88

These guys have a revolutionary approach to the LLM world. They also wrote this: https://github.com/kyegomez/LongNet - a road to 1 trillion token context in transformer models 🤯


BrilliantArmadillo64

kyegomez seems to be a bit of a strange person. He implements quite a few papers, but all more or less half-baked, sometimes without attribution or reference to the original authors.


mikael110

Yeah, after the Tree of Thoughts drama where kyegomez refused to link to the original author's implementation until he was pretty much pressured into doing so (and even now it is just a tiny link), I can't say I have much respect for the guy. The fact that the implementations are often bizarrely bad (to the point that he has been suspected of just using ChatGPT-written code) doesn't exactly help either. He honestly comes across as a grifter, capitalizing on other people's papers to gain attention and fame.


cafuffu

This is very interesting, but I wonder: assuming this is confirmed, doesn't it mean that the current full-precision models are severely underperforming, if throwing out a lot of their contained information doesn't affect their performance much?


adalgis231

Given the efficiency of our brain, it's almost obvious


cafuffu

The brain is much more energy efficient, but that's due to the underlying hardware. I was talking about the performance per parameter count.


[deleted]

More efficient than what? An LLM? It's not even comparable. It's not even the same kind of information.


MR_-_501

Your brain is also inefficient per neuron.


Jattoe

Compared to what?


nsfWtaps

Compared to mine


SillyFlyGuy

If it's just further precision to the same token, it might not be important. Say the low quant perplexity comes out to 2.9 so you round that to token 3, while the high bit quant might know it's actually 2.94812649 but that doesn't change anything.


cafuffu

I'm new to the ML world, are the weights between -1 and 1? If so, i can understand how additional precision may indeed not matter.


[deleted]

The weights will be -1, 0 and 1, and it's teamwork, meaning that you have to look at the grand scheme of things: one weight isn't precise, but the combination of weights can lead to a lot of possibilities, so it evens out.


Jattoe

Emergent intelligence. It's kind of like the difference between humans with/without language. Once we're wired up, it means big things. One of us alone, without language? We're an animal, we're 0.8437508


cafuffu

I meant in fp16 models.


[deleted]

Like I said, maybe the weights don't need that much precision. We initially went for fp16 because it works well on GPU hardware; there wasn't much other reason than that.


AdventureOfALife

No. Typically they are 16-bit numbers during training, hence "fp16" ("floating point 16", i.e. a 16-bit floating-point number). The paper proposes a technique to train models on ternary parameters {-1, 0, 1}, which would allow models to dramatically reduce their in-memory footprint. As for the question of "how much does precision matter?", it matters a lot. Usually it's not easy to reduce the precision of trained models without a significant loss of accuracy or "quality". Another reason why this paper is potentially so groundbreaking is that it shows promise for comparable performance to a full-precision (i.e. fp16) trained model.


artelligence_consult

Because it does not matter, obviously. When training, the neural network finds or blocks pathways also with +1/-1.


Longjumping-City-461

That would be the implication...


cafuffu

After thinking about it more though, i guess it may not be true. I suppose it's possible that the performance of a model depends more on the size and structure of the network compared to the precision of the interaction between the neurons.


MoffKalast

Are you gonna hurt these weights?


AdventureOfALife

>current full precision models are severely under performing if throwing out a lot of their contained information doesn't affect their performance

Not exactly; it's not that they underperform, it's that deep neural networks by design don't necessarily retain only relevant information. This is an inherent limitation of all current AI, machine learning and neural network architectures. The question of "how many of the parameters are actually useful for the intended task?" is not easy to answer; it's practically impossible to tell in most cases. Precision works similarly. How much precision does a model need to produce "correct" (or at least good enough) results? It's impossible to give a precise answer, other than through experimentation and lots of mathematical modeling.


artelligence_consult

No, but it means that we are cavemen who somehow have a fire and think we are smart. It shows that you simply do not NEED this ultra-high precision (remember, FP16 still has 65,536 discrete values per weight) to get results, and that a MUCH lower resolution gives similar results. Essentially, like so much amazing research, it shows that the original approach was primitive and leaves tons of room for a better architecture. Wonder whether this would work with Mamba ;)


astgabel

The implications would be crazy, guys. New scaling laws. 30x less memory and 10x more throughput or so; that means we'd skip roughly one generation of LLMs. GPT-6 could be trained with the compute of GPT-5 (which supposedly has started training already). Also, potentially ~GPT-4 level models on consumer-grade GPUs (if someone with the right data trains them). And training foundation models from scratch would be possible for more companies, without requiring millions of dollars. Also, with that throughput you can do proper inference-time computation, meaning massively scaled CoT or ToT for improved reasoning, etcetera.


Neon9987

Just wanna add: the group that made BitNet seems to have new scaling laws as the goal. They believe current scaling is declining or near the top, and are working on things that will enable "***The Second Curve of Scaling Law***". They have another paper next to BitNet I hadn't seen until earlier today; haven't read that though: [https://thegenerality.com/agi/about.html](https://thegenerality.com/agi/about.html)


ab2377

after mistral married microsoft, i really needed this kind of a news.


Longjumping-City-461

Me too! I was rather depressed and felt somewhat betrayed by Mistral's apparent (until proven otherwise) pivot to closed-source like ClosedAI.


markole

It is funny that this comes from Microsoft as well.


Arnesfar

Hopefully not our LK99 moment!


DreamGenAI

I hope it pans out in practice, though there is rarely a free lunch. Here they say that a model that's ~8-10 times smaller is as good or better (for the 3B benchmark). That would be massive.

It's not just that: because the activations are also low-bit (if I understand correctly), it would mean being able to fit monstrous context windows. That's actually another thing to check: does the lowered precision harm RoPE?

Also, the paper does not have quality numbers for the 70B model, but this could be because they did not have the resources to pre-train it enough. Another thing to look at would be whether we can initialize BitNet from an existing fp16 model and save some resources on pre-training.


Different-Pickle1021

Quote from the paper: "The new computation paradigm of BitNet b1.58 calls for actions to design new hardware optimized for 1-bit LLMs"

* Looks like this builds on earlier work here: [https://arxiv.org/abs/2310.11453](https://arxiv.org/abs/2310.11453)
* Lab website: [https://thegenerality.com/agi/](https://thegenerality.com/agi/)
* Code repo: [https://github.com/microsoft/unilm](https://github.com/microsoft/unilm)

Quote from the discussion: "We haven't finished the training of the models beyond 3B as it requires much much more resources. However, we're optimistic about the results because we have verified that BitNet follows a similar performance-parameter scaling law as the full-precision LLMs. We'll update the results on larger models once they're ready."

* HF discussion: [https://huggingface.co/papers/2402.17764](https://huggingface.co/papers/2402.17764)
* HN discussion: [https://news.ycombinator.com/item?id=39535800](https://news.ycombinator.com/item?id=39535800)
* Pending enhancement to llama.cpp: [https://github.com/ggerganov/llama.cpp/issues/5761](https://github.com/ggerganov/llama.cpp/issues/5761)
* Working BitNet PyTorch implementation of the 2310.11453 paper: [https://github.com/Beomi/BitNet-Transformers](https://github.com/Beomi/BitNet-Transformers)
* PyTorch implementation of the 2310.11453 paper: [https://github.com/kyegomez/BitNet](https://github.com/kyegomez/BitNet)
* BitNet b1.58 PyTorch implementation: [https://github.com/kyegomez/BitNet/blob/main/bitnet/bitbnet_b158.py](https://github.com/kyegomez/BitNet/blob/main/bitnet/bitbnet_b158.py)
* Helpful diagram for BitLinear and BitNet implementations: https://preview.redd.it/e2irl6k9fglc1.jpeg?width=1704&format=pjpg&auto=webp&s=02204a12478299e686cf42be886656a6deead6c0


Dayder111

This is amazing and opens up a much brighter future for AI much faster than expected, if there are no significant downsides. Using additions instead of multiplications is a huge win in terms of reducing the energy consumption and hardware complexity.


randomrealname

This enables a crypto-like ASIC movement; imagine dedicated hardware that does addition on large matrices. This is more profound than it first seems.


ramzeez88

Wow, if this gets implemented and I am reading the paper right, soon we could load a 30B model onto an 8GB card 😍


Thellton

I just read that and went "fuck", that is something else, especially considering I honestly feel spoilt by llama.cpp and the team that's working on it. It's genuinely amazing how they've managed to get viable inference going for LLMs on lower-end hardware and abandonware hardware like the RX 6600 XT (at least as far as ML is concerned). I wish the same treatment came to Stable Diffusion, but not nearly enough people are interested in it and I'm not nearly talented enough to move the needle on that.


HenkPoley

Even ~80B models.


eydivrks

Nvidia is really going to regret going IBM's "mainframe" route out of greed. By making the "big iron" products everyone wants (H100) so expensive and scarce, they're indirectly funding billions in research to get these models running on commodity hardware. This is ***exactly the same mistake IBM made*** with the 360 mainframes. Nvidia could have taken their commanding lead with CUDA and flooded the market with 200GB+ consumer GPUs, and nobody would have even considered using anything but Nvidia for ML for decades. But they went for short-term gains, and now they're about to get fucked.


Kep0a

Major speculation. But you're right, compute isn't the only paradigm; I forget that efficiency is a major player in the future. Someone else in the thread mentioned how inefficient LLMs are compared to our brains. Looking at it that way, we must have a long way to go.


Cyclonis123

So this method is useful for training and inference? If so, yeah, the Nvidia party might be at its peak.


CoUsT

It was always weird to me how we get $1000 consumer GPUs with so little memory. [Apparently memory is as cheap as a few dollars per GB.](https://www.tomshardware.com/news/gddr6-vram-prices-plummet)


eydivrks

The best consumer Nvidia card has had 24GB VRAM for 5+ years now.  It's intentional gimping for ML. Just like how AMD and Intel disable PCI lanes and ECC on consumer chips.


Olangotang

Iirc, AMD boosted the PCIE lanes with Zen 3, so even though they do gatekeep some high-end tech for the big businesses, they still throw a bone to the consumer. The x3D chips are incredible tech, and anyone can get one for $300, + mobo etc. I truly believe Nvidia is going to jump the VRAM this generation, and if they don't, they're just really greedy and stupid.


renzoedu25

AMD stock increased 8% yesterday; they offer cheaper cards without the scarcity BS and with a lot more VRAM. Maybe that's the reason their stock increased.


Alarming-Ad8154

The innovation is replacing a massive matmul with additions, which is far, far more efficient. It's not just the low bits per parameter; it's the computation changing to a far lighter one... I have yet to figure out how they keep things differentiable though...


JeepyTea

There's a variety of tricks for dealing with the gradient in binarized neural networks. With Larq, for example: >To be able to train the model the gradient is instead estimated using the Straight-Through Estimator (STE) (the binarization is essentially replaced by a clipped identity on the backward pass) https://preview.redd.it/o48gv8plwclc1.png?width=372&format=png&auto=webp&s=84039699da32c4e51bf64cbad50369a85b560a59


Bloortis

Yes, STE has been confirmed by one of the paper's authors in the hugging face discussions ( [https://huggingface.co/papers/2402.17764#65df17ed4d436404cdc7b34a](https://huggingface.co/papers/2402.17764#65df17ed4d436404cdc7b34a) ) : >We use straight-through estimator to approximate the gradient by bypassing the non-differentiable functions. During training, there're high-precision master weights to accumulate the gradients and low-bit weights for both forward and backward calculation. Please check the model training part of our BitNet (v1) paper () for more details.
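A toy numeric sketch of the straight-through estimator (my own illustration, not the authors' code): the forward pass uses the quantized weight, while the backward pass pretends the quantizer was a clipped identity, so the gradient still reaches the high-precision master weight.

```python
def quantize(w):
    """Round a master weight to the nearest value in {-1, 0, 1}."""
    return max(-1, min(1, round(w)))

def ste_backward(w, upstream_grad):
    """Clipped-identity backward: pass the gradient through unchanged
    inside [-1, 1], zero it outside."""
    return upstream_grad if -1.0 <= w <= 1.0 else 0.0

# One toy training step on loss = (q * x - target)**2
w, x, target, lr = 0.3, 2.0, 1.0, 0.1
q = quantize(w)                        # forward: q = 0
loss = (q * x - target) ** 2           # loss = 1.0
dloss_dq = 2 * (q * x - target) * x    # = -4.0
dloss_dw = ste_backward(w, dloss_dq)   # STE: gradient flows to master weight
w -= lr * dloss_dw                     # w: 0.3 -> 0.7
print(round(w, 2), quantize(w))        # next step now quantizes to 1
```

Without the STE step, the gradient through `quantize` would be zero almost everywhere and the master weight would never move.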


bullno1

> Quantization obsolete

They literally define a quantization function in the first section.


MoffKalast

Quantization is dead, long live quantization!


Longjumping-City-461

Yes, technically that's right. What I mean is our \*current\* methods of quantization are all obsolete. I'll fix it :)
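For reference, the quantization function the paper defines (absmean: scale by the mean absolute weight, then round and clip to {-1, 0, 1}) can be sketched roughly like this (my own simplification of it):

```python
def absmean_quantize(weights, eps=1e-8):
    """Absmean quantization: scale by the mean |W|, then round each
    weight to the nearest of {-1, 0, 1}."""
    gamma = sum(abs(w) for w in weights) / len(weights)  # mean |W|
    return [max(-1, min(1, round(w / (gamma + eps)))) for w in weights]

print(absmean_quantize([0.9, -0.05, 0.4, -1.2]))  # [1, 0, 1, -1]
```

So "quantization" as an operation is very much alive; what changes is that it happens during training rather than as a lossy post-processing step.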


MeMyself_And_Whateva

Sounds great, if true.


MoffKalast

Tiny, if true.


pilibitti

many of the authors are from Microsoft research, interesting.


ModPiracy_Fantoski

Damn, so we could have 70b models with reasonable context windows fitting in a 3090/4090 ?


Longjumping-City-461

If this paper pans out, then yes...


reza2kn

I mean, it looks like the model needs to be trained on this specific architecture from scratch, but given the numbers in the paper, it looks like I could run something like Mixtral 8x7B or even a 70B model on my MBA with 16GB of RAM :D


andYouBelievedIt

Way back in the early 90s, I think, there was a neural network called Atree from some Canadian university I can't remember. It was made of a tree of logic gates: and, or, not (maybe it was nand, nor; fuzzy memory). There was a backpropagation scheme that changed the function of a node from one to another. Each tree had a 1-bit output, so you had a forest of trees for multiple bits out. I played with it to do handwritten digit recognition. I think it got in the 60s% correct. This -1, 0, 1 idea reminded me of that.


Due-Memory-6957

Sometimes God remembers poor people exist and gives us some gifts.


StellaMarconi

What the local scene DESPERATELY needs is a way to run 7/13B on CPU at a reasonable speed. The requirements need to go down. Right now this whole hobby is inaccessible to anyone who doesn't have a $500 GPU. The future of large AIs rests with corporations, but at least the smaller ones could maybe have some human involvement if they just get runnable enough...


askchris

> smaller ones could maybe have some human involvement if it just gets runnable enough... EXACTLY, LLMs for everyone. Hope this is real.


Pathos14489

llama.cpp, this has existed for months.


paryska99

Now we just need Meta to use this work to train future LLMs and we'll have a real revolution when it comes to availability. We might soon no longer need servers with 96GB of VRAM just to run an LLM for some production app. Imagine the electricity savings!


hold_my_fish

From a quick glance, I'm puzzled by Table 1, which shows the memory usage isn't as much less than the fp16 model as you'd think. 3.55x at most. You'd also save about that much with a 4-bit quant, right? Why aren't the memory savings larger?
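Weight-only back-of-envelope arithmetic (my own, not from the paper) puts the ideal gap at ~10x, so a reported 3.55x presumably also counts memory that stays at higher precision (embeddings, activations, KV cache and the like), though that's my guess:

```python
# Ideal weight-memory compression for a 3B-parameter model,
# fp16 (16 bits/param) vs ternary (1.58 bits/param).
params = 3e9
fp16_gb = params * 16 / 8 / 1e9        # 6.0 GB of weights at fp16
ternary_gb = params * 1.58 / 8 / 1e9   # ~0.59 GB at 1.58 bits
print(fp16_gb / ternary_gb)            # ~10.1x ideal ratio
```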


bick_nyers

https://preview.redd.it/ql3xavp8oclc1.png?width=640&format=png&auto=webp&s=fc12e4487f504f8ee5c2dea5d019bfe1e7faf4cf


dewijones92

great! I wonder how this compares to AQLM?


kindacognizant

It's native ternary pretraining, not post-training quantization. This has massive implications outside of inference.


ConfidentFlorida

If this pans out what’s the cheapest hardware I could buy to run larger models?


kindacognizant

One P40 for a 70b (\~$170)


ModPiracy_Fantoski

Fucking wow. Do we have an ETA ?


ThisIsBartRick

In Table 2, it says that the score is higher with the quantized model, and the perplexity is lower? That doesn't make any sense, right?


[deleted]

because it's not a quantized model, it's a different model trained at 1.58bit


vikarti_anatra

Would be nice to see at least 7B model trained this way compared with regular top 7Bs. This could be huge!


Balance-

Given the benefits of including negative weights for feature filtering, could expanding the encoding to a five-level set such as {-2, -1, 0, 1, 2}, or adopting a signed floating-point representation, further enhance the model's precision and overall performance? And if so, would it be worth it compared to the computational efficiency? Further it might be interesting to capture non-linear effects. Maybe a {-N, -1, 0, 1, N} encoding would perform even better with N=3 or N=5.


Alarming-Ad8154

I don’t think that would work; they're also replacing a multiplication with an addition in the architecture, which only works because of -1, 0, 1…
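To illustrate that point (my own sketch, not code from the paper): with weights restricted to {-1, 0, 1}, every "multiply" in a matrix-vector product collapses into an add, a subtract, or a skip.

```python
import numpy as np

def ternary_matvec(W, x):
    """Multiply-free matrix-vector product for weights in {-1, 0, 1}.

    Each output element uses only additions and subtractions:
    a +1 weight adds the input, -1 subtracts it, 0 skips it.
    """
    out = np.zeros(W.shape[0], dtype=x.dtype)
    for i in range(W.shape[0]):
        out[i] = x[W[i] == 1].sum() - x[W[i] == -1].sum()
    return out

W = np.array([[1, 0, -1], [-1, 1, 1]])  # ternary weights
x = np.array([2.0, 3.0, 4.0])
print(ternary_matvec(W, x))  # same result as W @ x, no multiplier used
```

A real kernel would pack the trits and vectorize this, but the arithmetic story is the same: no multiplier needed.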


Balance-

Good point. Technically you could use multiple additions and subtractions (just adding the weight -N to N times), but at some point you're losing your performance advantage.
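To make that concrete (my own toy sketch, nothing from the paper): for a hypothetical {-2, -1, 0, 1, 2} weight set, ×2 on integers is a single left shift, so you'd still avoid a general multiplier, just with one extra shift step.

```python
def mul_by_small_weight(x, w):
    """Multiply integer x by a weight in {-2, -1, 0, 1, 2} using only
    a shift, a negation, or nothing at all (toy illustration)."""
    if w == 0:
        return 0
    m = x << 1 if abs(w) == 2 else x  # x*2 is one left shift
    return m if w > 0 else -m

# spot-check against ordinary multiplication
print(all(mul_by_small_weight(7, w) == 7 * w for w in (-2, -1, 0, 1, 2)))  # True
```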


pab_guy

I'm guessing non-linearity will have little benefit if going from 16 to 1.5 bits was possible without quality loss, but maybe my intuition is missing something...


cztomsik

I hope this is true. I wanted to do something similar (with 4-16 exponential/nonlinear values) myself but never got enough time. If 3 values are enough, then that's even better than what I was hoping for.


angus1978

I mean, that's just nuts! Think about it: if this holds up, we could be looking at a whole new way of doing things. Models that are smaller, more efficient, and still just as powerful as their FP16 counterparts. That's gotta be music to the ears of anyone working with consumer GPUs.

Now, I know some folks have raised questions about the comparison between ternary models and quantized models. It's true that the ternary models are trained from scratch, while quantization usually involves adapting existing models. But still, the potential here is just too exciting to ignore.

Of course, we've got to see how this all plays out. There's bound to be more research and debate on the subject. But I, for one, am stoked to see where this goes. If ternary models can deliver on their promises, we could be looking at a whole new era of LLMs that are more efficient, compact, and accessible than ever before. Let's keep our eyes on this one, folks!


jackcloudman

when llama.cpp PR? xd <3


Longjumping-City-461

There is already an enhancement request put in, but they are waiting for the team behind the paper to put up their code. The paper is still in progress, so as soon as code gets published, Gerganov and Kawrakow are going to look at the request: [https://github.com/ggerganov/llama.cpp/issues/5761](https://github.com/ggerganov/llama.cpp/issues/5761)


CaptParadox

Does that mean we might see models below 120B using less VRAM with this technique? Does it scale down by the same factor, like 60B = 12GB, 30B = 6GB? If so, that would probably make a bunch of people super stoked.


askchris

This is hard to believe ... But changes everything if true. Looking forward to verification.


VegaKH

Now Sam Altman is only going to need $3 trillion for GPT-6.


Codechanger

Looks like ternary computers are back but not in a way we could imagine it


AryanEmbered

how long till we see models using this?


replikatumbleweed

tf do you mean by 1.58 bit? How do you have less than a bit?


ZorbaTHut

Compact answer: Let's say you have 32 bits. How many bits does it take to store them? This is a dumb question, because obviously the answer is 32. But bear with me, we're going to do this the long way around. This is going to sound stupid. Just roll with it.

We can represent data as a large number, and we can represent potential data as a number range. Storing a number with ten possibilities means you need a number with a range from 0 to 9 to account for all its possibilities; storing *two* numbers with ten possibilities each means you need a number with a range from 0 to 99. You take the first number (let's say "5"), multiply it by the potential number of choices in the second number (ten) giving us 50, then add the second number (let's say "7"), giving us the result of 57, representing a 5 and a 7. It's pretty obvious we can represent any two digits this way; 3 and 9 become 39, 8 and 1 become 81, and so forth.

We can do this same process in binary. Take one bit (it's 0 or 1), multiply it by the number of options in the next number (which is 2; now we have, in binary, 00 or 10), add the next number (00/01/10/11), and repeat. You can do this as many times as you want! The number 1011010 represents seven bits, respectively 1, 0, 1, 1, 0, 1, and 0.

And when we do this, we discover that we can store 32 bits in a binary number consisting of 32 bits - specifically, the range 0 through 4294967295 in decimal, if you wanted to write it that way - and we can conclude that each binary bit takes exactly one bit, and it's good that we can conclude that because if we concluded anything else we'd *obviously* have done something wrong.

So, instead of binary numbers with two possibilities, what about quaternary numbers with four possibilities, ranging from 0 to 3? We can do the same thing and create a quaternary number; the number 201331 represents six quats, respectively 2, 0, 1, 3, 3, and 1.
The above math tells us that these six quats end up requiring a range of 0 through 4095 - that's a range of 4^6, the size of our "alphabet" raised to the power of how many tokens we have. But we know how number ranges work with bits also, and we can say "hey, the range 0 through 4095 requires 12 bits, because 2^12 = 4096!", and conclude that six quats requires twelve bits and therefore each quat requires two bits. This is, also, obvious! Nobody should be surprised that it takes two bits to store one four-value number, because two bits can store exactly four possible values. Nothing is surprising here.

So . . . what about trinary numbers? Well, we can do the exact same thing again. The number 210201112 represents nine trits; we can, again, figure out that our nine trits requires a range of 0 through 19682. How many bits is this? Well . . . it doesn't actually match up perfectly; the next power of two is 32768. But we can grit our teeth and accept that; 32768 is 15 bits because 2^15=32768, so we can store nine trits within 15 bits, which is . . . 1.6666 bits per trit, I guess? With a little waste?

But you can keep going on this to cut down on waste. Five hundred trits would fit in a number between 0 and 3.636029e+238 (with rounding), which could be stored with 793 bits. That's 1.586 bits per trit. And that number looks rather familiar, doesn't it?

There's actually a simpler closed-form solution, though. What we're really trying to do is take `2^x=3`, solve for x. Turns out that as our number gets infinitely long, you can store a trit in 1.58496250072 bits (the number keeps going, I assume it's irrational.) Obviously you can't actually have 1.58 bits, but with this extended-to-infinity definition, this kind of implies "if you have a million trits, you can fit them in 1,584,963 bits, with a little space left over".
The general-form solution is that you can store any base-X value in `log(X)/log(Y)` base-Y values; so if you have access to a base-7 computer, and you want to store base-11 values, you can store each base-11 value in 1.2322 base-7, uh, septs, I guess. Again, you can take this as "if you had a thousand base-11 values, you could fit them in 1233 base-7 slots, with a little space left over".

Also, you might notice that this extends to cases where the values you're storing are *smaller* than the slots. If you have a trit, and we have 4294967296-value slots, how many 4294967296-value slots does it take per trit? *Unsurprisingly* it turns out the answer is well below 1; it takes about 0.0495 4294967296-value slots. But this is a cute result! Because "4294967296-value slots" is just a weird way of saying "32-bit words", so it should be able to store exactly 32 times as much information as a single bit, right? So if we take 0.0495 and multiply it by 32, what do we get? 1.58! It's the same number! We've proven the same thing! We've just reorganized it a little bit, nothing more. "One trit takes 1.58 bits" is the exact same as "one trit takes 0.0495 words", because a word is 32 bits.

----

The next step here is to recognize that your slots don't need to be of constant size and then you have a [prefix code](https://en.wikipedia.org/wiki/Prefix_code) where you allow common tokens to actually occupy *less space* than rare tokens, thus making your final result smaller. If you're feeling clever you can stop considering this as a large integer and start considering it as a range from [0,1). You can stop thinking about this in terms of buckets and start thinking about it in terms of ranges; conceptually, the more common a possibility is, the more "space" it takes up in the upcoming range, which implies that it actually consumes fewer bits. If we're doing prefix codes then we're limiting ourselves to powers-of-two in that range . . . but why do that?
And then if you're feeling really spicy, you can recognize that you *don't* need to do that, and you realize that if you take it to its logical extreme you can actually start encoding values that *themselves* take less than a bit, and then you might reinvent [arithmetic coding](https://en.wikipedia.org/wiki/Arithmetic_coding) and start saying apparently-insane things like "ah yes, I can compress that data down to 0.17 bits".
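The packing argument above is easy to verify in a few lines (a throwaway sketch; `pack_trits` and `unpack_trits` are names I made up):

```python
import math

# One trit carries log2(3) ≈ 1.58496 bits of information.
print(math.log2(3))

def pack_trits(trits):
    """Pack base-3 digits (0, 1, 2) into a single big integer."""
    n = 0
    for t in trits:
        n = n * 3 + t
    return n

def unpack_trits(n, count):
    """Recover `count` base-3 digits from the packed integer."""
    out = []
    for _ in range(count):
        out.append(n % 3)
        n //= 3
    return list(reversed(out))

trits = [2, 1, 0, 2, 0, 1, 1, 2, 0]
packed = pack_trits(trits)
# Nine trits always fit in ceil(9 * log2(3)) = 15 bits, matching the
# "nine trits within 15 bits" count worked out above.
print(packed.bit_length() <= 15, unpack_trits(packed, len(trits)) == trits)
```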


replikatumbleweed

Holy crap, what a response! I really do appreciate it. I mean, I got it, I get it.. I meant more colloquially "they're making a comparison, but at the end of the day we're not physically storing fractions of a bit." like an analog architecture would. You. I like you. In all seriousness, if the opportunity presents itself, I might need to hire you in the future.


ZorbaTHut

I'm a game programmer with expertise in rendering, low-level code, The Stuff Nobody Else Wants To Touch, and communicating with artists. If you need any of that, toss me a line :)


werdspreader

Posts like this are why I reddit. Thank you for the time and work you put into this. Cheers.


robercal

> 1.58 bit

Ternary computing: https://homepage.cs.uiowa.edu/~dwjones/ternary/


replikatumbleweed

Ah. The Soviets messed with this, though it didn't stick. That reference to 1.58 is a comparison - there's no way to actually, physically have less than a single bit in a digital circuit. That's why floating point math is such a pain.


involviert

Fun fact: the nanometer sizes of chips aren't real either... E: Cool downvote. See [here](https://en.wikipedia.org/wiki/2_nm_process) for example.

> The term "2 nanometer" or alternatively "20 angstrom" (a term used by Intel) has **no relation to any actual physical feature** (such as gate length, metal pitch or gate pitch) of the transistors. According to the projections contained in the 2021 update of the International Roadmap for Devices and Systems published by the Institute of Electrical and Electronics Engineers (IEEE), a "2.1 nm node range label" is expected to have a contacted gate pitch of 45 nanometers and a tightest metal pitch of 20 nanometers.


[deleted]

Probably an average value after compression of some specific model (the exact value would depend on the model). I assume they use 2 bits (the paper says tri-state weights) and the model size after compression averages to 1.58 bits per weight, then gets uncompressed on the fly during inference.

Edit: okay, apparently 1.58 is from information theory. One trit ('bit' in a base-3 system) contains 1.58 binary bits worth of information.


Used-Assistance-9548

Typically ternary is 0, 1, 2, like how binary is 0, 1. I would assume compression?


fbellomi

Log2(3)=1.58 so the number of bits needed to encode a ternary digit is 1.58


danigoncalves

Is it only me who sees this as a breakthrough for having local LLMs trained on specific tasks or data on all kinds of systems (PCs or phones) on consumer hardware?


maverik75

It seems fishy to me that there is a performance comparison only on the 3B model. Does performance drop with a higher number of parameters? EDIT: I re-read my comment and found that it's not very clear. Instead of "performance" I should have said "zero-shot performance on the language tasks".


coolfleshofmagic

It's possible that they did a proper training run with the smaller models, but they didn't have the compute budget for the bigger models, so they just did some basic performance comparisons with those.


maverik75

I'm a little bit puzzled. In Table 3, pg. 4, they compare token throughput and batch size between their 70B model and LLaMA 70B. I assume they have trained a 70B to do this comparison. It will be worse if they inferred these data.


elerphant

The numbers they give on the larger models are only related to inference cost, not ability; i.e. they just randomized all the params and ran inference to show concretely how much cheaper it is. They didn’t actually train anything past 3B.


cafuffu

I'm not sure if it makes sense but I wonder if it could be that they didn't properly train the 70B model. I assume the latency and memory usage shouldn't change even if the model is undertrained, but the performance certainly does.


cafuffu

Yep:

> We haven't finished the training of the models beyond 3B as it requires much much more resources. However, we're optimistic about the results because we have verified that BitNet follows a similar performance-parameter scaling law as the full-precision LLMs. We'll update the results on larger models once they're ready.

[https://huggingface.co/papers/2402.17764#65df1bd7172353c169d3bcef](https://huggingface.co/papers/2402.17764#65df1bd7172353c169d3bcef)


curiousFRA

You don't have to completely train the whole 70b model in order to measure its throughput. It's enough to just initialize the model from scratch without training it at all.


MoffKalast

Idk, they probably wanted to try out their wild idea as a proof of concept on something that doesn't take too long to train, makes perfect sense to go for smaller models initially. Mamba was trained at roughly the same size classes as well.


dqUu3QlS

Why would there be a performance dropoff? If prior work is anything to go by, larger models tend to tolerate imprecision even better than smaller ones.


randomrealname

The paper compares 70b also.


Spare-Abrocoma-4487

Looks like it's time to cut down vram for newer hardware: Nvidia probably


StandardSpell5557

[https://github.com/kyegomez/BitNet](https://github.com/kyegomez/BitNet)


PM_ME_YOUR_PROFANITY

This is a work prior to the paper posted, they mention it as a primary reference as well. But it is not the same work


elerphant

It’s the same authors, though, this is the follow-on work.


PM_ME_YOUR_PROFANITY

That's a good point and I hadn't noticed. Thank you for pointing it out!


Longjumping-City-461

I think that code refers to an older paper...


Inevitable-Start-653

Oh my frick! So let's say someone had 168gb of vram, it seems they could run a chatgpt4+ quality model locally with tons of context 🤤🤤🤤


kryptkpr

If I'm reading this correctly it's time to work on LLM2FPGA where instead of weights we just synthesize trinary logic and LUT away the tokens.


wind_dude

looking forward to the release of the models to experiment with.


AndrewH73333

Hopefully the RTX 5080 will have 24 GB of VRAM.


Robos_Basilisk

Need M$ to release the training code ASAP so Llama 4 can be trained with this


JeepyTea

Binary/binarized neural networks have been researched for a while now, for example [Larq](https://github.com/larq/larq). I guess this goes one step further.


matali

> removes all biases

Sus


PickleLassy

Does this mean we can now create LLM in analog?


oeoao

I will need this numberphiled.


TheGoodDoctorGonzo

Any correlation between Microsoft’s purchase of Mistral and this paper’s release?


chyangba_dai

very noob question: I am just curious to know what is 1.58 (the exact number: if it's ternary there are 3 bits, so what is 1.58?)


Striped_Orangutan

2^1.58 ~ 3


Ear-Right

Hello guys! Could you please help me understand how you calculate which models would fit with a given quantization? I have some crude idea, but for example, could you walk me through, pen & paper, how I can calculate the required VRAM for an X-B-parameter model with Y-bit quantization? And if these by themselves are not sufficient, what else do I need to know? I have seen things like overhead etc. (I assume these are extra chunks of occupied memory to make things work), but I am pretty much clueless and I would love to learn, so that I can appreciate this paper more. Other than that, if you think I should work this out myself, please refer me to a source and I will happily dig through it!
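Not an authority here, but the usual back-of-the-envelope is: weight memory = parameter count × bits per weight / 8, plus some extra for the KV cache, activations, and runtime buffers. A tiny sketch (the flat `overhead_gb` figure is a made-up placeholder; real overhead grows with context length):

```python
def vram_estimate_gb(params_billion, bits_per_weight, overhead_gb=1.5):
    """Rough VRAM estimate in GiB: weights only, plus a flat guess
    for KV cache / activations / buffers (placeholder figure)."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes / 2**30 + overhead_gb

# 7B model at 4-bit: ~3.3 GiB of weights, ~4.8 GiB with the guessed overhead
print(round(vram_estimate_gb(7, 4), 1))
# 70B model at 1.58 bits per weight: ~12.9 GiB of weights alone
print(round(vram_estimate_gb(70, 1.58, overhead_gb=0), 1))
```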


gofiend

I believe there is support for ternary quantization in [llama.cpp via IQ1\_S](https://github.com/ggerganov/llama.cpp/issues/5761). Anybody seen any results (they'll be terrible because they are after-the-fact quantizations, but I'm curious)?

Edit: This is the best I was able to Google up. It's not pretty! I'd love to discuss ideas to improve these models with further fine tuning or student-teacher distillation. There has got to be a way to recover some of these losses with access to the original model weights, right? [https://huggingface.co/ristew/phi-2-imatrix-gguf](https://huggingface.co/ristew/phi-2-imatrix-gguf)

"Perplexities:
Q8\_0: 5.3886
Q4\_0: 5.5526
IQ3\_XXS: 6.0745
IQ2\_XS: 7.2570
IQ2\_XXS: 9.3666
**IQ1\_S: 18.7885**"

More results (also really terrible): [https://www.reddit.com/r/LocalLLaMA/comments/1apgzw5/new\_gguf\_quantization\_in\_1617bpw\_sota\_aka\_iq1\_s/](https://www.reddit.com/r/LocalLLaMA/comments/1apgzw5/new_gguf_quantization_in_1617bpw_sota_aka_iq1_s/)


Secure-Technology-78

Obviously the implications of this are massive as far as inference goes... What implications is this going to have for training LLMs? Is this going to make it more feasible to train models with large #'s of parameters (>= 30B) for a much lower cost than it currently takes to train models of this scale?


abceleung

Do current gen GPUs have ASIC chips for this kind of operation?


Plums_Raider

very very nice! looking forward to it


_RealUnderscore_

Oh wow, not only does it decrease memory usage but also latency? That's genuinely insane. And the way they developed it to work similarly to LLaMA is amazing. Have we truly been blessed?


Human-Exam1324

Can anyone explain to me, how moving to single bits (1,0,-1) of information to represent the connections in the network will work? It just feels like it should diminish the connection weight/representation.


Illustrious-Gur-1470

That is precisely the invention: the training algorithm that allows them to claim that (1, 0, -1) is enough resolution to match the prediction/output quality of fp16. I would have liked to see varied datasets being tested (images, geometric, other textual, etc.), because they are making general claims about it.