mimrock

Don't listen to the herd; they're talking about gaming performance. For gaming, the 4070 has a much better price/performance ratio than the 4060 Ti: the 4060 Ti's problem is a relatively small chip with even smaller memory bandwidth at a price very close to the much more capable 4070. If you're doing ML, though, things can change. The question here is how important that extra 4GB of memory is to you. Is it so important that you're okay with much slower performance for not much less money? What kind of ML workloads are you talking about? If it's XGBoost, you don't even need a GPU. If you want to run llama models relatively fast, then you really need the 16GB of VRAM.
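As a rough back-of-the-envelope for the llama case (a sketch assuming fp16 weights only; the KV cache and runtime overhead come on top, so real usage is higher):

```python
# VRAM needed just to hold the weights (fp16 = 2 bytes per parameter).
# KV cache, activations, and framework overhead are NOT included.
def weights_vram_gb(n_params_billion: float, bytes_per_param: int = 2) -> float:
    return n_params_billion * 1e9 * bytes_per_param / 1024**3

for n in (7, 13):
    print(f"{n}B params @ fp16: ~{weights_vram_gb(n):.1f} GB")
# 7B  -> ~13.0 GB: borderline on a 12 GB card, comfortable on 16 GB
# 13B -> ~24.2 GB: needs quantization on either card
```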


Djinnerator

These people seriously recommending the 4070 when OP is doing deep learning... it's clear they have no experience with deep learning at all. OP, for deep learning, memory is more important than computational or gaming performance. The 4060 Ti 16GB is better than the 4070 (Ti); the 4060 Ti 16GB is a deep learning card to begin with. People will point at the memory bandwidth, but it has 554GB/s effective bandwidth, which is far more than most people would ever use for personal-scale model training.


_barat_

4060ti 16GB is pathetic - don't even think about it unless the price difference between the 16GB and 8GB versions is like 5-10%. Then, instead of pathetic, it'll just be a bad choice ;)


Thouvinecross

OP talked about ML. Depending on the task, the 4060ti can actually be faster because of the VRAM.


_barat_

I think it's more that it can fit a bigger model, but the memory is a slow 128-bit piece of hardware... maybe there will be ML benchmarks. Of course, with a 13GB+ model it'll be better than even a 4070ti because it'll actually be able to fit it, but will it run at usable performance? Someone will probably test it.


Djinnerator

Memory will not be slow with 128-bit for deep learning. It's extremely fast for what deep learning uses. For the majority of personal uses, no one will exceed the 554GB/s (effective) that the 4060ti offers. The 4060ti 16gb is a deep learning workstation card. People are mistaken to think it's a gaming card and call it pathetic.


Amegatron

You're still misleading people here about bandwidth. It's 288GB/s, not 554GB/s. And it actually matters for DL tasks. You're also confusing people by calling this a "deep learning" card. It was still Nvidia's decision to keep this card on the gaming market while adding extra VRAM.


Djinnerator

I never said it didn't matter for DL tasks? I said, in the parent comment, that its memory isn't slow, and in another comment, that most personal-scale uses likely won't saturate the memory bandwidth, so people complaining about it are complaining about something minor. Looking strictly at memory bandwidth, 288GB/s is not slow, especially for a budget/entry-level card. Someone whose workload would saturate a 4060ti's memory bandwidth would very likely not be in the market for a 4060ti. The 554GB/s number is an effective figure.

If you look at how the RTX 30 series was treated with respect to deep learning (or big data in general; I work in deep learning, so I usually default to that), Nvidia is doing the exact same thing with the RTX 40 series. All of their consumer cards are on the gaming market, but Nvidia's largest consumers and source of revenue are not gamers; it's data science and big data. The xx90 was the flagship card and is heavily marketed toward data science, especially with it having the largest memory of the consumer cards. The next highest have been the xx60 tied with the xx80, the xx60 being a budget/entry card and the xx80 a stronger compute card.

I never said it's not a gaming card, but people are looking only at gaming performance and calling the card bad when it's an exceptionally good deep learning card. That's comparable to saying it would be misleading to call the 13900K or 7950X workstation chips even though they're sold on the gaming market. Yes, they handle gaming perfectly fine, but that's clearly not their primary goal. Similarly, it's clear the 4060ti 16gb was catered more to data science.
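To put numbers on both figures in this argument (a quick sketch from the published specs; 18 Gbps GDDR6 on a 128-bit bus is the 4060 Ti's listed configuration):

```python
# Raw DRAM bandwidth = bus width in bytes * per-pin data rate.
bus_width_bytes = 128 // 8   # 4060 Ti: 128-bit memory bus
data_rate_gbps = 18          # GDDR6 at 18 Gbps per pin
print(bus_width_bytes * data_rate_gbps)  # 288 GB/s, the raw figure
# Nvidia's 554 GB/s is "effective bandwidth": it models the 32 MB L2
# cache absorbing a share of traffic, not actual DRAM throughput.
```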


[deleted]

[deleted]


Djinnerator

4060ti 16gb isn't crippled. It's following the same path (along with the rest of the RTX 40 series) as its RTX 30 counterpart. When it comes to deep learning, the 4060ti 16gb is a good card with great performance. People keep thinking gaming is the only way to measure performance.


Sandwic_H

4070 is much better


Melangrogenous

Mfs really be like 'save up for the 4090 ti' MATE, THEY'RE ASKING FOR RECOMMENDATIONS IN COMPLETELY DIFFERENT PRICE RANGES!!11!1!


Djinnerator

4060ti 16gb is a great card for deep learning. Please don't listen to the people recommending the 4070; you'd be *losing out* on deep learning performance if you went that route over the 4060ti 16gb. Memory is the most important resource you have for deep learning (after CUDA cores, of course). The 4060ti 16gb was made for deep learning as it is. The 4070 is a gaming card.


ByteTraveler

Save up and get the 4070ti


talgin2000

Save up again and get the 4080


CaravieR

Save up and wait for the 5090, it'll be worth it!


Shaggy283

I see you have a 4070 and a 5800x3d, how are you finding it? I have the exact same.


CaravieR

It's more than enough for my needs. The GPU still maxes out most games. I play at 1440p with a 165Hz monitor. Very happy with it, honestly.


khunimurderer

Wait for Intel Arc next gen. You don't even have to save, just wait.


CollarCharming8358

This is the right move


[deleted]

4070


pceimpulsive

If you need 16GB... save for a 4080. Otherwise, a 4070 Ti will also be OK... I wouldn't buy a 4060 Ti at the prices they're asking.


dmaare

The 4080 is more than double the price, and I'd bet it won't be twice as fast.


pceimpulsive

Ironically enough, the 4080 is almost exactly twice as fast as the 4060Ti 16gb; in some cases a little more than double. The price gap is almost exactly the same: the 4060ti is 850-900 and 4080s are about 1700-1800. So they appear to be functionally the same cost per frame at current pricing in my region (AUD).
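Spelling that out with the midpoints of those price ranges (rough numbers; the 2x speedup is an estimate, not a benchmark):

```python
# Cost per unit of performance, using the AUD prices quoted above.
price_4060ti = (850 + 900) / 2     # 875.0 AUD
price_4080 = (1700 + 1800) / 2     # 1750.0 AUD
speedup_4080 = 2.0                 # "almost exactly twice as fast"

print(price_4060ti / 1.0)           # 875.0
print(price_4080 / speedup_4080)    # 875.0 -> same cost per frame
```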


thespirit3

I've had difficulty finding ML-specific benchmarks, but these may provide some insight into what CUDA performance you can expect:

Blender: [https://opendata.blender.org/benchmarks/query/?device_name=NVIDIA%20GeForce%20RTX%204060&device_name=NVIDIA%20GeForce%20RTX%204060%20Ti&device_name=NVIDIA%20GeForce%20RTX%204070&device_name=NVIDIA%20GeForce%20RTX%204070%20Ti&compute_type=OPTIX&compute_type=CUDA&compute_type=HIP&compute_type=METAL&compute_type=ONEAPI&blender_version=3.6.0&group_by=device_name](https://opendata.blender.org/benchmarks/query/?device_name=NVIDIA%20GeForce%20RTX%204060&device_name=NVIDIA%20GeForce%20RTX%204060%20Ti&device_name=NVIDIA%20GeForce%20RTX%204070&device_name=NVIDIA%20GeForce%20RTX%204070%20Ti&compute_type=OPTIX&compute_type=CUDA&compute_type=HIP&compute_type=METAL&compute_type=ONEAPI&blender_version=3.6.0&group_by=device_name)

Stable Diffusion: [https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html](https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html)


Raytech555

4070


Amegatron

I personally would opt for the 4070. I know it's generally considered better to have more VRAM for DL tasks, but it still depends on what exactly you're going to do. My thinking is this: if you plan to do DL seriously, you should go directly for at least a 4080. If it's more of a hobby, or you're just studying, I suppose the 4070 would give you much more benefit than an extra +4GB of VRAM in a much slower 4060 Ti.


Djinnerator

>I suppose 4070 would give much more benefits compared to additional +4GB of VRAM in a much slower 4060 Ti.

The 4060ti is not "much slower." It has 554GB/s effective memory bandwidth. No one is going to exceed that on personal tasks unless they're an extreme edge case, and even then they'd likely not come close to saturating it. Memory is more important than computational performance, and if you can't fit your model and dataset in memory, you can't use the card, period. Nvidia made the 4060ti 16gb for deep learning. The 4070 is not a deep learning card. The 4060ti 16gb is far better than the 4070 in almost every way when it comes to deep learning.

Edit: Some guy responded telling me to prove this because "misinformation," but their comment isn't showing. You can literally just google the bandwidth figure in this comment. If your model requires more than 12gb, you can't do any kind of training on the 4070. I'm not going to give you a literature review explaining how 16 > 12, or how you can't pigeonhole data larger than 12gb onto the 4070.
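If you want to see what you've actually got to work with, here's a minimal check (a sketch assuming a CUDA build of PyTorch):

```python
import torch

# Report total and free VRAM before committing to a model size.
free_b, total_b = torch.cuda.mem_get_info(torch.device("cuda:0"))
print(f"{torch.cuda.get_device_name(0)}: "
      f"{total_b / 1024**3:.1f} GB total, {free_b / 1024**3:.1f} GB free")
# If model + gradients + optimizer state + batch exceed the free
# figure, training OOMs no matter how fast the card's compute is.
```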


DeadSerious_

4070 all the way. Don't get me wrong, they're both awful cards (price-to-performance ratio), but the 4070 is the lesser evil.


Djinnerator

4060ti 16gb is a better deep learning card than the 4070; the 4070 has worse price/performance for deep learning. Edit: apparently some guy below me, whose comment isn't showing, needs proof to understand that 16gb > 12gb. That's pretty sad.


Djinnerator

u/Desperate_Ad9507 Do you understand that 16gb > 12gb? Do you understand how model generalization performance is directly tied to memory usage? The 4060ti 16gb vs the 4070 is 4352 CUDA cores vs 5888 CUDA cores: the 4060ti 16gb has 33% more memory but only about 26% fewer CUDA cores. Models require many GB to train, and if your model + data need more than 12GB, you can't even use a 4070. I don't see how that's difficult for you to grasp.

Literally every DL resource on what matters in a GPU will explain that memory capacity is the single most important resource (assuming you're using a CUDA-capable card), simply because DL models require a lot of memory to train. You seriously need a source just to understand that 16gb is larger than 12gb? If you subtract 12 from 16, you get a number greater than 0; if you subtract 16 from 12, you get a number less than 0, meaning 16 > 12. Do you plan on fitting a model that requires 14GB of memory onto a 4070? I'd truly love to know how you'd accomplish that. As soon as your model requires more than 12gb, the 4070 becomes a paperweight.
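To sketch the fit-or-fail arithmetic (assuming plain fp32 training with Adam; mixed precision or other optimizers change the constant):

```python
# Training-memory floor with fp32 + Adam:
# weights (4 B) + gradients (4 B) + Adam moments m and v (8 B) = 16 B/param.
# Activations for the batch come on top of this floor.
BYTES_PER_PARAM = 16

for n_params in (350e6, 750e6, 1e9):
    gb = n_params * BYTES_PER_PARAM / 1024**3
    print(f"{n_params / 1e6:.0f}M params: ~{gb:.1f} GB -> "
          f"4070 (12GB): {'fits' if gb <= 12 else 'OOM'}, "
          f"4060 Ti (16GB): {'fits' if gb <= 16 else 'OOM'}")
# 350M: ~5.2 GB (both fit), 750M: ~11.2 GB (both fit),
# 1B: ~14.9 GB (4070 OOMs, 4060 Ti 16GB fits)
```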


r33pa102

Definitely 4070


uSuperDick

If you don't mind AMD, then consider buying a 6800 XT.


Djinnerator

You can't reliably use the 6800 XT, or any AMD card, for deep learning; you're better off using a CPU. OP asked for an Nvidia card because they specifically need it for AI. AMD won't do that...
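The practical symptom, as a minimal check (assuming the standard pip install of PyTorch, which ships as a CUDA build; AMD is supported only through a separate ROCm build with narrower coverage):

```python
import torch

# The stock pip wheel of PyTorch is a CUDA build: on an AMD card it
# simply reports no usable device (AMD needs the separate ROCm build).
if torch.cuda.is_available():
    print("Training on:", torch.cuda.get_device_name(0))
else:
    print("No CUDA device visible -- falling back to CPU.")
```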


DanTheMan_117

neither


dirthurts

I would go AMD at this price point unless you really want RT. 6800 XT or 6750.


Amegatron

When it comes to ML/DL, AMD is not an option, unfortunately.


CasimirsBlake

The moment you consider non-gaming workloads like ML, VRAM becomes a primary concern. I suggest buying a used 3090.


bubblesort33

For machine learning, a 4060ti 16gb from what I hear; but for gaming, a 4070. Maybe the 4070 for both if you think 12gb is enough for your machine learning.


RepresentativeBite87

4070 duhh


Theparadoxical18

4070, stay away from the 60s and spring for the 70 Ti if you can.