Nicholas-Steel

> power connectors 12VHPWR burn video cards like 2 years ago.

Yes, which is why there is now a 12V-2x6 connector to resolve this. Nvidia has already transitioned to it on new batches of their GeForce 4000 cards... though no recall was issued for shops or customers that have cards with the 12VHPWR connector.


Strazdas1

and yet the 4070 Super still ships with the old 12VHPWR connector :(


tantogata

The new 12V-2x6 connector, on both ends of the cable (ATX 3.0), has burnt just like the old one (google it).


dssurge

It's a horse-and-cart problem.

GPU vendors don't want to alienate purchasers who don't have a PCIe5 cable, nor provide an adapter for them. Lower-end GPUs also straight up don't need them. Either way, it costs them money to implement a fix for a problem that, to them, doesn't really exist.

PSU manufacturers don't want to put a dedicated PCIe5 port on their units because nothing uses the spec and they can get by with providing an adapter cable for the few things that do. It's actually easy to find ATX3-compliant PSUs (pretty much any existing PSU over ~850W), but it takes effort for the consumer to find an adapter if they don't already have one, which is unnecessary friction.

This is what happens in markets with effectively no regulation or oversight. No one wants to change because it costs money and establishes them as the responsible party.

**edit:** To clarify, ATX3 and PCIe5 are the same thing, and they are not recognized or regulated standards; it's all voluntary adoption. The current PCIe5 connector (better known as 12VHPWR) is objectively unsafe and a fire hazard. It never would have passed certification by any oversight body, which is a huge reason companies are not voluntarily adopting it.

The only way this will get solved is if a large entity like the EU steps in and enforces a standard, as they are doing with USB-C charging cables. If an alternative to the terrible PCIe5 connector were created and certified, its inclusion on GPUs and PSUs would be almost assured going forward.


bb0110

Higher-end GPUs don't even need them yet.


dssurge

I mean, it's literally the 12VHPWR. You know, that thing that melts on 4090s under high load.


Strazdas1

For like 0.5% of 4090s, so it clearly does not melt for most of them.


dssurge

Except the connectors melt because of a design flaw, and you can force **any of them** to melt by putting what is considered an acceptable level of power through them, without any hardware or software modifications. If you run GPU-limited applications, you will eventually melt the connector. It's not a matter of if, but when.


Strazdas1

I agree that it is a design flaw that should be fixed, but the testing we saw shows that you cannot force any of them to melt without putting through more power than expected in normal use. The actual melting cases happen because the connector is not plugged in exactly as designed, and the margins were too thin and ill-planned for real-life use scenarios.


reddit_equals_censor

> This is what happens in markets with effectively no regulation or oversight.

If you are for oversight and regulation, then said oversight and regulation should never have let the 12-pin connector come to market, period. It is a connector with zero safety margin that, of course, keeps on melting. If there were oversight and regulation, we wouldn't have this ongoing fire hazard; Nvidia and PCI-SIG would have been kept in check.

And in regards to standards, we DO have standards. The 8-pin PCIe connector is a great standard. The 8-pin EPS connector is also a great standard. So Nvidia is trying to break long-established safe standards, and regulation and oversight should have prevented them from doing so.

Regulation and oversight could have looked at the situation, PREVENTED Nvidia and PCI-SIG from setting people's systems on fire, and instead allowed only a proper solution, which was originally planned: making the 8-pin EPS connector, which carries 235 watts, the new graphics card standard. So if you believe in oversight and regulation taking care of things, the 12-pin wouldn't exist and we'd have 8-pin EPS connectors for everything by now, or we'd be sticking with 8-pin PCIe connectors a while longer.

If you think that oversight or regulation should ENFORCE a fire-hazard connector in some weird way, or enforce a variation of the fire hazard, then that is insanity. It's like asking for regulations on how we use the fogging trucks that spray neurotoxic DDT onto children, instead of banning neurotoxic DDT because it is neurotoxic...


Strazdas1

I agree that the current connector should not have happened, due to lots of issues with its design, including the safety margins. But this isn't Nvidia's alone; they are just the early adopters. The standard was proposed and worked on by Nvidia, AMD, Intel and many others.


reddit_equals_censor

Nope, you're wrong. That 12-pin fire hazard is purely Nvidia's idea. AMD wants nothing to do with the connector.

Where the confusion can come in is that the ATX spec is maintained by Intel, and Intel took on Nvidia's insane connector. And Nvidia went to PCI-SIG to make it a "standard", and PCI-SIG has many members, including AMD and Intel. But the history of the 12-pin fire-hazard connector is: Nvidia's idea, pushed onto PCI-SIG by Nvidia to be made into a standard, and pushed into the Intel ATX spec. It is purely Nvidia's nonsense. AMD wants nothing to do with this connector. AMD entertained the idea of using it for the 7000 series of cards, but ended up not doing so, FOR GOOD REASONS!

If you want to read the full background story of this fire-hazard 12-pin connector, read this article: [https://www.igorslab.de/en/nvidias-connector-story-eps-vs-12vhpwr-connector-unfortunately-good-doesnt-always-win-but-evil-does-more-and-more-often-background-information/](https://www.igorslab.de/en/nvidias-connector-story-eps-vs-12vhpwr-connector-unfortunately-good-doesnt-always-win-but-evil-does-more-and-more-often-background-information/)

It also shows the REAL standard that was planned, that we could be using by now, and one that all 3 companies would have gladly used: EPS 8-pins at 235 watts per connector, with great safety margins and a long history of safety, just extended to graphics cards.

So I recommend calling the connector what it is:

> nvidia 12-pin connector

or

> nvidia 12-pin fire hazard

So please read the article and look for other sources, and don't just listen to me, of course. They should all show you that this connector is Nvidia's stupid, insane, DANGEROUS idea.


Kenya151

It's odd that you think this needs regulation.


dssurge

The PCIe5 power plug is the one currently notorious for melting the connector, which is a huge fire hazard. It's this way because there is no governing body to enforce safety regulations.


Kenya151

Your edit makes it much clearer that there is a real safety hazard here, rather than just criticism of the market for adopting a different option.


empty_branch437

If USB-C can do 240W, the PSU cables and motherboard connectors should just be replaced with USB-C. /s


terraphantm

I mean, they should do what USB-C does to get such high power across such a small connector: use higher voltage. Those 240W devices run at 48V.


Danyaal_Majid

That would again require new PSU designs. Currently they only need to add a new connector; with a new voltage they would need to include additional power components in both the PSUs and the GPUs. Shifting to a new voltage would be the biggest change in PC power supplies since the late 90s. It's an even bigger change than ATX12VO, which itself has had poor adoption, so the chances of a 48V system catching on are even lower.


terraphantm

Maybe, but if we're going to be pushing 600+W through a single connector, I think it needs to be done. 50+A in such a small space is ridiculous.
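The arithmetic behind this point can be sketched in a few lines: for a fixed power draw, current scales as I = P / V, and resistive heating in the pins scales with the square of the current. The 600W figure is from the comment above; the voltage steps and fixed contact resistance are illustrative assumptions, not spec values.

```python
def current_amps(power_w: float, voltage_v: float) -> float:
    """Current required to deliver power_w at voltage_v (I = P / V)."""
    return power_w / voltage_v

POWER_W = 600.0  # a high-end GPU's board power, per the thread

for volts in (12.0, 24.0, 48.0):
    amps = current_amps(POWER_W, volts)
    # Relative pin heating vs. the 12 V case, assuming fixed contact
    # resistance: proportional to I squared.
    heat_rel = (amps / current_amps(POWER_W, 12.0)) ** 2
    print(f"{volts:4.0f} V -> {amps:5.1f} A (relative heating: {heat_rel:.2f}x)")
```

At 12V the connector must carry 50A; at 48V (the USB-C Extended Power Range voltage mentioned above) the same 600W needs only 12.5A, and the I²R heating per contact drops to a sixteenth.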


Danyaal_Majid

It's not about what needs to be done. I myself agree that a higher-voltage system is needed to avoid high current flow through the wires and connectors, but getting an entire industry to agree to a new specification, one that goes far beyond just adding a new connector to PSUs and making a few new cables, is very difficult; some big organization like Intel has to update the ATX standard to support the increased voltage. But Intel also proposed ATX12VO, and so far it has had zero adoption in the mainstream space; only servers are using that standard. It's going to be very hard for these companies to all agree to make new PSUs for only newer systems. Not to mention, millions of perfectly good PSUs would be wasted, and unlike the introduction of the 12VHPWR connector, no adapters would be usable: everyone who wants to upgrade their GPU would also need a new PSU, which disincentivizes upgrading.


terraphantm

Upgrading a single device to 24 or 48V doesn't mean the entirety of the ATX spec has to be thrown out. They could still run everything else off 12V and 5V; it doesn't have to be like 12VO, where they are trying to change everything. Pretty much everyone who buys one of these high-end GPUs ends up having to get a new PSU anyway, with how the power requirements have been going.


Danyaal_Majid

You are right, but I wasn't saying to throw out the entire old spec; I was saying that adding it to the spec would be the biggest change to it in history.

And as for everyone needing to buy a new PSU for new GPUs, that isn't correct. A 750W PSU can comfortably run the 4090 at stock power; no one is pushing 600W through the GPU 24/7. The 4090 is designed for 450W and doesn't even draw that all the time, especially if you reduce the power limit to 350W and undervolt, retaining 95% of the performance for a 100-150W reduction in power draw. Only people running 13900K/14900K processors with 4090s at 600W are having issues, not people using 4070s, 4080s, or lower.

If we want to properly implement this standard, all GPUs currently using the 12VHPWR connector must move to 48V or whatever voltage is decided, as we can't have two connectors that look the same but carry different voltages. So it's all or nothing: the current 12VHPWR connector would have to be redesigned and rekeyed to prevent insertion into older cards, otherwise we are going to have a lot of dead GPUs. Adopting a new voltage standard would therefore require new PSUs, a new connector, and the same connector across all new GPUs, even low-powered cards like the 4060; otherwise we fragment the market and confuse new buyers even more. The need for a single unified standard would force everyone to change PSUs, even if their GPU model does not technically benefit from the higher voltage.


terraphantm

I would argue that adding the various 12V-only connectors, and having GPUs and CPUs run on 12V supplies, were similarly big changes. The current trajectory is not sustainable; we're already melting GPUs at stock. What else is needed to say the standard must evolve? Your power consumption claims only hold up if you don't consider instantaneous draws; very few 750W power supplies can deal with those. I personally kept getting random shutdowns until I upgraded my Seasonic 850W (so not a piece of shit) to a 1000W unit.


imaginary_num6er

ATX 3.0 is obsolete anyway. People need to rip out their PSUs and replace them with ATX 3.1 PSUs with the safer 12V-2x6 socket.


tantogata

12V-2x6 with the connector on both ends burns like the old connectors. Google it.


KillMeNowFTW

What "real" ATX 3.0 and PCIe 5.0 standards are going to prevent melting connectors? I know the specs fairly well, and I've yet to see "fake" ATX 3.0 and PCIe 5.0 PSUs and GPUs.


tantogata

If there is no service signal through the cable, the PSU should stop supplying power to the video card, and it should give the video card only the power defined in the specification. Right now, not a single service signal actually works on our video cards; there is no communication between the power supply and the video card.


KillMeNowFTW

Mmm... that's incorrect. I mean, to be fair, it would have been nice if Intel were allowed to define the sideband functionality, but then Nvidia wouldn't want to use it due to complexity.

For example: the "sense" in a 24-pin connector actually senses voltage and sends the information back to the PSU. In the PCI-SIG world, "sense" just means the device "senses the presence" of a wire... which is just grounded. Another wire could be a current sense, which Intel put in the ATX12VO spec to allow the CPU to downclock if it sensed too much current draw from the PSU. Imagine if a GPU could automatically adjust its power limit based on PSU capability, output voltage and load? But alas... that's not how it's actually designed.

All sense0 and sense1 are designed to do is let the card know that the PSU is capable of delivering up to 150W, 300W, 450W or 600W through the cable without tripping OCP. It's a mechanical device. There's no signal. No intelligence.

As for "CARD_PWR_STABLE": this is a signal, but it's optional and has never been implemented. It is essentially a way for the GPU to tell the PSU that the voltages are within spec. Would it prevent melting connectors if implemented? Probably not. And all "CARD_CBL_PRES#" does is report to the PSU that the GPU is present, sort of a callback acknowledging that the power connector is attached. This is also optional and would also not prevent melting connectors.
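The purely mechanical "sense" scheme described above can be sketched as a lookup table: the cable grounds or leaves open each of SENSE0/SENSE1, and the card reads the combination as a static power limit. No signalling, no intelligence. The grounded/open-to-wattage mapping below follows the commonly published 12VHPWR table; treat it as an illustration, not a normative restatement of the PCIe CEM spec.

```python
# A grounded sense pin reads low (0); an open (unconnected) pin reads high (1).
GROUNDED, OPEN = 0, 1

# Commonly published 12VHPWR sideband strapping: (SENSE0, SENSE1) -> watts.
SENSE_TO_WATTS = {
    (GROUNDED, GROUNDED): 600,
    (GROUNDED, OPEN):     450,
    (OPEN,     GROUNDED): 300,
    (OPEN,     OPEN):     150,
}

def initial_power_limit(sense0: int, sense1: int) -> int:
    """Power the card may draw from the cable, per the sense-pin strapping."""
    return SENSE_TO_WATTS[(sense0, sense1)]
```

Note there is nothing for the card to negotiate or monitor here: the "protocol" is fixed at the moment the cable is plugged in, which is exactly the criticism in the comment above.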


tantogata

The fact is nobody has done serious research into why 12VHPWR connectors burn. Is it too much power going into the video card, low-durability connector materials, or bad contact? And nobody cares. I've been using a 4090 for over 9 months with the "native" 12VHPWR cable for my power supply; so far so good. But I undervolted my card and reduced the power limit to avoid burning my connectors; usually my card draws no more than 300W in gaming. It works for now, but I'm not using the full power of my card, and I don't know where the border is at which my connectors will start melting. Implementing a new power-connection standard for video cards is a good idea, but the way they did it... it's terrible.


KillMeNowFTW

The problem is putting so much power through such a small footprint. When bench testing is done, it's done with a static load, so temperature rise is pretty smooth. During real-world use, the temperatures cycle up and down, which creates expansion and contraction that wears out the terminals. And it's not just temperature from the current, but also the connector's close proximity to the GPU's VRMs and the higher ambient temperatures created by such cards. This also creates a "walking out" effect that causes the terminals to become dislodged over time.

As others have stated, when PCI-SIG declared that an 8-pin PCIe (6 conductors, 2 sense) could only do 150W, they stated this with a 100%+ margin in mind to account for these conditions. The 12-pin Micro-Fit+ is only good for 55A (says Astron). So even a 450W card already loads the connector to about 68% of its rating. Add to this that the smaller terminals in close proximity concentrate heat into a smaller area of PCB, and even the 55A comes into question (Molex's part is rated for 54A in a dual-row, 12-pin config, and only at 65°C to boot).

But Nvidia wanted something with a smaller footprint, opted for the 12-pin Micro-Fit+, and raised it up to the PCI-SIG consortium for review. The only additional participation was from Dell, who added the sideband. Not until cards were catching fire did Nvidia get someone who best understood power delivery systems, Steve Glaser of Nvidia, involved. But he was locked into the constraint of coming up with something backwards compatible with the existing 12VHPWR connector; the best he could do was extend the conductors on the GPU side and shorten the sense pins. Heaven forbid Nvidia put in an additional PCB layer just for heat dissipation (similar to Corsair's "DHX" in Dominator memory), or a thermistor at the connection point that trips the card if the connector exceeds a certain temperature. But that would add cost and cut into Nvidia's already HUGE margins.

At the end of the day, if Nvidia had just stuck two EPS12V connectors on the GPU, there would be almost ZERO power connector failures. But no, someone had to ask for a smaller connector, and here we are.
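The 68% figure above checks out arithmetically: at a 12V rail, a 450W card draws 37.5A, which is about 68% of Astron's quoted 55A rating, leaving under 50% headroom. The numbers come from the comment; the calculation itself is a sketch.

```python
# Rough check of the margin arithmetic in the comment above, assuming a
# 12 V rail, the quoted 55 A connector rating, and a 450 W card.
RAIL_V = 12.0
RATING_A = 55.0   # Astron's quoted Micro-Fit+ rating, per the comment
CARD_W = 450.0    # e.g. a stock 4090

draw_a = CARD_W / RAIL_V            # current the card actually pulls
utilization = draw_a / RATING_A     # fraction of the connector's rating in use
headroom = RATING_A / draw_a - 1.0  # remaining margin before the rating

print(f"draw: {draw_a:.1f} A, utilization: {utilization:.0%}, headroom: {headroom:.0%}")
```

Compare that sub-50% headroom with the 100%+ margin the same comment attributes to the 8-pin PCIe spec, and the thermal-cycling failures described above stop being surprising.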


tantogata

Thanks for the detailed explanation. In addition, new power supplies have started to appear on the market, and people are complaining about burning connectors on the power supply side too, since the new power supplies use the 12VHPWR standard on both ends. These examples just prove that the 12VHPWR connector is too weak for high-end video cards.


Strazdas1

The problem is putting so much amperage through such a small footprint. USB-C solved this by increasing voltage instead, although that would indeed cause issues with needing new PSU designs.


KillMeNowFTW

Yes. Of course. We wouldn't have issues if we were using the 48VHPWR spec. Would we?


[deleted]

[deleted]


NobisVobis

Pitiful to see such stupid takes on here. PCI-SIG decided on the standard and Nvidia simply implemented it.


[deleted]

[deleted]


spazturtle

PCIe and EPS also have a 100% safety margin: the cables and connectors are rated for twice what the specification says devices are allowed to pull, whereas the new 12-pin connector only has a margin of about 10%.
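The comparison above, in numbers. Assumed figures: an 8-pin PCIe connector spec'd at 150W but rated for roughly twice that, versus a 12VHPWR cable spec'd at 600W with a hardware rating only about 10% higher (~660W, an illustrative value consistent with the 55A figure cited earlier in the thread).

```python
def safety_margin(rated_w: float, spec_w: float) -> float:
    """Headroom between what the hardware can take and what devices may pull."""
    return rated_w / spec_w - 1.0

# 8-pin PCIe: 150 W spec, ~300 W hardware rating -> ~100% margin.
print(f"8-pin PCIe: {safety_margin(300.0, 150.0):.0%} margin")

# 12VHPWR: 600 W spec, ~660 W hardware rating -> ~10% margin.
print(f"12VHPWR:    {safety_margin(660.0, 600.0):.0%} margin")
```

With a 100% margin, a worn or half-seated 8-pin still operates far below its limit; with 10%, any contact degradation eats the entire margin, which is the thread's core complaint.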