> At these wattages just give it its own mains plug.
You might think you're joking, but there are gamer cases with space for two PSUs, and motherboards that can control a secondary PSU (turning both PSUs on and off together). In a computer built like that, you have two mains plugs, and the second PSU (and thus the second mains plug) is usually dedicated to the graphics card(s).
I've done this, without a case, not because I actually used huge amounts of power, but because neither PSU had the right combination of connectors.
The second one was turned on with a paperclip, obviously.
Turns out graphics cards and hard drives are completely fine with receiving power but no data link. They just sit there (sometimes with fans at max speed by default!) until the rest of the PC comes online.
You can also hook up a little adapter that takes SATA power on one side and a 24-pin ATX connector on the other. As soon as there is power on the SATA side, a relay switches and the second PSU turns on.
This may not be fast enough for some add-in cards. It would be better to connect the PS_ON (green) wires from both ATX 24-pin connectors together, so that the motherboard turns on both PSUs simultaneously.
This would still have the disadvantage that the PWR_OK (grey) wire from the second PSU would not be monitored by the motherboard, leaving the machine prone to partial-reset quirks during brown-outs. Normally a motherboard will shut down when PWR_OK is deasserted, and refuse to come out of reset until it returns.
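If you want the second supply's PWR_OK watched anyway, a small microcontroller or a Pi can babysit both signals. A toy sketch, assuming a Raspberry Pi with the RPi.GPIO library, appropriate level shifting / open-collector drivers between the 5V ATX signals and the 3.3V GPIOs, and made-up pin assignments:

```python
# Hedged sketch: mirror the motherboard's PS_ON onto a second PSU and
# watch both PWR_OK lines. Pin numbers and wiring are hypothetical;
# ATX signals are 5 V, so level shifting / open-collector drivers are
# assumed between the PSUs and the Pi's 3.3 V GPIOs.
import time
import RPi.GPIO as GPIO

PS_ON_SENSE = 17    # reads the motherboard's PS_ON (active low)
PS_ON_DRIVE = 27    # pulls the second PSU's PS_ON low (via open-collector)
PWR_OK_1 = 22       # PWR_OK (grey) from the primary PSU
PWR_OK_2 = 23       # PWR_OK (grey) from the secondary PSU

GPIO.setmode(GPIO.BCM)
GPIO.setup(PS_ON_SENSE, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(PS_ON_DRIVE, GPIO.OUT, initial=GPIO.HIGH)  # HIGH = PSU off
GPIO.setup([PWR_OK_1, PWR_OK_2], GPIO.IN)

try:
    while True:
        want_on = GPIO.input(PS_ON_SENSE) == GPIO.LOW  # PS_ON is active low
        GPIO.output(PS_ON_DRIVE, GPIO.LOW if want_on else GPIO.HIGH)
        if want_on and not GPIO.input(PWR_OK_2):
            # The motherboard never sees this signal, so at least log it
            # rather than letting a brown-out on the second rail go unnoticed.
            print("warning: secondary PSU PWR_OK deasserted")
        time.sleep(0.01)
finally:
    GPIO.cleanup()
```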
No they don't. Server-grade redundant PSUs usually use a CRPS (Common Redundant Power Supply) form factor, where individual PSU modules slot into a common multi-module housing built around a PDB (power distribution board). Each module typically outputs only 12V, and the PDB handles the down-conversion to 5V, 5VSB, and 3.3V. From the PDB, a single set of power cables runs to the system's components, including the motherboard and any PCIe add-in cards. Additionally, there is a PMBus cable between the PDB and the motherboard, so that the operating system and the motherboard's remote management interface (e.g. IPMI) can monitor the status of each individual module (AC power present, measured power input, measured power output, measured input voltage, measured input frequency, fan speeds, temperature, which module is currently powering the system, etc.).
Modules can be pulled from the PDB, replaced, and reconnected to a power source without shutting down the machine or even removing the case lid. You don't even need to slide the machine out of the rack if you can get to the rear.
But that still only happens over one set of power cables, from the PDB. The post you replied to described using a separate PSU with separate component power cables to power specific components. Current sharing in server PSUs is handled by every PSU equally powering all of the components.
Edit: For example, in a 3+1 redundant configuration, 3 PSUs would be active, each contributing 1/3 of the total load current; 1 PSU would be in cold standby, ready to take over if one of the others fails or is taken offline.
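For a flavor of what that PMBus monitoring looks like from the host side, a minimal sketch. The bus number and device address are made up; the command codes and LINEAR11 decoding come from the PMBus spec, and it assumes Linux with the smbus2 package:

```python
# Hedged sketch: poll a few standard PMBus registers on one PSU module.
# Bus number and device address are hypothetical; the command codes and
# the LINEAR11 format are from the PMBus specification.
from smbus2 import SMBus

PSU_ADDR = 0x58              # hypothetical PMBus address of one module
STATUS_WORD = 0x79
READ_VIN = 0x88
READ_IOUT = 0x8C
READ_TEMPERATURE_1 = 0x8D
READ_FAN_SPEED_1 = 0x90
READ_POUT = 0x96

def linear11(raw):
    """Decode PMBus LINEAR11: 5-bit signed exponent, 11-bit signed mantissa."""
    exp = raw >> 11
    if exp > 15:
        exp -= 32
    mant = raw & 0x7FF
    if mant > 1023:
        mant -= 2048
    return mant * 2.0 ** exp

with SMBus(1) as bus:        # hypothetical I2C bus number
    for name, cmd, unit in [
        ("VIN", READ_VIN, "V"),
        ("IOUT", READ_IOUT, "A"),
        ("POUT", READ_POUT, "W"),
        ("TEMP1", READ_TEMPERATURE_1, "degC"),
        ("FAN1", READ_FAN_SPEED_1, "RPM"),
    ]:
        raw = bus.read_word_data(PSU_ADDR, cmd)
        print(f"{name}: {linear11(raw):.2f} {unit}")
    status = bus.read_word_data(PSU_ADDR, STATUS_WORD)
    print(f"STATUS_WORD: {status:#06x}")
```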
Also put it in a separate case, and give it an OCuLink cable to attach to the main desktop tower. I suspect that's exactly where we're heading, to be fair.
I've built video rigs that did just that: an external expansion chassis you could put additional PCIe cards in when the host only had 3 slots. The whole eGPU thing used to be a cute novelty, but it might have been more foreshadowing than we realized.
It's about 75-90% of the speed of light, but even that's too slow.
Modern hardware components are getting down to latencies in the single-digit nanoseconds.
Light travels about 30 cm in a nanosecond, so extending a PCIe port to a different box is going to make a measurable difference.
A single round trip isn't going to register, but there are multiple in a frame, so it's not inconceivable that it could add up at some point. I would like to see it demonstrated, though.
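Back-of-envelope, using the numbers above (cable lengths are hypothetical):

```python
# Back-of-envelope: extra propagation delay from extending PCIe over a cable.
# Cable lengths are hypothetical; velocity factor per the comment above.
C = 299_792_458            # speed of light, m/s
velocity_factor = 0.8      # ~75-90% of c in a good cable; take the middle

for length_m in (0.5, 1.0, 2.0):
    one_way_ns = length_m / (C * velocity_factor) * 1e9
    print(f"{length_m} m cable: {one_way_ns:.1f} ns one way, "
          f"{2 * one_way_ns:.1f} ns round trip")
# ~4.2 ns one way per metre: on the order of a DRAM access,
# but small next to a typical PCIe round trip of a few hundred ns.
```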
Without one of these rigs, you would not be able to do much at all because of the limited PCIe slots in the host. "Not much" here means render times of hours per clip, or even longer. With the external chassis and additional cards, you could get enough bandwidth for realtime playback. A typical workflow was taking RED RAW camera footage, which takes heavy compute to debayer, running color correction on the video, applying additional filters like noise removal, and finally writing the output back out to something like ProRes. Without the chassis, not happening; with the chassis, you got realtime playback during the session and faster-than-realtime rendering/exporting.
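Rough numbers on why realtime playback needs that kind of bandwidth (resolution and bit depth here are illustrative, not tied to any particular camera):

```python
# Rough numbers for one debayered stream. Frame geometry is illustrative.
width, height, channels, bytes_per_sample = 4096, 2160, 3, 2  # 16-bit RGB
fps = 24

frame_bytes = width * height * channels * bytes_per_sample
rate_gbps = frame_bytes * fps * 8 / 1e9
print(f"{frame_bytes / 1e6:.0f} MB/frame -> {rate_gbps:.1f} Gbit/s at {fps} fps")
# ~10 Gbit/s for a single stream before any intermediates: fine over a
# few PCIe 3.0 lanes, hopeless over the consumer interfaces of the era.
```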
Also, these were vital to systems like the trashcan Mac Pro, which had zero PCIe slots. It was a horrible system, and everyone I know who had one reverted to their 2012 cheese-grater towers with the chassis.
Another guy I know was building his own 3D render rig for home experimental use when those render engines started using GPUs. He built a 220V system that he'd unplug the dryer to use. It had way more GPU cards than he had slots for, by using PCIe splitters. Again, these were not used to draw realtime graphics to a screen; they were solely compute nodes for the renderer. He was running circles around the CPU-only render farm nodes.
People think that the PCIe lanes are the limiting factor, but again, that's only for getting the GPU's data back to the screen. As compute nodes, you do not need full lanes to get the benefits. But for doubting-Thomas types like you, I'm sure my anecdotal evidence isn't worth much.
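Back-of-envelope on why lane count barely matters for offline rendering (scene size and render time here are illustrative):

```python
# The scene upload is a one-time cost, dwarfed by minutes of GPU compute.
# Scene size and render time are illustrative; bandwidths are the usual
# usable figures for PCIe 3.0 links.
scene_gb = 4.0
render_s = 300.0                      # 5 minutes of GPU compute per frame

for lanes, gb_per_s in [("x16 gen3", 15.8), ("x4 gen3", 3.9), ("x1 gen3", 0.985)]:
    upload_s = scene_gb / gb_per_s
    overhead = upload_s / (upload_s + render_s) * 100
    print(f"{lanes}: {upload_s:5.1f} s upload -> {overhead:.1f}% of total time")
# Even at x1, the transfer is ~4 s against 300 s of rendering: noise.
```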
There were no latency concerns. These were video rigs, not realtime shoot-'em-ups. They were compute devices running color correction and other filters, not pushing a video signal to a monitor at 60 fps or 240 Hz refresh nonsense. These did real work /s
We could also do it like we do in car audio: just two big fat power cables, positive and negative, 4 AWG or even bigger, with a nice crimped ferrule or lug bolted on.
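For scale, a quick sanity check (the 4 AWG resistance figure is from standard AWG tables; run lengths are hypothetical):

```python
# Voltage drop on 4 AWG copper feeding a 600 W load at 12 V.
# Resistance from standard AWG tables; run lengths are hypothetical.
ohms_per_m_4awg = 0.000815   # ~0.25 mOhm/ft for 4 AWG copper
load_w, volts = 600, 12.0
amps = load_w / volts        # 50 A

for run_m in (0.5, 1.0, 2.0):
    loop_r = 2 * run_m * ohms_per_m_4awg   # out and back
    drop = amps * loop_r
    print(f"{run_m} m run: {drop * 1000:.0f} mV drop, {drop * amps:.1f} W lost")
# 50 A on 4 AWG is routine in car audio; the drop over a short run is tiny.
```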