DC power wise, the original reason they did this is that it's grossly inefficient to have 42RU worth of 1U servers, each with its own individual 110-240VAC to 12/5VDC power supply inside.
Or the equivalent with four-servers-in-2RU type setups, which still have their own AC to DC power supplies.
They centralized the AC to DC conversion in a single unit in the rack, feeding either 277VAC or 480VAC to each rack, and ran 12VDC to each server. The new system wisely moves from 12VDC to 48VDC (same as most telecom equipment), and probably has basic DC-DC converters in each server unit for 48 to 12VDC conversion.
They've also gone with custom motherboards that entirely eliminate the 5VDC rails which are distributed by 'normal' ATX server power supplies.
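A hedged back-of-the-envelope in Python to show the scaling at work (the 500W per-server load and the 2 milliohm bus path are illustrative assumptions, not OCP figures): for the same delivered power, quadrupling the bus voltage cuts the current to a quarter and the resistive distribution loss to a sixteenth.

    # Back-of-the-envelope: why a 48VDC bus beats a 12VDC bus for the same load.
    # All numbers below are illustrative assumptions, not OCP figures.

    def distribution_loss(power_w, bus_voltage_v, bus_resistance_ohm):
        """I^2 * R loss in the busbar/cabling for a given load and bus voltage."""
        current_a = power_w / bus_voltage_v
        return current_a ** 2 * bus_resistance_ohm

    load_w = 500       # assumed per-server load
    r_bus = 0.002      # assumed 2 milliohm path from the rack PSU shelf to the server

    for v in (12, 48):
        loss = distribution_loss(load_w, v, r_bus)
        print(f"{v} V bus: {load_w / v:.1f} A per server, {loss:.2f} W lost in the bus")

    # 12 V: ~41.7 A and ~3.47 W lost per server; 48 V: ~10.4 A and ~0.22 W.
    # 4x the voltage -> 1/16th the I^2*R loss for the same delivered power.

Same reasoning the telecom world has relied on for decades, which is presumably part of why they landed on 48VDC rather than something exotic.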
I read somewhere that part of it was also doing the short-term battery backup at the rack level, without needing to convert back to mains AC and then back to DC again inside the server.
Hmm, but aren't there low-voltage DC server PSUs? I know that a lot of network gear (Cisco) can be powered by 48V DC.
Quick googling shows there are a lot of options like these. I don't see how designing your own server form factor is a better solution than using COTS components.
I've been trying to find cheap 48V power supplies for 1A-and-under telecom gear, but they seem to be pretty spendy. It's cheaper to buy old gear that includes the PSU than to order the PSU itself.
Actually it seems to me that standardization hampers the last bit of efficiency. Just one example that grinds my gears is the LEDs on the reference designs. Nobody needs these, but there they are, drawing current 24x7 in a dark warehouse in the middle of nowhere.
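A hedged back-of-the-envelope (the fleet size, LED count, and per-LED draw below are all assumptions, not measurements), just to show that even a few tens of milliwatts per status LED adds up across a warehouse running year-round:

    # Illustrative estimate of the power burned by status LEDs nobody looks at.
    # Every figure here is an assumption for the sake of the example.

    servers = 50_000        # assumed fleet size
    leds_per_server = 5     # assumed: power, activity, NIC link lights, etc.
    watts_per_led = 0.04    # assumed ~20 mA at ~2 V per LED

    total_w = servers * leds_per_server * watts_per_led
    kwh_per_year = total_w * 24 * 365 / 1000

    print(f"{total_w / 1000:.1f} kW continuous, ~{kwh_per_year:,.0f} kWh/year")
    # ~10 kW continuous, ~87,600 kWh/year, before counting the extra cooling load.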
It was described in detail much earlier than that, in ESR's The Magic Cauldron, part of The Cathedral and the Bazaar:
"...When the development of the open-source X window system was funded by DEC in the 1980s, their explicit goal was to 'reset the competition'. At the time there were several competing alternative graphics environments for Unix in play, notably including Sun Microsystems's NeWS system. DEC strategists believed (probably correctly) that if Sun were able to establish a proprietary graphics standard it would get a lock on the booming Unix-workstation market. By funding X and lending it engineers, and by allying with many smaller vendors to establish X as a de-facto standard, DEC was able to neutralize advantages held by Sun and other competitors with more in-house expertise in graphics. This moved the focus of competition in the workstation market towards hardware, where DEC was historically strong. ..." from http://www.catb.org/~esr/writings/cathedral-bazaar/magic-cau...
The essay you linked attributes the concept to Joel Spolsky, saying that he wrote about it in 2002 and quoting from that. So the concept was coined quite long ago.
The idea is old, but I didn't think people were referring to it by name before. Searching for "commoditize your complement", the first results I get are recent ones discussing the essay I linked, though digging deeper there are quite a few linking to the Spolsky post as well.
Could OCP make a rack that behaves like a blade server?
Construct a server rack that works like a network patch panel and server power bus? Then you'd just install the server in the rack slot, with no more cables to the server. Bonus points if a robot could slide the server into the rack.
There are rack designs out there built on this idea, but they are usually pretty specialized. For example, the Cray XC series has racks with built in power, network, and out of band monitoring.
A downside of this kind of thing is that it makes upgrades and maintenance harder, and you often have to do any hardware work on a whole rack at once. Heterogeneous setups get really hard. And it’s usually very vendor specific.
You'd either need a different rack design per particular server, or a rack that is overdesigned for 99% of the servers plugged into it (e.g. different speeds/counts of network connections). If the thought is that the servers will be static for the lifetime of the rack, does wiring up a preset rack to a switch really save any time over wiring up preset server designs to a switch?
A rack that has two power feeds in a standardized position for each rack unit. When you slide the server into the rack, it gets power: it slides onto the power connector provided by the rack. More specifically, I mean that the server would automatically connect to an IEC 320 C13 female provided by the rack. The only modification to a standard server would be to standardize the position of its two 230V/110V power inlets.
Alternatively, connect the server to a 48VDC bus when it's racked in, providing +48 volts DC and a ground return, which could go through the server chassis.
Network connectivity should be provided in two standardized positions as well.
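A quick hedged sizing sketch of the two options above (the per-server load, rack size, and connector rating are assumptions for illustration): per-slot 230VAC C13s see only a couple of amps each, while a shared 48VDC bus has to carry the whole rack's current.

    # Rough sizing of the two proposed options; all load figures are assumptions.

    servers_per_rack = 40
    watts_per_server = 400      # assumed average draw

    rack_w = servers_per_rack * watts_per_server

    # Option A: per-slot IEC 320 C13 at 230 VAC (C13 is typically rated around 10 A)
    amps_per_c13 = watts_per_server / 230
    print(f"Per-slot C13 at 230 V: {amps_per_c13:.2f} A per server")

    # Option B: shared +48 VDC bus (with the return via the chassis)
    bus_amps = rack_w / 48
    print(f"48 VDC bus for the whole rack: {bus_amps:.0f} A total")
    # ~1.7 A per C13 vs ~333 A on the shared busbar, so the DC bus needs a
    # substantial busbar and blind-mate connectors rated far beyond a C13.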
No. Cost / value is not in alignment there. You want your servers to be cheap and disposable.
Plus lose the control plane and you lose all the blades within it. There's just no value to it unless you're doing workloads that benefit from a high degree of locality. Cloud services focus on having their networks be as fast as possible so as to reduce the disadvantage of not being highly local.
Oracle (my employer) and AWS have been announcing various HPC cloud products where we're starting to focus on highly local servers with fast interconnects, and it's still not blades. HPC workloads are rather untapped by clouds so far, and it's a big market.
Facebook does use it. They design their own hardware, but they do not manufacture it themselves; they aren't in the hardware manufacturing business. They partner with companies like Quanta to do that.
I'm well aware of that. I'm saying it in the sense that they don't buy or use much of the original "OCP platform" they show off at their events.
And much of it was said to have ended after a limited deployment in their Prineville DC, after which they switched back to regular OEM Quanta gear with just a few things added, like blue-green handlebars and "barebone" motherboard trays.
I've heard it whispered that the biggest buyers of the original "OCP platform" gear these days are not even Facebook, but some banks.
Facebook's entire blob storage and data warehouse (multi-exabyte) is run completely on OCP storage hardware built by ODMs, Quanta among them. Anyone telling you that we don't use OCP is grossly misinformed.
Source: I was on the blob storage team when we migrated all of our data from OCP's gen1 storage design [1] to the new gen2 storage design [2].
More OCP hardware (or designs that will eventually be contributed to OCP), in many different SKUs depending on use case. This page lists most of the current-gen hardware that I am aware of:
So your theory is, what, that Facebook spends tons of money designing multiple generations of hardware that they don't use? Why would that make any sense?
Well, in that case I just got fed disinformation. My buddy worked in their DC, and that was his first-hand account: "OCP stuff came with major deficiencies, haphazardly reverting everything back to off the shelf U1s"
ODM - Original Design Manufacturer: the specifications and some design work are done by FB, and then this company actually designs and builds the equipment that FB buys.
ODMs = System Integrators, if you're familiar with that term instead?
There are several companies that will do large-scale bespoke-design server manufacturing and assembly work.
Amazon, Facebook, Microsoft et al. all order servers through them that are built to specifications their hardware engineers have designed (usually in collaboration with them). Once you get above a certain scale, the value proposition of OEMs goes out the window.
What does that have to do with an open specification for racks of computers? This is a design that anyone could potentially adopt, and gain from the engineering efforts that those involved in OCP have done.
There's even a marketplace you can purchase Open Rack components through (and you could likely also go more direct to those companies rather than via OCP's marketplace):
https://www.opencompute.org/products