Hacker News

I thought an interesting point was the liquid cooling -- it's unclear how important this is to them, but I'm guessing it means they designed the chip with a TDP that requires liquid cooling.

This (wanting higher density) is the opposite of the trade-off that I was expecting. In my (limited and out of date) experience, power was the limiting factor before space, and I believe AI racks have very high power draws already.

I would have guessed this was because larger nodes are better for AI's tight communication patterns, but they specifically call out datacenter space as the constraint. Curious if anyone knows more about this.



My understanding is that if you are renting your data center space right now, power is still the limiting factor, and you will need to leave some rack space empty to be able to run GPUs.

On the other hand, if you are building your own data center, which is the case for Microsoft, presumably you can provision high-power zones to run GPUs.
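To make the trade-off concrete, here's a back-of-the-envelope sketch. All the numbers (rack power feed, server draw, server size) are illustrative assumptions I've picked for the example, not figures from the article:

```python
# Rough power-vs-space calculation for a GPU rack in a rented colo.
# Every constant below is an assumption for illustration only.

RACK_POWER_BUDGET_KW = 17.0   # assumed per-rack power feed in a typical colo
GPU_SERVER_KW = 10.2          # assumed draw of one 8-GPU server
SERVER_RACK_UNITS = 8         # assumed physical height of one server, in U
RACK_UNITS = 42               # standard full-height rack

servers_by_power = int(RACK_POWER_BUDGET_KW // GPU_SERVER_KW)
servers_by_space = RACK_UNITS // SERVER_RACK_UNITS

print(f"servers that fit by power: {servers_by_power}")
print(f"servers that fit by space: {servers_by_space}")
```

Under these assumed numbers, power caps the rack at 1 server while the physical space would hold 5, i.e. most of the rack sits empty -- which is the "leave some rack space empty" situation described above. Owning the facility lets you raise the per-rack power budget instead.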



