The answer is kinda "whatever Nvidia implements." Research papers are literally built around Nvidia's hardware capabilities.

A good example of this is Intel canceling, and AMD sidelining, their unified-memory CPU/GPU chips for AI. They are super useful!.. In theory. But in practice, they're totally useless, because no one is programming frameworks with unified-memory SoCs in mind, since Nvidia doesn't make anything like that.
