Is training compute interchangeable with inference compute, or do training and inference have significantly different hardware requirements?
If training and inference hardware is pooled together, I could imagine a model where training simply soaks up whatever compute is unused at any given moment (?)
Also, if you pull too many resources away from training your next model in order to earn inference revenue today, you'll fall behind in the larger race.
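The "training fills unused compute" model above can be sketched as a toy allocator. This is a minimal, hypothetical illustration (all names and the `min_training_gpus` floor are my assumptions, not a description of any real scheduler): inference demand is served first, training absorbs the slack, but a reserved floor keeps training from being starved by today's inference revenue.

```python
def allocate(total_gpus: int, inference_demand: int, min_training_gpus: int):
    """Split a pooled GPU fleet between inference and training.

    Inference gets priority up to demand, training soaks up the rest,
    but never less than a reserved floor (hypothetical policy knob
    reflecting the 'don't fall behind on the next model' tradeoff).
    Returns (inference_gpus, training_gpus).
    """
    # Training keeps at least the floor, otherwise takes all slack.
    training = max(min_training_gpus, total_gpus - inference_demand)
    # Inference gets whatever remains; demand beyond that goes unserved.
    inference = min(inference_demand, total_gpus - training)
    return inference, training

# Quiet hours: lots of slack, so training fills it.
print(allocate(100, 30, min_training_gpus=20))   # (30, 70)
# Peak demand: training shrinks only down to its reserved floor.
print(allocate(100, 95, min_training_gpus=20))   # (80, 20)
```

Whether this pooling works in practice depends on the hardware question above: if training wants tightly interconnected clusters while inference runs fine on smaller, cheaper nodes, the two pools may not be freely interchangeable.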