
>Doing this on-premise is also pretty tantalizing - I watched a video recently of Linus Tech Tips where he built a 64 core Threadripper and used virtual machines to replace four physical computers in his home including two players simultaneously gaming.

So basically a mainframe? I can't imagine it's economically viable though; a 64-core Threadripper costs more than eight Ryzen 3700Xs and clocks lower.



Inspired by Linus Tech Tips, and bummed that the new Threadrippers were out of my budget, I decided to build a 4-gamers-1-CPU box based on a Ryzen 3900X (12 cores) and an X570 Taichi, plus 4 GPUs in the Nvidia 1060 to 1660 range. (I had to use one of these: https://www.amazon.com/gp/product/B07YDH8KW9 to connect my 4th GPU to one of the m.2 slots.) My total cost was somewhere in the $2400 range.

I personally think it's a super awesome setup, but it's _definitely not_ for the faint of heart. You have to _really enjoy_ debugging crazy shit for it to be worth it.

But even at my "budget" $2400 price, a game streaming service at $20/month for 4 users would take about _ten years_ to add up to the same cost ($2400 / $20 = 120 months).

The economics of a streaming service are really quite killer.


Which OS do you use as a host? What I really dislike about my setup is the dedicated GPU I need just for the BIOS to POST.

//Edit: Using a Ryzen 5 2600 and a Gigabyte X470 Aorus Ultra Gaming


I'm using Unraid (a Linux distribution), which uses KVM for virtualization. If you're using KVM, you can pass through your primary GPU by dumping the vbios and then passing that along when you initiate the passthrough. Passing a custom vbios is pretty easy to do in Unraid, though dumping the vbios is still a manual process. I have to do that in my setup because I don't have any spare slots for another GPU, even a tiny one.
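
If you're curious what the manual vbios dump looks like on a generic Linux/KVM host, it usually boils down to reading the ROM back out of sysfs - a rough sketch, with a made-up PCI address (find yours with lspci; a primary GPU can need extra steps beyond this):

    # unlock the ROM for the card (PCI address is an example)
    echo 1 > /sys/bus/pci/devices/0000:0a:00.0/rom
    # copy the vbios out to a file
    cat /sys/bus/pci/devices/0000:0a:00.0/rom > vbios.rom
    # lock it again
    echo 0 > /sys/bus/pci/devices/0000:0a:00.0/rom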


Thanks! That leaves me thinking, though. I'm also using Unraid, and the main GPU is used exclusively for my VM passthrough (it's a Radeon). I thought the BIOS wouldn't free it for the VM once it had claimed it at boot, hence the GeForce GT710 for the BIOS. If I could free that up, I could host another gaming setup.


You can definitely run another gaming setup through that. Here's what you need to do:

- Follow the instructions in this video for getting a dump of your vbios: https://www.youtube.com/watch?v=mM7ntkiUoPk (you can stop once you've gotten the vbios)

- Make sure your Unraid is updated to at least 6.7

- Read the "New vfio-bind method" section of: https://forums.unraid.net/topic/80001-unraid-os-version-67-a...

- Use the knowledge gained from that to add the IOMMU group assigned to your GT710 to /boot/config/vfio-pci.cfg (see the example after this list)

- Reboot your Unraid server

- Do the normal GPU passthrough thing for the GT710 for a VM, but add the dumped vbios to the "Graphics ROM BIOS:" field in the VM "edit" GUI

Hopefully it should work :D
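
For illustration, that cfg file boils down to a single BIND line listing the PCI addresses to hold for vfio - these addresses are made up (find yours with lspci, and include the GPU's HDMI audio function too), and the forum post above has the exact syntax your Unraid version expects:

    BIND=0000:0b:00.0 0000:0b:00.1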

One thing to keep in mind about the vfio-pci.cfg file is that it's effectively a blacklist, and if you do something that could change your IOMMU group assignments (such as adding or removing a PCI device) you could end up inadvertently blacklisting a PCI device you don't intend to. All you need to do is update the IOMMU groups in vfio-pci.cfg to fix it, but it can freak you out if you're not expecting it.

(For example if I remove one of my GPUs, one of my SATA controllers will inevitably end up getting the IOMMU group that _used_ to belong to a GPU, so it'll get blacklisted, and two of my array drives will appear missing until I update the vfio-pci.cfg to match the new IOMMU groups)
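
If you want to eyeball the groups before and after a hardware change, the usual sysfs loop is something like:

    # print every PCI device along with its IOMMU group
    for d in /sys/kernel/iommu_groups/*/devices/*; do
        g=${d#/sys/kernel/iommu_groups/}
        echo "group ${g%%/*}: $(lspci -nns ${d##*/})"
    done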


Thanks a lot for the write-up! Once I have a tad more time on my hands, I'll tinker around a bit. :)


I'm using Linux and configured the kernel to basically entirely disown the GPU from the very start (blacklisted kernel modules, disabled framebuffer). After that, I'm able to passthrough the GPU to a VM.
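
One common way to wire that up, sketched under the assumption of an Nvidia card (the vendor:device IDs are examples - take yours from lspci -nn):

    # /etc/modprobe.d/vfio.conf - claim the GPU and its audio function for vfio-pci
    options vfio-pci ids=10de:1c03,10de:10f1
    softdep nouveau pre: vfio-pci
    blacklist nouveau

    # plus kernel cmdline flags appended to GRUB_CMDLINE_LINUX, e.g.:
    #   amd_iommu=on iommu=pt video=efifb:off
    # (intel_iommu=on instead on Intel systems)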


Thanks for the hint - I always thought it was the BIOS grabbing the GPU and not releasing it again for the VM. I might need to read up on some stuff.


Is your complete build posted somewhere (for example PCPartPicker)? Thanks


"Economically viable" isn't very glamorous these days. Or difficult.

You can pick up a used i5-3xxx or 4xxx system with 8GB of RAM real cheap. If you get one that has at least a 300W power supply, you can then put in a GTX 1660 or similar and do quite well with most games: max settings @ 1080p for some older titles, and still decently playable quality for the newer ones. And all for $300 USD or less.

I just put an RX 580 with 8GB of video RAM into my old i5-3570S system. The card was just $135 at Microcenter, because it was open-box.


> The card was just $135 at Microcenter, because it was open-box.

Even then, you'll need 27 months (2 years and 3 months) before it's more cost effective than GeForce Now: $135 / $5 per month = 27 months. And that assumes you're already on a desktop with a powerful enough CPU.

Personally, I started cloud gaming mostly because I was on a netbook at school (I liked the form factor, and the battery life was quite nice versus most laptops). First on OnLive, then LiquidSky, and for the past year, GeForce Now.

I don't believe the service will stay at $5/month though; both services I used before shut down because they weren't viable, and they were in the $10 range. LiquidSky was supposed to come back, but they've missed multiple release dates now, and with the release of GeForce Now I have no hope. Shadow seems more viable at $25/month, which makes your solution much more cost effective.


Shadow recently dropped their price to $13 USD a month on a one-year contract. The $34 USD/month for month-to-month is still pretty steep.


>I can't imagine it's economically viable though

Well, he also built a 6-PC water-cooling loop, so yeah, not all of it is particularly applicable to the average viewer.


The full-room water cooling was one of my favorite LTT projects. It might be inapplicable to most users but it was a pretty good idea. (If I remember correctly it had a lot of problems and never really worked, but I appreciate someone trying it out.)


Same here. That's usually the kind of video I watch, 'cause I'm not that interested in seeing benchmarks of the latest RGB memory sticks - or any other memory sticks, GFX cards, or headphones, for that matter.

When I saw his LAN gaming center, I thought it would be cool if he'd retry that project there. :->


> I'm not that interested in seeing benchmarks of the latest RGB memory sticks,

With the kind of workshop they're building with Alex, it's clear they're aware of that. Let's just hope they can do more of them, quicker!


I don't even know what his build ran in total, but he had $1700 (x2!) USB repeaters using fiber optics to directly connect USB peripherals all through his house, half a terabyte of RAM, and 4 GPUs. I think you could build a lesser machine to cover most household computing needs.

I don't think the core speed would matter much - Steam reports 37% of gamers are using 3GHz-and-below processors, so they may be mediocre but still competent:

https://store.steampowered.com/hwsurvey/processormfg/


If you live with 2 or 3 other SWEs, this is an attractive option. You have enough PCIe lanes to pack in 4 GPUs, and enough spare horsepower to also host basic home services like a file server, a GitLab instance, home automation stuff, a VPN, etc.

I think personally I'd opt for building four or five separate machines and managing them as a cluster though.


> If you live with 2 or 3 other SWEs this is an attractive option.

In some sort of software engineering commune?


Most 2 and 3 bedroom apartments in San Francisco are software engineering communes, yes.


Google intentional living or intentional community.


Or just, you know, having roommates.


But you can't write a blog about re-inventing an old concept if you don't give it a trendy new-age name.


The new age name was commune, I believe.


If by "new age" they mean 1871...


I've heard of a few people living like this, especially in areas with very high rent. I'm assuming it's not extremely uncommon and it doesn't sound like the worst way to live.


Are people really this unfamiliar with the idea of having roommates? I'm 26 and it's been a necessity for my entire life since leaving my parents' house. I could buy a house in the city I currently live in, but I'd be spending >50% of my (relatively large) income on mortgage repayments, and I'd be utterly screwed if I ever hit the "housing prices crash" + "no job" combo.


I know students in the US sometimes have roommates. I didn’t think many adults did it in the west. Let alone well-paid professionals.


I think it's more common now among tech employees. It's also probably less common in areas with cheaper housing. Personally, I feel this kind of shared housing situation would probably be much more enjoyable than living alone.


It's been great for me. But it heavily relies on the people you're living with being generally reasonable.

I've also found it fun to live with people with varying interests; living with just other software engineers might be boring. Common hobbies are great, though. Imagine every night being a LAN party.


Why would this be attractive to SWEs? Seems like something that would be more relevant to gamers.

Half of the SWEs I work with don’t game.


My workstation is set up like this, with one giant LVM pool for storage and two GPUs. It gives a few advantages:

* Can run two OS at one time, each with half the resources

* Can run one with full resources if needed

* Can have multiple linux and windows installs

* Can have snapshots of installs

* Takes around 5-10 seconds to swap one of the running installs for a different install

* Can run headless VMs on a core or two while doing all the above, ie a test runner or similar service if needed

I use a 49" ultrawide with PBP, have one GPU connected to each side, so the booted installs automatically appear side to side, and Synergy to make mouse movement seamless, etc, etc

It took a little work to set up, but I've worked this way for ~3 years now and have never had to think about the setup after the initial time investment, except during upgrades. Highly recommend it.

I can definitely see the advantage for a small team of having a single large machine with multiple GPUs: everyone sits at a thin workspace and "checks out" whatever install they want to use, with however much CPU power and RAM they need. They can clone and duplicate installs to get a copy with their personal files in home, ready to go, and check out larger slices of the machine when they have more CPU/GPU-intensive tasks. After using my solo workstation for a while, that's probably my ideal office machine.
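
A minimal sketch of what the LVM side of that "check out an install" idea could look like (volume group, volume, and VM names are hypothetical):

    # one logical volume per base install
    lvcreate -L 150G -n win10-base vg0
    # "check out" a personal copy as a writable snapshot
    lvcreate -s -L 50G -n win10-alice /dev/vg0/win10-base
    # hand the copy to a running VM as a raw block device
    virsh attach-disk alice-vm /dev/vg0/win10-alice vdb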


Any guide on this? I would love MacOS and Windows side by side.


The Arch Linux wiki page [1] is a good place to start (note you don't have to use Arch for the host to get value out of the page; I use NixOS), and/or the macOS repo [2] (or maybe this newer one [3]).

The only real hurdles hardware-wise are your IOMMU groups and your CPU compatibility; if you have a moderately modern system, it shouldn't be a problem.

I also have a couple of inexpensive PCIe USB cards that I pass through to the guests for direct USB access, highly recommended.

The guides will use qcow2 images or pass through a block device. As I mentioned, I have a giant LVM pool, so I just create LVs for each VM, pass each volume through to its VM as a disk, and let the VM handle it. On the host you can use kpartx to bind and mount the partitions if you ever need to open them up.
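
For example (volume and mount names hypothetical):

    # map the partitions inside a guest's LV on the host
    kpartx -av /dev/vg0/win10
    # mount one of them (the mapper name may differ on your system)
    mount /dev/mapper/vg0-win10p1 /mnt/guest
    # unmap when you're done
    umount /mnt/guest
    kpartx -dv /dev/vg0/win10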

[1] https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVM...

[2] https://github.com/kholia/OSX-KVM

[3] https://github.com/yoonsikp/macos-kvm-pci-passthrough


You don't do this just because you like playing games. You do this because you're primarily an IT dork who likes building crazy setups. Maybe you'll play a few games with your friends on it later.

This is like the Arch Linux of computer hardware. You don't install Arch Linux to get shit done. You do it because you like building and configuring, and to learn about Linux.


Wrong point about Arch, though. You can get a lot of shit done with Arch, since it has the AUR and doesn't force you to reinstall everything from scratch every 2 years from an old distro-and-kernel combo.


And it doesn't surprise you with something - or some combination of things - that got installed without you asking. If you need it, you put it there; otherwise it's not in your way.

I'm on Mac at the moment and frequently pine for Arch. 'What on Earth just happened...', 'What did that...', 'What is this...', 'Well that wouldn't have happened on Arch.'

People claim Mac 'just works' because of out-of-the-box readiness, well I'd say in contrast Arch 'still works' or 'keeps working': it does what you tell it and if you don't tell it to change, it stays working in the same way.


I am starting to get somewhat disenchanted with the whole “the Mac just works” thing. I suppose their recent dip in QA effort is to blame to some extent. But also, whenever something doesn’t work, and Apple doesn’t care enough to fix it — tough luck. On Linux you at least have options if you’re a technical person and are not afraid of the command line.

I just wish I could pay a company that would have engineers and designers building an actual user-friendly, stable OS on top of Linux. Mac OS, with all its faults, is still miles ahead in terms of usability and polish.


Fair. I don't install Arch to get stuff done; I decided to stick with Debian derivatives and Xfce for that. But not everyone is the same.


The parent comment was likely referring to using the clients as workstations, not necessarily gaming computers. Though the cost of this server and the fiber-optic USB peripherals is crazy; one of those fiber USB repeaters costs as much as a high-end computer.


> One of those fiber usb repeaters costs as much as a high end computer.

Not if you "borrowed them from work for testing"...


> and clocks lower

And that's the core problem, isn't it? Aren't games notorious for being mostly sequential workloads?


There's no reason they have to be. Game development is slowly but surely migrating toward fully capitalizing on high core/thread counts.


I'd be interested to know if this has changed recently - the Xbox One and PS4 are both 1.6GHz/1.75GHz 8-core machines.


When you're designing for that known console configuration, you probably make use of the extra cores if you need them. And the really CPU-intensive genres (mostly strategy games) don't tend to get console releases.

As 4-core/8-thread machines become basically the minimum you can assume most PC gamers have, we'll probably see devs making more use of multithreading everywhere.


> the core problem

I see what you did there.


Linus's setup doesn't make much sense economically but the concept works. You don't have to go to the extreme end of available CPUs. 64-cores for 4 machines = 16 cores per machine, which is overkill, especially for gaming.

If you go for a 12 or 16 core Threadripper like the 2920X or 2950X you're only looking at $425/$690, which is $106/$173 per "PC".

Combine that with the savings you get from not having to buy 4 cases, PSUs, and motherboards, and I think a multi-head Threadripper setup will end up significantly cheaper than buying 4 machines.


The CPUs alone are cheaper, yes, but there is room for savings in hardware consolidation: only one PSU, one case, etc. The effect may be small compared to peripherals, and it adds the inconvenience of tying multiple user experiences to a single point of failure, but it's still a neat idea.



