Go generates large Wasm modules, because it bundles its goroutine scheduler, garbage collector and standard library into the module.
Translating that back to Go will give you a pretty big Go file.
Go is "known" for being fast to compile, but a Go file that large will take at least as long to compile as the Go toolchain itself does.
wasm2go is best used on moderately sized modules (like SQLite). Last I heard, the person who tried to translate Perl got an 80MB Go file that was taking them 20 minutes to compile.
I have a fairly large house (2-story, 3k sqft) with all Cat5e. I iperf’d every run and they could all negotiate 10Gb and sustain it over TCP; most of the runs could also sustain very high UDP rates with low packet loss. There’s just one run (which is the one to the internet) that had a slightly higher UDP packet loss rate. So basically every run can do 10Gb fine. Been running the whole network like this for a year. It’s been great! I just need a 10-gig-capable NAS. My current one can only do 3.5Gb/s or so because it’s over USB 5Gb/s, which doesn’t actually deliver 5Gb/s.
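For anyone who'd rather script this kind of check than eyeball iperf3 output, here's a rough Go sketch of the same idea over loopback. It's only illustrative (the `measureTCP` name, the 1 MiB write size, and the 256 MiB payload are my choices, not iperf's internals), and loopback numbers won't match a real cable run:

```go
package main

import (
	"fmt"
	"io"
	"net"
	"time"
)

// measureTCP pushes `total` bytes through a loopback TCP connection and
// returns the bytes received and the observed rate in Gb/s. A toy
// stand-in for what iperf3 does between two real hosts.
func measureTCP(total int) (int64, float64) {
	// Listen on an ephemeral localhost port (the "far end" of the run).
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()

	done := make(chan int64)
	go func() {
		conn, err := ln.Accept()
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		n, _ := io.Copy(io.Discard, conn) // drain the sender's stream
		done <- n
	}()

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	buf := make([]byte, 1<<20) // 1 MiB writes
	start := time.Now()
	for sent := 0; sent < total; sent += len(buf) {
		if _, err := conn.Write(buf); err != nil {
			panic(err)
		}
	}
	conn.Close()
	received := <-done
	gbps := float64(received) * 8 / time.Since(start).Seconds() / 1e9
	return received, gbps
}

func main() {
	n, gbps := measureTCP(256 << 20) // 256 MiB
	fmt.Printf("received %d bytes: %.2f Gb/s\n", n, gbps)
}
```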
If I was running a physical business and I wrote down each person’s name and credit card number and the exact time and order they placed, that would be pretty invasive and “spying”. If I write down how many units I sold of each item per day, and the volume of transactions by credit card vs cash, it’s anonymized and I don’t think this would generally be considered “spying”, just normal business metrics. How’s the latter much different than anonymized product analytics?
Watching me use my computer in my house or office is spying.
Aggregating request statistics server-side is more like the not-spying you're talking about, unless you're only generating those requests in order to spy on what I'm doing on my computer.
Most telemetry is more along the lines of "user spent N minutes on platform, clicked on these things, looked at these other things" etc etc. And the primary way devs use this data is by aggregating across all users and running a/b tests or viewing longer term trends.
Are some companies spying on you the way you say? Yea, probably. Most of us just want data to know what's working and what's not.
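To make that distinction concrete, here's a minimal Go sketch (the `RawEvent`/`aggregate` names and fields are hypothetical, just for illustration): the per-user identity exists in the raw stream but never survives aggregation.

```go
package main

import "fmt"

// RawEvent is what a per-user tracker would record.
type RawEvent struct {
	UserID string // deliberately dropped during aggregation
	Day    string
	Action string
}

// aggregate keys only on day+action, so no user identity survives:
// the output is the "units sold per day" style of metric.
func aggregate(events []RawEvent) map[string]int {
	counts := map[string]int{}
	for _, e := range events {
		counts[e.Day+"/"+e.Action]++
	}
	return counts
}

func main() {
	events := []RawEvent{
		{"alice", "2024-05-01", "click"},
		{"bob", "2024-05-01", "click"},
		{"alice", "2024-05-01", "view"},
	}
	fmt.Println(aggregate(events)) // map[2024-05-01/click:2 2024-05-01/view:1]
}
```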
The logical conclusion is you’re asking for no local products and everything to run server side. It’s kind of a ridiculous position that doesn’t change the spying being done other than it’s on the other side of a browser.
No you didn’t. If I build you a web video editor, is that because I want to spy on you or because I want to make deployment easier and reduce install friction?
You’re making a distinction that puts you in the privileged position of judging whether a service is making requests just to spy, versus what the app authors might believe is a critical design feature of how they want the product to operate.
> Watching me use my computer in my house or office is spying.
I agree, but once you cross the border out to the internet, I'd say you need to stop seeing it as "me sitting at my computer at home", because at that point you're essentially "on someone else's property". And I say this as someone who cares greatly about preserving personal privacy.
> Watching people move their mouse and click stuff on “your webpage” is fucking spying. It’s in my browser. On my machine. Not running on your hardware.
Well, I was mainly talking about network requests, which are quite literally served by "my hardware" when your client reaches out to my servers, and they agree to serve your client. I do agree that it sucks that browser viewports now also are considered "mine" from the perspective of servers, but you do have a choice to execute that code or not, you can always say no.
I don't think it's as much "this attitude took over", people saying that the internet is the wild west and warning you "browse at your own peril" has been around for as long as I can remember.
Yeah server logs don’t bother me. I’m requesting a resource, you unavoidably see that happen.
The attitude that’s changed is that in the 90s and 00s a program that sent information about what you’re doing that wasn’t necessary and expected for how it operates would have been instantly, popularly, and unequivocally labeled spyware by a programmer crowd. Now it’s normal and you get a bunch of folks claiming it’s ok.
Cloudflare already did that and it’s available now[1], although it’s billed as a “spiritual successor” and not a literal one (so probably not backwards compatible).
Yes, but anodization implies a thickness of ~5–25 micrometers (µm) for aluminum. The natural oxide layer is ~2–5 nanometers, i.e. roughly 1,000–5,000× thinner.
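A quick sanity check of those ratios, taking the 5 nm upper end of the natural-oxide range as the baseline (the `ratio` helper is just for this arithmetic):

```go
package main

import "fmt"

// ratio returns how many times thicker a layer of `microns` µm is
// than a layer of `nm` nanometers (1 µm = 1000 nm).
func ratio(microns, nm float64) float64 {
	return microns * 1000 / nm
}

func main() {
	fmt.Println(ratio(5, 5))  // 1000: thinnest anodized vs 5 nm natural oxide
	fmt.Println(ratio(25, 5)) // 5000: thickest anodized vs 5 nm natural oxide
}
```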
Does anyone believe this? Is it real? Windows laptops always claim crazy battery life figures, but when I have the latest, nicest Windows machine it ends up lasting like 3-4 hours every time with a normal coding workload. My MacBooks, in contrast, will last 8+.
> "Moving on, the Dell XPS 14 (2026) endured for around 20 hours and 21 minutes when playing a 4K YouTube video. The MacBook Air 15 M5 lasted for 14 hours."
> "Finally, the Apple M5 showed its efficiency advantage under maximum load. The MacBook Air 15 M5 kept going for 4 hours and 10 minutes while gaming. The Dell XPS 14 could only hold out for 2.5 hours."
That about fits my personal experience. The dozens-of-hours-long battery life claims for laptops are based on streaming videos, but real-life battery life is always far shorter for me, as I usually use my laptop for more intensive tasks.
And what is using Confluence in the first place? Your MacBook Pro is faster than a supercomputer from 20 years ago. As we make compute cheaper, we find ways to use it that are less efficient in an absolute sense but more efficient for the end user. A graphical docs portal like Confluence is a hell of a lot easier to use than Emacs over SSH editing plain text files in an 80-column terminal. But it uses thousands of times more compute.
It seems ridiculous right now because we don’t have hardware to accelerate the LLMs, but in 5 years this will be trivial to run.
I'm confused by your analogy. A wiki server is extremely efficient to run, and can be hosted on a tiny little Raspberry Pi. A search engine can be optimized to return results in near-O(1) time. You can even pull up and read results on a very old computer. All of the concerns around cost and resource efficiency can be addressed; this is a solved problem.
Even if an LLM agent gets cheaper to run in the future, it's still fundamentally non-deterministic, so the ongoing cost of a single exploratory query can never get anywhere near as cheap as running a wiki with a proper search engine.
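To make the near-O(1) claim concrete, here's a toy inverted index in Go (`buildIndex` and the sample pages are made up for illustration, and real search engines do far more): each query term costs a single hash lookup, independent of corpus size.

```go
package main

import (
	"fmt"
	"strings"
)

// buildIndex maps each lowercased word to the titles of the pages
// containing it. Built once; lookups after that are one map access
// per query term, which is why wiki search stays so cheap.
func buildIndex(pages map[string]string) map[string][]string {
	idx := map[string][]string{}
	for title, body := range pages {
		for _, word := range strings.Fields(strings.ToLower(body)) {
			idx[word] = append(idx[word], title)
		}
	}
	return idx
}

func main() {
	pages := map[string]string{
		"Deploy": "how to deploy the service",
		"Oncall": "the oncall rotation and escalation",
	}
	idx := buildIndex(pages)
	fmt.Println(idx["deploy"]) // [Deploy]
}
```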