Hacker News | chlorion's comments

Does it though?

People experience "blue screens" and kernel panics and such pretty often.


The statistics we have on real-world security exploits prove that most security exploits are not coming from supply chain attacks though.

Memory-safety-related security exploits happen in a steady stream in basically all non-trivial C projects, but supply chain attacks, while possible, are much rarer.

I'm not saying we shouldn't care about both issues, but the idea is to fix the low-hanging fruit and common cases before optimizing for things that, in practice, aren't that big of a deal.

Also, C is not inherently invulnerable to supply chain attacks either!


>The curl|sh workflow is no more dangerous that downloading an executable off the internet

It actually is, for a lot of subtle reasons, assuming you were at least going to check the executable's checksum or something rather than blindly downloading and running it.

The big thing is that the server can serve up different contents if it detects it's being piped into a shell, which is at least possible in theory, but also that if the download is interrupted you end up with half of the script run and a broken install.

If you are going to do this, its much better to do something like:

    sh -c "$(curl https://foo.bar/blah.sh)"
Though ideally, yes, you just download it and read it like a normal person.
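Roughly, that "read it first" workflow might look like this (foo.bar/blah.sh is just the placeholder URL from above):

    # fetch to a file first; -f fails on HTTP errors, -L follows redirects
    curl -fsSL -o blah.sh https://foo.bar/blah.sh
    # actually read what you downloaded
    less blah.sh
    # only then run it
    sh blah.sh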


No it definitely is a monolith.

It's NOT loosely coupled in any way. Try running any part of the systemd software suite on an openrc system and see how that works out.

I have no idea why people are so insistent on claiming that it's not a monolith when it ticks every box of what a monolith is.


Most systemd components do rely on some core systemd components like systemd (the service manager) and journald. I would say that a core thesis of systemd is that Linux needs/needed a set of higher-level abstractions, and that systemd-the-service-manager has provided those abstractions. The fact that other parts of systemd-the-project rely on those abstractions does not imply that the project is monolithic.


>Try running any part of the systemd software suite on an openrc system and see how that works out?

Well, from this POV it's kinda openrc's problem if it doesn't. What about trying to run any part of the openrc software suite on an Upstart system? The question of why anyone sane would want to is rhetorical, tho...

Why obsess over whether systemd is monolithic, and to what degree, anyway? There certainly ARE optional systemd parts, so it's correct to say it's not entirely monolithic.


openrc-init can be used on an upstart system; the daemon manager itself can't, but that's only because you'd have two different daemon managers. Beyond that there aren't any openrc software components, because it was designed to be a modular init system that just handles what it was intended to handle.

The rest of the system, for example chrony, sysklogd, cron, etc., runs fine on upstart systems, because those components aren't tied to systemd and are fully modular.

It's okay to be a monolith; that doesn't make it inherently bad or anything, but we should be honest about it, and it does come with some tradeoffs.


I have been able to do some light elisp programming in emacs under termux on my moto g. My emacs config is now set up to detect termux and configure itself accordingly, which is neat.

Also, I have a wireguard VPN set up so that I can ssh between my phone and desktop computer via a VPS with a public ipv4 address. This lets me just "ssh 10.0.0.4" to reach the phone's sshd, instead of having to deal with changing IP addresses and NAT traversal.
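As a rough sketch, the phone side of that kind of setup might look something like this wg0.conf (the keys, endpoint, and /24 addressing are placeholders; only the 10.0.0.4 address comes from above):

    [Interface]
    # the phone's address inside the tunnel
    Address = 10.0.0.4/24
    PrivateKey = <phone-private-key>

    [Peer]
    # the VPS with the public IPv4 address acts as the hub
    PublicKey = <vps-public-key>
    Endpoint = vps.example.com:51820
    # route the whole tunnel subnet via the VPS
    AllowedIPs = 10.0.0.0/24
    # keep the NAT mapping alive so the phone stays reachable
    PersistentKeepalive = 25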


It adds up very quickly.


I self-host a small static website and a cgit instance on an e2-micro VPS from Google Cloud, and I've gotten around 8.5 million requests combined from openai and claude over roughly 160 days. They just crawl the cgit pages forever unless I block them!

    (1) root@gentoo-server ~ # egrep 'openai|claude' -c /var/log/lighttpd/access.log
    8537094
So I have lighttpd set up to match "claude|openai" in the user agent string and return a 403 if it matches, plus an nftables firewall set up to rate limit spammers, and this seems to help a lot.
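For reference, a minimal sketch of that kind of lighttpd rule (url.access-deny comes from mod_access and answers with a 403):

    # deny everything to user agents containing "claude" or "openai"
    $HTTP["useragent"] =~ "claude|openai" {
        url.access-deny = ( "" )
    }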


And those are the good actors! We're under a crawlpocalypse from botnets, er, residential proxies.

"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36", anyone?


Yeah, the flood of these Chrome UAs with every version number under the sun, and a really large portion being *.0.0.0 version numbers, is what I've tended to experience lately. Also just about every browser user agent ever:

Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12 (.NET CLR 2.0.50727; .NET CLR 3.0.30729; .NET CLR 3.5.30729; .NET CLR 3.5.21022)

There were waves of big and sometimes intrusive traffic admitting to being from Amazon, Anthropic, Google, Meta, etc., but those are easy to block or throttle and aren't that big a deal in the scheme of things.


It’s unfortunate that you have to resort to this. OpenAI does publish their bot IP addresses at https://platform.openai.com/docs/bots, but Anthropic doesn’t seem to publish the IP addresses of their bots.


The third-party hit-counting service I use implies that I'm not getting any of this bot scraping on my GitHub blog.

Is Microsoft doing something to prevent it? Or am I so uncool that even bots don't want to read my content :(


I'm interested in that service and how it works. Link?


It is https://github.com/silentsoft/hits . It works by loading an SVG "shield" file (like the ones you see at the top of GitHub readmes all the time) from their server from a unique URL (you just choose one when you write/render your HTML). The server, implemented in Java, just counts hits to each URL in a database and sends back the corresponding SVG data. There's also a mini dashboard website where you can check basic stats for a given URL (no login required, everyone's hits-per-day stats are just public) and preview styling options for the SVG. For example, for my most recent blog post https://zahlman.github.io/posts/2025/12/31/oxidation/, I configured it such that you can view the stats via https://hits.sh/zahlman.github.io+oxidation/ (note that the trailing slash is required).
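Embedding it is just an image tag pointing at your chosen counter URL; a sketch (the .svg badge URL format here is my assumption, so check the project's README for the exact form):

    <!-- hypothetical embed; the path is the unique URL you picked -->
    <img src="https://hits.sh/zahlman.github.io+oxidation.svg" alt="hit counter"/>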

(The about section on GitHub bills the project as "privacy-friendly", which I would say is nonsense as these dashboards are public and their URLs are trivially computed. But it's also hard to imagine caring.)


They're probably not downloading every svg each time they scrape the site. Probably focused on scraping the text.


What? No, I mean the HTML for the SVG contains a custom URL for an API request. There's no scraping involved on either end.


I use org mode files, a small build script and git.

The build system just invokes emacs and compiles org documents to HTML, and installs them in /var/www/${site}. I have a git update hook on the server that invokes the build script when I push updates.

Originally I just rsynced the HTML files over, but I like the new setup a lot.
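A minimal sketch of that kind of build step, with the paths as placeholders (org-html-export-to-html is part of Emacs' built-in org export):

    # batch-export every org document to HTML, then install the results
    for f in *.org; do
        emacs -Q --batch "$f" -f org-html-export-to-html
    done
    install -m 644 ./*.html "/var/www/${site}/"

The git hook on the server then just checks out the pushed tree and runs a script along these lines.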


Claude was scraping my cgit at around 12 requests per second, in bursts here and there. My VPS could easily handle this, even being a free tier e2-micro on Google Cloud/Compute Engine, but they used almost 10GB of my egress bandwidth in just a few days and ended up pushing me over the free tier.

Granted it wasn't a whole lot of money spent, but why waste money and resources so "claude" can scrape the same cgit repo over and over again?

    (1) root@gentoo-server ~ # grep 'claude' /var/log/lighttpd/access.log | wc -l
    1099323


It's not even a niche instance, protocols solve their problem lol.

