Anyone know of a better way to protect yourself than setting a min release age on npm/pnpm/yarn/bun/uv (and anything else that supports it)?
Setting min-release-age=7 in .npmrc (needs npm 11.10+) would have protected the 334 unlucky people who downloaded the malicious @bitwarden/cli 2026.4.0, published ~19+ hours ago (see https://www.npmjs.com/package/@bitwarden/cli?activeTab=versi... and select "show deprecated versions").
Same story for the malicious axios (@1.14.1 and @0.30.4, removed within ~3h), ua-parser-js (hours), and node-ipc (days). Wouldn't have helped with event-stream (sat for 2+ months), but you can't win them all.
~/.npmrc
min-release-age=7 # days
~/Library/Preferences/pnpm/rc
minimum-release-age=10080 # minutes
~/.bunfig.toml
[install]
minimumReleaseAge = 604800 # seconds
# not related to npm, but while at it...
~/.config/uv/uv.toml
exclude-newer = "7 days"
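The same 7-day cooldown is expressed in a different unit for each tool (days, minutes, seconds). A quick sanity check of the conversions, using the config keys from the snippets above (verify the keys against each tool's docs before relying on them):

```shell
# One cooldown ("7 days") expressed in each tool's unit.
DAYS=7
MINUTES=$((DAYS * 24 * 60))      # pnpm expects minutes
SECS=$((DAYS * 24 * 60 * 60))    # bun expects seconds

echo "npm : min-release-age=${DAYS}"
echo "pnpm: minimum-release-age=${MINUTES}"
echo "bun : minimumReleaseAge = ${SECS}"
```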
p.s. shameless plug: I was looking for a simple tool that checks your settings and applies a fix, and was surprised I couldn't find one, so I released something (open source, free, MIT yada yada), since one-click convenience increases the chances people will actually use it. https://depsguard.com if anyone is interested.
> Anyone know of a better way to protect yourself than setting a min release age on npm/pnpm/yarn/bun/uv (and anything else that supports it)?
Most of these attacks don't make it into the upstream source, so solutions[1] that build from source get you ~98% of the way there. If you can't build from source and have to pull directly from the registries, you can reduce risk somewhat with a cooldown period.
For the long tail of stuff that makes it into GitHub, you need to do some combination of heuristics on the commits/maintainers and AI-driven analysis of the code change itself. Typically run that and then flag for human review.
I like the idea of a cooldown. But my next question is: would this have been caught if no one updated? I know in practice not everyone would be on a cooldown. But presumably this compromise was only found out because a lot of people did update.
> presumably this compromise was only found out because a lot of people did update
This was supposedly discovered by "Socket researchers", and the product they're selling is proactive scanning to detect/block malicious packages, so I'd assume this would've been discovered even if no regular users had updated.
But I'd claim even for malware that's only discovered due to normal users updating, it'd generally be better to reduce the number of people affected with a slow roll-out (which should happen somewhat naturally if everyone sets, or doesn't set, their cool-down based on their own risk tolerance/threat model) rather than everyone jumping onto the malicious package at once and having way more people compromised than was necessary for discovery of the malware.
The cooldown is a defence against malicious actors compromising the release infrastructure.
Having the forge control it half-defeats the point; attackers who gained permission to push a malicious release might well also have gained permission to mark it as "urgent security hotfix, install immediately, 0 cooldown".
I have not heard anyone seriously discuss that cooldown prevents compromise of the forge itself. It’s a concern but not the pressing concern today.
And no, however compromised packages reach the forge, that is not the same thing as marking something an "urgent security hotfix", which would require manual approval from the forge maintainers, not an automated process. The only automated processes would be a blackout period, where automated scanners try to find issues, and a cool-off period, where the release is rolled out progressively to 100% of the projects that depend on it over the course of a few days or a week.
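The cool-off period above can be sketched as a simple ramp. This is purely hypothetical (no registry implements this exact curve), a linear ramp from 0% to 100% of dependents over 7 days:

```shell
# Hypothetical staged rollout: what fraction of dependent projects
# would see a release, given its age in days (linear ramp over 7 days).
rollout_pct() {
  local age_days=$1 ramp_days=7
  if [ "$age_days" -ge "$ramp_days" ]; then
    echo 100
  else
    echo $(( age_days * 100 / ramp_days ))
  fi
}
rollout_pct 0   # 0   (nobody gets it on day 0)
rollout_pct 3   # 42
rollout_pct 7   # 100 (fully rolled out after a week)
```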
Cooldowns sound like a good idea ONLY IF these so-called security companies can actually catch malicious dependencies during the cooldown period. Are they doing that bit, or do individual researchers find the malware while these companies make the headlines?
I am thinking about Django releases. They release a "Release Candidate", which you have to download by other means to test; I rarely do. But when a new official release is out, I install it very easily in a testing environment and run my tests against it. I think this is what most people do, and it's the phase where supply-chain attacks get caught, because in that 48-hour window all the tests in the world are run.
It's not a lack of care about security; the 7-day delay is like a new stage between RC and final release, where you pull for testing but not for production.
For researchers who notice new releases as soon as they are published and discover malice based on that alone, I agree, and every step of that can be automated to some level of effectiveness.
But for researchers who aren't sufficiently effective until the first victim starts shouting that something went sideways, the malicious actor would be wise to simply ensure no victim is aware until well after the cooldown period, implementing novel obfuscation that evades static analysis and the like.
Novel obfuscation, with a novel idea, is hard to invent. Novel obfuscation, where it is only new to that codebase, is easy(ier) to flag as suspicious.
While bad actors would be wise to ensure low-cooldown users are unaware, I would not say they can "simply" ensure that.
Code with any obfuscation that evades static analysis should become more suspicious in general. That's a win for users.
A longer window of time for outside researchers is a win for users -- unless the release fixes existing problems.
What we need is a way for users to easily move from implicitly trusting only the publisher to incorporating third parties. Any of those can be compromised, but users are better served when a malicious release must either (1) compromise multiple independent parties or (2) compromise the publisher with an exploit undetectable during the cooldown.
Any individual user can independently do that now, but it's so incredibly time-consuming that only large organizations even attempt it.
That assumes discovering a security bug is random and it could happen to anyone, so more shots on goal is better. But is that a good way to model it?
It seems like if you were at all likely to be giving dependencies the extra scrutiny that discovers a problem, you'd probably know it. Most of the people who upgraded didn't help; they just got owned.
A cooldown gives anyone who does investigate more time to do their work.
If I were in charge of a package manager I would be seriously looking into automated and semi automated exploit detection so that people didn't have to yolo new packages to find out if they are bad. The checking would itself become an attack vector, but you could mitigate that too. I'm just saying _something_ is possible.
It's a tradeoff for sure. Maybe companies could have "honeypot" environments where they update everything, deploy their code, and monitor for sneaky behavior.
Security by obscurity. If another language became as ubiquitous as JS then it'd be the same.
In the context of TFA, don't rely on third party github actions that you haven't vetted. Most of them aren't needed and you can do the same with a few lines of bash. Which you can also then use locally.
They are not, but npm is uniquely bad in that regard. Refusal to implement security features that would have made attacks like this harder really doesn't help https://github.com/node-forward/discussions/issues/29
The lack of a comprehensive standard library for JavaScript also results in projects pulling many more third party dependencies than you would with most other modern environments. It’s just a bigger attack surface. And if you can compromise a module used for basic functionality that you’d get out of the box elsewhere, the blast radius will be enormous.
Not to mention a culture of basically one-line packages ad infinitum. I downloaded a JS tool the other day to generate test reports and it had around 300 dependencies.
Needless to say I’m running all my JS tools in a Docker container these days.
So why hasn't someone created a batteries-included JS library? I don't program in JS on the backend, so I don't know how feasible something like that is.
https://github.com/stdlib-js/stdlib is one of several attempts at that, but yes, the issue is that different people have very different views of what should be standard.
That doesn't seem like it should be an issue in practice. Rather than a single standard library endorsed by the language stewards, if the community at large converges on a small handful of "standard" solutions, that seems like it would satisfy the security aspect of things.
> Anyone know of a better way to protect yourself than setting a min release age on npm/pnpm/yarn/bun/uv (and anything else that supports it)?
With pnpm, you can also use trustPolicy: no-downgrade, which prevents installing packages whose trust level has decreased since older releases (e.g. if a release was published with the npm cli after a previous release was published with the github OIDC flow).
Another one is to not run post-install scripts (which is the default with pnpm and configurable with npm).
These would catch most of the compromised packages, as most of them are published outside the normal release workflow with stolen credentials, and are run from post-install scripts.
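For the post-install-scripts part with npm specifically, the relevant key is `ignore-scripts` (a real npm config option). Note that it also skips legitimate build steps such as native addons, which you'd then run explicitly with `npm rebuild`:

```
# ~/.npmrc
ignore-scripts=true
```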
Cooldowns are passing the buck. These are all caught with security scanning tools, and AI is probably going to be better at this than people going forward, so just turn on the cooldowns server-side. Package updates go into a "quarantine" queue until they are scanned. Only after scanning do they go live.
"Just" is doing a lot of work; most ecosystems are not set up or equipped to do this kind of server-side queuing in 2026. That's not to say that we shouldn't do this, but nobody has committed the value (in monetary and engineering terms) to realizing it. Perhaps someone should.
By contrast, a client-side cooldown doesn't require very much ecosystem or index coordination.
I think the rest of your analysis is correct! I'm only pushing back on perceptions that we can get there trivially; I think people often (for understandable reasons) discount the social and technical problems that actually dominate modernization efforts in open source packaging.
This kind of thinking is why I don't trust the security of open source software. Industry standard security practices don't get implemented because no one is being paid to actually care and they are disconnected from the users due to not making income from them.
Having been in both worlds, I don't think the median unpaid OSS developer is any more (or less) dispassionate about security outcomes than the median paid SWE. There's lots of "maybe someone should do this" in both worlds.
(With that said, I think it also varies by ecosystem. These days, I think I can reasonably assert that Python has extended significant effort to stay ahead of the curve, in part because the open source community around Python has been so willing to adopt changes to their security posture.)
The approach you outline is totally compatible with an additional one or two day time gate for the artifact mirrors that back prod builds. Deploy in locked-down non-prod environments with strong monitoring after the scans pass, wait a few days for prod, and publicly report whatever you find, and you're now "doing your part" in real-time while still accounting for the fallibility of your automated tools.
There's risk there of a monoculture categorically missing some threats if everyone is using the same scanners. But I still think that approach is basically pro-social even if it involves a "cooldown".
I agree. Even without project glasswing (which Microsoft is part of), even with cheaper models and Microsoft's compute (Azure, the OpenAI collaboration), it makes no sense that private companies need to scan new package releases and find malware before npm does. I'm sure they have some reason for it (people rely on packages being immediately available on npm, plus the real use case of patching a zero-day CVE quickly), but until this is fixed fundamentally, I'd say the default should be a cooldown (server-side or not), with opt-in to the current behavior. This might take years of deprecation, though; I'm sure if it were turned on now, a lot of things would break (e.g. every public CVE disclosure would also have to wait out that additional cooldown... and if Anthropic isn't lying, we're bound for a tsunami of patched CVEs soon...).
There are so many ways to self-host package repos that "immediate availability" to the wider npm-using public is a non-issue.
Exceptions to quarantine rules just invite attackers to mark malicious updates as security patches.
If every kind of breakage, including security bugs, results in a 2-3 hour wait to ship the fix, maybe that would teach folks to be more careful with their release process. Public software releases really should not be a thing to automate away; there needs to be a human pushing the button, ideally attested with a hardware security key.
We've been running Renovate with `minimumReleaseAge: '7 days'` across all our repos for a while now, which does basically the same thing across npm, PyPI, and Cargo in one config. The tradeoff is you're always 7 days behind on patches, but for anything touching CI or secrets tooling that feels like a fair deal. The nasty part of this class of attack is that the timing window is usually sub-24h before the package is pulled, so even 3 days would have caught this one.
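For reference, a minimal Renovate config along those lines; `minimumReleaseAge` is the real Renovate option (formerly `stabilityDays`), and `internalChecksFilter` is how you keep not-yet-aged updates from even showing up as PRs (check the Renovate docs for exact semantics on your version):

```json
{
  "extends": ["config:recommended"],
  "minimumReleaseAge": "7 days",
  "internalChecksFilter": "strict"
}
```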
Isn’t the problem with a minimum release age that the opposite would also occur: a high-priority fix for a zero day under active exploitation wouldn’t be applied, and you could be compromised in the window?
It is! It’s a tough problem to balance. The good news is that you can always override for specific cases. Linking to my other reply here: https://news.ycombinator.com/item?id=47880149
Regarding doing more than just a minimum release age: The tool I personally use is Aikido "safe-chain". It sets minimum release age, but also provides a wrapper for npm/uv/etc where upon trying to install anything it first checks each dependency for known or suspected vulnerabilities against an online commercial vulnerability database.
But then at the same time you should always update because it might fix a security vulnerability. Otherwise you end up running nodejs 10 because you don't need the new stuff.
Or it might introduce one. But sure, a security fix for a known vulnerability could count as something you need in a new version. Ideally they would be backported and separated from feature updates. The constant dependency churn and single-channel update stream is kind of why a lot of vulnerabilities become problems in the first place.
Stop using Javascript. Or Typescript, or whatever excuses they have for a fundamentally flawed language that should have been retired eons ago instead of endlessly patched. Javascript and its ecosystem have always been a house of cards, and it has been proven time and again. I think this is like the 3rd big attack in the last 30 days alone.
Yes, but it has nothing to do with the language and everything to do with the ecosystem (npm tried to mandate things such as MFA, but npmjs is so big that maintainers pushed back).
TypeScript on its own is a great language, with a very interesting type system. Most other type systems can’t run doom.
I use a separate dev user account (on macOS) for package installations, VSCode extensions, coding agents and various other developer activities.
I know it's far from watertight (and it's useless if you're working with bitwarden itself), but I hope it blocks the low hanging fruit sort of attacks.
Check your home folder permissions on macOS; last time I checked, mine were world-readable (until I changed them). I was very surprised by it, and only noticed when adding a new user account for my wife.
I noticed that too (and changed it). The home folder appears to be world readable because otherwise sharing via the Public folder wouldn't work. The folders where the actual data lives are not world readable.
I think this is a bad idea, because it means the permissions of any new folders have to be closely guarded, which is easy to forget.
compartmentalize. I do development and anything finance / crypto related / sensitive on separate machines.
If you're brave you can run whonix.
The issue is developers who have publish access to popular packages - they really should be publishing and signing on a separate machine / environment.
Same with not doing any personal work on corporate machines (and having strict corp policy - vercel were weak here).
But how do you know which one is good? If foo package sends out an announcement that v1.4.3 was hacked, upgrade now to v1.4.4 and you're on v1.4.3, waiting a week seems like a bad idea. But if the hackers are the one sending the announcement, then you'd really want to wait the week!
Install tools using a package manager that performs builds as an unprivileged user account other than your own, sandboxes builds in a way that restricts network and filesystem access, and doesn't let packages run arbitrary pre/post-install hooks by default.
Avoid software that tries to manage its own native (external, outside the language ecosystem) dependencies or otherwise needs pre/post-install hooks to build.
If you do packaging work, try to build packages from source code fetched directly from source control rather than relying on release tarballs or other published release artifacts. These attacks are often more effective at hiding in release tarballs, NPM releases, Docker images, etc., than they are at hiding in Git history.
Learn how your tools actually build. Build your own containers.
Learn how your tools actually run. Write your own CI templates.
My team at work doesn't have super extreme or perfect security practices, but we try to be reasonably responsible. Just doing the things I outlined above has spared me from multiple supply chain attacks against tools that I use in the past few weeks.
Platform, DevEx, and AppSec teams are all positioned well to help with stuff like this so that it doesn't all fall on individual developers. They can:
- write and distribute CI templates
- run caches, proxies, and artifact repositories which might create room to
- pull through packages on a delay
- run automated scans on updates and flag packages for risks
- maybe block other package sources to help prevent devs from shooting themselves in the foot with misconfiguration
- set up shared infrastructure for CI runners that
- use such caches/repos/proxies by default
- sandbox the network for builds
- help replace, containerize, or sandbox builds that currently only run on bare metal on some aging Jenkins box
- provide docs
- on build sandboxing tools/standards/guidelines
- on build guidelines surrounding build tools and their behaviours (e.g., npm ci vs npm install, package version locking and pinning standards)
- promote packaging tools for development environments and artifact builds, e.g.,
- promote deterministic tools like Nix
- run build servers that push to internal artifact caches to address trust assumptions in community software distributions
- figure out when/whether/how to delegate to vendors who do these things
I think there's a lot of things to do here. The hardest parts are probably organizational and social; coordination is hard and network effects are strong. But I also think that there are some basics that help a lot. And developers who serve other developers, whether they are formally security professionals or not, are generally well-positioned to make it easier to do the right thing than the sloppy thing over time.
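The "pull through packages on a delay" item reduces to a one-line age check. A hypothetical gate a caching proxy might apply (function and variable names are made up for illustration; timestamps in epoch seconds):

```shell
# Hypothetical cooldown gate for a pull-through proxy:
# serve a version only once its publish time is older than the cooldown.
COOLDOWN_DAYS=7
allow_version() {
  local published_epoch=$1 now_epoch=$2
  local age=$(( now_epoch - published_epoch ))
  if [ "$age" -ge $(( COOLDOWN_DAYS * 24 * 3600 )) ]; then
    echo allow
  else
    echo deny
  fi
}
allow_version 0 604800   # exactly 7 days old -> allow
allow_version 0 604799   # one second short   -> deny
```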
The hypothesis you're referring to is something like "if everyone uses a 7-day cooldown, then the malware just doesn't get discovered for 7 days?", right?
An alternative hypothesis: what if 7-day cooldowns incentivize security scanners, researchers, and downstream packagers to race to uncover problems within a 7-day window after each release?
Without some actual evidence, I'm not sure which of these is correct, but I'm pretty sure it's not productive to state either one of these as an accepted fact.
Well, luckily, those who find the malicious activity are usually companies who do this proactively (for the good of the community, and understandably also for marketing). There are several who seem to be trying to be the first to announce, and usually succeed. IMHO it should be Microsoft (as owners of GitHub, owners of npm) who should take the helm and spend the tokens to scan each new package for malicious code. It gets easier and easier to detect as models improve (also gets easier and easier to create, and try to avoid detection on the other hand)
That was my first instinct as well but I'm not sure how true it really is.
Many companies exist now whose main product is supply chain vetting and scanning (this article is from one such company). They are usually the ones writing up and sharing articles like this - so the community would more than likely hear about it even if nobody was actually using the package yet.
> This plan works by letting software supply chain companies find security issues in new releases. Many security companies have automated scanners for popular and less popular libraries, with manual triggers for those libraries which are not in the top N.
Some examples (hat tip to https://news.ycombinator.com/item?id=47513932): see the configs at the top of the thread.
EDIT: looks like someone else had a similar idea: https://cooldowns.dev