% Every piece of software should run within a sandbox
What about shared libraries? How do you define the access rights, or the set of processes that glibc is allowed to run with? (you might have to break glibc up into dozens of smaller libraries for that, and then manage their dependencies - the resulting web would not be too different from monolithic glibc)
Also, it does not solve the problem completely (nothing in security does) - see rowhammer exploits.
There's a faction trying to kill shared libraries. Nobody outside mobile really cares anymore. Go and Rust compile to giant shared blobs. The GTK people don't care about binary compatibility anymore. Musl doesn't support dlclose.
I don't think Musl's lack of dlclose should cause any issues in real-world use. The rationale is:
Either behavior conforms to POSIX, but only the musl behavior can satisfy the robustness conditions musl aims to provide. In particular:
Under glibc's approach, libraries not designed with dlclose in mind (which may not be the libraries directly loaded with dlopen, but rather their dependencies) may leave around references to themselves in such a way that removing them from the address space results in a crash (or worse) later on. This cannot happen under musl.
Managing storage for thread-local objects is much more difficult if dlclose unloads libraries. If space is to be reserved in advance (needed to guarantee no late/unrecoverable failures), supporting unloading in dlclose seems to necessitate leaking TLS memory, which largely defeats the purpose of doing the unloading in the first place.
More people complain that statically linked binaries in Musl cannot call dlopen, which some things rely on.
What really bugs me about Musl's refusal to support dlclose is that the rationale is that _other_ libraries don't support safe unloading, so Musl shouldn't let you try. I don't like my libc to be paternalistic.
If Windows can do leak-free TLS with FreeLibrary, Musl can too. I'm sick of hearing "X is impossible" when some other system has been doing X for decades.
Ok, in this case you need to recompile all services when a vulnerability is discovered in one of their dependent libraries; I suspect that this means the system will not get updated in time - and that's not good at all. At least with shared libraries you can swap the .so file - if the fix has not modified the external interface.
A counterargument to this is that recompilation should not be painful: it should be standard.
And this is already somewhat true anyway: binary package distributions are a little like this. Someone else handles the compiling.
So take it further: distribute binary diffs for packages to control download size, and kill shared libraries dead. Have the compiler emit dedupe-friendly executables so your filesystem can deduplicate identical blocks across binaries, and let the kernel's page-sharing machinery sort out runtime memory.