What I’ve seen isn’t people disregarding Zig because it’s just another memory-unsafe language, but rather disqualifying Zig because it’s memory-unsafe, and they don’t want to deal with that, even if some other aspects of the language are rather interesting and compelling. But once you’re sold on memory safety, it’s hard to go back.
This is really the crux of the argument. I absolutely love the Rust compiler, for example; going back to Zig would feel like a regression to me. There is a whole class of bugs that my brain now assumes the compiler will handle for me.
Problem is, like they say the stock market has predicted nine of the last five recessions, the Rust compiler stops nine of every five memory safety issues. Put another way, while both Rust and Zig prevent memory safety issues, Zig does it with false negatives while Rust does it with false positives. This is by necessity when using the type system for that job, but it does come at a cost that disqualifies Rust for others...
Nobody knows whether Rust and/or Zig themselves are the future of low-level programming, but I think it's likely that the future of low-level programming is that programmers who prefer one approach will use a Rust-like language, while those who prefer the other will use a Zig-like language. It will be interesting to see whether the preferences are evenly split, though, or whether one of them has a clear majority.
C++ already illustrates this idea you're talking about and we know exactly where this goes. Rust's false positives are annoying, so programmers are encouraged to further improve the borrowck and language features to reduce them. But the C++ or Zig false negatives just means your program malfunctions in unspecified ways and you may not even notice, so programmers are encouraged to introduce more and more such cases to the compiler.
The drift over time is predictable, compared to ten years ago Rust has fewer false positives, C++ has more false negatives.
You are correct to observe that there is no middle choice here, that's Rice's Theorem, non-trivial semantic correctness is Undecidable. But I would argue we already know what you're calling the "false positive" scenario is also not useful, we're just not at the point where people stop doing it anyway.
> C++ already illustrates this idea you're talking about and we know exactly where this goes.
No, it doesn't. Zig is safer than C++ (and it's much simpler, which also has an effect on correctness).
You're making up a binary distinction and then deciding that, because C++ falls on the same side of it as Zig (except it doesn't, because Zig eliminates out-of-bounds access to the same degree as Rust, not C++), what applies to one must apply to the other. There is simply no justification for that equivalence.
> There is no middle choice here, that's Rice's Theorem, non-trivial semantic correctness is Undecidable.
That's nothing to do with Rice's theorem. Proving some properties with the type system isn't a general algorithm; it's a proof you have to work for in every program you write individually. There are languages (Idris, ATS) that allow you to prove any correctness property using the type system, with no false positives. It's a matter of the effort required, and there's nothing binary about that.
To get a sense of the theoretical effort (the practical effort is something to be measured empirically, over time), consider the set of all C programs and the effort it would take to rewrite an arbitrary selection of them in Rust (while maintaining similar performance and footprint characteristics). I believe that effort is larger than the effort of translating a JS program to a Haskell program.
> There is simply no justification to make that equivalence.
I explained in some detail exactly why this equivalence exists. I actually have a small hope that this time there are enough people who think it's a bad idea that we won't have to watch it play out for decades, as we did with C and C++, before the realisation sinks in.
Yes it's exactly Rice's Theorem, it's that simple and that drastic. You can choose what to do when you're not sure, but you can't choose (no matter how much effort you imagine applying) to always be sure†, that Undecidability is what Henry Rice proved. The languages you mention choose to treat "not sure" the same as "nope", like Rust does, you apparently prefer languages like Zig or C++ which instead treat "not sure" as "it's fine". I have explained why that's a terrible idea already.
The underlying fault, which is why I'm confident this reproduces, is in humans. To err is human. We are going to make mistakes and under the Rust model we will curse, perhaps blame the compiler, or the machine, and fix our mistake. In C++ or Zig our mistake compiles just fine and now the software is worse.
† For general purpose languages. One clever trick here is that you can just not be a general purpose language. Trivial semantic properties are easily decided, so if your language can make the desired properties trivial then there's no checking and Rice's Theorem doesn't apply. The easy example is, if my language has no looping type features, no recursive calls, nothing like that, all its programs trivially halt - a property we obviously can't decidably check in a general purpose language.
> I explained in some detail exactly why this equivalence exists.
No, you assumed that Zig and C++ are equivalent and concluded that they'll follow a similar trajectory. It's your premise that's unjustified.
A problem you'd have to contend with is that Rust is much more similar to C++ than Zig in multiple respects, which may matter more or less than the level of safety when predicting the language trajectory.
> But you can't choose (no matter how much effort you imagine applying) to always be sure
That is not Rice's theorem. You can certainly choose to prove every program correct. What you cannot do is have a general mechanism that would prove all programs in a certain language correct.
> One clever trick here is that you can just not be a general purpose language.
That's not so much a clever trick as the core of all simple (i.e. non-dependent) type systems. Type-safety in those languages then trivially implies some property, which is an inductive invariant (or composable invariant) that's stronger than some desired property. E.g. in Rust, "borrow/lifetime-safety" is stronger than UAF-safety.
However, because some effort to establish any property must exist, we can estimate it for a language that offers the property trivially by looking at the cost of translating a correct program from a language that doesn't guarantee the property into one that does. The reason this is more of a theoretical point than a practical one is that it could be reasonably argued that writing a memory-safe program in C is harder than doing it in Rust in the first place, but either way, there's some effort there that isn't there when writing the program in, say, Java.
I've been hearing about how I'll inevitably write all this unsafe Rust for... four years now.
Some time back I checked and I had written exactly one unsafe block, and so I inspected it again and I realised two things:
1. It was no longer necessary; Rust could now just do this safely. I rewrote it in safe Rust.
2. It was technically Undefined Behaviour; predictably, given the chance to shoot myself in the foot, that's exactly what I had done. Like a lot of C and C++ it likely wouldn't in fact blow my foot off in any real scenario, but who knows? Not me, that's for sure.
Ah yes, "But what about other safety?". An entire year of hand wringing from C++ people was predicated on this. In one of his rambling proposal papers Bjarne listed all manner of exciting different kinds of safety he'd imagined and which, he assured us, C++ was already almost able to achieve thanks to his wisdom and foresight.
And every single item on his list of course requires the thing C++ doesn't have, memory safety. You can't write software which has any non-trivial properties when it has unconstrained Undefined Behaviour. It really shouldn't be this hard but I have reluctantly accepted that this "argument" is not made in good faith.
Which is why there is an effort to formally verify the unsafe use in the Rust standard library.
I would also say that unsafe causes a very different human reaction.
When, as in Zig, C, or C++, everything is potentially unsafe, you can't scrutinize everything.
When a PR in Rust contains unsafe code, everyone wants to understand what happens, because it is both rare and everyone is cautious about the dangers posed. The first question on everyone's mind is always: does this need unsafe?
Suppose I have a self-contained Zig project and it has a nasty memory safety bug - how can I identify where the cause might be? What parts of my project source are potentially unsafe?
You've said it's not everything, so, what's excluded? What can I rule out?
The same useless claim could be made for C and with the same effect.
The trick Rust is doing here that Zig is not is that Rust's safe contracts are always what we would call wide contracts. As a safe Rust programmer it's never your fault because you were "holding it wrong". For example, if you insist on sorting a Vec<Foozle> even though every Foozle claims to be greater even than itself, Rust doesn't say (as C and C++ do) "too bad, you broke it, so now all bets are off". Sorting won't be useful, because Foozles don't have a coherent ordering, but your program is fine. In fact today it's quite fast to uselessly "sort" that container.
Zig has numerous narrow contracts, which means that when you write Zig touching any of those contracts it is your responsibility as a Zig programmer to ensure all their requirements are upheld, and when you in turn create code or types you will likely find you add yet further narrowness - so you can be, and in practice often are, "holding it wrong".
> The same useless claim could be made for C and with the same effect
It really can't be.
Memory safety is problematic because it's a common cause of some dangerous bugs. Of the two main kinds of memory safety, Rust generally eliminates both, leaving only unsafe Rust and foreign code as possible sites of memory unsafety. Zig, on the other hand, generally eliminates only the more dangerous kind, leaving only unsafe Zig and foreign code as possible sites of that.
Mind you, the vast majority of horrific, catastrophic bugs are not due to UAF. So if we get a horrific, catastrophic bug in Rust, we can eliminate UAF as a cause, leaving us with most of the possible causes, just as in most programming languages used to write most of the software in the world already.
This "ha-ha, you also got a segfault while I only got all the other bugs" point doesn't make sense from a software-correctness perspective.
There is no binary line you can draw between Rust and Zig, dismissing Zig's superior safety over C, that couldn't also be drawn between Rust and languages that make far stronger guarantees, putting Rust in the same bucket as C. If you think that the argument "Rust, just like C, is unable to guarantee the vast majority of correctness properties that ATS can, therefore it is equally useless" is silly, then so is trying to put Zig and C in the same bucket.
If you believe that eliminating certain classes of bugs is important for correctness even when you don't eliminate most bugs, then I don't see how a language that eliminates the more dangerous class of the two that Rust eliminates is "just as useless" as a language that eliminates neither.
I have been programming in both C++ and Java for a very long time, and while I appreciate Java's safety, the main difference between the two languages for me hasn't been a difference in correctness but in productivity. That productivity comes from Java's superior abstraction: I can make many different kinds of local changes without affecting other code at all, and that is not the case in a low-level language, be it C, C++, Zig, or Rust. I think it's good that Zig and Rust offer bounds ("spatial") safety. I also think it's good that Rust offers UAF ("temporal") safety, but I find the price of that too high for my liking.
Of course, my experience is not universal because I use C++ only for really low-level stuff (mostly when working on the HotSpot VM these days) where both Zig and Rust would have been used in their unsafe flavours anyway, because I'm more than happy to pay the increased memory footprint for higher productivity in other cases.
I guess one could claim that one feature is useful because it eliminates certain classes of bugs while another is useless even though it also eliminates certain classes of bugs (which happen to be the more impactful subset of the former); it's just not a very compelling claim, especially the way you presented it, which is:
Something bad happens, say an attacker steals my data. Rust is useful because I can eliminate spatial and temporal safety as the cause, leaving only all others, while in Zig I can eliminate spatial unsafety as the cause (leaving all others), but that's just as useless as C, where I can eliminate neither spatial nor temporal unsafety as the cause.
I can see how it may be reasonable to argue that all are equally useless, but given that spatial unsafety is the largest subclass of unsafety that causes security vulnerabilities, I'm not convinced by the argument that eliminating it is completely useless while eliminating a somewhat larger class (i.e. adding a smaller marginal benefit than the first step) becomes very useful.
Have you noticed how zero is categorically different from the other numbers, even the very small ones? It's an additive identity. No matter how often we sum together zeroes, the answer is still zero and that won't work for other values. Being the additive identity is categorically different, even though it might seem as though zero is just even smaller than a tenth or a millionth, it's different.
In (safe) Rust we categorically don't have type unsafety. Safe Rust function A doesn't have unsafety, and function B which calls it doesn't have unsafety, and function C which calls that doesn't either and so on forever. So in the exercise we talked about the answer is that the fault won't be anywhere in the safe Rust. But because we don't have this in "safe" Zig even though you say there's spatial safety, oops the lack of temporal safety means our apparently OK code might induce the spatial safety issues we thought couldn't exist.
It's OK, the C++ Convener is absolutely convinced of the same line of thinking as you. Surely if they can just keep finding adjustments to make C++ fractionally safer it'll be as safe as Rust. Right? If every three years they make it 10% less unsafe, surely in thirty years it's... oh right, about 65% less unsafe. Huh.
> In (safe) Rust we categorically don't have type unsafety
This is very inaccurate. Simple (i.e. non-dependent) types can describe very, very few properties. 99% of correctness properties cannot be described with simple types at all. That is exactly why, from ATS's vantage point, Rust is about as "safe" as Assembly; its types can guarantee almost nothing, while ATS can guarantee virtually everything.
So now the question is, with what little simple types give us (which is still useful), how much are we willing to pay for what confidence in their soundness. After all, Rust doesn't actually give us 100% safety, because we interact with C code etc., but it does give us somewhat higher confidence than Zig does. Since we don't have 100% confidence anyway - there are no zeros or ones here, on either the cost or the benefit side - how much are we willing to pay for what amount of added confidence?
Some people find the cost of Rust to be worth the added confidence; some don't. There is no binary line here.
> It's OK, the C++ Convener is absolutely convinced of the same line of thinking as you. Surely if they can just keep finding adjustments to make C++ fractionally safer it'll be as safe as Rust.
I'm not interested in making C++ as safe as Rust. For applications programming I use Java, which is somewhat safer than Rust, and for low-level code, I'm much more interested in other correctness properties than just safety. Safety gives me some small portion of the correctness I want, and it's great when that small portion is mostly free, but the bang-for-the-buck that I get from Rust is too low for me. I pay for all this complication in exchange for only guaranteeing no UAF? For that effort, I want a lot more.
> After all, Rust doesn't actually give us 100% safety, because we interact with C code etc..
And so, after all this long thread you're back to just saying you weren't actually talking about safe Rust in the conversation about safe Rust. It was all a big waste of my time.
No, I was talking about safe Rust, which also interacts with potentially unsafe C code, you know. What do you think schedules the threads running your safe code? How do you think your safe code reads from a socket?
I thought there's no such thing as "safer". There's only whatever guarantees Rust happens to make that can prevent almost 1% of the bugs a language like ATS can prevent, which is safe, and anything that doesn't make those exact same guarantees, which is unsafe and completely worthless.
Anyway, you help prove my point, which is that even those who claim to believe in a binary distinction between what Rust happens to guarantee and anything else don't actually believe that, and understand that it's all about numbers and risks. There are many measures to reduce bugs - some through guarantees in the language, others without guarantees - that each have some level of effectiveness and some cost, and the goal is to balance those costs and reduce bugs as much as possible.
Anyone who follows the research in software correctness over the past five decades should know that in the seventies we thought we had the answers, but since the nineties we've known that there is no one right answer to correctness, and that there's no way to tell in advance which methods will be more or less effective.
And quite evidently the design and community of both C and C++ lead to designs with massive numbers of very high severity bugs.
Of course ignoring how Rust also helps against logic bugs by managing null values and having an expressive type system.
You seem to be the one completely focused on the binary question, since that is the only thing that allows C, C++, etc. to still be part of the conversation.
Bounds safety by default, nullability is opt-in and checks are enforced by the type-system, far less "undefined behaviour", less implicit integer casting (the ergonomics could still use some work here), etc.
This is on top of the cultural part, which has led to idiomatic Zig being less likely to heap allocate in the first place, and more likely to consider ownership in advance. This part shouldn't be underestimated.
You presumably intend "shouldn't be underestimated" rather than "can't be". I agree that culture is crucial, but the technology needs to support that culture and in this respect Zig's technology is lacking. I would love to imagine that the culture drives technology such that Zig will fix the problem before 1.0, but Zig is very much an auteur language like Jai or Odin, Andrew decides and he does not seem to have quite the same outlook so I do not expect that.
> Maybe if someone bends over backwards to rationalize it, but not in any real sense.
In a simple, real sense. Zig prevents out-of-bounds access just as Rust does; C++ doesn't. Interestingly, almost all of Rust's complexity is invested in the less dangerous kind of memory unsafety (https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html).
> You can't build RAII and moves into zig.
So RAII is part of the definition of memory safety now?
Why not just declare memory safety to be "whatever Rust does", say that anything that isn't exactly that is worthless, and be done with that, since that's the level of the arguments anyway.
We could, of course, argue over which of Rust, Zig, and C++ offers the best contribution to correctness beyond the sound guarantees they make, except these are empirical arguments with little empirical data to make any determination, which is part of my point.
Software correctness is such a complicated topic and, if anything, it's become more, not less, mysterious over the decades (see Tony Hoare's astonishment that unsound methods have proven more effective than sound methods in many regards). It's now understood to be a complicated game of confidence vs cost that depends on a great many factors. Those who claim to have definitive solutions don't know what they're talking about (or are making unfounded extrapolations).
Then why do my data structures detect if I go out of bounds?
> Interestingly, almost all of Rust's complexity is invested in the less dangerous kind of memory unsafety
I didn't say anything about rust.
> So RAII is part of the definition of memory safety now?
Yes. You can clean up memory allocations automatically with destructors and have value semantics for memory that is on the heap.
> Why not just declare memory safety to be "whatever Rust does", say that anything that isn't exactly that is worthless, and be done with that, since that's the level of the arguments anyway.
Why are you talking about rust here? Focus on what I'm saying.
> We could, of course, argue over which of Rust, Zig, and C++
> if anything, it's become more, not less, mysterious over the decades
Says who?
I don't care about rust or zig, I'm saying that these are solved problems in C++ and I don't have to deal with them. Zig does not have destructors and move semantics.
> Then why do my data structures detect if I go out of bounds?
Because you have iterator debugging and/or assertions turned on and are only using non-primitive data structures (e.g. std::vector, std::array).
Zig does the thing that Rust and Go do where it makes the primary primitive for pointers to chunks of memory (slices) bounds checked. You can opt out with optimization settings, but I think most programs will build in "safe release" mode unless they're very confident in their test coverage.
It's strictly better than C++, because in practice codebases are passing lots of `(data, len)` params around no matter how strongly you emphasize in your style guide to use `std::span`. The path of least resistance in Zig, including the memory allocator interface, bundles in language-level bounds checking.
> I think most programs will build in "safe release" mode
Do you have any citations to support this 'safe release' theory? There are not many Zig applications, and not many of them document their decisions. One I could find [1] does not mention safe anywhere.
> Standard optimization options allow the person running zig build to select between Debug, ReleaseSafe, ReleaseFast, and ReleaseSmall. By default none of the release options are considered the preferable choice by the build script, and the user must make a decision in order to create a release build.
But for more opinionated recommendations, ReleaseSafe is clearly favored:
> ReleaseSafe should be considered the main mode to be used for releases: it applies optimizations but still maintains certain safety checks (eg overflow and array out of bound) that are absolutely worth the overhead when releasing software that deals with tricky sources of input (eg, the internet).
Memory leaks are unrelated to memory safety. That is to say, code that leaks memory is memory safe. So I'm not sure what RAII is supposed to help with.
A problem not solved in C++ is the need to reserve a single bit-pattern per type that can be moved from, to indicate that it has been moved from (and is not a valid value for any other purpose).
> Then why do my data structures detect if I go out of bounds?
I didn't mean you can't write C++ code that enforces that, I said C++ itself doesn't enforce it.
> Yes. You can clean up memory allocations automatically with destructors and have value semantics for memory that is on the heap.
Surely there are other ways to do that. E.g. Zig has defer. You can say that you may forget to write defer, which is true, but the implicitness of RAII has caused (me, at least) many problems over the years. It's a pros-and-cons thing, and Zig chooses the side of explicitness.
> Why are you talking about rust here? Focus on what I'm saying.
You're right, sorry :)
> Says who?
Says most people in the field of software correctness (and me https://pron.github.io). In the seventies, the prevalent opinion was that proofs of correctness would be the only viable approach to correctness. Since then, we've learnt two things, both of which were surprising.
The first was new results in the computational complexity of model checking (not to be confused with the computational complexity of model checkers; we're talking about the intrinsic computational complexity of the model checking problem, i.e. the problem of knowing whether a program satisfies some correctness property, regardless of how we learn that). This included results (e.g. by Philippe Schnoebelen) showing that even though one could reasonably expect language abstractions to make the problem easier than in the worst case, they don't.
The second was that unsound techniques, including engineering best practices, have proven far more effective than was thought possible in the seventies. This came as quite a shock to formal methods people (most famously, Tony Hoare, who wrote a famous paper about it).
As a result, the field of software correctness has shifted its main focus from proving program correct to finding interesting confidence/cost tradeoffs to reduce the number of bugs, realising that there's no single best path to more correctness (as far as we know today).
> I'm saying that these are solved problems in C++ and I don't have to deal with them. Zig does not have destructors and move semantics.
That's true, but these are not memory safety guarantees. These are mechanisms that could mitigate bugs (though perhaps cause others), and Zig has other, different mechanisms to mitigate bugs (though perhaps cause others). E.g. see how easy it is to write a type-safe printf in Zig compared to C++, or how Zig handles various numeric overflow issues compared to C++. So it's true that C++ has some features we may find helpful that Zig doesn't and vice-versa, we can't judge which of them leads to more correct programs. All I said was that Zig offers more safety guarantees than C++, which it does.
And C has free, but you have to remember to use it and use it correctly every single time instead of the memory working by default with no intervention.
> Says most people in the field of software correctness
Not true, the last 30 years have had much safer languages than before: Java, scripting languages, modern C++, and Rust.
> That's true, but these are not memory safety guarantees.
Pragmatically they mean you don't have to worry about bounds checking or memory deallocation and it stops being a problem. Zig doesn't have this and it doesn't have safety guarantees either.
And if you divide by 0 your program still crashes. The reality is that you can use vectors for memory allocations and you never have to worry about it. If you do need to wrap resource allocation, you do it once, test it and it will probably work from then on. This is much better than the alternative of having to remember to free the memory, close the file, unlock the mutex correctly every single time you need one.
I don't agree that "you never have to worry about it" unless you're also using smart pointers, which is rarely what I want.
I think that the alternative where all allocations and deallocations are made clear (in Zig, allocating routines are "coloured" by convention) is the better alternative, at least for the kind of low-level programming I do and for my way of thinking about low-level code.
When I write code where I don't want to see or worry about each implementation detail or see exactly where and when each operation is executed, I use Java.
If you don't want memory leaks, it probably is what you want.
There isn't a ton of difference between putting a delete in a destructor and using a smart pointer, but the best approach is to go beyond smart pointers and just use a vector, which does everything for you.
A lot of this seems like you haven't done a lot of modern C++ to see how elegant and smooth it is.
> If you don't want memory leaks, it probably is what you want.
No, sorry.
> A lot of this seems like you haven't done a lot of modern C++ to see how elegant and smooth it is.
It's true I don't want to use modern C++ (except for certain compile-time tests), and that's because when I write low-level code what I care about the most is being able to see the machine instructions that will be emitted, especially the ones related to memory management (I don't care so much about the computational instructions; the compiler, and the CPU, will rewrite them dramatically anyway).
If I find myself needing pretty abstractions, I reconsider my use of a low-level language. That's also why I'm philosophically opposed to the concept of zero-cost abstractions. What I want from C++ that C doesn't give me is templates and some other compile-time stuff, which are much more convenient for me than C macros. I don't want any implicitness in my low-level code. Zig gives me exactly what I want from a language that specialises in low-level code.
My problem with zero-cost abstractions is that they result in code that looks high-level, while still only really having a low abstraction level (what I mean by a high or low abstraction level is the extent to which I can make local changes without influencing non-local code). The resulting code looks pretty on the page, but makes me work a lot harder to understand what is being executed and when. When I don't want to care about such details, I use Java.
Just last week I had an interesting discussion, that's somewhat similar to this, about Haskell with a colleague. He said something like, look how clearly you can see the algorithm on the screen. And I said, yes, it looks great when trying to understand what it does by reading, but it's terrible to understand when you try analysing it in the debugger or profiler. The point is that there's different kinds of information that code can communicate. Sometimes you want just the function of the algorithm to be clear and want the language to hide execution details, and sometimes you're just as interested in the execution details.
`smart pointers + destructors => no memory leaks` does not entail `no memory leaks => smart pointers + destructors`.
"My solution offers X, therefore, if you want X, you should use my solution" is just a logical fallacy; the conclusion doesn't follow from the premise.
Also, I'm not so sure smart pointers and destructors actually prevent memory leaks. E.g. cycles. You can deal with cycles, but memory leaks due to them are not "prevented" just by the use of smart pointers and destructors.
Java does prevent leaks due to cycles, but it still doesn't prevent leaks due to "forgotten objects", so you get memory leaks in Java, too, even though it has fewer leaks than C++/Rust with GC pointers. So given that you can have fewer leaks than with smart pointers and still not have them completely gone, I wouldn't say that smart pointers "prevent" leaks. But yes, they're one of the ways to reduce them.
> Also, I'm not so sure smart pointers and destructors actually prevent memory leaks
Well, they do, that's why people use them. I'm not sure why you would make the case against using another language that makes management manual.
Also, reference-counting cycles are only even possible if you use reference counting in the first place, which isn't necessary for single-threaded, scope-based memory management.
> Java does prevent leaks due to cycles
Fantastic, but you said zig is safer than C++, what does java have to do with it?
You can do whatever you want, but systemically it is a lot better than doing it manually and anyone experienced with modern C++ will tell you it essentially stops being a problem.
Also you say smart pointers don't prevent memory leaks, but your only example is cycles, which only happen with reference counting, which is only even necessary with multi-threading. It also implies a data structure that contains a bunch of shared pointers internally that end up referencing each other, which implies a linked list or tree made out of shared pointers, which is essentially a huge mistake in the first place. In practice this doesn't really happen.
> You can do whatever you want, but systemically it is a lot better than doing it manually and anyone experienced with modern C++ will tell you it essentially stops being a problem.
Yeah, it's fine, but I think that systematically the Zig approach is a lot better for my needs and preferences.
> In practice this doesn't really happen.
I've only been programming low-level code for 25 years or so, including hard realtime safety-critical software, where a missed deadline or a stack overflow means dead people, so I have some grasp on what can really happen.
There's a clear tradeoff between forgetting to write some code and not noticing when code runs when you may not expect it to. Saying that one is universally better than the other is, at the very least, unsubstantiated.
> And C has free, but you have to remember to use it and use it correctly every single time instead of the memory working by default with no intervention.
Tangential, but memory leaks are not considered a safety issue, especially by those who do like to contrast with Rust (as it isn't prevented in Rust).
If we're talking about features that help (though not completely avoid) some bugs, you can't just consider the features C++ has and Zig doesn't, but also consider the relevant features Zig has and C++ doesn't.
Like I said, I don't know which of those two languages results in more correct programs (just as I don't know the answer for Zig vs Rust), but I do know that Zig offers more safety guarantees than C++, and Rust offers more safety guarantees than Zig. I certainly don't claim that more safety guarantees always equals more correctness at a lower cost.
Even more tangentially, in the Java world we have this thing called "integrity" (https://openjdk.org/jeps/8305968) which is the ability of Java code to locally establish inviolate invariants that are guaranteed to hold globally (unless the application author - importantly not any library code - explicitly allows them to be violated). C++ scores quite low on the integrity front, as virtually all intended invariants can be violated without a global flag, sometimes in ways that are hard to detect. In both Rust and Zig, integrity violations are generally easier to at least detect (although in Zig they're sometimes harder to establish in the first place; this is intentional, and I don't entirely agree with the justification for that, although I can see its merits in a low-level language).
> Not true, the last 30 years have had much safer languages than before java, scripting languages, modern C++ and rust.
I don't see how that contradicts what I said, especially since languages that offer even more correctness - such as Idris or ATS - have had effectively zero adoption. The languages that have succeeded are safer than C or FORTRAN, but also clearly compromise on what they offer (compared to Idris/ATS) because of costs. They very much embody an acceptance of tradeoffs, and much of the memory safety in most safe languages is offered through GCs, which come with the cost of higher memory footprint. If anything, their growing popularity has come due to advancements in GCs.
Rust (you brought it up this time) is particularly interesting, because it offers something different than before to prevent UAF but at a higher cost than previous popular safe languages. While I don't know how popular Rust will be in the future, its current adoption is quite significantly lower than any language that's ever become popular at the same age.
> Pragmatically they mean you don't have to worry about bounds checking or memory deallocation and it stops being a problem
I haven't noticed that either one of these has "stopped being a problem", and I think that those who either sell or buy Rust do so because they believe these are still significant problems in C++ (and I would agree, except I think there are worse problems in C++ - that Rust, unfortunately, adopted - even with respect to correctness, that Zig attempts to solve).
> Zig doesn't have this and it doesn't have safety guarantees either
Zig definitely has safety guarantees around bounds and numeric overflow that C++ doesn't.
> in the Java world we have this thing called "integrity"
Your claim was that zig is 'safer' than C++
> Zig definitely has safety guarantees around bounds and numeric overflow that C++ doesn't.
This can be built in to a class too if someone really wants a bunch of branching in their math.
It seems like now safety is being redefined so that memory leaks don't count and numeric overflow needs to be done the way zig does it. If your program leaks memory and runs indefinitely, it eventually crashes, and that means you need to free memory, which means you need to free it at the right time, exactly once.
There is no one definitive definition of memory safety, but it generally refers to things that can lead to undefined behaviour (in the C and C++ sense), usually due to "type confusion" (or sometimes "heap pollution"), i.e. referencing an address of memory that contains data of one type as if it were another, which can happen due to both bounds or UAF violations. Memory leaks don't cause undefined behaviour.
> This can be built in to a class too if someone really wants a bunch of branching in their math.
Let me say this again: The Zig language, just like Rust, guarantees that there are no bounds violations (except in syntactically demarcated unsafe code). C++ just doesn't do that.
That is not to say that the lack of this guarantee in C++ means you can't write correct programs in C++ as easily as in Zig or in Rust, but it is, nevertheless, a difference in the guarantees made by the language.
> It seems like now safety is being redefined to say that memory leaks don't count and numeric overflow needs to be done like zig
Memory unsafety is generally considered to be some subset of undefined behaviour (possibly including all undefined behaviour). Out-of-memory and stack overflow errors are definitely problems, but as they don't cause undefined behaviour (well, depending on stack protection) they're not usually regarded in the class of properties called memory safety.
Numeric overflows, on the other hand, might also not be regarded as memory safety, but they are very much undefined behaviour in both C and C++.
> the Zig language, just like Rust, guarantees that there are no bounds violations (except in syntactically demarcated unsafe code). C++ just doesn't do that.
You said that already, but when saying zig is safer than C++, pragmatically it isn't, because C++ bounds checks in the standard library, while zig can never have the automatic resource management that C++ has, and that's what people use all day, every day.
We keep talking about completely different things. If we're talking about "features that can help reduce some bug" then C++ or Rust have some that Zig doesn't and Zig has some that C++ or Rust don't. Which ends up more pragmatic is an empirical question that's hard to answer without data, but certainly focusing only on what C++ has and Zig doesn't while ignoring what Zig has that C++ doesn't is a strange way to compare things (BTW, I've been programming in C++ for almost 3 decades, and I really dislike RAII and try to avoid it).
But if we're talking about memory safety - which is something very specific - then, for whatever it's worth, Zig is more memory-safe than C++ and Rust is more memory-safe than Zig.
> We keep talking about completely different things.
You said zig is safer than C++, then to make that argument you keep trying to redefine what safety means to include only features in the language syntax, not ones done in libraries, while saying memory leaks don't matter and automatically freeing memory correctly doesn't matter.
I am not redefining what safety means. I am using the same definition of safety used in this entire thread by those debating the pros and cons of Rust being safer than Zig.
I definitely didn't say that memory leaks don't matter. They could possibly matter more than memory safety. They are just not called memory safety bugs, or code injection bugs, or off-by-one bugs. Memory safety is a name given to a class of bugs that lead to undefined behaviour in C or C++. It's not necessarily the most important class of bugs, but it is one, and when we're talking about preventing code injection or memory safety issues, we're not talking about preventing memory leaks - even if they're worse.
Now, if you want to talk about memory leaks and not memory safety (again, it's just a name given to some bugs and not others) then C, C++, Zig, and Rust do not prevent them. Java prevents the "I forgot to free this object" kind, but not the "I forgot about this object" kind.
Now, because unlike memory safety, none of these languages prevents memory leaks, it's really hard to say which of them leads to the fewest memory leaks. You really like C++'s destructors and find them useful, I really hate C++'s destructors and find them harmful, and we each have different opinions on why our way is better when it comes to memory leaks. What we don't have is data. So you can say having destructors helps and I can say they don't until the end of time, but there's no way of knowing which is really better. So all we can do now is use the things we find useful without making broad generalisations about software correctness that we can't actually support with any evidence.
I would say: not the ones with spooky implicit actions and hidden heap allocations, but we won't know until we actually have data.
When writing in a low-level language, I always want to know where I'm allocating and where I'm deallocating. Zig makes allocations easier to spot than in C/C++/Rust, and deallocations easier to spot than in C++/Rust. That's just how I like it. I'm not saying everyone must have the same preference.
You might like making and freeing every heap allocation but that doesn't mean it's safer. Every other language would be in the category of 'hidden heap allocations' to a much greater degree. People who understand it don't feel that it is 'spooky'.
I didn't say it was safer, but by "safety" here I don't mean something that will likely work, but an absolute guarantee that it will regardless of what client code does (with the exception of clearly marked unsafe code that's easily found). C++ doesn't offer this kind of safety for pretty much anything.
So we're talking about the likelihood of making a mistake - and of not easily finding it - in the absence of safety. Without any empirical data, all we have to rely on is personal preferences and gut feeling, and those are different from one person to the other. Even expert programmers often violently disagree on what's "better", and I think that's because things can be better or worse for different use cases, but also better or worse for different programmers working on the same problem.
I would like there to be more empirical studies, but I also think we can probably live without them, because software is such an important economic activity that it's under significant selective pressures. If one approach significantly decreases the effort of delivering more value in software, it will spread almost universally (e.g. as unit tests have); the converse is that if something doesn't become universal, then it probably doesn't have a large universal impact.
Memory leaks are also a safety issue. Especially not running destructors can be a safety issue, but also a resource leak is at least a DoS. IIRC Rust also included not having memory leaks earlier in their definition of memory safety, but dropped it later.
The vast majority of catastrophic problems - nearly all of them, in the grand scheme of things - including those that can cause total system failure or theft of all data are not considered memory safety issues (which is one of the reasons that memory safety is overestimated or at least misunderstood, IMO, and why I prefer to talk about correctness in general). Memory safety refers to a specific kind of problems that correspond to undefined behaviour in C or C++. Memory safety issues are not necessarily any more or less severe than any other program weakness, it's just that for a long time they've been associated with low-level programming.
I'm not aware of any popular language - even a high level one - that prevents memory leaks with any kind of guarantee (although these come in different flavours too, and some kinds are prevented in Java). C/C++/Rust/Zig certainly don't.
Memory safety - as now being popularized by Rust in its current form - mostly corresponds to not having UB in C or C++. My point is that this is not the only definition and not even the definition Rust started with.
Memory leaks are often a part of the definition of memory safety because otherwise it is trivial to fix use-after-free, i.e. simply never free the memory. Rust dropped this part because it was too hard. So in some sense they cheated a little bit.
Well, when Rust came out I had only been programming in C and C++ for about 15, maybe 20 years, but I think that even then we generally used memory safety to refer to problems that can cause "type confusion". In any event, given that none of the languages mentioned here - C, C++, Zig, or Rust - prevent memory leaks, I don't think that the question of whether or not we include it under the umbrella of memory safety could offer insight on the interesting distinctions between these languages.
I think it is relevant exactly because Rust exceptionalism is based on sloppy arguments that are fallacious because they narrow down topics and definitions in some invalid way, i.e. only considered memory safety while ignoring safety in general, only considering a specific definition of memory safety, only considering the safe subset of Rust, only accepting language-level safety, etc. until at the end it looks that Rust is extremely different to other languages while it is just some incremental step.
I completely agree, but given that even in Java, which eliminates the memory leaks Rust doesn't, programs still have bugs and security vulnerabilities, I don't think it's about what is and isn't memory safety. Most of the software that runs the world has been written in memory-safe languages for a very long time. It's more about understanding the significance and role of memory safety. With that comes the insight that it isn't binary and, while important (out-of-bounds access in C and C++ is, as far as we know empirically, one of the leading causes of security vulnerabilities), eliminating it has both a finite benefit and a cost that need to be considered.
Unless you actually use the simplicity to apply formal methods I don't think simplicity makes a language safer. The exact opposite. You can see it play out in the C vs C++ arena. C++ is essentially just a more complex C. But I trust modern C++ much more in terms of memory safety.
> Unless you actually use the simplicity to apply formal methods I don't think simplicity make a language safer.
That depends what you mean by "safer", but it is an empirical fact that unsound methods (like tests and code reviews) are extremely effective at preventing bugs, so the claim that formal methods are the only way is just wrong (and I say this as a formal methods guy, although formal methods have come a long way since the seventies, when we thought the point was to prove programs correct).
> The exact opposite. You can see it play out in the C vs C++ arena. C++ is essentially just a more complex C. But I trust modern C++ much more in terms of memory safety.
I don't understand the logical implication. From the fact that there exists a complicating extension of a language that's safer in some practical way than the original you conclude that complexity always offers correctness benefits? This just doesn't follow logically, and you can immediately see it's false because Zig is both simpler and safer than C++ (and it's safer than C++ even if its simplicity had no correctness benefits at all).
> That depends what you mean by "safer", but it is an empirical fact that unsound methods (like tests and code reviews) are extremely effective at preventing bugs, so the claim that formal methods are the only way is just wrong (and I say this as a formal methods person)
I agree that tests and reviews are somewhat effective. That's not the point. The point is that if you look at the history of programming languages simplicity in general goes against safety. Simplicity also goes against human understanding of code. C and assembly are extremely simple compared to java, python, C#, typescript etc. yet programs written in C and assembly are much harder to understand for humans. This isn't just a PL thing either. Simplicity is not the same as easy, it often is the opposite.
> I don't understand the logical implication. From the fact that there exists a complicating extension of a language that's safer in some practical way than the original you conclude that complexity always offers correctness benefits? This just doesn't follow logically, and you can immediately see it's false because Zig is both simpler and safer than C++ (and it's safer than C++ even if its simplicity had no correctness benefits at all)
It's the clearest example: you take a simple language, you add a ton of complexity, and it becomes safer. You are right that zig is simpler and safer, but it's a green field language. Otherwise I might as well say rust is safer than zig and also more complex. The point is to isolate simplicity as the factor as much as possible.
I would even say that zig willingly sacrifices safety on the altar of simplicity.
> The point is that if you look at the history of programming languages simplicity in general goes against safety... C and assembly are extremely simple compared to java, python, C#, typescript
But Java and Python are simpler yet safer than C++, so I don't understand what trend you can draw if there are examples in both directions.
> It's the greatest example of you take a simple language, you add a ton of complexity and it becomes more safe.
But I didn't mean to imply that's not possible to add safety with complexity. I meant that when the sound guarantees are the same in two languages, then there's an argument to be made that the simpler one would be easier to write more correct programs in. Of course, in this case Zig is not only simpler than C++, but actually offers more sound safety guarantees.
So far I think the adoption in critical infrastructure (Linux, AWS, Windows, etc.) is clearly in Rust's favor, but I agree that something at some point will replace Rust. My belief is that more guardrails will end up winning no matter the language, since the last 50 years of programming have shown us we can't rely on humans to write bug-free code, and it is even worse with LLMs.
I think the problem with this attitude is the compiler becomes a middle manager you have to appease rather than a collaborator. Certainly there are advantages to having a manager, but if you go off the beaten track with Rust, you will not have a good time. I write most of my code in Zig these days and I think being able to segfault is a small price to pay to never have to see `Arc<RefCell<Foo<Bar<Whatever>>>>` again.
I view it as a wonderful collaborator, it tells me automatically where my code is wrong and it gets better with every release, I can't complain really. I think a segfault is a big price to pay, but it depends on the criticality of it I guess.
You can write rust without over-using traits. Regrettably, many rust libs and domains encourage patterns like that. One of the two biggest drawbacks of the rust ecosystem.
I can't imagine writing c++ or c these days without static analysis or the various llvm sanitizers. I would think the same applies to zig. Rather than need these additional tools, rust gives you most of their benefits in the compiler. Being able to write bugs and have the code run isn't really something to boast about.
I would rather rely on a bunch of sanitizers and static analysis because it is more representative of the core problem I am solving: Producing machine code. If I want Rust to solve these problems for me I now have to write code in the Rust model, which is a layer of indirection that I have found more trouble than it's worth.
In practice, almost all memory safety related bugs caught by the Rust compiler are caught by the Zig safe build modes at run time. This is strictly worse in isolation, but when you factor in the fact that the rest of the language is much easier to reason about, the better C interop, the simple yet powerful metaprogramming, and the great built in testing tools, the tradeoffs start to become a lot more interesting.
catching at compile time is much better, though. there are plenty of strange situations that you'll never reach at run time (for example, the odds of running into a tripwire increase over time, there are things that can only happen after a certain amount of memory fragmentation -- maybe you forgot an errdefer somewhere, etc.)
would you be satisfied if there was a static safety checker? (or if it were a compiler plugin that you trigger by running a slightly different command?). Note that zig compiles as a single object, so if you import a library and the library author does not do safety checking, your program would still do the safety checking if it doesn't cross a C abi boundary.