
> Given enough effort, you can banish all UB and their related CVEs from a codebase.

Sure. With infinite energy, anything’s possible - we can prevent all bugs. The problem is, we don’t have infinite energy.

> So it becomes a contest of which library had more scrutiny. I.e. you can compare a battle-tested library like cURL to stuff like baby's first XML parser.

I agree that software varies in quality, and that different people and teams can produce very different levels of quality. The issue is that we’re talking about languages used by many different people with varying capabilities and levels of scrutiny.

What really annoys me is that my phone can get hit with an RCE just from someone sending me a message. That’s exactly the kind of vulnerability that happens because languages like C and C++ are so easy to misuse, given their complexity and lack of safety. You just can’t compare that to Go or Rust; they’re in a completely different galaxy.

> How many UBs do you leave open? How many other errors can your program prevent (e.g. do you allow `null`/`nil`)? And was this error extremely obvious at the time of writing (the billion-dollar mistake)?

Everything is a trade-off. I find Go to be a middle ground where I can offload much of the memory management complexity to the garbage collector, yet still keep control over the aspects I care about, with acceptable performance for multi-threaded networking code.

I make extensive use of atomics and mutexes, and I don’t need "fearless concurrency": I can only recall one serious concurrency bug I’ve ever had (a data race), which took some debugging time to track down, and it wasn’t even in Go. YMMV.
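
To make that concrete, here’s a minimal sketch of the atomics-plus-mutex pattern I mean (the server struct and its fields are invented for illustration):

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    // server counts requests with an atomic so the hot path needs no
    // lock; the mutex guards the rarely-updated limit field.
    type server struct {
        requests atomic.Int64 // typed atomic, Go 1.19+
        mu       sync.Mutex
        limit    int
    }

    func (s *server) handle() {
        s.requests.Add(1) // lock-free increment on the hot path
    }

    func (s *server) setLimit(n int) {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.limit = n
    }

    func main() {
        s := &server{}
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                s.handle()
            }()
        }
        wg.Wait()
        s.setLimit(10)
        fmt.Println(s.requests.Load()) // always 100, no race
    }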

As for the “billion dollar mistake”, I understand the argument in the context of C or C++, but not in the context of Go. Once every few months, I get a nil pointer dereference notification from the monitoring stack. The middle network layer reports the error to the user, I fix it, and I move on: a $0 mistake.
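
For context, the "middle layer" here is essentially a recover-style middleware; a hedged sketch assuming a plain net/http stack (the route and handler are made up):

    package main

    import (
        "log"
        "net/http"
    )

    // recoverMiddleware turns a panic (e.g. a nil pointer dereference)
    // into a 500 response plus a log line for the monitoring stack,
    // instead of taking down the whole request path.
    func recoverMiddleware(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            defer func() {
                if err := recover(); err != nil {
                    log.Printf("panic in %s: %v", r.URL.Path, err)
                    http.Error(w, "internal server error",
                        http.StatusInternalServerError)
                }
            }()
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/boom", func(w http.ResponseWriter, r *http.Request) {
            var p *int
            _ = *p // deliberate nil dereference to exercise the middleware
        })
        log.Fatal(http.ListenAndServe(":8080", recoverMiddleware(mux)))
    }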

I’ve used Rust in many other projects, ones where I would never use C++ or C. Rust has a higher cognitive load and more language complexity, and refactoring is painful, but it’s a trade-off.

Under Go’s memory model, there’s really only one form of undefined behavior: a program with a data race has no defined semantics. That’s pretty much it. Compare that to C or C++; like I said, it’s a different galaxy. I find the whole discussion around Go’s safety exaggerated, and more theoretical than what actually comes up in practice.
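
And that single form of UB is at least mechanically detectable. A minimal sketch of a race that `go run -race` reports at runtime:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var n int // shared without synchronization: a data race
        var wg sync.WaitGroup
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                n++ // unsynchronized read-modify-write
            }()
        }
        wg.Wait()
        fmt.Println(n) // run with `go run -race .` to see the report
    }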



> Sure. With infinite energy, anything’s possible - we can prevent all bugs. The problem is, we don’t have infinite energy.

You don't need infinite energy, but it is a significant undertaking. UB bugs seem to obey a half-life law with regard to their lifetime, i.e. every X years the number of UB-caused problems halves.
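
Spelled out in my own notation (an illustration of the claim, not a fitted model): if UB-caused problems halve every X years, then

    N(t) = N_0 \cdot 2^{-t/X}

where N_0 is the count today and N(t) is the count t years out.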

> I can only recall one serious concurrency bug I’ve ever had (a data race) which took some debugging time to track down, but it wasn’t in Go. YMMV.

That's what's nasty about the UB in data races: it's not trivial to find, and it's a pain in the neck to reproduce. So even taking you at your word, you not having hit issues with data races isn't the same as "I wrote code free of data races".

> As for the “billion dollar mistake”, I understand the argument in the context of C or C++, but not in the context of Go.

Even without a chance of UB, you're adding an implicit `Type | null` to every piece of code that uses a nullable type without handling the null case. Each time you forget to handle it, you get either UB or a null pointer error. And the place the error manifests is different from where it's generated.
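
A minimal Go sketch of that generated-here-manifests-there gap (findUser and its behavior are invented for illustration):

    package main

    import "fmt"

    type User struct{ Name string }

    // findUser returns nil for an unknown id -- the implicit
    // `*User | nil` described above.
    func findUser(id int) *User {
        if id == 1 {
            return &User{Name: "alice"}
        }
        return nil // the nil is generated here...
    }

    func greet(u *User) string {
        return "hello, " + u.Name // ...but the panic manifests here
    }

    func main() {
        fmt.Println(greet(findUser(1))) // fine
        fmt.Println(greet(findUser(2))) // nil pointer dereference
    }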

Furthermore, in his talk (https://youtu.be/ybrQvs4x0Ps?t=1682), Tony Hoare mentions that trying to avoid null in a language like Java or C#, which permits it everywhere, also causes people to waste time working around it.

> The middle network layer will report the error to the user, I fix it, and move on, a $0 mistake.

Sure, but that's not a $0 mistake. It's {time to fix * hourly rate}. Even if you're doing it for OSS, you could have been spending that time doing something else.

> Everything is a trade off.

Sure. But trading "preventing errors" away for "ease of use" is not something I want in any language I use. Null/nil is about as good a concept today as silently skipping errors.


> Even without a chance of UB, you're adding an implicit `Type | null` to every piece of code that uses a nullable type without handling the null case. Each time you forget to handle it, you get either UB or a null pointer error. And the place the error manifests is different from where it's generated.

> Furthermore, in his talk (https://youtu.be/ybrQvs4x0Ps?t=1682), Tony Hoare mentions that trying to avoid null in a language like Java or C#, which permits it everywhere, also causes people to waste time working around it.

It's an insignificant amount of time in my experience, which makes this irrelevant in practice. I’ve never had to "work around them"; this is either a theoretical argument or just sloppy programming.

> Sure, but that's not a $0 mistake. It's {time to fix * hourly rate}. Even if you're doing for OSS, you could have been spending that time doing something else.

Oh, come on, you’re smart enough to understand that a few minutes every few months may not be exactly $0, but it’s such an insignificant value that it can be treated as $0.

> Sure. But trading "ease of use" for "preventing errors" is not something I want in any language I use.

Well, you can choose your trade-offs, and you can let others choose theirs. So I guess you use Ada for everything? Or maybe you use Coq to prove everything? Of course you don’t; you’re also making trade-offs, just different ones.

> Null/nil are about as good concepts today as silently skipping errors.

You’re exaggerating again. I don’t agree with that framing, and I back that opinion with practical experience.


> Oh, come on, you’re smart enough to understand that a few minutes every few months may not be exactly $0, but it’s such an insignificant value that it can be treated as $0.

You're smart enough to know that even small costs, summed over many people and a long time (circa 50 years), aggregate into massive numbers.
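
Back-of-the-envelope, with numbers made up purely for illustration:

    10 min/incident at $100/hr ≈ $17 per incident
    x 4 incidents/yr x 1,000,000 developers x 50 years ≈ $3.3B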

> Well, you can choose your trade offs, and you can let others choose theirs. So I guess you use Ada for everything? or maybe you use Coq to prove everything? Of course you don’t, you’re also making trade offs, just different ones.

Ada doesn't give you full memory safety; for that I think you need SPARK (the formally verifiable Ada subset). I can't find as much teaching material on it, but it's definitely on my radar. Also, I'm more of a Lean guy myself, but it has a different purpose than Rust, i.e. proving things.
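
(For flavor, "proving things" in Lean 4 looks like this; a trivial example, nothing to do with memory safety:)

    -- a minimal Lean 4 proof: addition of naturals is commutative,
    -- reusing the core-library lemma Nat.add_comm
    theorem myAddComm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b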

And proofs aren't everything.



