Hacker News | wmf's comments

AMD has 16 cores, Apple has 18, Qualcomm has 18, Nvidia N1X has 20, and Intel has 24. All else being equal, you actually want as few cores as you can get away with, because fewer cores are less likely to be limited by Amdahl's Law. Arguably Intel/Nvidia CPUs are poorly designed, and benchmarks have no obligation to accommodate them.
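To make the Amdahl's Law point concrete, here's a quick sketch; the 90% parallel fraction is an assumed, illustrative number, not a measured one:

```python
# Amdahl's Law: overall speedup on n cores when a fraction p of the
# work is parallelizable and the rest is serial.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# With an assumed 90% parallel fraction, 16 cores give ~6.4x and
# 24 cores only ~7.3x -- the extra cores buy surprisingly little.
print(round(amdahl_speedup(0.9, 16), 2))  # 6.4
print(round(amdahl_speedup(0.9, 24), 2))  # 7.27
```

The serial fraction dominates quickly, which is why a design with fewer, faster cores can win on this kind of benchmark.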

(I'm not counting high-end workstation/server CPUs because, as others in this thread have explained, Geekbench isn't intended for them.)


Since the M5 Max hasn't been released yet, there's only one leaked benchmark so far, which is Geekbench, and everybody is (over)analyzing that score. https://www.tomshardware.com/pc-components/cpus/apples-18-co...

The article is probably right about text processing though. It sounds like they took an inherently parallel task with no communication and (accidentally?) crippled it.

I'm not sure what's going on with that subtest, and the lack of scaling is certainly egregious. But we've all encountered tasks that in theory could scale much better but in practice have been implemented in a more or less serial fashion. That kind of thing probably isn't a good choice for a multi-core test suite, but on the other hand: given that Geekbench has both multi-core and single-core scores for the same subtests (though with different problem sizes), it would be unrealistic if all the subtests were highly scalable. Encountering bad scalability is a frequent, everyday part of using computers.

We're not talking about unexplained bugs here. We're talking about a pointer that obviously has one bit flipped and it would be correct if you flipped that one bit back.

They're tweets.

Caches and registers are also subject to bitflips. In many CPUs the caches use ECC so it's less of a problem. Intel did a study showing that many bits in registers are unused so flipping them doesn't cause problems.

A common case is a pointer that points to unallocated address space, triggering a segfault; when you look at the pointer, you can see that it's valid except for one bit.
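A minimal sketch of that check (the addresses below are made up): a faulting pointer differs from a plausible valid address by exactly one bit iff the XOR of the two is a power of two.

```python
# Sketch: does a faulting pointer look like a known-good address
# with exactly one bit flipped?
def is_single_bit_flip(bad: int, good: int) -> bool:
    diff = bad ^ good
    # diff is a nonzero power of two <=> exactly one bit differs
    return diff != 0 and (diff & (diff - 1)) == 0

faulting = 0x7F3A_1000_0040  # hypothetical crash address
expected = 0x7F3A_1000_0000  # hypothetical valid allocation
print(is_single_bit_flip(faulting, expected))  # True: only bit 6 differs
```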

That tells you one bit was changed. It doesn't prove that single bit changed due to a hardware failure. It could have been changed by broken software.

[I work at Mozilla]

Yes, that's a confounding factor, and in fact the starting assumption when looking at a crash. Sometimes you can be pretty sure it's hardware. For example, if it's a crash on an illegal instruction in non-JITted code, the crash reporter can compare that page of data with the on-disk image that it's supposed to be a read-only copy of. Any mismatches there, especially if they're single bit flips, are much more likely to be hardware.
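A toy model of that comparison (the real crash reporter does this in C++ against the mapped binary; this sketch just shows the bit-level diff idea):

```python
# Sketch: compare an in-memory copy of a read-only page against the
# bytes on disk and report where they disagree, and by how many bits.
def bit_mismatches(in_memory: bytes, on_disk: bytes):
    flips = []
    for offset, (a, b) in enumerate(zip(in_memory, on_disk)):
        diff = a ^ b
        if diff:
            flips.append((offset, bin(diff).count("1")))
    return flips  # [(byte offset, number of flipped bits), ...]

page_on_disk = bytes(4096)
page_in_ram = bytearray(page_on_disk)
page_in_ram[100] ^= 0x08  # simulate a single-bit flip in RAM
print(bit_mismatches(bytes(page_in_ram), page_on_disk))  # [(100, 1)]
```

A short list of single-bit mismatches in a page that should be byte-identical to disk is a strong hardware signal; broken software rarely produces that pattern.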

But I've also seen it several times when the person experiencing the crashes engages on the bug tracker. Often they'll get weird, sporadic, but fairly frequent crashes when doing a particular activity, and so they'll initially be absolutely convinced that we have a bug there. But other people aren't reporting the same thing.

They'll post a bunch of their crash reports, and when we look at them, they're kind of all over the place (though, as they say, almost always while doing some particular thing). Often it'll be something like a crash in the garbage collector while watching a YouTube video, and the crashes are mostly the same but scattered in their exact location in the code. That's a good signal to start suspecting bad memory: the GC scans lots of memory and does stuff that is conditional on possibly faulty data.

We'll start asking them to run a memory test, at least to rule out hardware problems. When people do it in this situation, it almost always finds a problem. (Many people won't do it, because it's a pain and they're understandably skeptical that we might be sandbagging them and ducking responsibility for a bug. So we don't start proposing it until things start feeling fishy.)

But anyway, that's just anecdata from individual investigations. gsvelto's post is about what he can see at scale.


Broken software causes null pointer dereferences and similar logic errors. It would be extremely unusual to have an inadvertent

    ptr ^= (uintptr_t)1 << rand_between(0, 63);  // flip one random bit
that got inserted in the code by accident. That's just not the way that we write software.

Except no one is claiming the bit flip happened in the pointer itself, as opposed to the data being pointed to or some non-pointer value. Given how we write software, there are a lot more bits outside pointer values that still end up "contributing" to a pointer value. E.g. some offset field that's added to a pointer has a bit flip, so the resulting pointer also has a bit flip. But the offset field could have accidentally had a mask applied, or a bit set by mistake, due to the closeness of & and && or | and ||.
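A tiny sketch of that propagation point (addresses are made up): flip one bit in an offset and the computed pointer also comes out exactly one bit off, even though the pointer itself was never touched.

```python
# Sketch: a bit flip (or software bug) in an offset field propagates
# into the computed pointer, so the "one flipped bit" seen in a crash
# can originate outside the pointer value itself.
base = 0x7F00_0000_0000       # hypothetical allocation base
offset = 0x230                # hypothetical offset field
corrupted = offset ^ 0x200    # one bit wrong in the offset

print(hex(base + offset))     # 0x7f0000000230
print(hex(base + corrupted))  # 0x7f0000000030 -- differs by one bit
```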

I think that if you hit the crash in the same line of code many times, you can safely assume it's your own bug and not a memory issue.

If it's only hit once by a random person, memory starts being more likely.

(Unless that line of code is scanning memory or something.)


Deduplicating and identifying the source of a crash point is surprisingly hard, to the point that “it’s the only crash of its kind” could be a bug in your logic for linking issues.

Also, in an unsafe language all bets are off. A memory clobber, use-after-free, or race condition can generate quite strange and ephemeral crashes. Even if the majority of the time it generates the “same” failure mode, it can still sporadically generate a rare execution trace. It's best to stop thinking of these as deterministic processes and start treating them as distributions of possible outcomes.


Deduplicating and identifying the source of a crash point is surprisingly hard, to the point that “it’s the only crash of its kind” could be a bug in your logic for linking issues.

This is a bit vague to reply to very specifically, but yes, this is hard, which is why quite a few people work in this area. It's rather valuable to do so at Firefox scale.

Even if the majority of time it generates the “same” failure mode, it can still sporadically generate a rare execution trace.

This doesn't matter that much because the "same" failure mode already allows you to see the bug and fix it.


Supply will catch up to demand when demand goes down.

I've heard wired EarPods are great, and a USB DAC costs under $10. It's still easier to have the headphone jack, though.

Probably Qwen 3.5 122B.
