ARM was designed by the team at Acorn that had worked on the BBC Micro, which used a 6502. They decided to design a custom processor because they felt none of the 16- or 32-bit processors on the market at the time met the standard the 6502 had set for simplicity and low cost. So they designed their own architecture, taking cues both from the cutting-edge RISC research in academia and from the simple practicality of the 6502.
(On a similar note: the 6502’s main competitor, the Zilog Z80, is an early ancestor of x86! The Z80 is an enhanced clone of the Intel 8080, which of course the 8086 was heavily based on.)
This legacy still shows up today in the instruction mnemonics: ARM uses “branch” naming (BEQ - branch if equal, BCS - branch if carry set, etc) because that’s what the 6502 used, whereas x86 spells it “jump” (JE - jump if equal, JC - jump if carry, etc.). ARM uses LDR/STR to load and store registers from memory (like the 6502’s LDA/LDX/LDY/STA/STX/STY), whereas x86 just spells everything “MOV”. ARM only uses memory-mapped I/O to access hardware, whereas x86 has separate input and output ports.
The 6502 was a clone-ish of the Motorola 6800, designed by ex-Motorola engineers to be lower cost. The 6800 led to the 6809 (another competitor, used by the Tandy CoCo and IIRC the Dragon) and to the 68000 series, used by Apple in the Mac, Sun in its early systems, NeXT, the Amiga, Atari in their later systems, and more. That led to the PowerPC partnership of Motorola, Apple, and IBM.
PowerPC was outliving its useful life, due not to the ISA but to manufacturing limitations. So Apple went to Intel, but that wasn't a fit for mobile. Apple partnered with ARM to make their mobile chips, and those chips eventually grew into the M1 and M2, bringing them back to a RISC-ish platform like they had with PowerPC. So it's sort of a dual path back to the same place.
I honestly don't think there's any kind of straight line from the 6809 to the 68000. They share little other than the '68' prefix, coming from the same company, and being big-endian. The instruction sets are very different, and they were designed by different teams. The peripheral chip set and bus management were different too.
The 68k shares more with 1970s minicomputers, especially the PDP-11 and VAX architectures, than with any MPU that preceded it.
> So Apple went to Intel, but that wasn't fit for mobile. Apple partnered with ARM to make their mobile chips.
There’s a lot of interesting history there too: in 1990, after seeing the first-generation ARM CPU, Apple partnered with Acorn to co-found ARM Ltd and develop a mobile processor for the Apple Newton. Although the Newton was a failure, ARM was very successful and powered pretty much the entirety of the mobile device revolution — including of course the iPod and iPhone.
Apple’s co-founder status gives them a lot of influence over the ARM architecture — they led the AArch64 design process, and they seem to be allowed to do things that even other architectural licensees aren’t allowed to do, like implementing custom instructions in their ARM cores: https://news.ycombinator.com/item?id=29783549
And Apple’s iteration of ARM owes a lot to the PowerPC world as well — Apple’s processor design team was originally PA Semi, a company that designed PowerPC cores.
> arm64 is the Apple ISA, it was designed to enable Apple’s microarchitecture plans. There’s a reason Apple’s first 64 bit core (Cyclone) was years ahead of everyone else, and it isn’t just caches.
> Arm64 didn’t appear out of nowhere, Apple contracted ARM to design a new ISA for its purposes. When Apple began selling iPhones containing arm64 chips, ARM hadn’t even finished their own core design to license to others.
> ARM designed a standard that serves its clients and gets feedback from them on ISA evolution. In 2010 few cared about a 64-bit ARM core. Samsung & Qualcomm, the biggest mobile vendors, were certainly caught unaware by it when Apple shipped in 2013.
> Apple planned to go super-wide with low clocks, highly OoO, highly speculative. They needed an ISA to enable that, which ARM provided.
> M1 performance is not so because of the ARM ISA, the ARM ISA is so because of Apple core performance plans a decade ago.
Edit: I’m a bit puzzled by the claim that Apple was selling AArch64 before ARM had finished their first design - the A7 was announced at the end of 2013, but the A53 appeared in 2012?
It looks like the A53 was announced in October 2012, but I’ve found no indication of whether the design was actually finished by then [0]. And remember that ARM just sells IP and other companies are responsible for manufacturing it; it doesn’t look like anyone actually produced A53 cores until 2015 [1] — whereas Apple was shipping actual consumer products with A7s in them by October 2013.
Very fair point. OTOH there was a lot of detailed info on the A53 available in 2013 and SoCs were being announced with it.
I suspect this thread may be slightly exaggerating the position, but it's certainly the case that Apple was well ahead of all the competitors - and no doubt they were deeply involved in the ISA design.
Apart from unique functionality such as the separate I/O port bus and the ability to access the 16-bit registers' two 8-bit halves, there are quite a few instruction encoding quirks that reveal the ancestry of the 8086:
* the x86 encodes the first four registers in the order AX, CX, DX, BX. This roughly matches the Z80 ordering of AF (accumulator), BC (used as counter for string operations), DE (no particular purpose), HL (the "main" address register).
* PUSH/POP operate only on 16-bit registers
* the encoding of flags (SZ0H0P1C on x86, SZuHuPNC where u is undocumented on Z80). The "auxiliary carry" flag with its instructions such as DAA, and the "parity" flag, are particularly weird and common to both the Z80 and 8086. Flags exclusive to the 8086 (interrupt, direction, overflow) are kept in the high byte of the flag register, so that the LAHF instruction makes AX look like the Z80 AF register.
* the eight conditional jumps of the Z80 all reappear in the 8086 (C<NC<Z<NZ<S<NS<PE<PO in 8086 opcode order; the 8086 fits 8 more conditions in the holes)
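To make that last point concrete, here's a quick Python sketch of the 8086's short conditional-jump opcode block (0x70-0x7F, as documented in Intel's 8086 manual), picking out the eight conditions the Z80 also had:

```python
# The 8086's sixteen short conditional jumps, one per opcode 0x70-0x7F.
JCC_8086 = {
    0x70: "JO",  0x71: "JNO",
    0x72: "JC",  0x73: "JNC",   # also spelled JB / JAE
    0x74: "JZ",  0x75: "JNZ",   # also spelled JE / JNE
    0x76: "JBE", 0x77: "JA",
    0x78: "JS",  0x79: "JNS",
    0x7A: "JPE", 0x7B: "JPO",   # also spelled JP / JNP
    0x7C: "JL",  0x7D: "JGE",
    0x7E: "JLE", 0x7F: "JG",
}

# The eight conditions the Z80 also supported, listed in 8086 opcode order:
z80_conditions = {"JC", "JNC", "JZ", "JNZ", "JS", "JNS", "JPE", "JPO"}
shared = [m for _, m in sorted(JCC_8086.items()) if m in z80_conditions]
print(shared)  # ['JC', 'JNC', 'JZ', 'JNZ', 'JS', 'JNS', 'JPE', 'JPO']
```

The parity conditions (JPE/JPO) in particular are a dead giveaway of the 8080 lineage - parity flags had mostly fallen out of fashion by the time the 8086 shipped.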
The 8086 was designed to allow automated translation of 8080 assembly to 8086 assembly, so the instruction set may ‘look’ different but in fact has a lot in common with the 8080's.
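The translation worked because every 8080 register had a designated 8086 home (this is the mapping Intel's CONV86 translator used). A toy sketch of the idea - `translate_mov` is my own illustrative helper, not part of any real tool:

```python
# The 8080 -> 8086 register mapping that made automated source
# translation possible: each 8-bit register pair lands in one 16-bit
# 8086 register (BC -> CX, DE -> DX, HL -> BX), and the HL memory
# pseudo-register M becomes a [BX] memory operand.
REG_8080_TO_8086 = {
    "A": "AL",
    "B": "CH", "C": "CL",   # BC pair -> CX
    "D": "DH", "E": "DL",   # DE pair -> DX
    "H": "BH", "L": "BL",   # HL pair -> BX
    "M": "[BX]",            # memory addressed by HL -> addressed by BX
    "SP": "SP",
}

def translate_mov(line: str) -> str:
    """Translate an 8080 'MOV dst,src' into its 8086 equivalent."""
    dst, src = line.removeprefix("MOV ").split(",")
    return f"MOV {REG_8080_TO_8086[dst]},{REG_8080_TO_8086[src]}"

print(translate_mov("MOV A,M"))  # MOV AL,[BX]
```

Note how HL becoming BX explains why BX is the "main" address register on the 8086 - it inherited HL's job.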
It's not quite right, though, to call the Z80 an ancestor of the 8086 - they're certainly closely related, due to the common inheritance from the 8080.