What I'm interested to know is how Apple got the single-core performance to higher levels (430+ points) than even the latest Intel Core i9 CPUs. Has anyone read or seen anything about this? I know the M1's clock speed is about 400 MHz lower, so what is different about the M1 that produces these results? My guess is that it's the SoC arrangement/layout, with the RAM in close proximity, as well as the faster L2/L3 cache, that lets data move faster at the lower clock speeds.
Way above my knowledge level, but I listened to a discussion on the Accidental Tech Podcast that I think is relevant. I think the info came from a technical interview Apple had given about the new chips.
Short form: different instructions take different numbers of clock cycles, and Apple seems to have optimised heavily for the most common instructions. Do that and you get much, much more work out of each clock cycle.
Longer form: the example was some very common instruction (a memory fetch, say, or some simple operation) that took e.g. 5 cycles on Intel and e.g. 2 cycles on Apple's ARM core. Because the operation is exceedingly common, getting it right yields outsized gains.
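To make the cycles-versus-clock trade-off concrete, here's a back-of-envelope sketch. The clock speeds and cycle counts are illustrative assumptions only (picked to match the "~400 MHz lower" and "5 vs 2 cycles" figures mentioned above), not real M1 or Core i9 numbers:

```python
# Back-of-envelope sketch: a lower-clocked core can still finish more
# work per second if common instructions cost fewer cycles.
# All figures below are hypothetical, for illustration only.

def instructions_per_second(clock_hz: float, cycles_per_instruction: float) -> float:
    """Throughput for a stream of identical instructions."""
    return clock_hz / cycles_per_instruction

# Hypothetical: the Intel core is clocked ~400 MHz higher, but the
# common instruction costs 5 cycles there vs 2 on the Apple core.
intel = instructions_per_second(3.6e9, 5)   # 0.72e9 instructions/s
apple = instructions_per_second(3.2e9, 2)   # 1.60e9 instructions/s

print(f"Intel: {intel:.2e} instr/s")
print(f"Apple: {apple:.2e} instr/s")
print(f"Apple advantage: {apple / intel:.2f}x")  # ~2.22x despite the lower clock
```

Real workloads mix many instruction types, so the true picture is an *average* cycles-per-instruction across the mix, but the direction of the effect is the same: shave cycles off the instructions that dominate the mix and the whole chip speeds up.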
That doesn't mean the RAM/cache proximity isn't a factor, but Apple having the compiler, the chip design, the APIs, and a lot of the core operating system in-house meant they could optimise really, really well. Obviously there's a lot more to it, like branch prediction and blah blah blah, as well.
(I seem to remember one of the originally-touted ideas behind RISC was exactly this: have fewer instructions, used all the time, that could be super-optimised.)
Honestly way above my knowledge level, but I still found it interesting.
(Podcast worth listening to and fun, not too technical)