Z690 Chipset Motherboards and Alder Lake CPU

It is an Apple-oriented benchmark. But we're talking Apple...

IMG_5391.jpeg


If benchmark performance is your measure, these numbers should have you scratching your head as to why the PC is no longer evolving.

I had suspected that with 12th gen, Intel accelerated some gaming-desktop performance juju it might otherwise have held back, to counter the buzz around AppleSi.

Now they're on a yearly generational release cadence while AppleSi mobile keeps outpacing them. Are they just bumping major version numbers at this point?

I hope the 14th gen doesn't get yet another new socket.

I suggest wondering a bit more about all the support units employed by Apple, for which there are no handy benchmarks. Like why did they move the NVMe controller away from the flash? What's that Neural Engine really doing, etc.

I recently updated a 2012 MacBook Pro to Ventura. It works and is usable, but video playback is no longer accelerated, and background tasks like media analysis (photo scanning) cause high CPU usage, fans revving, and rapid battery drain. It's obvious why it's no longer supported: it's too inefficient at ordinary things an iPhone does on battery all day.
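
For the curious, here's a minimal C sketch (my own illustration, not anything from Apple's patcher tooling) that asks VideoToolbox whether a hardware decoder is actually present. On a patched 2012 machine I'd expect HEVC to come back "no", which is exactly the gap the fans end up filling.

```c
/* Hedged sketch: ask VideoToolbox whether hardware decoders exist.
 * Build on macOS: clang vtcheck.c -framework VideoToolbox -framework CoreMedia
 * ("vtcheck.c" is a made-up file name.) */
#include <stdio.h>
#include <VideoToolbox/VideoToolbox.h>

int main(void) {
    /* VTIsHardwareDecodeSupported() reports whether the OS has a HW
     * decoder for the given codec (available macOS 10.13+). */
    Boolean h264 = VTIsHardwareDecodeSupported(kCMVideoCodecType_H264);
    Boolean hevc = VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC);
    printf("HW H.264 decode: %s\n", h264 ? "yes" : "no");
    printf("HW HEVC decode:  %s\n", hevc ? "yes" : "no");
    return 0;
}
```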

So how does new Intel or AMD kit figure into Apple's bag of tricks for the total package? We should assume Apple is no longer optimizing for the accelerator features on IA. But if Geekbench is all you have to go by, you might not notice.

In late 2023, with 14th gen, is it now oranges and Apples?
 
In late 2023, with 14th gen, is it now oranges and Apples?
Can scores be compared across architectures? As far as I know they cannot be compared across major versions of Geekbench.

I hope the 14th gen doesn't get yet another new socket.
This is a minor refresh of Raptor Lake on the same socket, so @ori69 could just drop a brand new i7-14700KF into an existing hack.
After that, Meteor Lake does not get a socket at all: it will be soldered onto mobile motherboards (and perhaps some all-in-one desktops). And then Lunar Lake will come to desktops with a new LGA1800 socket, and a new numbering scheme as well, because Intel has grown tired of the "generation" game, or just finds that it is not confusing enough for customers.
 
Can scores be compared across architectures? As far as I know they cannot be compared across major versions of Geekbench.

Yes. Geekbench is a fair way to compare architectures in the sense that it uses synthetic versions of real-world workloads typical of a modern OS and apps. What's not clear to me (simply because I've never bothered to investigate) is how accelerator HW units are or aren't utilized for optimizations depending on the architecture and platform.

My off-the-cuff take is that HW accelerators are applied to the extent that the compilers used to build Geekbench emit optimizations for those units. IOW, if the unit can be engaged by a GCC option, assume it is used.
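
To make that concrete, here's a tiny illustration of the "engaged by a compiler option" idea (my own toy, not Geekbench source): the same scalar C, built with different flags, will or won't engage SIMD units. Nothing in the source names an accelerator; the compiler decides.

```c
/* Hedged illustration (not Geekbench code): "dot.c" is a made-up file name.
 *
 *   gcc -O3 -march=x86-64-v3 -ffast-math -c dot.c   # may emit AVX2/FMA on x86
 *   gcc -O3 -march=armv8-a   -ffast-math -c dot.c   # may emit NEON on ARM64
 *
 * (-ffast-math matters here: GCC won't reorder a float reduction
 * without permission, so without it the loop may stay scalar.) */
#include <stddef.h>

float dot(const float *a, const float *b, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; i++)   /* auto-vectorizable reduction loop */
        acc += a[i] * b[i];
    return acc;
}
```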

I take Geekbench's tests at face value: the scope of the "CPU" test is the CPU and its built-in units; "Compute" is about graphics acceleration and compute resources within the GPU as expressed by OpenCL and Metal. This doesn't include measurement of video codecs, and there's very limited support for neural nets, etc. (Notice the GPU aftermarket has split between gaming and AI, while "workstation graphics" is slumping as a category.)
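
As an aside on what "Compute" even sees: the sketch below (my illustration, not Geekbench's code) enumerates the OpenCL devices a Compute-style benchmark would target. OpenCL is deprecated on macOS but still functional.

```c
/* Hedged sketch: list OpenCL devices, i.e. the kind of inventory a
 * "Compute" benchmark starts from. "listcl.c" is a made-up name.
 * macOS: clang listcl.c -framework OpenCL    Linux: gcc listcl.c -lOpenCL */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void) {
    cl_platform_id plats[4];
    cl_uint nplat = 0;
    clGetPlatformIDs(4, plats, &nplat);
    for (cl_uint p = 0; p < nplat; p++) {
        cl_device_id devs[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, 8, devs, &ndev) != CL_SUCCESS)
            continue;
        for (cl_uint d = 0; d < ndev; d++) {
            char name[256] = {0};
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("device: %s\n", name);  /* e.g. the GPU a Compute score measures */
        }
    }
    return 0;
}
```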

But there is a gap between the full system harmony of the OS and the scope of Geekbench. While Geekbench is a fair way to compare architectures within the above scope, since the T1 on Intel Macs, Apple has been gradually shifting its architecture toward its own secretive, product-focused internal modules with internally defined expectations for performance. I'd go so far as to say Apple's partners and customers have not been looking ahead. What's in this gap can be important to Apple, even though Geekbench can't see it.

My opinion is that Apple has been going much farther than Windows or Linux to establish new expectations about what's considered normal performance within this gap. (Google should be counted too, but their HW products are never ground-breaking.)

When looking at new x64 CPUs, what you see is defined by gaming and media content creation. Today, most of the world is made happy by consoles, so PC gaming is a fat niche. WRT media, Microsoft has never been able to act as a vertical mediator of total system harmony. PCs for media creation are not measured by any complete user experience, but by the performance of a particular standard-bearer app.

Apple takes a more holistic design approach; they always have. Apple has been a system-level standard-bearer for media content creation since the Mac's inception (notwithstanding the interesting but passé Amiga + Video Toaster).

With the iPhone 15 Pro, they pack an entire cinema-quality 4K capture and editing suite into a battery-powered device that also happens to be (to quote Jobs in 2007) "the best MP3 player, a great phone, and an internet communications device." And Apple sells it for half the cost of the original Mac in non-adjusted dollars, while the HW is (literally and conservatively) at least 1,000,000 times more powerful.

My post of the phone benchmark anchors my opinion that excitement for 14th gen should be tempered by the awareness that Intel and AMD no longer define the user experience: gaming has moved to the consoles, Office is boring, the web is owned by Google, and what does Microsoft do again? IMO Windows and Linux have been doing nothing to establish a strong new center of user expectations. The IA desktop has been eclipsed by ARM designs for mobile.

Conclusion: I think it's remarkable that, as of today, the single most important old-school measure of PC desktop performance, Intel x64 single-core, is overtaken on the very first day of release of Intel's latest (14th) generation by a previously released Apple phone.
 
Yes. Geekbench is a fair way to compare architectures in the sense that it uses synthetic versions of real-world workloads typical of a modern OS and apps.

Geekbench may be the benchmark with the largest database, but it certainly cannot be called fair. The results are far from reality.

Gekkbench6Processor.png


Logic Bench and Cinebench load every core all the time.
For example, the M2 Ultra has a very high score in Geekbench 6, while in Cinebench and the Logic Benchmark it comes out weaker than a 2019 Intel Xeon.

LogicBench457.jpg
 
Capture.png


Still worth it, 12700K!

@ori69 How do you run the Logic Bench?
 
13900K with overclock, P-cores only enabled, hyper-threading off. With hyper-threading on, multi-core is around 16,500.

Screen Shot 2023-10-20 at 6.38.03 AM.png
 
For example, the M2 Ultra has a very high score in Geekbench 6, while in Cinebench and the Logic Benchmark it comes out weaker than a 2019 Intel Xeon.

The question was "is it fair to compare architectures with Geekbench?" and my response was "yes, it is fair to compare." The benchmark is explicitly designed to compare a range of SW workloads typical of common apps.

Geekbench is a far more relevant way to size up architectures for general use than Cinebench R23, which is a one-note-tune of a benchmark, highly dependent on CPU architecture-specific optimization.

Geekbench was developed to counter the singlemindedness of PC benchmarks.

Cinebench is an application-specific benchmark, typical of the Windows PC world. It's precisely an example of how the Windows PC scene is defined by specific apps.

Cinebench R23 does not involve the GPU at all. And when R23 was released, the AppleSi Macs were brand new, while x86 had decades of tuning. So R23 is a less fair comparison of architectures than Geekbench because its workload is so narrow and its engine predates the AppleSi scene.

This is the true point of a benchmark: to provide a point of comparison for suitability for a purpose.

If you are purchasing workstations for Cinema 4D rendering, you want to know what R23 tells you about the M2 Ultra, because you want the best performance for that one workload. That's fair. If you are looking for a workstation for Apple Motion production, Cinebench is not fair!

Last month (9/23), Maxon announced Cinebench 2024, which is still narrowly focused but now includes GPU acceleration. Will this help or hurt the M2 Ultra's showing? Compilers have matured for AppleSi since Cinebench R23, so the results are less likely to be biased by inexperience with a new architecture. But PCs can be fitted with the hugest GPUs, so maybe the PC will always win?

Geekbench doesn't stress-test a system at all, while Cinebench does, exposing thermal/power limits. Case in point: early AppleSi MacBooks made a weak showing on R23 compared to GB because they're passively cooled. With a render workload, they heat up and slow down. The benchmark fairly exposes this weakness of the design, but maybe unfairly impugns the architecture. For example, x86 laptops have similar thermal/power limits but are usually actively cooled, because x86 mobile is a notorious power hog. In other words, what makes the M1 weak for Cinebench compared to an x86 portable is desirable for portability: passive cooling is only possible for low-power designs, and low power is good for battery life. In engineering this is known as "no free lunch."
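
If you want to watch the burst-vs-sustained gap yourself, here's a crude toy of mine (not Cinebench, and unscientific): it prints work completed per wall-clock second, and on a passively cooled machine the number drifts down as the chassis heats up. A real stress test would load every core; this is single-threaded on purpose, just to show the shape of the curve.

```c
/* Hedged toy: observe thermal/power throttling as declining throughput.
 * Build: cc throttle.c ("throttle.c" is a made-up name). */
#include <stdio.h>
#include <time.h>

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    volatile double x = 1.0;  /* volatile keeps the compiler from deleting the loop */
    for (int second = 0; second < 120; second++) {   /* run ~2 minutes */
        double t0 = now_sec();
        long iters = 0;
        while (now_sec() - t0 < 1.0) {
            for (int i = 0; i < 100000; i++)
                x = x * 0.9999999 + 0.0000001;       /* bounded FP busywork */
            iters += 100000;
        }
        printf("t=%3ds  %ld ops/s\n", second, iters);
    }
    return 0;
}
```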

Now I'm back to my point about the Geekbench iPhone 15 benchmark: the total package has the amazing quality of winning bragging rights on general-workload single-core while actually supporting a 4K production studio that fits in your pocket. And the same architecture is applied from the iPhone through Apple's top-end workstation, which provides a stable continuum for app development. This is what Intel has dreamed about for decades, but PCs don't achieve it because the market is so diffuse and horizontal, while MSFT lacks the focus to organize consumer products vertically (plus they had a guy named Steve Ballmer guiding their direction back when the time was ripe to gain that focus).
 
This post is an outlier in this thread, but I'm posting here to provide an example of what, a couple of posts ago, I termed a performance "gap" between a benchmark program's measure of a system and significant underlying features of an architecture.

This is important in context because AppleSi continues its inexorable shift away from the designs that made hackintosh possible.

Windows enables BitLocker on NVMe by default, but uses the CPU to do the crypto processing.


In the history of the PC, Intel considered details like this to be a significant advantage of its powerful CPUs: why pay for HW in the peripheral device when you can reuse your investment in the CPU? IOW, general-purpose IA with cycles to spare was a cost savings for a system integrator.

Times have changed. Silicon got orders of magnitude cheaper. Now running IO for a drive is a distraction for the valuable CPU and slows the peripheral device.
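
To put a rough number on "the CPU does the crypto," here's a hedged sketch using OpenSSL's EVP API to time AES-XTS, the cipher mode BitLocker uses for drives (I use the 256-bit variant here; BitLocker defaults to 128). It assumes OpenSSL is installed; BitLocker itself isn't involved, this just shows the CPU-side cost.

```c
/* Hedged sketch: time AES-256-XTS on one CPU core via OpenSSL's EVP API.
 * Build: gcc xts_bench.c -lcrypto ("xts_bench.c" is a made-up name). */
#include <stdio.h>
#include <time.h>
#include <openssl/evp.h>

int main(void) {
    unsigned char key[64], iv[16] = {0};   /* XTS takes a double-length key */
    for (int i = 0; i < 64; i++) key[i] = (unsigned char)i; /* halves must differ */

    static unsigned char in[1 << 20];           /* one 1 MiB "data unit" */
    static unsigned char out[(1 << 20) + 16];
    int outl = 0;
    long units = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    clock_t t0 = clock();
    while ((double)(clock() - t0) / CLOCKS_PER_SEC < 2.0) {
        /* OpenSSL's XTS doesn't stream: re-init per data unit, much like
         * disk encryption re-tweaks per sector. */
        EVP_EncryptInit_ex(ctx, EVP_aes_256_xts(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, out, &outl, in, sizeof(in));
        units++;
    }
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
    printf("~%.0f MiB/s AES-256-XTS on this CPU\n", units / secs);
    EVP_CIPHER_CTX_free(ctx);
    return 0;
}
```

With AES-NI this reports multiple GiB/s, which sounds free until you remember every byte also crosses the CPU and memory bus, while a drive-side engine spends none of those host cycles.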

Back in the days of the T2 co-processor, Apple offloaded drive encryption into it, along with media codecs, a crypto key store that supports a preboot environment, etc.

Today, the Ultra has a separate NVMe controller that can do this (and maybe other stuff), which on Intel Macs was offloaded to the T2. Readers here will recall threads of agitation about how Apple SSDs are less modular than PCs' and questions about why Apple chose that design.

So here we find part of the explanation: Apple makes a different performance trade-off. Obsoleting older HW ensures a baseline of total system performance for their SW products.

Windows is more backwards compatible, but can't lift old boats to new levels of customer expectation for built-in media functions.

We don't have a benchmark that can account for these factors.

Are these factors even important to any given individual? Maybe not. But they're very important to a company that intends to evolve consumer products and not disappoint its market by shining-on incompatible old HW.

This places the hackintosh user in a bind:

- If your excitement is about an old-school CPU benchmark bump, what's so important to you about the Mac?

- If you value Apple's evolution of the total experience, what does a CPU benchmark number have to do with it?

Just liking what you like is a fine reason for a hack. But if this is your view, then benchmarks are irrelevant.

At 14th gen we may have to concede that there's no longer any clear connection between the leading edge of PC silicon and the Apple experience. No benchmark will capture this new era.

The right place to track PC generational advances is in Windows and Linux.
 
Hello there!

I've built another hackintosh with the same motherboard and CPU as the one I have now. Everything works fine with the same EFI folder, but the CPU in the new one, a 13900KF, refuses to go above 3 GHz in macOS (same BIOS version too).
It does work fine in Windows, and also in my main hackintosh with the exact same file. Only the GPU changes.

What would be the reason for this? It drives me nuts.
 

Attachments

  • Screenshot 2023-11-14 at 20.04.37.png (172.1 KB)
  • EFI CLEAN DSP 13900K.zip (7.8 MB)
Everything works fine with the same EFI folder, but the CPU in the new one, a 13900KF, refuses to go above 3 GHz in macOS.
What speed did you set it to in the BIOS?
 