
Post macOS/OS X Geekbench Benchmarks

Would you check Activity Monitor while your system is idle? Enable View > All Processes, and under the CPU tab sort by % CPU so the heaviest users show at the top.

Do you find kernel_task continuously using about 30-40% on an otherwise idle system?
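
If Terminal is handier than a screenshot, here's a minimal sketch of the same check (assuming macOS's stock BSD ps; run it while the machine is idle):

```python
# Minimal sketch: print the top CPU consumers, roughly Activity Monitor's
# CPU tab sorted by % CPU. Assumes macOS's stock BSD `ps` is on the PATH.
import subprocess

out = subprocess.run(
    ["ps", "-Ao", "%cpu,comm", "-r"],  # all processes; -r sorts by CPU, highest first
    capture_output=True, text=True, check=True,
)
for line in out.stdout.splitlines()[:15]:  # header plus the top processes
    print(line)
```

If kernel_task sits near the top of that list at 30-40% while idle, you're seeing what I'm seeing.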

Thx
[Attachment: Screen Shot 2022-05-21 at 5.23.08 AM.png]
 
Thx. When booted via OpenCore from the same drive as macOS, I see kernel_task using CPU at the same level as your WindowServer. I'm wondering if anyone else sees something similar?

When I got this to stop, my 10900K's single-core GB5 score went from 1320 to 1460.

In your screenshot, what do you think accounts for WindowServer using half a core, and do you think it's doing that while you run the benchmark?
 
Have a look at the A13 gaming performance in the $329 iPad 9.

 

My daily driver.

I upgraded it to 32 GB and a GTX 680, which shows GB5 Metal scores around 8000.

Compared to the 2008 Mac Pro, the 10900K plus W5700 is:

4x single-core
6x multi-core
9x graphics
4-10x drive I/O (NVMe vs SATA II)

But somehow it -feels- only about 2x faster overall. In other words, it's just snappier.

On a high note, code optimizations give ffmpeg a literal 10x media-processing speedup, so that's game-changing.

But overall it's hit-or-miss which programs see a glorious benefit, and fewer do than I hoped.

I think Apple's accomplishments in mobile and in the total system picture really put Intel x86 on the spot as a regressive performer from a company that keeps reaching for brute force to protect its lock on the market instead of innovating.

That $329 iPad review demonstrates a shocking level of performance at a price that's on the edge of free. Srsly, that iPad costs 35% less than this 10900K alone.

The CPU cooler alone costs 1/3 as much as that iPad!

Although, if I bought 2 TB of storage from Apple, my iPad would cost the same as the 10900K kit :)

We can't win!

Looking forward to further Apple announcements.
 
GB5 result from the Ryzen 7 5700X on my B550M motherboard; for the difference, see the A320 motherboard result in post #699. Multi-core is up, but single-core is down.
[Attachment: Screen Shot 2022-05-28 at 11.12.12 AM.png]
 
[Attachment: ED0CB554-7742-48A1-ADCE-0F4A2B73CDBB.jpeg]


8 cores, 3.2 GHz...

In a manila envelope that includes the same high-performance NVMe storage as the desktop... PCIe x4 out... plus a keyboard and a great display.

I'm at a loss about desktops these days; they're looking clunky.

A lazy search reported there's no "turbo" in the M1. Assuming this is true, if it could be overclocked to 5 GHz like an Intel/AMD desktop, these scores would scale to roughly 2700 / 11800.
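
Here's the arithmetic behind those numbers, as a sketch. It assumes perfectly linear frequency scaling (optimistic; memory and thermals would eat into it), and the 1730 / 7550 baseline figures are my assumed approximate M1 Air GB5 scores:

```python
# Sketch of the scaling estimate above. Assumes GB5 scores move linearly
# with clock speed, which real silicon wouldn't quite deliver.
base_clock_ghz = 3.2                  # M1 performance-core clock
oc_clock_ghz = 5.0                    # hypothetical desktop-style overclock
base_single, base_multi = 1730, 7550  # assumed approximate M1 Air GB5 scores

scale = oc_clock_ghz / base_clock_ghz  # 1.5625
print(round(base_single * scale))      # ~2700 single-core
print(round(base_multi * scale))       # ~11800 multi-core
```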

Could a single-core "turbo" fit into the Air's power envelope? Apple might have decided it serves no point, because there's no way to express the gain: it just makes battery life worse without making anything a customer sees work better. There are no game framerate wars, which are the only way PC users know how to compare the incremental single-core generational gains that Intel and AMD offer.

To me, from every direction I've looked, AppleSi makes more sense than x86. I think it's generally being underrated rather than hyped.
Then Apple released the Ultra, which has me scratching my head about what Intel and AMD have been doing lately, because AppleSi doesn't just add cores; it uses memory in a way that greatly opens doors to multiprocessor scaling, doors that Intel and AMD choose to avoid. Idk. There are always trade-offs, and Apple's trade-offs make more sense outside of the PC hype market. It's like Intel and Microsoft don't care about advancing PCs.

As to upgradability, what do I miss out on these days with Apple?

I built a phat 10th-gen i9 a year ago. It was a gift, and I spared no expense on the kit. A year later it was outdated according to Intel, which released two full generations in a year and changed the whole board with the second. So much for upgradability.

At the same time, Apple released a 16-inch laptop with better performance than my top-end OC kit, at the same price: $3500.

Ok, my desktop has another 64 GB of RAM, but I just ladled RAM into my build because I could; it's not needed. I got a huge case for spinners, but those could be attached over USB instead of SATA.

Unlike 15 years ago, the kit is generally up to the tasks that mere mortals perform. Today, the degree of refinement and the interdependencies between subsystems mean there's no reason to expect an upgrade path to higher performance in key subsystems, even on "modular" desktops. If you know you need a lot of RAM, you can order it in your build.

And if you don't know whether you need it, then what's the real problem? The I-can-upgrade-later-if-I-need-to mindset is passé; all it says is that you don't know what you're doing, which is a very Y2K mindset.

Today, nobody keeps a door open for some revolution in HW capabilities that's just around the corner, because the revolution already happened. 4K screens are about as good as people can actually see. Full 3D already happened and is available in a $300 Oculus, which many people think sucks because it gives you a headache, plus you have to wear silly goggles.

The entire world is interconnected at the granularity of "sand" (quoting Intel), and now the entire history of human discourse has been digitized, stored, and made available in your pocket. But when you go to access it, at the very last moment, standing on the shoulders of the giants of every previous generation, all the heavy lifting done, worldwide search engines guiding the retrieval, you click on the desired knowledge and see "access denied, please contact your institution." Or an ad for Walmart.

Today computing is about refinement to tasks. The HW has been fully delivered.

We've already gone through a grotesque tulip bubble of a DeFi digital currency system that's paradoxically one of the most centrally organized social constructs ever created, wastes more energy than a vast nation to do its ledger updates using computations intentionally designed to asymptotically approach incalculability, and is by definition completely unsuitable for commerce. The high marks of this currency are that its incomprehensibility is a point of pride and that it's useless except for trading in other incomprehensible digital artifacts, which of themselves can never contain nor express real value.

Thanks to the HW, cars can now be automatically crashed in ways heretofore unimaginable. Your patient records are in a diffuse global network but still not accessible to the next doctor you meet. But work on hot-plugging your brain is well underway! Will your mind be connected by USB5? Will A.I. help you select component options for your next PC?

Research shows that if blindfolded and instructed to go in a straight line, people inevitably walk in circles.

Loops.

It's like the safest way to go is back where you started, around and around.
 
I conducted an experiment over the weekend with the brand-new Ryzen 5700X. First I benched it on the A320I-K and noticed that during a 10-minute Cinebench run the CPU wouldn't go above 66 W. Is it capped? To check, I put the 5700X onto the B550, updated the BIOS to AMD AGESA V2 1.2.0.7, and ran another bench. The CPU wouldn't go above 64 W on a 10-minute run. The result: single-core performance seems better on the A320 than the B550, but multi-core was better on the B550.
 