
Apple Announces M1 Ultra CPU, Mac Studio and Studio Display

Parallels' one-time payment ("not subscription") license limits the HDD speed to 60MB/s.

This was something I didn't realise. I've had licenses for Parallels from way back at the beginning but stopped with v.13. I only ever bought 1-time licenses or upgrades, never the subscription. Sad to learn this.

They lost my custom when they tried their hardest not to re-activate my install when I built a new PC running High Sierra. I didn't need a newer version but they persisted in suggesting I pay again. I resisted - and it took some doing - but finally they re-activated my copy.

Back on topic ... They now seem to suggest their software has been updated to work with Apple Silicon:

[Attachment: P1.jpg]

That is only half true: when hosted on an ARM machine, Parallels can't run an Intel client.

If you need to run a Windows app on AS then a more useful utility is CrossOver by CodeWeavers. Okay, it can't run the same range of stuff a VM can, but it's darn good. I've run 16-bit, 32-bit and 64-bit Windows apps on Apple Silicon with it.

o_O
 
I can't say enough bad things about Parallels' business model, akin to Adobe's. But you know whether you need them, and they do everything they can to make sure you know. The whole idea of Windows on Mac is a goofy product zone that represents the needs of Microsoft and Apple, not computer users.

Where England colonized India and Africa, Microsoft and Apple colonize the "desktop". We are all in a position of serfdom, paying rent to these owners of carefully curated supply chains. But the world truly has advanced when what used to be the crushing of your feudal livelihood by an empire waging war has been replaced by the crushing of your email store or a memo by MSFT Office, or of your CD collection by iTunes.

I've watched in amazement as movies once bought for special occasions in theaters were re-channeled to be re-bought on Cable, Beta, VHS, Laserdisc, DVD, HD-DVD, Blu-ray and streaming, and ultimately the artform converted into all-too-commonly mindless ongoing series designed to distract you from noticing your credit card is connected to an auto-billing siphon. That siphon, long since perfected as an extraction device, has caused the industry to lose interest in producing "content", a la Netflix losing $50B in street value in a day because 200,000 people let their accounts expire. Blah blah
 
Apple Silicon has a clear advantage in power consumption, but that's not to say they aren't powerful. The M1 SoCs run circles around their Intel counterparts in FCPX and, yes, they do so while using much less power. How is this a disadvantage?
M1 integrates specialized accelerators for video creation/processing so it makes sense that it is faster than a generalized CPU. It was basically designed for this use case.

When I built my current hack in 2017, I put in 64GB and I have not had the need to upgrade.
You maxed out the DIMM slots and max supported RAM size from the get-go, so you wouldn't be able to upgrade even if you wanted to. The usual approach is to fill out half of the DIMM slots (i.e. 32GB) and then add more in the future if/when the need arises. That's what I've done to all my systems over the years.

On the other hand, soldered or non-upgradeable SSD is a big no-no in my opinion, taking into account that flash memory degrades with every write. Theoretically you can kill an SSD with constant small writes in a matter of weeks (e.g. some malware could do this) and then you have to replace the whole machine.
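To put rough numbers on that (a back-of-envelope sketch; the 300TB TBW rating and the 100MB/s write rate are illustrative assumptions, not the specs of any particular drive):

```python
# Rough time-to-exhaustion model for SSD write endurance.
# The figures used below are illustrative assumptions: 300 TB TBW is
# in the ballpark for a mainstream 500GB TLC drive, and 100 MB/s is a
# hypothetical sustained small-write load (e.g. malware or a runaway log).

def days_to_exhaust(tbw_terabytes: float, write_rate_mb_per_s: float) -> float:
    """Days of continuous writing needed to reach a drive's TBW rating."""
    seconds = (tbw_terabytes * 1e6) / write_rate_mb_per_s  # TB -> MB
    return seconds / 86_400  # seconds -> days

# Sustained writes at 100 MB/s against a 300 TB rating:
print(f"{days_to_exhaust(300, 100):.0f} days")  # ≈ 35 days, i.e. ~5 weeks
```

So "a matter of weeks" is plausible under a deliberately hostile write pattern, even with a perfectly healthy drive.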
 

What's wrong with specialized accelerators?

GPUs are specialized accelerators for graphics. They were basically designed for this use case.

My Z390 has 4 DIMM slots. 128GB is max RAM capacity. I loaded it up with four 16GB DIMMs and never had the need to touch the RAM after completing the build.

I have never heard of an SSD that died in a week due to lots of writes. That's got to be some garbage NAND. My system stays on 24/7. My Western Digital SN750 has been in use for approx a year and my HP EX920 has been in use for well over 2 years. I use my HP EX920 as a scratch disk which means it gets written to a lot.

[Attachment: Screen Shot 2022-04-30 at 7.07.27 AM.png]
[Attachment: Screen Shot 2022-04-30 at 7.06.49 AM.png]


Besides, on the Mac Studios, the SSDs can be replaced if/when the SSD dies.


[Attachment: Screen Shot 2022-04-30 at 7.16.06 AM.png]


The following article is over 7 years old and, even back then, it took A LOT of writes to kill SSDs. In the case of the Samsung 840 Pro, it survived 2.4PB of data written. It took over 18 months of constant writes to kill it.
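As a sanity check on those figures (simple arithmetic, assuming ~30-day months):

```python
# Average sustained write rate implied by 2.4 PB written over ~18 months.
petabytes_written = 2.4
days = 18 * 30  # roughly 540 days of continuous testing
mb_written = petabytes_written * 1e9  # PB -> MB
rate_mb_per_s = mb_written / (days * 86_400)
print(f"≈ {rate_mb_per_s:.0f} MB/s, around the clock")  # ≈ 51 MB/s
```

That's a write rate no normal desktop workload comes close to sustaining for a year and a half.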

 
First of all, pastrychef, I'm not criticizing you or your decisions. I have nothing but respect for you.

What's wrong with specialized accelerators?
Nothing wrong with them; I was remarking upon the fact that performance in FCPX isn't really a fair "apples to apples" comparison to draw general conclusions about performance.

GPUs are specialized accelerators for graphics. It was basically designed for this use case.
Indeed they are, and if you can use them, they are great. The problem with bundling big GPUs (that take most of the die area on an SoC) is that even if you don't need them (hence specialized), you still have to pay for them.

If I need a lot of cores to compile code in Xcode and go for the M1 Ultra, I'll be paying a considerable premium for those massive GPUs (+ all the other video processing accelerators) that I'm very unlikely to ever put to use.

I chuckled at the Apple keynote when they described the Mac Studio as a modular computer. Perhaps "modular" means something different to Apple.

My Z390 has 4 DIMM slots. 128GB is max RAM capacity. I loaded it up with four 16GB DIMMs and never had the need to touch the RAM after completing the build.
You mentioned your 2017 hack (Z370), which has a 64GB max memory limit. I was referring to that.

I have never heard of an SSD that died in a week due to lots of writes.
Chia crypto mining (which behaves like a storage drive virus) can allegedly wreck an SSD in as little as 6 weeks.

https://www.tomsguide.com/news/chia-crypto-mining-could-kill-your-ssd-in-40-days-heres-why

Imagine if this happened to the soldered SSD of your $6K+ MacBook Pro.

In the case of the Samsung 840 Pro…
The 840 Pro used MLC NAND. Almost all current commercial drives use TLC, which has significantly reduced endurance, and some use QLC (QLC is even worse in terms of endurance).

Under light, everyday use, TLC is fine, but with deliberate effort, it is possible to kill an SSD in a short amount of time. Wasn't there a report some time back about some macOS file system bug (if I remember correctly) that wrote excessively on the drives of the first M1 Macs?
 

Specialized encoders/decoders are being included in an increasing number of GPUs. Even low-end Arm SoCs are starting to include decoders that enable 8K video playback (i.e. Rockchip RK3588 or Amlogic A311D2), something that even my 6600 XT struggles to handle. I feel that these accelerators can only benefit users. If, for example, my 6600 XT can't play back a video that the SoC of an SBC can, then the SoCs of those SBCs are better at that particular task.

Yes, there are no options for those who just want the extra CPU cores and don't want to pay for all the GPU cores. On the other hand, packaging all these components (CPU, GPU, RAM) together in a single package reduces cost. I would imagine that if all these components were not packaged together in a single SoC, the Mac Studios would be much more expensive.

The same is true of my Z370. I specced it with 64GB and never needed more, which is why I went with 64GB again on the Z390. If I felt I needed more than 64GB, I would not have done a Z370 build. RAM limitation was one of the primary reasons why I never considered purchasing an M1 Mac mini.

There's no mention of the brand of SSD that miner used. Following the links just led me to some Chinese page which also didn't mention what kind of SSD was used. It just said that it died after approx 256TB had been written. That seems extremely low. Maybe he'd have had better luck if he used a Mac...

Unfortunately, SLC and MLC SSDs are pretty much a relic of the past and I don't think any modern consumer level SSDs are made with them anymore. I couldn't find the endurance specs for the Samsung 840 Pro... Looking at the specs of the Samsung 980 Pro (a TLC SSD), I found that it's rated for 1.2PB of writes.
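For context on that rating (the TBW numbers below are Samsung's published endurance ratings for the 980 Pro line, which scale with capacity; the 100 GB/day write load is my own assumption for illustration):

```python
# Samsung's published endurance ratings (TBW, in TB) for the 980 Pro line.
tbw_ratings = {"250GB": 150, "500GB": 300, "1TB": 600, "2TB": 1200}

WRITES_GB_PER_DAY = 100  # assumed fairly heavy daily write load

for model, tbw in tbw_ratings.items():
    years = (tbw * 1000) / WRITES_GB_PER_DAY / 365  # TB -> GB, days -> years
    print(f"{model}: ~{years:.0f} years at {WRITES_GB_PER_DAY} GB/day")
```

Even the smallest model outlasts a typical upgrade cycle at that write rate; the headline 1.2PB figure applies to the 2TB model.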

[Attachment: Screen Shot 2022-04-30 at 12.06.39 PM.png]


Yes, I remember seeing reports of excessive writes on early M1 MacBooks. But even with those, I didn't see any widespread SSD failures. In fact, for all the years that Apple has been soldering SSDs to the main logic boards, I haven't seen any widespread complaints of failed SSDs.

Again, on the Mac Studios, SSD modules can be replaced if/when they fail.
 
AppleSi Ultra results show that Apple has made great design gains against Intel architecture that are reframing price/performance. There's no report yet of any area where M1 is weak against the competition, and many reports where it is very strong. In the case of the fluid-dynamics application reviewed in the post I offered (above in thread), the M1 Ultra doesn't just compete strongly, it reframes performance expectations. Apple is known for top-performing Intel designs with the Mac Pro — Apple never let Intel down with respect to design wins — but it appears that Intel has failed to deliver meaningful advances compared to what Apple designers determined was possible, so Apple went their own way and the results speak for themselves. AppleSi came out of the gate teaching Intel about new potentials, and every step of Apple's approach goes further, to the point of totally eclipsing Apple's Intel designs. And Apple just got started!

Intel, the de facto standard and at vast market scale, has nothing pulling demand on the consumer desktop except games, so phat GPUs and hideously inefficient overclocks rule because it's all about hot-rod bragging rights. Being the standard means the customer takes what you give and gets used to it.

Over the long haul, Apple, being vertically integrated with more value to add on top, is in a position to study specific vectors within its market and make architecture advances that reap gains that Intel leaves on the table because it is the standard bearer. Intel could see this coming from a long way off.

Intel invited Apple into its design world as a prestige customer, as can be seen with Thunderbolt and the Mac Pros. It appears Apple did not waste that time in the relationship. Apple probably had a few challenging spats too over defects.

Apple, unlike Intel — two very different companies — has been through total architecture upheaval before, so they would be motivated to learn all they can. Apple's biggest success, the phone, was never about Intel in the first place.

As to points about modularity or fitness for a purpose, this is a red herring.

Apple never marketed its Intel-based products as better PCs. It was always Mac vs PC. It's Apple's more knowledgeable customers that make the comparison: "Oh, you know, inside it's like a PC." Apple never pitches this. The PC has always been defined by Microsoft. So there's no Apples-to-Apples (garr) view of this stuff.

Hackintoshers seem to feel a bit of confusion about this because they've just assumed that Apple has been withholding from them the egalitarian wealth of the PC market for the purposes of claiming a luxury tax. But we can see that, in fact, Intel got lost in its assumption that it is the standard.

Apple is now showing they know as much as the entire PC industry about computer architecture and using this knowledge to score points with bold design, which they happen to be known for.

As to how FCPX just plays to their strengths?

That's the whole damn point!
 
Specialized encoders/decoders are being included in an increasing number of GPUs.
I'm mostly referring to the ProRes accelerators (basically Apple integrated the Afterburner card into the SoC), which isn't something the average user would need. Then again, I think that the Mac Studio is mostly targeted at video editors/creators, since they are the ones that can take advantage of the GPUs and accelerators (although the M1 Ultra might be overkill even for them).

I can't think of other uses for those GPUs. 3D rendering would be faster (most likely cheaper too) on Windows/Linux with a dedicated (and upgradeable) GPU, and you can't really use them for serious gaming, nor for machine learning applications (at least not yet).

Looking at the specs of the Samsung 980 Pro (a TLC SSD), I found that it's rated for 1.2PB of writes.
That's for the 2TB model. The 500GB model has a 300TB endurance rating (the 250GB one is at 150TB), which is close to the 256TB that it took to kill that other drive. In any case, this is a rather extreme use case that the average user wouldn't encounter.
 
I thought this CFD benchmark was very interesting and decided to investigate a little.

First of all, the USM3D CFD software used here isn't widely available to the public. To download it, you have to register and submit a request that needs NASA's approval.

CFD computation in general is a memory-bound workload that quickly saturates memory bandwidth, which of course the M1 Ultra has in spades in order to feed its massive GPUs.

The M1 Max has a total of 409GB/s of memory bandwidth for the GPU, but only 224GB/s (204GB/s for the Pro) is available to the CPU performance cores. You can read more here.

So the M1 Ultra has 448GB/s of aggregate bandwidth for its 16 performance cores (those were used in the test), compared to the Mac Pro's ~85GB/s for its 28 cores, which explains this huge performance delta. As a side note, CFD is also well suited for GPU acceleration and NASA notes 4x-8x faster compute using Nvidia GPUs. The USM3D software used in the test above doesn't support GPUs.
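The memory-bound character is easy to see with a STREAM-style triad kernel, which does almost no arithmetic per byte moved (a rough sketch; the absolute number depends entirely on the machine, and a single NumPy thread won't saturate a high-end memory system):

```python
import time
import numpy as np

# STREAM-style triad (a = 3*c + b): almost no arithmetic per byte moved,
# so throughput is limited by memory bandwidth, not the CPU cores.
n = 20_000_000  # three float64 arrays of ~160 MB each; far bigger than cache
b = np.random.rand(n)
c = np.random.rand(n)
a = np.empty(n)

start = time.perf_counter()
np.multiply(c, 3.0, out=a)  # a = 3*c  (read c, write a)
np.add(a, b, out=a)         # a += b   (read a, read b, write a)
elapsed = time.perf_counter() - start

bytes_moved = 5 * n * 8  # five array traversals in total
print(f"effective bandwidth ≈ {bytes_moved / elapsed / 1e9:.1f} GB/s")
```

Add more arithmetic per element and the cores become the bottleneck; keep it as a pure streaming kernel like CFD and the memory system does.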

I also found some additional benchmarks using the OpenFOAM CFD, from this thread:
Apple MacBook Pro 16" (2021, 32 GB, M1 Pro)
# cores - time (s)
1 - 458.33
2 - 257.38
4 - 145.35
6 - 118.88
8 - 98.11

Core i5-12600, DDR5-6000
# cores - time (s)
1 - 399.94
2 - 213.75
4 - 131.87
6 - 107.09

Ryzen 7 5800X3D, 2x8GB DDR4-3800
# cores - time (s)
1 - 304
2 - 188
4 - 135
6 - 124
8 - 122
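A quick calculation over the M1 Pro numbers above shows how sub-linearly this workload scales (efficiency = speedup / cores):

```python
# Speedup and parallel efficiency from the M1 Pro OpenFOAM times above.
m1_pro_times = {1: 458.33, 2: 257.38, 4: 145.35, 6: 118.88, 8: 98.11}

t1 = m1_pro_times[1]
for cores, t in sorted(m1_pro_times.items()):
    speedup = t1 / t
    print(f"{cores} cores: {speedup:.2f}x speedup, {speedup / cores:.0%} efficiency")
```

Efficiency falls from ~89% at 2 cores to ~58% at 8, consistent with the cores contending for a shared memory bandwidth budget.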

Unfortunately, most common workloads can't really take advantage of increased CPU memory bandwidth, showing very modest improvements, if any. For up to ~16 cores, dual-channel DDR4/5 seems to be enough for the vast majority of workloads, with the only real-world exception being 7-Zip compression. You can read more here and here.

So I have to disagree with the article author that CFD computations are a "great real world benchmark". Far from it actually. It's a niche scientific/engineering workload.

As to points about modularity or fitness for a purpose, this is a red-herring.
I don't really understand the point you're trying to make comparing Intel/Microsoft/Apple and PC/Mac/Hackintosh.

What I'm saying is that the Mac Pro is a modular computer and the Mac Studio simply isn't and shouldn't be promoted as one; unless we've changed the definition of "modular".

…they've just assumed that Apple has been withholding from them the egalitarian wealth of the PC market for the purposes of claiming a luxury tax.
I know this has been beaten to death, but I just couldn't help myself:
  • 980 Pro: 500GB -> 2TB = +$180
  • Mac Studio: 500GB -> 2TB = +$600
So like you said: "customer takes what you give and gets used to it".

As to how FCPX just plays to their strengths? That's the whole damn point!
I'll try to reiterate my point. The large performance gains in FCPX are in part due to specialized video accelerators (i.e. ProRes) made specifically for it. Just because FCPX is e.g. 5x faster, it doesn't mean that everything else will also be 5x faster.

To be clear, I'd be a fool to bash Apple's achievements with the M1 SoCs; they are amazingly power efficient and perfect for battery powered devices. The Pro/Max versions are very well suited for video creators. The pricey M1 Ultra is probably overkill for most users.
 