Thanks for the video.
Yeah, the problem I have is different from his, because I actually do multitask and use multiple apps at once, working on high-end niche stuff that tends to get weird at times compared to normal workflows. On a typical day I'd have Photoshop, AE, Illustrator, Maya (and the Pixar apps), DP (if doing A/V or driving things with audio), FCP (or Shake back in the day), mail, a browser, and a handful of obscure utility apps all open. I may also be running X11 stuff and a VM for conversions or other things. So yes, I use After Effects and might start using Premiere, and Adobe removing multi-threaded rendering makes AE a poor fit for tons of cores. But I don't use Audition like he does, which is strictly audio tracks.
I use Digital Performer to write music, mix, and do audio editing. As far as I know, virtual instruments (loaded as AU plug-ins, driven by MIDI) will each use one core per instance, and DP will use multiple CPUs if you have them. I use a bunch of VIs. I'm not sure how effects plug-ins are distributed, but when running dozens of VIs (unless something has changed) I'd expect a performance increase from more cores and multiple CPUs. On my old dual-CPU Mac Pro I'd see near 100% CPU usage when mixing complex projects, or just running a bunch of VIs alongside audio tracks and effects, until the CPU simply couldn't take any more.
Add to that the 3D stuff I do. I've never worked on a non-ECC system, and ECC seems ideal for simulations. If I'm doing particle simulations (which can take a long time when you bake them to disk), the last thing I'd want is memory errors that translate into visual errors in the render, or errors during rendering of non-sim stuff in Arnold or RenderMan. I'll have to contact Autodesk and the Pixar guys and ask them about this. It's possible Autodesk has changed Maya to account for non-ECC systems, or maybe they just recommend ECC for render nodes.
So yeah, it seems that for an Adobe workflow a dual-CPU hyper-threaded system might indeed suck ass compared to a single-CPU Skylake machine, and I'm not sure whether sending AE renders to a farm makes better use of multiple CPUs. RenderMan will suck up every core you throw at it as far as I know. It used to plateau, but from what I read on the forums last week, the latest version takes as many CPUs and cores as you can throw at it. Working in Maya itself, setting up a scene, modeling, or animating, is single-threaded. I used RealFlow for liquid sims for a long time, though I'm not sure I'd get back into it; last I used it, it was optimized for more cores and multiple CPUs.
I'll contact MOTU again to see if I can get some current info about how DP performs relative to clock speed and core count. Aside from simulations and 3D renders, a single higher-clocked processor with GPUs seems to be the ideal desktop setup today. It's just bizarre that there's no ECC option for the most common CPUs, boards, and RAM. I guess they expect those of us who need to do sims and renders to have separate machines. In an ideal world I'd have a sim/render farm, but I can't do that now.
I'm also spoiled by ECC stability from all my years on Apple hardware. I ran my towers into the ground 24/7 and rarely had a system crash until they were long in the tooth and suffering hardware failures. Perhaps things are different on Windows now, but up through Windows 7 I still saw system crashes far too often, or a corrupt registry out of nowhere. I assume (maybe wrongly) that a non-ECC system running Windows would be more likely to crash.
Core counts were much lower the last time I got a machine, so maybe I'm just too stuck in the old mindset.
P.S. sorry for the novel.