NVidia's driver, on the other hand, is more of a monolithic architecture, one that's also available as an online update (the "web drivers"), which strongly suggests the two companies don't collaborate. NVidia develops its OS X drivers as a branch of its Windows drivers and simply passes them on to Apple when a release is due.
Hobbyists may already be aware of an issue with DDC/CI, the interface for controlling a monitor (volume, brightness, and contrast changes) over the video cable: NVidia's implementation behaves differently, interpreting what should be nanoseconds of wait time for the monitor's response as milliseconds. Without a conversion special-cased for NVidia cards, the kernel locks up for hours waiting for a reply.
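To make the mismatch concrete, here's a rough sketch of that special-casing. The helper name and the vendor check are invented for illustration; IOKit's I2C request struct does carry a minimum-reply-delay field, but the real fix lives in the DDC tooling and may differ in detail.

```c
/*
 * Hypothetical sketch of the unit conversion described above. The
 * premise (per the DDC/CI reports) is that NVidia's driver reads the
 * reply-delay field in milliseconds where other vendors expect
 * nanoseconds, so the caller scales the value down for NVidia only.
 */
#include <stdint.h>

#define NSEC_PER_MSEC 1000000ULL

/* Standard PCI vendor IDs. */
enum { VENDOR_NVIDIA = 0x10DE, VENDOR_AMD = 0x1002, VENDOR_INTEL = 0x8086 };

static uint64_t ddc_min_reply_delay(uint16_t vendor_id, uint64_t delay_ns)
{
    /*
     * A 10 ms wait expressed as 10,000,000 ns, misread as milliseconds,
     * becomes roughly 2.8 hours of waiting, hence the apparent kernel
     * lockup. Dividing first means the misreading lands back on the
     * intended duration.
     */
    if (vendor_id == VENDOR_NVIDIA)
        return delay_ns / NSEC_PER_MSEC;
    return delay_ns;
}
```

The ugliness is the point: a single vendor's unit mix-up forces everyone downstream to carry a branch like this forever.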
The separation widens when you get to OpenGL. Apple posts a capability matrix for OS X, letting developers and game designers know which new features are supported by which cards so they can plan accordingly. Starting with Mavericks, Apple supports OpenGL 4.1, which brings with it a raft of changes meant to modernize the language. The matrix says all newer cards (AMD's Radeon 5xxx and up, NVidia's 6xx and up, and Intel's HD 4000 and up) support 4.1, then lists the features available for each.
In at least one case (there are probably others), that little asterisk in the matrix is undeserved. A headline feature of the WWDC 2013 talk on OpenGL was "subroutine uniforms", an esoteric feature that's a big deal to game engine developers: it means the "shader" programs that control how a game looks can be made modular, and switching between them (most complex scenes require at least two passes, through different shaders) becomes incredibly fast.
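For the curious, here's a minimal sketch of what the feature looks like in use, assuming an existing OpenGL 4.1 core context and a program linked from the fragment shader below (plus a matching vertex shader, omitted). The routine names and the `use_pass` helper are made up for illustration.

```c
#include <OpenGL/gl3.h>   /* OS X's core-profile header; use a loader like GLEW elsewhere */

static const GLchar *frag_src =
    "#version 410 core\n"
    "subroutine vec4 ShadeModel(vec3 n);\n"    /* the switchable signature */
    "subroutine uniform ShadeModel shade;\n"   /* the slot the host program sets */
    "subroutine(ShadeModel) vec4 lambert(vec3 n) { return vec4(vec3(max(n.z, 0.0)), 1.0); }\n"
    "subroutine(ShadeModel) vec4 unlit(vec3 n)   { return vec4(1.0); }\n"
    "in vec3 v_normal;\n"
    "out vec4 color;\n"
    "void main() { color = shade(normalize(v_normal)); }\n";

static void use_pass(GLuint prog, const GLchar *routine)
{
    /*
     * Swap shading logic without relinking or rebinding a different
     * program object: look up the routine's index and load it into the
     * subroutine uniform. This one-call switch is the fast path the
     * WWDC talk advertised.
     */
    GLuint idx = glGetSubroutineIndex(prog, GL_FRAGMENT_SHADER, routine);
    glUniformSubroutinesuiv(GL_FRAGMENT_SHADER, 1, &idx);
}

/* Per frame: use_pass(prog, "lambert"); draw(); use_pass(prog, "unlit"); draw(); */
```

Each pass changes one uniform instead of binding a whole new shader program, which is exactly why engine developers care, and why a broken implementation hiding behind a "supported" mark in the matrix stings.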
It's little wonder game developers complain about porting their engines to OS X, but can we really say it's Apple's fault?