
OS X OpenGL 4.1 Broken on NVidia for Almost 2 Years

NVidia is normally a go-to choice for hackintoshing, since it requires essentially no configuration, unlike AMD and Intel with their various framebuffers. But what about the driver architecture behind the kexts we rely on? Are the more complicated AMD/Intel drivers somehow better? Hackintoshers who have followed graphics for a while know that the "tight fit" between AMD/Intel and Apple is the result of long collaboration and exactly the sort of custom design Apple is known for: each supported card or configuration has its own framebuffer, which doesn't necessarily align with any card available to consumers.

NVidia's driver, on the other hand, is a more monolithic architecture, one that's also available as an online update (the "web drivers"), which strongly suggests there is no such collaboration: NVidia builds its OS X drivers as a branch of its Windows drivers and simply passes them on to Apple when a release is due.

Hobbyists may already be aware of issues with DDC/CI, the channel used to control a monitor's volume, brightness, and contrast: NVidia's interface behaves differently, interpreting what should be nanoseconds of wait time for the monitor's response as milliseconds. Without a special conversion just for NVidia cards, the kernel locks up for hours waiting for a reply.
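
To get a sense of the magnitude of that mismatch, here is a back-of-the-envelope calculation (plain C, purely illustrative; the 40 ms figure and the variable names are my own assumptions, not anyone's actual driver code):
Code:
#include <stdio.h>
#include <stdint.h>

int main(void) {
    // A DDC/CI reply window on the order of 40 ms, expressed in nanoseconds.
    uint64_t reply_wait_ns = 40ULL * 1000 * 1000;
    // If the driver treats that nanosecond count as a count of milliseconds...
    uint64_t misread_as_ms = reply_wait_ns;             // 40,000,000 "milliseconds"
    double hours = misread_as_ms / 1000.0 / 3600.0;     // roughly 11 hours of blocking
    printf("Intended wait: 40 ms; misread wait: %.1f hours\n", hours);
    return 0;
}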

The separation widens when you get to OpenGL. Apple posts a capability matrix for OS X, letting developers and game designers know which new features are supported by which cards so they can plan accordingly. Starting with Mavericks, Apple supports OpenGL 4.1, which brings a raft of changes meant to modernize the language. The matrix says all newer cards (AMD's Radeon 5xxx and up, NVidia's 6xx and up, and Intel's HD 4000 and up) support 4.1, then lists all of the features available for each.

[Attached image: matrix.png, Apple's OpenGL capability matrix]
In at least one case (there are probably others), that little asterisk is undeserved. A headline feature of the WWDC 2013 talk on OpenGL was "Subroutine Uniforms", an esoteric-sounding feature that's a big deal to game-engine developers: it lets the "shader" programs that control how a game looks be made modular, and it makes switching between them (most complex scenes require at least two passes through different shaders) incredibly fast.
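
In GLSL terms, a subroutine uniform behaves like a function pointer inside the shader, and the host application repoints it with a single GL call instead of relinking or swapping whole programs. The key lines, distilled from the test listing later in this post:
Code:
/* Fragment shader: declare a subroutine type, a uniform "function pointer" of that
 * type, and two interchangeable implementations:
 *
 *   subroutine vec4 colorType();
 *   subroutine uniform colorType Color;
 *   subroutine(colorType) vec4 Blue()  { return vec4(0.0, 0.0, 1.0, 1.0); }
 *   subroutine(colorType) vec4 Green() { return vec4(0.0, 1.0, 0.0, 1.0); }
 *
 * Host side: select which implementation Color invokes. The indices array must cover
 * every subroutine uniform in the stage (just one here), so a single call switches
 * shading behavior without touching the program object. */
GLuint index = 1;   /* assuming indices follow declaration order (0 = Blue, 1 = Green),
                       as the listing's flip: method does */
glUniformSubroutinesuiv(GL_FRAGMENT_SHADER, 1, &index);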

So does it work on all three manufacturers? (Hint: nope.) Try the small tester app attached below, or compile it yourself with the instructions in this thread. If you're on AMD/Intel, you'll see a blue triangle turn green, then flip back and forth.
[Attached screenshot: Screen Shot 2015-02-14 at 10.16.35 AM.png]
If you're on NVidia, the triangle stays blue. Even worse, when you open the app a second time, the triangle starts out green!
[Attached screenshot: Screen Shot 2015-02-14 at 10.18.21 AM.png]

It's little wonder game developers complain about porting their engines to OS X, but can we really say it's Apple's fault?
 

Attachments

  • GLView.zip
    29.7 KB · Views: 745
Nvidia Draft

  1. In Xcode create a new Cocoa app
  2. Create a new NSOpenGLView subclass called GLView
  3. Paste the implementation below into GLView.m
  4. Uncheck "One Shot" (under the window's Memory attributes) for the window in MainMenu.xib
  5. Add an NSOpenGLView to the window
  6. In the Identity inspector, change the view's class to GLView.
Code:
#import "GLView.h"
#import <OpenGL/OpenGL.h>
#import <OpenGL/gl3.h>

#define GLAttributePosition 0

@implementation GLView {
    GLuint _a[1], _b[1], _u, _pr;
}

-(void)awakeFromNib {
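    // Request a Core Profile context; on Mavericks this reportedly comes back as the
    // highest core version the hardware supports (4.1 on the cards in Apple's matrix),
    // which the version check below confirms at run time.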
    CGLPixelFormatAttribute a[] = {kCGLPFADoubleBuffer, kCGLPFANoRecovery, kCGLPFAAccelerated, kCGLPFAOpenGLProfile, (CGLPixelFormatAttribute)kCGLOGLPVersion_GL3_Core, 0};
    self.pixelFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:(NSOpenGLPixelFormatAttribute *)a];
    self.openGLContext = [[NSOpenGLContext alloc] initWithFormat:self.pixelFormat shareContext:nil];
    CGLContextObj c = self.openGLContext.CGLContextObj;
    CGLEnable(c, kCGLCECrashOnRemovedFunctions);
    CGLSetCurrentContext(c);
    GLint v = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &v);
    NSAssert(v > 3, @"OpenGL version <= 3");
    NSLog(@"%s",glGetString(GL_RENDERER));
}

-(void)prepareOpenGL {
    [super prepareOpenGL];
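    // Build the test pipeline: a pass-through vertex shader plus a fragment shader whose
    // output color is chosen through a subroutine uniform with two implementations.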
    glClearColor(1, 1, 1, 1);
    GLuint v = glCreateShader(GL_VERTEX_SHADER), f = glCreateShader(GL_FRAGMENT_SHADER);
    const char *s[] = {"#version 410\n"
        "in vec4 vPosition;\n"
        "void main() {\n"
        "gl_Position = vPosition;\n"
        "}", NULL};
    glShaderSource(v, 1, s, NULL);
    glCompileShader(v);
    _pr = glCreateProgram();
    glAttachShader(_pr, v);
    s[0] = "#version 410\n"
    "subroutine vec4 colorType();\n"
    "subroutine uniform colorType Color;"
    "out vec4 fragColor;\n"
    "subroutine(colorType) vec4 Blue() {\n"
    "return vec4(0.0,0.0,1.0,1.0);\n"
    "}\n"
    "subroutine(colorType) vec4 Green() {\n"
    "return vec4(0.0,1.0,0.0,1.0);\n"
    "}\n"
    "void main() {\n"
    "fragColor = Color();\n"
    "}";
    glShaderSource(f, 1, s, NULL);
    glCompileShader(f);
    glAttachShader(_pr, f);
    glBindAttribLocation(_pr, GLAttributePosition, "vPosition");
    glLinkProgram(_pr);
    GLint l;
    glGetProgramiv(_pr, GL_LINK_STATUS, &l);
    NSAssert(l, @"Not linked");
    glUseProgram(_pr);
    [self validate];
    float r[] = {0.0,0.0, 0.0,-1.0, -1.0,0.0};
    glGenBuffers(1, _b);
    glBindBuffer(GL_ARRAY_BUFFER, _b[0]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(r), r, GL_STATIC_DRAW);
    glGenVertexArrays(1, _a);
    glBindVertexArray(_a[0]);
    glVertexAttribPointer(GLAttributePosition, 2, GL_FLOAT, GL_FALSE, 0, NULL);
    glEnableVertexAttribArray(GLAttributePosition);
    [NSTimer scheduledTimerWithTimeInterval:4 target:self selector:@selector(flip:) userInfo:nil repeats:true];
}

- (void)drawRect:(NSRect)dirtyRect
{
    [super drawRect:dirtyRect];

    // Drawing code here.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindVertexArray(_a[0]);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    [self.openGLContext flushBuffer];
}

-(IBAction)flip:(id)sender {
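    // Toggle the subroutine index (0 and 1, assumed to correspond to Blue and Green in
    // declaration order) so the triangle should alternate colors every four seconds.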
    _u = !_u;
    glUniformSubroutinesuiv(GL_FRAGMENT_SHADER, 1, &_u);
    [self setNeedsDisplay:true];
}

-(void)validate {
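    // Sanity-check the subroutine query API: select index 0, read it back, then
    // round-trip the uniform and subroutine names through the Get*Name calls.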
    GLuint u = 0;
    glUniformSubroutinesuiv(GL_FRAGMENT_SHADER, 1, &u);
    glGetUniformSubroutineuiv(GL_FRAGMENT_SHADER, 0, &u);
    NSAssert(u == 0, @"Bad index");
    GLchar string[LINE_MAX];
    GLsizei length;
    glGetActiveSubroutineUniformName(_pr, GL_FRAGMENT_SHADER, glGetSubroutineUniformLocation(_pr, GL_FRAGMENT_SHADER, "Color"), LINE_MAX, &length, string);
    NSAssert(strncmp(string, "Color", length) == 0, @"Bad Uniform");
    glGetActiveSubroutineName(_pr, GL_FRAGMENT_SHADER, glGetSubroutineIndex(_pr, GL_FRAGMENT_SHADER, "Blue"), LINE_MAX, &length, string);
    NSAssert(strncmp(string, "Blue", length) == 0, @"Bad Subroutine");
}

@end
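
Note: the listing checks link status but never compile status, so a GLSL typo fails silently. If you want the driver's compiler log, a small helper like this (my addition, not part of the attached app) can be called right after each glCompileShader:
Code:
static void CheckCompile(GLuint shader) {
    GLint ok = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        GLchar log[LINE_MAX];
        GLsizei length = 0;
        glGetShaderInfoLog(shader, LINE_MAX, &length, log);
        NSLog(@"Shader compile failed: %.*s", (int)length, log);
    }
}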
 
Then you guys will love this even more: the linked test does not run at all on a Mac Pro 4.1 with a genuine GT 120.
 
Mine stays blue. Not sure if that's a good or a bad thing?

Additionally, what's the downside of this broken OpenGL 4.1? Any examples that would help me understand it? Thanks!
 
Yeah, I have an NVIDIA GeForce GTX 560 2048 MB... I only see the blue triangle. Does that mean my card doesn't support OpenGL 4.1? The card's specs say it supports 4.1.
 
My 660ti shows a green triangle only, but passes the 4.1 test on OpenGL Extensions Viewer. What does that mean?
 
Probably relevant, quoting jwz: http://www.jwz.org/blog/2012/06/i-have-ported-xscreensaver-to-the-iphone/

I wrote this because you are all idiots.

Specifically, if you were involved in the OpenGL specification between 2003 and today, you are an idiot.

Allow me to explain.
…snip…
This is nonsense, and I have an existence proof.

Because I've implemented the OpenGL 1.3 API in terms of the OpenGL ES 1.1 API, and it works fine. I didn't have to install a new GPU in my iPhone to do it.

I did it all by myself, in about three days.

Not me and my team. Not ten years of committees working on hundred-page specifications. Just me. Just to prove a point.

So screw you guys.

There is no sensible reason that something very like the code that I just wrote could not have been included in the OpenGL ES API and library. If people didn't use the old parts of the API, it just wouldn't be linked in. No harm. No bloat. That's how libraries work! But if someone did use it, their legacy code could continue to function. That's how supporting your customers works!

If they really felt the need to go all "Second System Syndrome" and just start over, they shouldn't have pretended that OpenGL ES is still OpenGL. They should have named it something else, like, I don't know, DirectX.

Hurray for OpenGL \o/ #headdesk
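
For anyone wondering what "OpenGL 1.3 in terms of OpenGL ES 1.1" looks like in practice, the core trick is buffering immediate-mode calls into a client-side vertex array, roughly like this (a heavily simplified sketch of the idea, not jwz's actual code; colors, texture coordinates, error handling, and GL_QUADS splitting all omitted):
Code:
#include <OpenGL/gl.h>   /* an ES 1.1 build would include <OpenGLES/ES1/gl.h> instead */

#define MAX_VERTS 4096
static GLfloat im_verts[MAX_VERTS * 3];
static GLsizei im_count;
static GLenum  im_mode;

/* glBegin: remember the primitive type and reset the scratch buffer. */
void my_glBegin(GLenum mode) { im_mode = mode; im_count = 0; }

/* glVertex3f: append a vertex to the scratch buffer instead of drawing immediately. */
void my_glVertex3f(GLfloat x, GLfloat y, GLfloat z) {
    if (im_count < MAX_VERTS) {
        im_verts[im_count * 3 + 0] = x;
        im_verts[im_count * 3 + 1] = y;
        im_verts[im_count * 3 + 2] = z;
        im_count++;
    }
}

/* glEnd: hand the buffered vertices to the vertex-array path that ES 1.1 does support. */
void my_glEnd(void) {
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, im_verts);
    glDrawArrays(im_mode, 0, im_count);
    glDisableClientState(GL_VERTEX_ARRAY);
}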
 