4K UI performance Nvidia/AMD

Is your 4K UI experience smooth, and if so, what GPU do you use? (please specify in a post)

  • Yes! I'm an Nvidia user.

    Votes: 7 46.7%
  • Yes! I'm an AMD user.

    Votes: 2 13.3%
  • No! I'm an Nvidia user.

    Votes: 6 40.0%
  • No! I'm an AMD user.

    Votes: 0 0.0%

  • Total voters: 15
Status
Not open for further replies.
Alright, thanks. However, I'm kind of wary when it comes to FakeSMC, since the last time I tried to get it to work I killed my setup simply by installing the FakeSMC kext. I'll give it another try, though, since I'm not dependent on my Mac setup for now anyway.

FakeSMC.kext is the only REQUIRED kext for a hackintosh. If you were already booting OS X, you must already have had it installed before you "installed" it. The additional plugins that Florian mentions are placed inside FakeSMC.kext.
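If it helps, here's a rough sketch of what "placed inside FakeSMC.kext" means in practice. The source and destination paths below are just examples (a Clover EFI layout and a download folder), so adjust them for your own install:

Code:
# Minimal sketch: copy HWSensors plugin kexts into FakeSMC.kext/Contents/PlugIns.
# Both paths are assumptions; point them at your own kext locations.
import shutil
from pathlib import Path

fakesmc = Path("/Volumes/EFI/EFI/CLOVER/kexts/Other/FakeSMC.kext")  # example location
plugins_dir = fakesmc / "Contents" / "PlugIns"
plugins_dir.mkdir(parents=True, exist_ok=True)

source = Path.home() / "Downloads" / "HWSensors"  # hypothetical download folder

for plugin in source.glob("*Sensors.kext"):
    dest = plugins_dir / plugin.name
    if dest.exists():
        shutil.rmtree(dest)        # replace any older copy
    shutil.copytree(plugin, dest)  # a .kext is a directory bundle, so copy the whole tree
    print(f"Installed {plugin.name} into {plugins_dir}")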
 
I wasn't aware that Cinebench factors in your CPU when doing a GPU graphics test. But I do agree that the 980 Ti crushes the R9 380 when it comes to gaming/compute benchmarks. I ran Heaven and my 980 Ti smashed it.

The one reason I keep the R9 380 (and this is a personal reason) is that I do a lot of graphic design. The 980 Ti does not have 10-bit color output. None of the GeForce cards do... you need to pick up the Quadro line. So even though I have a nice 10-bit monitor that I spent tons of $$ on... I can't use it to the max with the 980 Ti. :( I would pick up a Quadro... but those are ridiculously priced. So I found a "cheap" alternative.

It's not so much that Cinebench directly measures the CPU and then uses that to weight the GPU score. It's that the GPU score seems totally bottlenecked by the CPU's single-core performance, meaning the GPU cores don't get saturated, and the test, in effect, ends up being a measure of your CPU's single-core performance.

It's like having an eating (GPU) contest where each eater gets his food from his own conveyor belt (CPU). All the conveyor belts, however, are so slow that none of the eaters are even challenged. They can all eat between 100 and 130 cookies a minute, but their belts are only sending out between 30 and 40 cookies a minute. In such a scenario, the contest is really a measure of conveyor-belt speed (CPU) rather than eating speed (GPU). A better test would find a way to get more food to the eaters (faster conveyor belts, or multiple belts per eater!).
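To put that in toy-model terms (this is just an illustration of the bottleneck, not how Cinebench actually computes anything): when the feed rate is the limiter, the "score" tracks the CPU no matter how fast the GPU is.

Code:
# Toy model of a CPU-bound GPU test: throughput is capped by the slower stage.
# The numbers are the made-up cookie rates from the analogy above.
def measured_score(cpu_feed_rate, gpu_eat_rate):
    return min(cpu_feed_rate, gpu_eat_rate)  # cookies per minute actually eaten

eat_rates  = {"GPU A": 100, "GPU B": 115, "GPU C": 130}  # what each GPU could do
feed_rates = {"GPU A": 30,  "GPU B": 35,  "GPU C": 40}   # what each CPU delivers

for name in eat_rates:
    print(name, measured_score(feed_rates[name], eat_rates[name]))
# Every result equals the belt (CPU) rate, so very different GPUs score almost alike.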

That 10-bit issue sounds frustrating!
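For anyone wondering what's actually at stake with 10-bit, it's the per-channel precision; the quick arithmetic:

Code:
# Shades per channel and total displayable colors at 8-bit vs 10-bit per channel.
for bits in (8, 10):
    levels = 2 ** bits       # shades per color channel
    total = levels ** 3      # R x G x B combinations
    print(f"{bits}-bit: {levels} levels per channel, {total:,} colors")
# Prints 256 levels / 16,777,216 colors for 8-bit
# and 1024 levels / 1,073,741,824 colors for 10-bit.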
 
It's great that you guys help each other out, really, but it'd be nice if we could keep this thread on-topic. Thanks :)
 
Cheers, and sorry for the digression!

On topic: I'm using an R9 290, and with one 4K monitor it's fine, even when I open professional apps like FCPX and Lightroom with the GPU-accelerated Develop module. I don't have the other apps you mention. That said, I mainly use my 27" ACD alone and only rarely hook the computer up to my 48" Samsung 4K TV, because the R9 290 can only output 30 Hz at 4:4:4 chroma over HDMI (the TV has HDMI 2.0a, but the GPU doesn't). There's the lag associated with running at 30 Hz, which isn't fun.
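The 30 Hz limit is basically a bandwidth problem. A back-of-the-envelope calculation (ignoring blanking intervals, so the real requirement is a bit higher) shows why full 4:4:4 at 60 Hz needs an HDMI 2.0 port:

Code:
# Rough uncompressed bandwidth for 4K at 4:4:4 with 8 bits per channel.
# Blanking overhead is ignored, so actual link requirements are higher still.
def video_gbps(width, height, refresh_hz, bits_per_pixel=24):
    return width * height * refresh_hz * bits_per_pixel / 1e9

print(f"4K @ 30 Hz: {video_gbps(3840, 2160, 30):.1f} Gbit/s")  # ~6.0
print(f"4K @ 60 Hz: {video_gbps(3840, 2160, 60):.1f} Gbit/s")  # ~11.9
# HDMI 1.4 carries roughly 8-10 Gbit/s of usable video data, HDMI 2.0 around
# 14-18 Gbit/s, which is why the older ports drop to 30 Hz (or 4:2:0) at 4K.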

I hope to upgrade my GPU to something with HDMI 2.0 output, better support than the R9 290, and better efficiency. It will likely be AMD, given their history with Apple pro apps, but the Nvidia 1070 isn't out of the question. The issue is that my R9 290 already does around 5-5.5 TFLOPS of compute, and the 1070 is barely an increase over those numbers. I'd like to see what Vega has in store for us!
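For anyone curious where those numbers come from, FP32 compute is usually estimated as shaders x clock x 2 (one fused multiply-add per shader per clock). With reference clocks (my assumption; factory-overclocked cards land higher, closer to the 5-5.5 I quoted) the gap to a 1070 really is small:

Code:
# Rough FP32 estimate: shader count * clock (GHz) * 2 FLOPs (FMA) per cycle.
# Shader counts and clocks are reference-spec assumptions, not measured values.
def tflops(shaders, clock_ghz):
    return shaders * clock_ghz * 2 / 1000

print(f"R9 290   ~{tflops(2560, 0.947):.1f} TFLOPS")  # ~4.8 at reference clocks
print(f"GTX 1070 ~{tflops(1920, 1.683):.1f} TFLOPS")  # ~6.5 at reference boost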
 