
[READ ME FIRST!] Graphics Setup & Troubleshooting

Is it a problem that I have to set the property Inject Nvidia to true, and nv_disable to 1? I think the second one is wrong.

The GT 730 DDR3 (I guess you have that one?) is a Fermi card, so InjectNvidia=True is correct. nv_disable=1 is wrong; that flag will disable your Nvidia drivers entirely.
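For reference, this is roughly how those two settings appear in a Clover config.plist (a minimal sketch of the relevant Graphics and Boot sections only, not a complete config; adapt to your own file):

```xml
<key>Graphics</key>
<dict>
    <key>Inject</key>
    <dict>
        <!-- InjectNvidia=True: needed for Fermi cards such as the GT 730 DDR3 -->
        <key>NVidia</key>
        <true/>
    </dict>
</dict>
<key>Boot</key>
<dict>
    <!-- Do NOT add nv_disable=1 to the boot arguments here;
         it disables the Nvidia drivers entirely. -->
    <key>Arguments</key>
    <string>-v</string>
</dict>
```

nv_disable=1 is only meant as a temporary boot argument to get to the desktop without graphics acceleration while troubleshooting; it should not be left in a working config.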
 
Is there any workaround to get correct GPU temperature readings in HWMonitor on my R9 290X? I have FakeSMC with its plugins installed in /Library/Extensions.
 

Attachments

  • Screen Shot 2016-01-21 at 11.47.59.jpg (113.4 KB)
Hello, I have an old G41 (Gigabyte) motherboard with an 8400 GS video card (Gigabyte too). Both have been working fine since Snow Leopard with the NVDAResman kexts, but with El Capitan they no longer work: I get a kernel panic and the machine restarts before reaching the desktop. I tried the MultiBeast installer for Nvidia, but the installer crashes every time, so I'm stuck. What's very strange is that when booting from the UniBeast USB, the graphics card works at 100%. Can anyone help me with this? Thank you in advance.
 
So I have done all the requirements that you listed. Now my graphics card is shown as "NVIDIA Chip Model 3 MB" and my GPU is not working with full QE/CI.

My GPU is GTX 970.

Any help?
 
Rules said:
Profiles need to contain at least your primary system to assist others with helping you
 
Is there any workaround to get correct GPU temperature readings in HWMonitor on my R9 290X? I have FakeSMC with its plugins installed in /Library/Extensions.

And if you get a response or a fix, please share it here. Some peculiarities: HWSensors doesn't pick up the temperatures (it always reads "0"), and once the GPU is pushed to top clocks/load (at least according to HWSensors), HWSensors (via iStat) reports that the GPU continues to run at full load even after you quit all apps. Given that the temperatures are clearly wrong, there's reason to believe the load information is also wrong. But it's peculiar that it seems to track load dynamically until it hits full load, then stays pinned there.
 
Can you add to the post that analog connections (VGA, some types of DVI) cause issues with the Nvidia web drivers? I wish I had known that from the start of my build; I spent a solid three months trying to figure out why booting was so unstable, when the fix was that simple.
 