
Need Help on New Video Card Purchase (580 or 680)

Joined: Apr 13, 2011
Messages: 153
Motherboard: Hackintosh
CPU: i7-950
Graphics: NVidia GTX 285
Mac: Mac Pro
Classic Mac: 0
Mobile Phone: iOS
Hey y'all!

Currently I'm running an already stable Hackintosh build. It's amazing and has been functioning since February of 2011.

However, I'm now looking to upgrade my video card and need some advice. It may seem silly, but the difference could greatly affect my workflow as a motion graphics artist. The programs I use daily are Cinema 4D, Houdini, ZBrush, Photoshop, and After Effects (heavily). My choices are between the:

GeForce GTX 580

or

GeForce GTX 680

I remember seeing some early arguments that the 580 was better than the 680 for the programs I listed above, because Nvidia crippled the 680 cards since the 580 was getting so close to the performance of their Quadro line. However, I also see that tons of people are installing the 680, so I'm wondering if things have changed.

And with the 680 I'd benefit greatly from its ~1.5k CUDA cores, since a lot of the Video Copilot AE plug-ins I use take advantage of them.

So...what's the verdict??
 
It will depend on the embedded drivers. CUDA core count is irrelevant; there are architectural differences that make direct comparisons impossible. In general the 680 is at least as fast at the moment, and any driver improvements (if they happen) will certainly be focused on the Kepler architecture.

For CS6 and AE specifically, check out this link...

http://forums.creativecow.net/thread/2/1019643#1019643
 
Haha, that test is so confusing.
It looks to me like it's saying that even though the 580 renders this much faster, the 680 is still the card to get.



This is kind of false for my setup, because even though most of the programs I listed don't use CUDA, the plug-ins I use inside AE, for instance, GREATLY rely on it.

The graph has everything normalized to a GTX 580; the 680 is faster. If you look at the table below the graph, the render time for the 680 is 14:00 versus 15:07 for the 580.

And yes, CUDA core count is irrelevant, because the cores are not equivalent. There are architectural differences between Fermi and Kepler. Nvidia's goal was to have the 680 be ~10% faster at CUDA than the 580. Among other things, they decreased the FP64 multiplier significantly (hurting double-precision performance, which many rendering programs use) to further differentiate the GeForce line from the Tesla line, and even slightly decreased the memory bandwidth.
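If you want to compare the two cards on the numbers that actually matter, a quick device query will do it. This is just a minimal sketch of mine, assuming a reasonably recent CUDA toolkit where cudaDeviceProp exposes the memory clock and bus width; it prints compute capability, SM count, and theoretical memory bandwidth instead of raw core counts:

Code:
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        // memoryClockRate is reported in kHz and memoryBusWidth in bits;
        // the factor of 2 accounts for the DDR memory interface.
        double bandwidthGBs =
            2.0 * prop.memoryClockRate * 1e3 * (prop.memoryBusWidth / 8.0) / 1e9;

        printf("%s: compute capability %d.%d, %d SMs, ~%.0f GB/s peak memory bandwidth\n",
               prop.name, prop.major, prop.minor,
               prop.multiProcessorCount, bandwidthGBs);
    }
    return 0;
}

Run that against a 580 and a 680 and the bandwidth figures come out almost identical, which is exactly the point.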

There are lots of articles explaining the difference; this post summarizes the technical issues well:

Some other integer operations and type conversions that are mostly used in compute applications have similarly reduced performance on 3.0.

The 3.x architecture has been split into two separate product lines, one for compute and one for games. The gaming product has been carefully tuned so that it is not very useful for compute (see further notes below).

Reply to comments about my statement that the GTX680 has been tuned to not being as useful for compute:

Many (most?) compute applications are memory bound, and the GTX680 does not add any memory bandwidth to support its higher compute throughput. This makes the GTX680 good for running simple algorithms on pixels pulled from the texture cache -- just what games need. But it makes it hard or impossible to utilize the extra compute throughput in many or most compute applications.

This is the memory throughput graph from the CUDA Programming Guide 4.2:

[memory throughput graph, CUDA Programming Guide 4.2]

This is the same graph from the 5.0 guide:

[memory throughput graph, CUDA Programming Guide 5.0]
Note:

  • The GTX580 disappeared from the 5.0 graph and, had it been there, it would have shown that the bandwidth is slightly lower on the GTX680 than on the GTX580. Was this why NVIDIA removed the GTX580 from the graph?
  • All earlier architectures have had increases in bandwidth that correspond with their increase in compute throughput (both curves have been pretty much linear). But, when comparing to a linear progression, the GTX680 compute throughput goes up, while the bandwidth goes down.
I believe, though we don't know for sure yet, that the compute capability 3.5 part will remedy this by increasing the memory bandwidth to correspond with the compute bandwidth, thereby making the compute bandwidth accessible to compute applications.

In other words, the assignment of resources in the GTX680 only makes sense for games. For compute, the resources are completely lopsided. This is shown in practice by the number of posts on forums from people who have tried, and failed, to get any significant performance improvement in their CUDA apps on GTX680.
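To make the memory-bound point concrete, here is a minimal sketch of my own (a plain SAXPY kernel, not something from that post): each element does two floating-point operations but moves twelve bytes, so the run time is set almost entirely by memory bandwidth, and the 680's extra ALU throughput just sits idle on this kind of workload.

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Memory-bound kernel: 2 FLOPs per element vs. 12 bytes of traffic
// (read x, read y, write y), so bandwidth, not core count, decides speed.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 24;  // ~16.8M elements
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    // ~201 MB moved for only ~34 MFLOP of arithmetic.
    printf("Moved ~%.0f MB for ~%.0f MFLOP\n",
           3.0 * n * sizeof(float) / 1e6, 2.0 * n / 1e6);

    cudaFree(x);
    cudaFree(y);
    return 0;
}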
 
Wow. That is exactly what I was looking for. Thanks for that.

BTW you seem to have a solid grasp on this...what card do you have, and what do you use it for?
 

I primarily run computational simulations. I'm not particularly good at GPU coding, so I mostly just use CPUs. I do have several calls that use GPUs, a few still using CUDA, but I've been switching over to OpenCL (which is easier for me, and an easy decision with Nvidia neutering compute on their consumer line and AMD increasing it). In any event, I don't actually have a graphics card in my Hackintosh at the moment. I write the code on it, but then run it on a couple of 2P machines I have running Linux (one has a GTX 680, the other a 7970).

EDIT: Forgot to mention that while the compute performance is in general pretty similar between 580 and 680, the 680 uses a lot less power.
 