
GTX 580 vs GTX 670 in ML

I'm looking for anybody who might have experience with either card (GTX 580 or GTX 670) in ML. I'm mostly interested in ease of setup and overall system functionality: does sleep/wake work, are there screen resolution issues, etc.? If you want to get specific, I am looking at these two cards:

580:
http://www.amazon.com/MSI-DisplayPort-PCI-Express-N580GTX-LIGHTNING/dp/B004XVMAKA/ref=sr_1_1?ie=UTF8&qid=1347258296&sr=8-1&keywords=gtx+580+msi+lightning

670:
http://www.amazon.com/GIGABYTE-GV-N670OC-2GD-GeForce-Windforce-Graphics/dp/B0080I06WQ/ref=sr_1_2?ie=UTF8&qid=1347258333&sr=8-2&keywords=gtx+670

I would be using it for Premiere Pro CS6, and the benchmarks say to go with the GTX 580.
http://ppbm5.com/DB-PPBM5-2.php


Thanks!
 
They both work OOB.
 
Basically, the 6XX cards suck at double precision floating point operations (the math stuff you need for 3D work and scientific calculations); it can be as bad as 1/8th the performance. They're generally slightly better than the 5XX cards at single precision math, but not by enough to be worth choosing one over a 5XX card if you need the double precision math performance.

I suspect that the 5XX nVidia cards were so good at GPU acceleration that they were cutting into sales of the Quadro ($$$$) cards, since they were providing 80% of the performance for 20% of the cost. So somehow, the 6XX cards wound up essentially crippled in ways that made them useless for a lot of the pro applications the Quadros were aimed at.
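
To put rough numbers on it, here's a quick back-of-envelope sketch in Python. The core counts, clocks, and FP64 rate caps (1/8 for GeForce Fermi, 1/24 for GeForce Kepler) are from memory, so treat the results as ballpark figures rather than gospel:

    # Rough theoretical throughput comparison; specs are approximate, from memory.
    # Peak GFLOPS ~= shader cores * shader clock (GHz) * 2 ops/clock (fused multiply-add).

    def peak_gflops(cores, clock_ghz):
        return cores * clock_ghz * 2

    cards = {
        # name: (CUDA cores, shader clock in GHz, FP64 rate as a fraction of FP32)
        "GTX 580 (Fermi)":  (512,  1.544, 1 / 8),
        "GTX 670 (Kepler)": (1344, 0.915, 1 / 24),
    }

    for name, (cores, clock, fp64_ratio) in cards.items():
        sp = peak_gflops(cores, clock)
        dp = sp * fp64_ratio
        print(f"{name}: ~{sp:.0f} GFLOPS single precision, ~{dp:.0f} GFLOPS double precision")

That works out to very roughly 1580/200 GFLOPS (single/double) for the 580 versus roughly 2460/100 for the 670, which is why the 670 can win at games and still lose badly at double precision work.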
 

Thank you for clarifying the difference. Does the amount of VRAM a card has affect performance more in 3D work/scientific calculations or in gaming? The 670 seems better for gaming and worse for double precision operations, yet has more VRAM, whereas the 580 is better for double precision operations and worse at gaming, yet has less VRAM. Albeit only by 0.5 GB. Do you think that is anything to cry over?
 
The performance difference from the amount of memory on the two cards is probably not significant. It's the architecture of the GPU and the memory bandwidth that really affect things.

Generally speaking, extra RAM on a graphics card (beyond what the stock cards ship with) is essentially useless. There are some edge cases where it does provide an advantage (such as very high levels of anti-aliasing at a very high resolution), but typically, if you are using a card for tasks where the extra RAM would provide a significant advantage, you are probably pushing it beyond what it is really intended to handle.

After all, if doubling the RAM did provide some substantial boost, double the RAM would be the stock memory amount; nVidia and AMD don't pick the amount of RAM on their cards at random. In general, the amount of RAM is going to be some multiple of the width of the memory bus, and this is one of the main reasons why doubling or tripling the RAM has limited effects at best.

Video cards need to shove ENORMOUS amounts of data between the GPU and the on-card memory, so how wide that pipe is has a major effect on how much memory you can really use; it does you no good to have the RAM if you can't push data through it quickly enough to be of use. When you double the RAM, the width of the memory bus is not increased, so you don't get better performance. In fact, one of the main ways manufacturers artificially create differences between graphics cards with essentially the same GPU chip is to restrict the width of the memory bus on the lower-end card, or sometimes to use RAM with lower throughput, like GDDR3 vs GDDR5.
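
If you want to see why bus width and memory speed dominate, the bandwidth math is simple: bus width in bits divided by 8, times the effective transfer rate. A rough sketch in Python (the clock and bus figures are from memory, so take them as approximate):

    # Memory bandwidth back-of-envelope: (bus width in bits / 8) * effective rate in GT/s.
    def bandwidth_gb_s(bus_width_bits, effective_rate_gt_s):
        return (bus_width_bits / 8) * effective_rate_gt_s

    print("GTX 580:", bandwidth_gb_s(384, 4.0), "GB/s")   # 384-bit GDDR5 -> ~192 GB/s
    print("GTX 670:", bandwidth_gb_s(256, 6.0), "GB/s")   # 256-bit GDDR5 -> ~192 GB/s
    print("Hypothetical low-end card:", bandwidth_gb_s(128, 1.8), "GB/s")  # narrow bus + slow GDDR3

Notice that doubling the amount of RAM doesn't change any of those numbers, which is the point: the pipe stays the same size no matter how big the pool behind it is.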

Now there are a few caveats to that, depending on the software and possibly even the specific application.

If what you are doing directly uses the RAM on the card, say for storing texture information in a 3D app that processes the texture data on the card, having more memory directly and quickly accessible will result in an improvement versus having to fetch that data from the computer. This can be a major improvement if the application uses it properly and it is the right kind of data. GPUs tend to be great at highly parallel tasks, since that is essentially what they are designed to do for 3D games.

This sort of improvement tends to be highly specific to individual applications and can be wildly affected by the specific architecture of the graphics card, as we see with the double precision math on the 5XX vs 6XX nVidia cards. However, it is one of the reasons why some of the high-end "pro" cards like the Quadros or FirePros can have as much as 12 GB of RAM (those cards do cost an arm, a leg, a kidney, a pancreas....).

On a non-"pro" card, they stick extra RAM on mostly because people don't always know the difference between individual GPU chips or architectures, but they do understand that 4 > 2. So hey, that 4 GB 630 has to be better than that 2 GB 680, right?
 
Moved to the correct forum section: Desktop Compatibility -> Buying Advice.

Desktop Compatibility is for existing systems; Buying Advice is for planning a system that has yet to be purchased.

Please see the Forum Rules.
 
Firstly, thank you ModBot! No one ever thanks that guy . . . . or girl . . . .

I have several questions:

Assuming that a graphics card with 256 MB of VRAM, a 128-bit bus, slow memory bandwidth, an obsolete GPU architecture, and a slow GPU clock speed could have six 2560 x 1600 monitors hooked up to it (which is silly, but assuming), it would be likely that the monitors would have poor image quality (say, a slideshow of some kind) because:

1) The information stored in VRAM would be insufficient to meet the demands of the displays.
2) The information could not be driven to the memory fast enough because of the slow memory bandwidth.
3) The information could not be processed fast enough by the GPU.

I believe this is right? Correct me if I'm wrong, ggeorge.

Even if you had the GPU clock speed, memory clock speed, right GPU architecture, and bus configuration to meet the demands of the displays, it could still fail to display the image properly if the VRAM were still 256 MB, right?
 
Computers are a bit like chains: they are only as strong as their weakest link. So if any of the aspects you describe were insufficient, then yes, they would cause issues, but it takes a lot less to do what you are talking about than you'd think offhand.

It takes vastly less processing power and RAM to display simple bitmapped images, which is what you seem to be talking about doing. I don't know exactly what the requirements are to display images of a given size, but an off-the-cuff calculation says that a 2560 x 1600 pixel display in 32-bit color would need about 16 MB of RAM (2560 x 1600 x 32 / 8 bytes), and thus six monitors would need less than 128 MB. Also, back in the day when resources were very limited, people got very creative about ways to do things with seemingly inadequate resources.
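
For what it's worth, here's that off-the-cuff calculation spelled out (purely illustrative; it ignores any extra buffering the driver might do):

    # Framebuffer size back-of-envelope: width * height * bits-per-pixel / 8 bytes.
    def framebuffer_mb(width, height, bits_per_pixel=32):
        return width * height * bits_per_pixel / 8 / (1024 ** 2)

    per_monitor = framebuffer_mb(2560, 1600)
    print(f"One 2560x1600 display: ~{per_monitor:.1f} MB")   # ~15.6 MB
    print(f"Six of them: ~{6 * per_monitor:.1f} MB")         # ~94 MB, still under 256 MB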

So compared to computing 3d geometry and texturing the pixels, it's pretty much negligible. It's really 3d stuff that has been driving the vast increase in GPU power and memory.

When you say "slide show", are you talking about an insufficient refresh rate (i.e. a low frame rate, as in a game) or an actual slide show?

Answer 1) - This is not generally the case, simply because if a card had insufficient RAM to display the images on all those monitors, it would probably not have the ability to drive that many monitors in the first place.

Answer 2) - A 128-bit memory bus is not actually that narrow. A 680/670 only has a 256-bit bus, though that does limit them somewhat at high-res (2560x1440 or greater) gaming versus the 384-bit memory bus on the Radeon 79XX cards.

Answer 3) - This is the most likely source of issues, but it was really only with the advent of 3D graphics that you started getting powerful GPUs. Before that, they did handle some things, mostly offloading graphics API calls and things like determining what should be shown when windows overlapped.
 
I just bought my 670 for hi-res 2560x1440 gaming! We will see how hard I can push her ;)

Side question, mind explaining how SLI can help?
 
You should be able to crank it up pretty much all the way in most games. Well, at least within the limits of the games under OSX; a substantial number are very poorly optimized for OSX. Civ 5 is STILL as slow as molasses in the end game on my machine.

As for SLI: under OSX, not at all, I'm afraid. None of the multi-GPU gaming technologies are supported by Apple. It is at least possible to have an SLI rig set up and installed under OSX now, which didn't use to be the case, but the cards will only function as two separate graphics cards.

Under Windows, it basically splits up the task of rendering between the two GPUs. The scaling is not linear, so three cards are not three times as fast, but more like 2.XX times as fast. These days it is mostly helpful for gaming on multiple monitors, as the high-end cards can handle most games on a single monitor.
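
Just to illustrate what that non-linear scaling looks like, here's a toy model; the 0.8 "efficiency per extra card" figure is purely an assumption for illustration, since real scaling varies wildly from game to game:

    # Toy model of multi-GPU scaling: each additional card adds only a fraction
    # of a full card's worth of performance. The 0.8 figure is an assumption.
    def effective_speedup(num_cards, efficiency_per_extra_card=0.8):
        return 1 + (num_cards - 1) * efficiency_per_extra_card

    for n in (1, 2, 3):
        print(f"{n} card(s): ~{effective_speedup(n):.1f}x")   # ~1.0x, ~1.8x, ~2.6x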
 