
DBP's "lammergeier": i7-3770K / GA-Z77X-UP5-TH : photography beast

Installing an additional GPU, and revisiting the case layout

Even though the HD4000 graphics were enough for my current work, I decided to experiment with adding a GTX 650 card to the system, so I had to look at how I would mount it in lammergeier. The machine is housed in a Corsair Obsidian 550D, which is a roomy case with easy access. Since the initial build I had tidied up the cabling so that very few cables ran above the motherboard, and this was partly to maintain the airflow that cools the Northbridge/etc chips. They weren't running hot, but I didn't want to waste any cooling efficiency.

This is how I first put the GTX 650 in the system:
45417-stock-gtx-650-case.jpg

There are just enough slots for the GPU! Here you can see the Gigabyte GV-N650OC-2GI with its default heatsink and fan. I installed it this way at first to make sure things worked on the software side (which was easy).

There are three PCIe SATA cards in there (one's hiding just above the GPU), and I've used a couple of 1m-long SATA cables to run behind the motherboard between the cards and the hot-swap bay without cluttering up the airflow. A single cable isn't going to have a big effect on the airflow, but once a few SATA and USB cables start heading in different directions it does get messy very quickly. Two of the shorter SATA cables would be eliminated if I had used one 2-port eSATA card, but there's still enough room for everything. There's a 2-port USB2 bracket at the bottom. By this time I was no longer using the FW400 (1394a) port so was able to remove the cable+bracket for that, freeing up a slot.

Incidentally, the SATA colour scheme in this case is currently:
  • Yellow: SATA3 to SSDs.
  • Black: SATA2 to HDDs.
  • Red: eSATA/hot-swap feeds.

Fans

There are multiple fans in this system:
  • Two Noctua 120mm fans at the front draw air across the drives and into the lower part of the case.
  • One Noctua 140mm fan at the bottom of the case pushes air up across the motherboard and towards the CPU.
  • A Noctua 80mm fan at the rear of the 3-drive hot-swap bay draws air across the drives and pushes it towards the CPU. This fan is only on when any of those drives are powered up.
  • The NH-D14 cooler on the CPU has two fans, blowing air towards the rear of the case. The central 140mm fan protrudes down below the heatsink fins, and helps pull air across the motherboard components nearby.
  • One Noctua 140mm fan at the rear of the case pushes air out.
All the fans are running fairly slowly: 500-800 rpm for the case fans, while the CPU fans ramp up and down between 500 and 1300 rpm. The gentle flow of air is very quiet, with just a faint hum audible because the case is positioned on a shelf at desk height. If the case were buried below a desk this noise would probably disappear, but having it up here makes it very easy to service (e.g. when I clean the dust filters) and keeps it away from some of the cat hair closer to the floor. And it's a pretty black monolith. :)

While I had the case out on the table I took some more photos showing where the front fans fit into the setup:
45419-front-fans.jpg
45420-front-grill.jpg
45421-front-covered.jpg
45422-front-closed.jpg


PCIe bandwidth

Choosing which cards went into which slots wasn't just a case of where things would physically fit: I first chose these positions based on PCIe speed. The GA-Z77X-UP5 TH motherboard has a total of six PCIe slots:
  • Three PCIe v2.0 x1 slots, although they all share a single switched PCIe lane to the host.
  • One v3.0 x4 slot.
  • One v3.0 x8 slot (which runs at x4 if the x4 slot is occupied).
  • One v3.0 x16 slot (which runs at x8 if the x4 or x8 slots are occupied).
I have three x1 PCIe v2.0 SATA cards (using ASM1061 chips), each with two SATA3 (600 MB/s) ports. One lane of PCIe v2.0 can handle 500 MB/s, so while I'm unlikely to need 600 MB/s concurrently on both of a card's SATA ports (these ports are used for my external HDDs and backup drives), I definitely don't want all three cards sharing a single switched 500 MB/s lane to the CPU (see the quick arithmetic sketch after this list). The motherboard's Marvell 2-port SATA3 controller gets its own PCIe v2 lane. So to give each card as much PCIe bandwidth as possible I've arranged the cards so that:
  • One SATA card is in a x1 slot.
  • One SATA card is in the x4 slot.
  • One SATA card is in the x8 slot.
  • A GPU card can go in the x16 slot (although it will run at x8). With PCIe v3.0 the theoretical bandwidth of this is 7.8 GB/s, which shouldn't be a significant bottleneck.
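To make the trade-offs concrete, here's a quick back-of-envelope sketch (mine, in Python, not from the original post) using the standard PCIe per-lane figures: v2.0 runs 5 GT/s per lane with 8b/10b encoding, v3.0 runs 8 GT/s with 128b/130b.
Code:
# Back-of-envelope PCIe bandwidth check (my own sketch, not from the build log).
V2_LANE_MBS = 5e9 * (8 / 10) / 8 / 1e6      # = 500 MB/s per v2.0 lane
V3_LANE_MBS = 8e9 * (128 / 130) / 8 / 1e6   # ~= 984.6 MB/s per v3.0 lane

# Worst case: all three 2-port SATA3 cards behind the single switched v2 lane.
sata_ports_demand = 3 * 2 * 600             # six SATA3 ports at 600 MB/s each
print(f"shared x1 lane: {V2_LANE_MBS:.0f} MB/s vs {sata_ports_demand} MB/s of ports")

# Spread across the x1/x4/x8 slots instead, each v2.0 x1 card still only
# negotiates one lane, but now has that full lane to itself:
print(f"per-card bandwidth: {V2_LANE_MBS:.0f} MB/s")

# And the GPU running at v3.0 x8 once the other slots are populated:
print(f"v3.0 x8 ceiling: {8 * V3_LANE_MBS / 1000:.2f} GB/s")  # ~7.88 GB/s
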

Here's the view from the back:
45418-rear-case.jpg


So that's all fine, but I did mention I'm a tinkerer, didn't I? I had bigger plans for the card...

A silent GPU!

The Gigabyte GTX 650 has a fan which is generally regarded as quiet. However it was noticeable, as I already had a fairly quiet machine, especially when the GPU was working hard. The 9-bladed GPU fan ramped up to ~1900 rpm (about 32 revs/second), which might explain some of the harmonics I could hear in the noise (there's a quick calculation below). Despite the card being inside the Obsidian 550D case, about half of the GPU fan's output exits through slots in the rear panel, so noise from there can bounce off the nearby wall. After a few days there was also a new sound: I stopped the GPU fan with my finger and the noise went away, and when the fan started up again the noise was gone. So it seems to have been an imbalance in the fan.
The GPU temperatures weren't bad. The hottest I saw it go was almost 60C, which is less than my overclocked i7 reaches. At idle it was in the low 30s.
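Out of curiosity, here's the arithmetic behind that harmonics comment (my own sketch, not from the original post): the fan's rotational fundamental sits around 32 Hz, with a blade-pass tone up near 285 Hz, both well within the audible range.
Code:
# Quick fan-noise arithmetic (my own sketch).
rpm = 1900      # observed GPU fan speed under load
blades = 9      # blade count of the stock Gigabyte fan

rev_per_sec = rpm / 60                 # rotational fundamental in Hz
blade_pass_hz = rev_per_sec * blades   # blade-pass frequency (plus harmonics)

print(f"fundamental: {rev_per_sec:.1f} Hz")    # ~31.7 Hz
print(f"blade-pass:  {blade_pass_hz:.0f} Hz")  # ~285 Hz
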

I replaced the Gigabyte heatsink with a quieter solution: the Arctic Accelero S1 Plus (which cost me AU$29). This beast is completely passive, with no fan of its own (although an additional fan module is available for higher-spec GPUs). However any airflow will help the fins dissipate heat, and the existing flow of air through my case should be perfect for it. It's actually rated to be able to cool the 110W GTX 650 Ti with no fan, so the 64W GTX 650 should be no trouble.

Fitting the heatsink involved unbolting the existing heatsink/fan, cleaning the board, gluing heatsinks (with the provided thermal glue) to all of the RAM and VRM chips, and bolting the massive heatsink to the GPU (GK107) chip.

If you're gluing heatsinks to both sides of the board, it's easiest if you don't mix all of the G-1 glue at once. Mix about half, and fix the heatsinks to one side. Then wait the 5 hours for it to set (to be sure, I then cured the board in an oven at around 70C for 10 minutes). Then mix the rest of the glue and do the same for the other side of the board.

Prior to starting the gluing process I carefully decided which heatsinks were going to go where on the voltage regulators, and used a hacksaw to cut up some of the VRM heatsinks so every regulator chip got a heatsink (even if some only have two fins). When positioning the heatsinks, be careful not to short out any of the SMT capacitors (some of which are taller than the nearby regulator chips). While all this was going on, the machine was happily running on the HD4000 graphics, of course.

The heatsink comes with a pre-applied layer of thermal paste, but I felt it was too thick, so to optimise the heat transfer I cleaned it off and used a dab of Noctua NT-H1 paste instead. Incidentally, while the original heatsink made contact with the GK107 chip, the VRM and RAM chips were not actively cooled except by airflow (especially the ones on the rear of the board!). All of this is reversible, so I stowed the Gigabyte heatsink and fan safely in my parts bin. Once everything was set, back into the case it went:

45423-gtx-650-cooler-1.jpg

45424-gtx-650-cooler-1.jpg


As you can see, it seems perfectly positioned to take advantage of the fan in the bottom of the case (although I may add a guide to send more air down to the rear of the cooler).
The thin black cable you can see just above the GPU is something I made that connects to the now-unused fan connector on the GTX 650, letting me feed in sense information from another fan.

The GPU temp used to idle around 30C and peak at 60C. After the fan removal and heatsink upgrade the idle temp is around 28C and the highest I've seen (running Heaven) is 58C. For AU$29 it's quieter, and no hotter.

45425-tempsnfans.jpg


The GPU clocks ramp up under load automagically.
In this sample (I will probably change the cabling later) the fans are:
Fan 1: 120mm fan on NH-D14 CPU cooler.
Fan 2: Upper front 120mm fan.
Fan 3: 140mm fan on NH-D14 (driven by the same voltage from the CPU_FAN motherboard header as the 120mm fan).
Fan 4: 140mm exhaust fan.
Fan 5: 140mm bottom intake fan.
GPU Fan: Lower front 120mm fan.
Having the GPU fan sensor is useful, as I've used up all the motherboard's fan sensors. Now the only fan in the system not being monitored is the 80mm one on the hot-swap bay. However, as it's usually off when the system boots, whatever fan sensor I hooked it up to would be ignored by the system as inactive.

Details of software configuration and benchmarks will come later.
 
Just to update you, I figured the monitor issues out.

I did some playing around and set GraphicsEnabler to No in the boot.plist. When I rebooted I noticed that the graphics were at a low res, however the monitor booted right up as it should. So I got to thinking that maybe it doesn't like the low-res to high-res switch. I changed that plist setting back and added this:

Code:
<key>Graphics Mode</key>
<string>2560x1440x32</string>

Now I get the boot loader in high res (although it takes a bit longer to count down), the grey Apple boot screen in high res, and then it goes right into the desktop.

So far so good...


BTW that's a sick build!!
:headbang:
 
GPU software installation
I've skipped a step, which is configuring the software side of things. The first part of this was easy. Although OS X 10.8.2 supports this GPU out of the box, I downloaded and installed the latest drivers from nVidia along with their CUDA driver. The driver installers are picky about the machines they will run on, but my MacPro3,1 system definition was in the default allowed list. If I were using a different definition, I could have edited a text file in the installers and added it to the allowed list.

Then I installed the card and booted. Voila!
I did change the boot options: at first by having the BIOS initialise the "PEG" and then typing "GraphicsEnabler=No PCIRootUID=0" as the boot flags. Once that was working I updated /Extra/org.chameleon.boot.plist to include:
Code:
<key>GraphicsEnabler</key>
<string>No</string>
<key>Kernel Flags</key>
<string>darkwake=0 PCIRootUID=0</string>
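If you'd rather not hand-edit that XML (a single stray character in the file can make the system unbootable), here's a hypothetical little helper using Python's standard plistlib module to apply the same two keys. This is my own sketch, not part of the original process; the path is the one from this post.
Code:
# Hypothetical helper: apply the Chameleon boot keys without hand-editing XML.
# Run with sufficient privileges to write to /Extra. Keeps a backup first.
import plistlib, shutil

PLIST = "/Extra/org.chameleon.boot.plist"   # path from the post
shutil.copy(PLIST, PLIST + ".bak")          # backup before touching anything

with open(PLIST, "rb") as fp:
    cfg = plistlib.load(fp)

cfg["GraphicsEnabler"] = "No"
cfg["Kernel Flags"] = "darkwake=0 PCIRootUID=0"

with open(PLIST, "wb") as fp:
    plistlib.dump(cfg, fp, fmt=plistlib.FMT_XML)  # Chameleon wants XML plists
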

Using one GPU

At this stage I was able to use both DVI ports and the HDMI port on the GTX 650. Everything ran fine, with a couple of drawbacks:
  • I'm not able to use the DVI, HDMI, or mDP ports on the motherboard. A bit of a waste.
  • I'm not able to use the processing power of the HD4000.
That second one might sound a bit vague, so some detail might help:

In my setup I have a 24" monitor (Dell U2410, at 1920x1200) which is the primary and where I do most of my work. To the right of that I have a 23" monitor (LG IPS235) rotated into vertical position, giving me a 1080x1920 screen. This is definitely the "secondary". I have the monitoring windows from Activity Monitor and HWMonitor sitting at the top, and it's where I "park" things I'm not actively interacting with but want to keep an eye on (Skype is one example). When working in Photoshop, the tool panels, histogram, etc. are over on this monitor.

During early testing, at one point I had the Unigine Heaven benchmark running on the main screen with the sound muted and all quality settings at max. I could see in the monitor windows that the GPU was working hard but the CPUs weren't stressed. But an EyeTV window on the second monitor (playing a live DVB-T broadcast) was quite jerky. Now, in reality I'm not likely to be playing games and watching TV at the same time, but it encouraged me to get the second monitor connected to the HD4000 and see how that went.

Using two GPUs

This was a problem for a while: with Chimera 1.11.1 the HD4000 code injection wasn't working with the PEG active. But a friend provided me with a solution. Unfortunately, while it's working, I have to admit I don't understand it: it's "white man magic" and I know that if I change something it will break.
But while I learn more about this, here's the solution that's working for me:

  • In the BIOS set it to initialise the IGFX first, enable the IGFX, and allocate 32M of RAM to it.
    This does mean that the BIOS screens will thereafter be displayed via the HD4000, but if I set it to PEG the graphics don't work quite right in OS X with the rest of this change.
  • Add the following lines to the org.chameleon.boot.plist:
    Code:
    <key>device-properties</key>
    <string>7f0000000100000001000000730000000200000002010c00d041030a000000000101060000027fff04002c0000004100410050004c002c00690067002d0070006c006100740066006f0072006d002d006900640000000800000005006201140000006800640061002d0067006600780000000d0000006f6e626f6172642d31</string>
    This is the "magic" bit that I'm learning more about. But it works.
Note that I expect this is specific to this motherboard and BIOS firmware revision.
EDIT: I've been told this also worked on a GA-Z77-DS3H board!
I did not have to change any other files. Just the BIOS config and that funky device-properties line (and making sure GraphicsEnabler was set to No).
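As a first step towards de-mystifying it, here's a quick decode of that hex blob (my own sketch, not from the original posts). If I've read the layout correctly, it's an EFI device-properties table that injects just two properties for the onboard graphics: AAPL,ig-platform-id = 0x01620005 (a desktop HD4000 platform ID) and hda-gfx = "onboard-1" (an HDMI audio routing hint).
Code:
# Decode the "magic" device-properties hex from this post (my own sketch).
# Property names in these tables are stored as UTF-16LE strings.
import binascii

blob = binascii.unhexlify(
    "7f0000000100000001000000730000000200000002010c00d041030a00000000"
    "0101060000027fff04002c0000004100410050004c002c00690067002d007000"
    "6c006100740066006f0072006d002d0069006400000008000000050062011400"
    "00006800640061002d0067006600780000000d0000006f6e626f6172642d31"
)

key1 = "AAPL,ig-platform-id".encode("utf-16-le")
key2 = "hda-gfx".encode("utf-16-le")
assert key1 in blob and key2 in blob

# The 4 bytes after the first name (name + NUL + 4-byte length field) are the
# platform ID, little-endian:
i = blob.find(key1) + len(key1) + 2 + 4
print(hex(int.from_bytes(blob[i:i+4], "little")))  # -> 0x1620005

# The second property's value is plain ASCII:
print(b"onboard-1" in blob)  # -> True
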
2gpus.jpg

With this running, so far I've been able to have monitors work whichever port I connect them to. I haven't tested them all, but I have:
  • two dual-link DVI ports (capable of handling 27/30" screens with up to 2560x1600) on the GTX 650,
  • a single-link DVI port (up to 1920x1200) on the HD4000,
  • two HDMI ports for 1920x1200: one on each GPU, and
  • two mini-DisplayPort ports each capable of 2560x1600 on the HD4000 (but only one if the HD4000's DVI is used).
CUDA processing is now available to software that needs it, and all monitors have OpenCL/OpenGL GPU acceleration (some faster than others). If I were so inclined I could hook up four 30" 2560x1600 screens and two 24" 1920x1200 screens. But that might be overkill. :)

I currently have my 24" 1920x1200 monitor connected to a GTX 650 DVI port, and my 23" 1080x1920 monitor (it's mounted sideways) connected to the motherboard's DVI. I tested Heaven + EyeTV together again, and now the video doesn't stutter at all. Note that the monitor connected to the GTX 650 is the primary/default display (the one with the OS X menu bar on it).


Photoshop stability?

In an Adobe Knowledgebase FAQ there's advice that if you're having problems with multiple adapters you should "remove the additional cards." So I was cautious. I've been running with GPU acceleration turned on (not my normal configuration in Photoshop), and interestingly it seems to pick up the HD4000's OpenCL acceleration rather than the GTX 650's:
45747-photoshop-cs6-detects-wrong-gpu.jpg

But so far it's been running great. That FAQ also mentions that "Using two video adapters does not enhance Photoshop's performance." However while it makes sense that Photoshop's not using GPU acceleration from multiple cards, there can be some inherent load-sharing in the system (as evidenced by my earlier Heaven+EyeTV example).

Benchmarks

Actually, it turns out that the identification of the HD4000 as the active accelerator may just be cosmetic (perhaps affected by the HD4000 coming up first in the display list?). Check out the Cinebench results:
45748-cinebench.jpg
Despite the software reporting that it's using the HD4000, that 47.5 fps result is right in line with results reported by other GTX 650 users. And while the benchmark was running the GPU clock certainly ramped up and the GPU temperature climbed: it wasn't idle!

I wanted to check the CUDA interface, but right now I don't have software that actively uses it. So CUDA-Z to the rescue:
CUDA-Za.jpg
CUDA-Zb.jpg
CUDA-Zc.jpg

6.2 GB/s transfers between the host and the GPU, across the PCIe v3.0 x8 bus with its theoretical limit of 7.8 GB/s: I'm quite happy with that, actually.
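To put that in context, here's the arithmetic (my own sketch, not from the original post): a PCIe v3.0 x8 link runs 8 GT/s per lane with 128b/130b encoding, giving a ~7.88 GB/s ceiling before protocol overhead, so 6.2 GB/s is roughly 79% of the theoretical maximum.
Code:
# Sanity-check the CUDA-Z transfer rate against the PCIe v3.0 x8 ceiling
# (my own sketch).
lanes = 8
raw_gtps = 8e9                 # 8 GT/s per lane for PCIe v3.0
encoding = 128 / 130           # 128b/130b line coding
theoretical_gbs = lanes * raw_gtps * encoding / 8 / 1e9
measured_gbs = 6.2             # CUDA-Z host<->GPU figure from the post

print(f"theoretical: {theoretical_gbs:.2f} GB/s")            # ~7.88 GB/s
print(f"achieved:    {measured_gbs / theoretical_gbs:.0%}")  # ~79%
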
 
Dude. This was the answer that everybody else had been telling me was not possible. Your white magic just saved my Thunderbolt Display, as I was packing it up and getting ready to take it back. I've got a 24" Gateway monitor on DVI off my GTX 570 card and now the Thunderbolt Display off the HD4000, but like you said, as far as I can tell the GTX is driving both monitors. Haven't gotten to test under full loads yet (Adobe After Effects, Final Cut) but for the first time I'm optimistic, thanks to you!
 
I'm glad it helped someone! That's why we share.

I've edited the earlier post and noted that apparently the same funky device-properties string has worked on a different Z77 motherboard, although we don't know how completely, as I suspect not every port was used.

Hopefully soon we can move this science back out of the "magic" classification. :)
 
Got a 24" gateway monitor DVI off of my gtx 570 card and now the Thunderbolt Display off the HD4000, but like you said, as far as I can tell the gtx is driving both monitors.
The GTX cannot be driving the Thunderbolt Display in that scenario, although software that tries to identify the GPU will probably be reporting the incorrect one 50% of the time. But the HD4000 performance on a TB Display is not too shabby for most things anyway.
You can find discussion of these issues over in the Graphics forum.
 
Hey DBP,

Noob question here

How did you manage to install Mountain Lion? Did you use the Snow Leopard to Mountain Lion method?

My build is similar to yours. I installed Snow Leopard and got through the whole process; it rebooted nicely the first time using iBoot and successfully installed Snow Leopard. I ran the combo update to 10.6.8, then installed MultiBeast with EasyBeast, System Utilities, and Chimera ticked, but after I reboot it loads and gets stuck at the Apple logo screen... FYI I can still reboot with iBoot and get to my desktop.
 
How did you manage to install Mountain Lion? Did you use the Snow Leopard to Mountain Lion method?

See post #9 in this thread where it spells out the process.
UniBeast to directly install Mountain Lion.

Going via Snow Leopard to Mountain Lion is a hack, as Snow Leopard doesn't support the CPU and motherboard properly, so you're limping along until you can download and install ML. Much easier to go direct if you can.
 