Intel Network adapters on OS X: Small Tree drivers

To be honest, I've never needed to revert, so I don't know how to flash it back using the backup file. However, what you can do is take another backup of the EEPROM in its present form and then use a hex editor to compare it against the original. That'll tell you which bits are different, and thus what you'll need to manually revert. It's a bit more of a pain, but not much should have changed anyway, so it shouldn't be a big deal.
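If you want to do that comparison from Linux, something along these lines should work (the interface name and file names are just examples; adjust for your card):

Code:
# Dump the EEPROM as it is now (ethtool -e reads the NIC's EEPROM):
sudo ethtool -e eth0 raw on > eeprom_current.bin
# Byte-by-byte diff against the backup you made before flashing:
cmp -l eeprom_backup.bin eeprom_current.bin
# Or eyeball it in hex:
diff <(xxd eeprom_backup.bin) <(xxd eeprom_current.bin)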

Before you do that though, answer me this: Had your card previously worked on Catalina or any prior MacOS, or is Big Sur the first OS you've tried it on?

If the latter, then as a quick sanity check, run this on the card in Linux:

Code:
lspci -nn -vvv | grep Ethernet

Your card should identify as something like this:
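(Illustrative output for an Intel 82599-based X520; the slot address, description, and IDs will differ with your exact card:)

Code:
01:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
	Subsystem: Intel Corporation Ethernet Server Adapter X520-1 [xxxx:xxxx]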


The [xxxx:xxxx] part is what we've been manipulating. However, some cards (particularly Dell) might have a different DEV-ID as well. It's rare, but I've run into a few. Doesn't hurt to make sure.

Thank you for all your help, and I apologize for the late reply.

You had sent the code, but before I saw it I had figured it out on my own at around the same time. I can verify that what you sent is correct, and it did work with the Sonnet driver. The whole reason for this was that I was having some network issues when I upgraded a new Hackintosh to Big Sur. I figured out that it had nothing to do with Big Sur, my network, or this Intel NIC; the issue lies with this particular Hackintosh.

In the end, I flashed the NIC back so it could use the Small Tree driver. I haven't noticed any difference, and since you're leaning towards the Small Tree driver potentially being slightly newer (or Small Tree even being the writer of the code that Sonnet is using in the first place), it made more sense to stick with Small Tree.

We'll see how it goes with whatever macOS version follows Big Sur. I would like to keep using these 10G cards, but at the same time I'm upgrading the network to 100G now.
 

100G?!?! Wow, I'd love to hear about your 100G MacOS experiences in a separate thread. You using ATTO, or Chelsio? Are you tuning, and if so, what parameters?

And here I thought I was hot stuff rocking 40G.
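(By "tuning" I mostly mean the socket-buffer sysctls on the macOS side; the values below are illustrative only, not a recommendation:)

Code:
# Common macOS knobs people bump for 10G+ links (example values):
sudo sysctl -w kern.ipc.maxsockbuf=8388608
sudo sysctl -w net.inet.tcp.sendspace=4194304
sudo sysctl -w net.inet.tcp.recvspace=4194304
sudo sysctl -w net.inet.tcp.delayed_ack=0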
 
I'm going with Chelsio since I run a TrueNAS and I've been running a 10G Chelsio NIC in it for years without issue. I've had Chelsio in the TrueNAS and an Intel NIC using the Small Tree driver in my hacks. This time I'm going with Chelsio for both.
 

Your situation might be similar to mine. I've got a pair of 528TB (raw) TrueNAS/FreeNAS arrays serving a small fleet of video edit workstations.
 
Very similar. I work with a lot of audio/video people and live with a producer. I also recently purchased a Mac Pro 7,1, and these are the numbers I'm pulling via the same Intel x520-DA1 10G card I've been using for years (the hacks get the same speed, but it's nice to know the Intel 10G was plug 'n' play), along with the Chelsio card on the other end (which only cost $25 years ago). This benchmark is to a spindle pool with no added NVMe for cache, etc.

Straight-up "slow" spindle drives, and they're all only 5400 RPM.
 

Attachments: Screen Shot 2021-03-30 at 12.06.22 PM.png (289.2 KB)

Hmm. If you're on spinning rust (and 5400rpm at that), I have some doubts that a 100G card will get you much unless you're running a giant stripe of them and tune the crap out of the cards at both ends. I've got 44 7200rpm 12TB drives in my largest array and never once have I come close to maxing out the performance of a 40G card.

100G isn't worthwhile unless you're building arrays out of NVMe/U.2 drives.
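Rough back-of-the-envelope numbers (the per-drive figure is an assumption, best-case sequential, ignoring ZFS/RAID overhead):

Code:
# ~120 MB/s per 5400rpm drive -> 8-wide stripe ~ 1 GB/s  (~8 Gbit/s)
# 10GbE line rate  =  1.25 GB/s
# 40GbE line rate  =  5 GB/s   -> ~40+ spindles streaming flat out
# 100GbE line rate = 12.5 GB/s -> realistically NVMe/U.2 territory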
 
I was lucky to find out my Supermicro X9 motherboard was capable of x4x4x4x4 bifurcation (not all of them are), so I ordered two 4-port NVMe adapters and created an NVMe pool. I'm excited to see what numbers it will pull with the 100G cards. I'm guessing my backplane will probably be my next bottleneck, so I may have to upgrade that.
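A quick way to confirm the bifurcation took is to check that each NVMe drive on the quad adapter enumerates as its own PCIe device, e.g. from a Linux shell:

Code:
lspci | grep -i 'non-volatile memory controller'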
 

You're going down one hell of a rabbit hole, my friend. Good luck.
 
What do you mean? I already have it running smooth. The 100G cards will be here tomorrow.
 

I'm sure you'll get the cards functioning at least as well as a 10G card, and yeah, on paper a handful of NVMe SSDs and a 100G card sounds great, but you'll be going down a rabbit hole of sussing out weak points, starting with the bus speed. Does your mobo support PCIe 3.0, not 2.0? Does it have a 16x slot for that 100G card, and is that slot truly 16x, or does it become 8x if slot 2 is populated? What about the 16x slot you intended to install the four SSDs into?
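If you want to see what a slot actually negotiated rather than what the manual claims, lspci on Linux will show both the card's capability and the live link state (the slot address here is an example; pull yours from a plain lspci first):

Code:
sudo lspci -s 01:00.0 -vv | grep -E 'LnkCap|LnkSta'
# LnkCap = what the card/slot can do; LnkSta = what was actually negotiated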

And that's before we start getting into the capabilities of a legacy Xeon architecture. I'm running a Xeon E5-1650-v3 on my FreeNAS/TrueNAS server and it gets pushed pretty hard, and I have yet to ever saturate its 40GbE card, even though on paper I should be able to.

You're only as fast as your weakest link, and the faster you go, the more weak points get exposed.
 