xentr_theme_editor


How to get the best LAN speeds between PCs and server

For me it was actually far cheaper to buy used 10GbE stuff off eBay than to buy 2.5GbE NICs and switches everywhere. 🤷‍♂️

But before all that, I'm kinda surprised no one asked this: jumbo packets enabled / set to 9000? Is that even a thing anymore? My 1Gb NIC on one PC doesn't show the option anywhere, oddly enough. But the 10Gb NIC on the same PC did have the option for it, so I set it to 9000 there (same jumbo setting on both ends, obviously).
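If you want to double-check what MTU actually stuck on each interface, here's a rough sketch using the third-party psutil package (you need a reasonably recent version for the mtu field; this isn't anything official, just a quick check):

import psutil

for name, stats in psutil.net_if_stats().items():
    # stats.mtu needs psutil >= 5.7.3; older versions won't have the field
    print(f"{name}: mtu={stats.mtu}  up={stats.isup}")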

Noticeable? Actually yes!
My setup: a "source" PC, directly connected to my file server via a 10Gb NIC (using static IPs on both sides, of course). The source PC has just a single 2.5" SSD, and the destination is a 6-disk RAID 6.
Speeds:
@1Gb between both: ~120 MB/s. So it was hitting the 1Gb limit for sure.
@10Gb (with jumbo set to 9000): I've seen around 270-280 MB/s. So just shy of the ~310 MB/s theoretical limit of 2.5Gbps NICs (rough math below).
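For anyone wondering where those numbers come from, the conversion is just bits to bytes with a bit knocked off for protocol overhead; a quick sketch (the ~94% efficiency figure is only a rough assumption):

def usable_mb_per_s(link_gbps, efficiency=0.94):
    # line rate in bits/s -> bytes/s, minus a rough allowance for
    # Ethernet/IP/TCP overhead (the 94% figure is only an assumption)
    return link_gbps * 1e9 / 8 * efficiency / 1e6

for gbps in (1, 2.5, 10):
    print(f"{gbps} Gbps ≈ {usable_mb_per_s(gbps):.0f} MB/s usable")
# prints roughly 118, 294 and 1175 MB/s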

Again, obviously 10Gb stuff is total overkill right now. But it was far cheaper than 2.5Gb gear (by a considerable amount)! Also, I'm actually already not far from hitting the limit of 2.5Gbps, so I've got the headroom for speed upgrades in the future too. Once my source PC is upgraded and running an NVMe drive, and once I add a few more storage drives to the RAID 6 (I think the write speeds do continue to increase with RAID 6 as you add disks, albeit slowly, until you max out the RAID card; gotta check its max potential speed), I might actually surpass the limit of 2.5Gbps! We'll see... someday.
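On the write-scaling point, the usual back-of-the-envelope for big sequential RAID 6 writes is roughly (N - 2) disks' worth of throughput until the controller tops out; a rough sketch with made-up per-disk and controller numbers:

def raid6_seq_write_mb_s(num_disks, per_disk_mb_s=200.0, controller_cap_mb_s=1000.0):
    # rough rule of thumb: (N - 2) data disks' worth of sequential write,
    # capped by whatever the RAID card can push (both numbers are made up)
    if num_disks < 4:
        raise ValueError("RAID 6 needs at least 4 disks")
    return min((num_disks - 2) * per_disk_mb_s, controller_cap_mb_s)

for n in (6, 8, 10):
    print(f"{n} disks: ~{raid6_seq_write_mb_s(n):.0f} MB/s sequential write")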

🤷‍♂️ Hope it helps 👍
 
xentr_thread_starter
jumbo packets enabled / set to 9000?

Funnily enough, I tried this earlier to set the MTU value after watching a video on YT about using iperf, but alas, no difference in speed (tested with iperf3 this time round between server and client). Maxing out at around 1Gbps. Fairly sure it's either a setting in OPNsense or because I've bridged the LAN ports there.
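For what it's worth, if anyone wants a dead-simple sanity check alongside iperf3, here's a bare-bones sketch of the same idea with plain TCP sockets (port 5201 and the 10-second duration are arbitrary picks): run it as "server" on one box and "client <ip>" on the other.

import socket
import sys
import time

PORT = 5201              # arbitrary; just has to match on both ends
CHUNK = bytes(64 * 1024)
DURATION = 10            # seconds of sending

def server():
    # receive everything the client sends and report the average rate
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        total, start = 0, time.time()
        with conn:
            while True:
                data = conn.recv(1024 * 1024)
                if not data:
                    break
                total += len(data)
        elapsed = time.time() - start
        print(f"{total / 1e6:.0f} MB from {addr[0]} in {elapsed:.1f}s "
              f"= {total * 8 / elapsed / 1e9:.2f} Gbps")

def client(host):
    # blast zero-filled chunks at the server for DURATION seconds
    with socket.create_connection((host, PORT)) as conn:
        end = time.time() + DURATION
        while time.time() < end:
            conn.sendall(CHUNK)

if __name__ == "__main__":
    client(sys.argv[2]) if sys.argv[1] == "client" else server()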
 
Funnily enough, I tried this earlier to set the MTU value after watching a video on YT about using iperf, but alas, no difference in speed (tested with iperf3 this time round between server and client). Maxing out at around 1Gbps. Fairly sure it's either a setting in OPNsense or because I've bridged the LAN ports there.

Don't know about iperf stuff, so sorry, can't help ya there. Switches have to support jumbo packets as well though, and it isn't always enabled by default. Besides which, I'm not even sure if it would be supported using bridged connections and/or LAGG. I'm just not enough of a network guy to know that stuff 🤷‍♂️
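That said, one way to sanity-check whether the whole path (switch included) actually passes jumbo frames is to send a single large UDP datagram with the don't-fragment flag set and see whether it arrives on the other box. Rough Linux-only sketch (the socket option values are the standard Linux constants, the port number is arbitrary): run "recv" on one end, "send <ip>" on the other.

import socket
import sys

PORT = 9999                 # arbitrary
PAYLOAD = bytes(8900)       # well above 1500, still under 9000 with headers
IP_MTU_DISCOVER = 10        # Linux socket option constants
IP_PMTUDISC_DO = 2          # "always set DF, never fragment"

def send(host):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    s.sendto(PAYLOAD, (host, PORT))   # fails with "Message too long" if the local MTU is too small
    print(f"sent {len(PAYLOAD)} bytes unfragmented")

def recv():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", PORT))
    data, addr = s.recvfrom(65535)
    print(f"got {len(data)} bytes from {addr[0]} - jumbo path looks OK")

if __name__ == "__main__":
    send(sys.argv[2]) if sys.argv[1] == "send" else recv()

If the send succeeds but nothing ever arrives, something in between (the switch, a bridge, etc.) is dropping the oversized frames.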
 
xentr_thread_starter
Well, turns out that led me to a solution, so thanks!

Under Network adapter > Properties > Configure > Advanced > Speed & Duplex, it was set to 1Gbps Full Duplex. Set that to 2.5Gbps Full Duplex, and now the results are as follows:

https://i.postimg.cc/wvf7WpTQ/OST-results.png

A step in the right direction! Should I expect to see 2.5Gbps up as well as down?
 
Linux really does handle above 1G so much better. To get SFP+ stuff working I also had to manually set the speed; I should have thought to suggest that earlier. Auto-negotiation never quite worked right for me, but it should work better for copper, and you should be able to get away with it.
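On the Linux side you can read back what the link actually negotiated straight out of sysfs; a tiny sketch (the interface name is just a placeholder, and the link has to be up for these files to read cleanly):

from pathlib import Path

def link_status(ifname):
    # speed is reported in Mb/s (e.g. 2500), duplex as "full" or "half"
    base = Path("/sys/class/net") / ifname
    return int((base / "speed").read_text()), (base / "duplex").read_text().strip()

print(link_status("enp1s0"))   # placeholder interface name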
 
xentr_thread_starter
Is auto-negotiation not working?
Ah right, sorry, I understand what that is now. I had it set to 1Gbps Full Duplex, so I've now changed that to Auto Negotiation as suggested.

Any ideas why I'm not getting the full 2.5Gb upload? My new switch has just arrived, so I'll connect that up and unbridge the LAN ports to see what effect it has. What kind of speeds are realistic when transferring to/from the server if the shares I'm transferring to are using the Samsung NVMe drive as cache?

Once (or if) I am getting 2.5Gb upload and download, what would be the next step to find any bottlenecks?
 
Keep an eye on the CPU usage at all points (OPNsense, server, client) when you're running the test.
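Something as simple as this, run on each box during the transfer, will show whether a single core is pegging (assumes the psutil package again):

import psutil

while True:
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    # one line a second; look for a single core stuck near 100% mid-transfer
    print(" ".join(f"{p:5.1f}" for p in per_core), f"| max {max(per_core):.0f}%")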

I might question the "CAT8" cables you used too; a lot of the stuff out there really isn't certified to CAT8 standards. A high-quality CAT6 cable is more than sufficient for 10Gbps, and I'd be looking for pure copper cables too, not "CCA" (copper-clad aluminum).
 
If the cache drive is not either overheating or fully saturated, then you should see transfer rates over 1Gbps; even using 10G optical, I never broke 2Gbps on writes in my testing with my unit. Once it is either saturated or overheating, the transfer rates will crash hard. It took me about 10-30 seconds to overheat the RAID 1 cache drives in my testing when they didn't have heatsinks.
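If you want to watch for that while a transfer runs, psutil can usually read the drive temperature sensors on Linux; rough sketch (the "nvme" sensor name is platform-dependent, so adjust to whatever shows up on your box):

import time
import psutil

while True:
    temps = psutil.sensors_temperatures()      # Linux/BSD only
    for entry in temps.get("nvme", []):        # sensor name varies by platform
        print(f"{entry.label or 'nvme'}: {entry.current:.0f}°C")
    time.sleep(5)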

And +1 to what @JD mentioned. Unofficial Cat7/Cat8 is not any better than legitimate pure copper Cat6 in this situation.

Edit: My write speeds were close to what you posted, but my cache drives are M.2 SATA; I would have expected an NVMe cache drive to be a bit faster. I suspect I can squeeze more out of my setup, but I feel like I might be missing a setting still.
 