How to get the best LAN speeds between PCs and server

Strict9280
Member · Joined Aug 28, 2024 · Messages: 20 · Reaction score: 4
Hi,

I'm looking for some advice on how to get the best speeds on my internal network between my PCs and server. Here is my network diagram.

Basically I'm a bit of a novice when it comes to networking, just trying hard to figure stuff out as I go. My main goal is a fast link between the server and the PCs (see orange dotted border) so I can work on files stored on the server for audio and video editing without performance issues, and transfer files to and from the server quickly.

I don't currently own a switch, which is why the LAN ports on my OPNsense router have been bridged to act as one.
A question I have is: if I were to swap the network cards in the PCs and server from 2.5Gb to 10Gb cards, and then buy a managed 10Gb switch, would all data transfer between the PCs and the server be handled by the switch? Does this sound like a sensible option for best LAN performance? Is it overkill? Would it be costly?

Maybe there's a cheaper route or a better option that would make a big difference over my current setup?
What are the bottlenecks / limiting factors of my current setup?

I'd be grateful for any help... simple answers would probably be best, otherwise you might lose me! o_O

Many Thanks
 
Are you actually suffering from stuttering and buffering while watching videos off your server, or did you just look at the transfer speed and start twitching because it's not going as fast as you'd like? I'm not a network expert, but from my understanding the only benefit you get out of a managed switch in a home setting is adjusting channel speed.

Looking at your diagram, you only have two devices feeding off the server, so what exactly are you doing? Just serving media, or some kind of dev work that offloads massive amounts of data between machines? I would suggest doing some digging on the STH website; lots of device reviews there.

 
I went SFP+; it was kind of a pain to get Windows to recognize it properly.

Jumping from 2.5Gb to 10Gb for your use case probably won't make a huge difference. I suspect you're probably not even fully utilizing your 2.5Gb speeds due to the speed and performance of the hard drives.
 
I completely didn't factor in drive speed. While SATA 3 can do 6Gb/s as a protocol, the 16TB IronWolf tops out at about 240MB/s (roughly 1.9Gb/s) max sustained. Then if you factor in overhead like the drive being in a RAID or network conditions, you will never hit the 2.5Gb network speed. Also, just because it says 2.5Gb, it will never actually hit that speed. Unless you have the money to buy a bunch of Samsung PM1643 30TB SSDs, which can hit 2100MB/s.
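
For anyone following the bits-vs-bytes juggling above, here's the conversion as a quick Python sketch (figures taken from the specs quoted in this thread; decimal units assumed):

```python
# Network gear is rated in bits per second, drives in bytes per second:
# divide by 8 to compare them. Decimal (1000-based) units assumed.

def gbps_to_mb_s(gbps: float) -> float:
    """Gigabits per second -> megabytes per second."""
    return gbps * 1000 / 8

def mb_s_to_gbps(mb_s: float) -> float:
    """Megabytes per second -> gigabits per second."""
    return mb_s * 8 / 1000

print(gbps_to_mb_s(2.5))   # 312.5 -> best case for a 2.5GbE link
print(gbps_to_mb_s(6.0))   # 750.0 -> SATA 3 protocol ceiling, not drive speed
print(mb_s_to_gbps(240))   # 1.92  -> 16TB IronWolf sustained throughput
print(mb_s_to_gbps(2100))  # 16.8  -> PM1643-class SSD, beyond even 10GbE
```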
 
The bridging in OPNsense might have some performance penalty. I would try running iperf3 between the Unraid server and the client system to make sure you are getting 2.5Gbps first.
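
If it helps, here's a minimal sketch of driving that test from the client with Python; it assumes iperf3 is installed on both ends, the server side has been started with `iperf3 -s`, and the IP address is a made-up placeholder:

```python
# Minimal sketch: run an iperf3 throughput test from the client side.
# Assumes iperf3 is installed on both machines and the server was
# started with `iperf3 -s`. The address below is a hypothetical placeholder.
import subprocess

SERVER_IP = "192.168.1.10"  # substitute your Unraid server's address

result = subprocess.run(
    [
        "iperf3",
        "-c", SERVER_IP,  # connect to the listening server
        "-t", "10",       # run for 10 seconds
        "-P", "4",        # four parallel streams to help saturate the link
    ],
    capture_output=True,
    text=True,
)
print(result.stdout)  # check the SUM line for total throughput
```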

Generally I'd agree with the others and say your RAID array probably isn't fast enough. I have a 5-disk setup (IronWolf 10TB) in a QNAP using ZFS (should be similar to Unraid) and only hit about 350MB/s (2.8Gbps). Assuming your array is 3 disks (the ones that say "Shares"), I don't think that's enough to get beyond 2.5Gbps.
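
To put rough numbers on that, here's a back-of-envelope sketch; it assumes an idealized striped layout where throughput scales with disk count, and the 70MB/s per-disk figure is just an assumption chosen to line up with my ~350MB/s result:

```python
# Back-of-envelope: idealized striped-array throughput vs. disk count.
# Assumes reads split evenly across disks; 70MB/s per disk is an assumed
# effective rate picked to match the ~350MB/s 5-disk figure above.
PER_DISK_MB_S = 70

for disks in (3, 5, 8):
    total_mb_s = PER_DISK_MB_S * disks
    gbps = total_mb_s * 8 / 1000
    print(f"{disks} disks -> {total_mb_s} MB/s ({gbps:.2f} Gb/s)")

# 3 disks -> 210 MB/s (1.68 Gb/s)  below a 2.5Gb link
# 5 disks -> 350 MB/s (2.80 Gb/s)  matches the QNAP result
# 8 disks -> 560 MB/s (4.48 Gb/s)  where 10GbE starts to pay off
```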
 
Thanks for everyone's replies; they're appreciated.

The bridging in OPNsense might have some performance penalty. I would try running iperf3 between the Unraid server and the client system to make sure you are getting 2.5Gbps first.

I installed a tool called OpenSpeedTest on the server as an alternative to iperf3; you navigate to the server and port through the browser to get the results. The results were 980Mbps up and down, so no, nowhere near 2.5Gbps.
When transferring files to the server from the Windows client I'm maxing out at about 113MB/s, which makes sense because 980Mbps = 122.5MB/s. Do you think the bridged ports are responsible for that, or is it the drives, as people have suggested?

I have a 5-disk setup (IronWolf 10TB) in a QNAP using ZFS (should be similar to Unraid) and only hit about 350MB/s (2.8Gbps). Assuming your array is 3 disks (the ones that say "Shares"), I don't think that's enough to get beyond 2.5Gbps.

Yes, I have 3 HDDs where my shares reside, all using the XFS filesystem, as follows:

1x 16TB IronWolf ST16000NE000 (max sustained transfer rate = 250MB/s), connected to a SATA 3 port

https://www.seagate.com/www-content/datasheets/pdfs/ironwolf-pro-16tb-DS1914-10-1905GB-en_GB.pdf

2x 4TB Toshiba X300 HDWE140 (max read speed = 244MB/s), connected to SATA 3 ports

https://smarthdd.com/database/TOSHIBA-HDWE140/FP2A/

+ 1x 500GB Samsung 970 EVO Gen 3 NVMe cache drive (btrfs filesystem), where files land before being moved to the HDD array

https://www.samsung.com/us/computin...s/ssd-970-evo-nvme-m-2-1tb-mz-v7e1t0bw/#specs

https://i.postimg.cc/xd3zDGh1/Unraid-Drives.png

So are you saying that my speeds relate to the number of drives in the array, or have I misunderstood?
 
For my own needs, I upgraded the NAS in my sig and my main rig with 10Gb NICs and the speeds went up significantly. That's with 7x 18TB Seagate Exos drives and 2x 2TB 970 Pro NVMes as cache.
 
Cache drives help, but they can only do so much.

As a side note, the general recommendation seems to be to never use write caching on a single drive. Supposedly, due to how write caching works, a failure on the write cache can cause critical faults in the array itself, so single cache drives should be used for read caching only.

The recommendation I always see is that if you're going to enable write caching, always do it on a pair of drives in RAID 1.

Less of a risk if you have proper backups. RAID is not a backup :)
 
So are you saying that my speeds relate to the number of drives in the array, or have I misunderstood?
Yup, more disks generally translates to better performance as the read/write operations are split across more of them.
I installed a tool called OpenSpeedTest on the server as an alternative to iperf3; you navigate to the server and port through the browser to get the results. The results were 980Mbps up and down, so no, nowhere near 2.5Gbps.
Not familiar with this tool, but if it's testing only the direct connection between devices, then something surely isn't performing to its fullest at the moment, since it looks like you have 2.5Gb connections between everything.
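
For what it's worth, 980Mbps and ~113MB/s sit almost exactly at a 1GbE ceiling, which makes me suspect a link somewhere in the bridge has negotiated at 1Gb rather than 2.5Gb. Rough arithmetic (the overhead factor is an assumption):

```python
# Quick sanity check: do 980Mbps / 113MB/s look like a 1GbE ceiling?
link_gbps = 1.0                 # a link that negotiated at 1GbE
raw = link_gbps * 1000 / 8      # 125.0 MB/s theoretical maximum
overhead = 0.92                 # assume ~8% lost to TCP/IP + SMB framing
print(raw, raw * overhead)      # 125.0 115.0 -> right where 113MB/s sits
```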

It would probably be a good idea to get a switch though, something multi-gig (2.5/5/10). I wouldn't upgrade the NICs quite yet, not until you get a full 2.5Gbps working. It only needs to be managed if you also have VLANs configured.
 
Cool. I've ordered a 2.5Gb 8-port unmanaged switch, so I'll unbridge the LAN ports, put that in the circuit, see what results I get over the next few days, and report back.

Thanks for all advice thus far.
 