
QNAP launches 'affordable' 5 port 2.5Gb switch

lowfat

Moderator
Staff member
Joined
Feb 12, 2007
Messages
12,817
Location
Grande Prairie, AB
Yeah, Samba needs a bunch of tuning and is very CPU-bound. I can get pretty good performance from mine (Ubuntu) in optimal conditions, but that pegs the CPU (the drive is a 4-SSD mirror pool) since Samba is effectively single-threaded per connection.

View attachment 29202
What have you done for tuning?

I straight up copied this and my sequential reads are now about 220MB/s. But it should still be higher.

And how did you get CDM to be able to bench a network drive? Mine won't show up.

EDIT: So CDM won't show network drives if it runs as admin...but mine runs as admin, even w/o 'run as admin' checked. :confused:
 
Last edited:

Entz

Well-known member
Joined
Jul 17, 2011
Messages
1,878
Location
Kelowna
CDM is annoying. You have to run it without admin rights; why that matters is beyond me (it should be the opposite). If you have UAC on, just hit No at the pop-up. If you have UAC off (or set to be less annoying), you have to run the app from a low-privilege prompt. The easiest way I found was just a normal command prompt, running the DiskMark64 exe.

As for the tuning, I will grab it when I am home tonight, unless I can figure out how to select text in Terminus on a phone.
 
Last edited:

Entz

Well-known member
Joined
Jul 17, 2011
Messages
1,878
Location
Kelowna
Code:
[global]
security=user
log file = /var/log/samba/log.%m
max log size = 1000
syslog = 0
panic action = /usr/share/samba/panic-action %d
dns proxy = no
netbios name = JEEVES
socket options = TCP_NODELAY IPTOS_LOWDELAY IPTOS_THROUGHPUT
guest ok = Yes
guest only = Yes
read only = No
writeable = Yes
use sendfile = yes
large readwrite = yes
create mask = 0744
client max protocol = SMB3

I must have stopped messing with the buffers and let the OS handle it. Seems to be happy. I will see what else I tuned.
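Before restarting smbd with a config like the one above, it's worth running it through Samba's own validator; a minimal sketch (assuming the standard `/etc/samba/smb.conf` path and a systemd-based distro):

```shell
# testparm parses smb.conf, warns about unknown or misspelled parameters
# (e.g. "ready only" instead of "read only"), and prints effective values.
testparm -s /etc/samba/smb.conf

# If it looks clean, restart smbd to apply the changes.
sudo systemctl restart smbd
```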
 

lowfat

Moderator
Staff member
Joined
Feb 12, 2007
Messages
12,817
Location
Grande Prairie, AB
I had to re-enable UAC to run CDM w/o admin. Even a non-admin PS / CMD prompt wouldn't work.

I played w/
Code:
aio read size = 65536
aio write size = 65536
and it seems to have made the biggest difference.
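For anyone following along, a rough sketch of where those aio settings live; the `[tank]` share name and path are made-up examples, and on recent Samba releases async I/O is already on by default, so this mainly matters on older versions:

```shell
# Append the async I/O tuning to a share definition in smb.conf.
# Requests larger than the aio size are serviced asynchronously.
cat <<'EOF' | sudo tee -a /etc/samba/smb.conf
[tank]
    path = /tank/share
    aio read size = 65536
    aio write size = 65536
EOF

# Reload the config without dropping existing client connections.
sudo smbcontrol smbd reload-config
```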

1GB will fit inside the ARC, so this will be entirely read/written from RAM. Not too bad for a 10-year-old i3-2100.
1.PNG


16GB should be larger than what fits in RAM. But I guess it is still using RAM as a buffer, as these are still too high; shouldn't be more than 500/150 at best for sequentials. Whatever, not going to complain.
2.PNG


EDIT: Real world numbers are WAY different. 12GB file doing about 220MB/s reads from server. And about 120MB/s writing to server. I guess them's the breaks.
 
Last edited:

Entz

Well-known member
Joined
Jul 17, 2011
Messages
1,878
Location
Kelowna
An SSD write cache may help, but the top numbers look great, far closer to what I would expect as a best case. I usually like to test with a RAM drive, as that eliminates ZFS and its own tuning. If you have a bit of free RAM, you could make a tmpfs one quickly, at least to confirm the aio write tuning.
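The tmpfs idea boils down to something like this (size and mount point are examples; pick a size that fits your free RAM):

```shell
# Create an 8 GB RAM-backed filesystem to benchmark the network path
# with ZFS taken out of the loop entirely.
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=8g tmpfs /mnt/ramdisk

# Point a Samba share (e.g. a [ramdisk] section) at /mnt/ramdisk,
# run the benchmark against it, then tear it down:
sudo umount /mnt/ramdisk
```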
 

lowfat

Moderator
Staff member
Joined
Feb 12, 2007
Messages
12,817
Location
Grande Prairie, AB
A ssd write cache may help, but the top numbers look great. far more closer I would expect as a best case.
There really is no such thing as an SSD write cache w/ ZFS. What people call a write cache, an SLOG, will do absolutely nothing unless you are doing synchronous writes. It's really only beneficial if you are running an OS/applications off the pool. I force async writes, so writes are cached in RAM and get written to disk every 5s. If I had an SLOG, it would be idle 100% of the time. I get like 800MB/s write speeds for about 2s, then it drops down to normal speed.
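Forcing async writes the way described here is just the dataset's `sync` property; a sketch, with `tank/share` standing in for whatever the actual pool/dataset is called:

```shell
# sync=disabled treats every write as asynchronous: data lands in RAM
# and is flushed to disk with each transaction group (~5 s by default).
# An SLOG is bypassed entirely in this mode.
sudo zfs set sync=disabled tank/share

# Verify the effective setting.
zfs get sync tank/share
```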

Then there is the L2 ARC 'read cache'. I've tried this a few times and I've never seen it actually ever be used in a home server scenario. The SSD just ends up being wasted.
 

Entz

Well-known member
Joined
Jul 17, 2011
Messages
1,878
Location
Kelowna
Indeed, that is a very good point. I have an SLOG for both my pools as VMware does sync writes, but that is not a normal case.

Funny enough, on the L2ARC: according to arcstats, despite me having one on my slow pool, it has never been used.
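For reference, the arcstats in question are exposed directly by the kernel on Linux OpenZFS, so you can check L2ARC usage without any extra tools:

```shell
# Zero l2_hits alongside a non-zero l2_size means the L2ARC device
# is allocated but never actually serving reads.
grep -E '^l2_(hits|misses|size) ' /proc/spl/kstat/zfs/arcstats
```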
 

lowfat

Moderator
Staff member
Joined
Feb 12, 2007
Messages
12,817
Location
Grande Prairie, AB
Indeed, that is a very good point. I have an SLOG for both my pools as VMware does sync writes, but that is not a normal case.

Funny enough, on the L2ARC: according to arcstats, despite me having one on my slow pool, it has never been used.
Are you using NFS or iSCSI for the ZFS share? Sync write commands are ignored by iSCSI. You would need to force sync=always, which would obviously make all writes synchronous.
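Forcing it is the mirror image of the async trick above; a sketch, with `tank/iscsi` as a placeholder dataset name:

```shell
# sync=always makes every write synchronous, so the SLOG is exercised
# even when the initiator (e.g. iSCSI) never issues sync commands itself.
sudo zfs set sync=always tank/iscsi
zfs get sync tank/iscsi
```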
 

Entz

Well-known member
Joined
Jul 17, 2011
Messages
1,878
Location
Kelowna
Are you actually using any 2.5G devices? A few of us over @ servethehome are all having similar performance issues w/ 2.5GbE adapters.

2.5G USB-C adapter on PC, IPolex SFP+ module on a Mikrotik switch, 10G DAC ConnectX-3 on Ubuntu Server 19.10.
Late reply, but I finally got a chance to play with the AQN-107 and how it plays with the Netgear switches, namely the MS510TX.

On an aside, Ubuntu 20.04 LTS (server version) works flawlessly out of the box, no compiling of drivers needed. So does Windows; it pulled the driver automatically while I was still on my 1G NIC.

VM -> VM Host -> (SFP+) -> MS510TX -> (10G RJ45) -> MS510TXPP -> (2.5G/5G RJ45) -> PC
(Yes, lots of hops, but I am lazy and I wanted to do an end-to-end test lol)

2.5G
Code:
Connecting to host 192.168.3.93, port 5201
[  4] local 192.168.3.252 port 46076 connected to 192.168.3.93 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   281 MBytes  2.36 Gbits/sec   94    427 KBytes
[  4]   1.00-2.00   sec   280 MBytes  2.35 Gbits/sec   79    296 KBytes
[  4]   2.00-3.00   sec   280 MBytes  2.35 Gbits/sec   61    373 KBytes
[  4]   3.00-4.00   sec   280 MBytes  2.35 Gbits/sec   41    383 KBytes
[  4]   4.00-5.00   sec   280 MBytes  2.35 Gbits/sec   72    294 KBytes
[  4]   5.00-6.00   sec   280 MBytes  2.35 Gbits/sec   33    379 KBytes
[  4]   6.00-7.00   sec   280 MBytes  2.35 Gbits/sec   61    380 KBytes
[  4]   7.00-8.00   sec   280 MBytes  2.35 Gbits/sec   60    291 KBytes
[  4]   8.00-9.00   sec   276 MBytes  2.32 Gbits/sec   73    385 KBytes
[  4]   9.00-10.00  sec   280 MBytes  2.35 Gbits/sec   34    378 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  2.73 GBytes  2.35 Gbits/sec  608             sender
[  4]   0.00-10.00  sec  2.73 GBytes  2.35 Gbits/sec                  receiver

Right where we expect it.

5G
Code:
Connecting to host 192.168.3.95, port 5201
[  4] local 192.168.3.252 port 37572 connected to 192.168.3.95 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec   529 MBytes  4.43 Gbits/sec  1112    389 KBytes
[  4]   1.00-2.00   sec   557 MBytes  4.67 Gbits/sec  1188    379 KBytes
[  4]   2.00-3.00   sec   546 MBytes  4.58 Gbits/sec  1056    385 KBytes
[  4]   3.00-4.00   sec   557 MBytes  4.67 Gbits/sec  1011    426 KBytes
[  4]   4.00-5.00   sec   552 MBytes  4.63 Gbits/sec  1175    433 KBytes
[  4]   5.00-6.00   sec   524 MBytes  4.40 Gbits/sec  976    395 KBytes
[  4]   6.00-7.00   sec   557 MBytes  4.67 Gbits/sec  1311    375 KBytes
[  4]   7.00-8.00   sec   549 MBytes  4.60 Gbits/sec  1255    399 KBytes
[  4]   8.00-9.00   sec   509 MBytes  4.27 Gbits/sec  1142    590 KBytes
[  4]   9.00-10.00  sec   529 MBytes  4.44 Gbits/sec  1247    395 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  5.28 GBytes  4.54 Gbits/sec  11473             sender
[  4]   0.00-10.00  sec  5.28 GBytes  4.53 Gbits/sec                  receiver
A bit slower than expected; should be ~4.75.

I really do think the # of Retr is tied to having a faster network pushing to a slower one. The same 5G test in reverse:

Code:
Connecting to host 192.168.3.95, port 5201
Reverse mode, remote host 192.168.3.95 is sending
[  4] local 192.168.3.252 port 37582 connected to 192.168.3.95 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   561 MBytes  4.70 Gbits/sec
[  4]   1.00-2.00   sec   561 MBytes  4.71 Gbits/sec
[  4]   2.00-3.00   sec   561 MBytes  4.71 Gbits/sec
[  4]   3.00-4.00   sec   561 MBytes  4.71 Gbits/sec
[  4]   4.00-5.00   sec   561 MBytes  4.71 Gbits/sec
[  4]   5.00-6.00   sec   561 MBytes  4.71 Gbits/sec
[  4]   6.00-7.00   sec   561 MBytes  4.71 Gbits/sec
[  4]   7.00-8.00   sec   561 MBytes  4.71 Gbits/sec
[  4]   8.00-9.00   sec   561 MBytes  4.71 Gbits/sec
[  4]   9.00-10.00  sec   561 MBytes  4.71 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  5.48 GBytes  4.71 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  5.48 GBytes  4.71 Gbits/sec                  receiver

Is perfect.

I even get them on 1G to the onboard Intel NIC:

Code:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.10 GBytes   943 Mbits/sec  111             sender
[  4]   0.00-10.00  sec  1.10 GBytes   942 Mbits/sec                  receiver

Anyways, I can confirm that the MS510s are indeed working properly in Multigig mode.
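For anyone wanting to reproduce these runs, the iperf3 invocations behind them are roughly (IPs taken from the output above):

```shell
iperf3 -s                          # on the server end (192.168.3.93 / .95)
iperf3 -c 192.168.3.93 -t 10       # forward test: client sends to server
iperf3 -c 192.168.3.95 -t 10 -R    # -R reverses direction: server sends
```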
 
Last edited:
