
QNAP TS-H973AX

JD

Moderator
Staff member
Joined
Jul 16, 2007
Messages
10,477
Location
Toronto, ON
Did you actually find the L2 ARC being hit at all? L2 ARC should only be useful if your working set / hot data is larger than your L1 ARC. The few times I used an L2 ARC, it was useless.
I'm inclined to say no, but there didn't seem to be any statistics for it.
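If the QuTS build exposes the standard ZFS kstats (just an assumption on my part), the L2ARC hit/miss counters should at least be readable over SSH with something like:

Code:
# L2ARC counters live in the standard arcstats kstat on ZFS-on-Linux builds
# (path is an assumption for QuTS hero; adjust if it differs)
grep '^l2_' /proc/spl/kstat/zfs/arcstats
# l2_hits / l2_misses show whether the cache device is ever actually read from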

The 983 DCT was $150USD, so it's more of a "why not" since one of the QNAP reps on Reddit suggested them as the "cheap" option.
 

Entz

Well-known member
Joined
Jul 17, 2011
Messages
1,814
Location
Kelowna
Support got mine up and running. Not sure what they did but it’s fixed lol.

You can set up a read cache drive no problem with a single disk, so you're fine there. However, you cannot create a SLOG (+ cache, it does both) without using 2 disks. 🙁 Which just won't do.

Fortunately you can do it from the command line ;) and it seems to survive reboots just fine. You just lose the fancy UI stats, but that's a worthwhile trade-off until maybe the 4801X M.2s get cheap.

So if the UI is giving you hassles, ssh your problems away 😆
 

JD

Moderator
Staff member
Joined
Jul 16, 2007
Messages
10,477
Location
Toronto, ON
@Entz, what would the SSH commands be out of curiosity?

I got the 983 DCT today and set it as read cache for all I/O; my iSCSI yields this (CDM on Server 2019):
Code:
[Read]
Sequential 1MiB (Q=  8, T= 1):  1039.183 MB/s [    991.0 IOPS] <  8059.31 us>
Sequential 1MiB (Q=  1, T= 1):   545.038 MB/s [    519.8 IOPS] <  1922.05 us>
    Random 4KiB (Q= 32, T=16):   358.173 MB/s [  87444.6 IOPS] <  5844.23 us>
    Random 4KiB (Q=  1, T= 1):    10.426 MB/s [   2545.4 IOPS] <   392.13 us>

[Write]
Sequential 1MiB (Q=  8, T= 1):   361.928 MB/s [    345.2 IOPS] < 16911.49 us>
Sequential 1MiB (Q=  1, T= 1):   336.372 MB/s [    320.8 IOPS] <  3103.18 us>
    Random 4KiB (Q= 32, T=16):   141.592 MB/s [  34568.4 IOPS] < 12855.04 us>
    Random 4KiB (Q=  1, T= 1):     6.560 MB/s [   1601.6 IOPS] <   623.51 us>

And before with no cache:
Code:
[Read]
Sequential 1MiB (Q=  8, T= 1):   911.912 MB/s [    869.7 IOPS] <  9186.11 us>
Sequential 1MiB (Q=  1, T= 1):   326.119 MB/s [    311.0 IOPS] <  3213.95 us>
    Random 4KiB (Q= 32, T=16):   315.068 MB/s [  76920.9 IOPS] <  6645.32 us>
    Random 4KiB (Q=  1, T= 1):    10.560 MB/s [   2578.1 IOPS] <   387.20 us>

[Write]
Sequential 1MiB (Q=  8, T= 1):   199.845 MB/s [    190.6 IOPS] < 17397.04 us>
Sequential 1MiB (Q=  1, T= 1):   187.465 MB/s [    178.8 IOPS] <  5493.63 us>
    Random 4KiB (Q= 32, T=16):    36.539 MB/s [   8920.7 IOPS] < 56874.41 us>
    Random 4KiB (Q=  1, T= 1):     5.108 MB/s [   1247.1 IOPS] <   798.70 us>

Not really sure why the write speeds appear to be better now though.
 

Entz

Well-known member
Joined
Jul 17, 2011
Messages
1,814
Location
Kelowna
My guess is it is using it as a write cache as well, or you just happened to have more memory free at the time. ZFS will cache whatever it can.
Normal disclaimer: a SLOG is really only useful if you're doing sync writes, and your normal cache is fine otherwise, aside from the memory pressure from L2ARC as mentioned (assuming the cache is actually L2ARC and not something else).

Not sure if you have a single pool or multiple, but it goes along the lines of:
- Use fdisk to create as many partitions as you need for your pools. If you have 2 pools you want a SLOG for, create 2 partitions. They don't need to be large: a SLOG only needs to hold about 5 seconds of writes, so 8-10GB is fine for most setups. I usually do 32GB just because the disk is massive compared to what's needed.

If the disk was installed when the NAS was initialized it will already have like 6 partitions on it because reasons. You can use any one of those that is big enough as well.

The QuTS "OS" doesn't have /dev/disk/by-id, which is annoying, so you will likely need to do a fdisk -l | more to find the drive.

The name will be along the lines of /dev/nvme1n1 or /dev/nvme0n1, etc. The partition path is that plus p1, p2, p3, etc., i.e. /dev/nvme1n1p1.
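For example (device and partition names here are just placeholders; double-check with fdisk -l before writing anything):

Code:
# list NVMe devices to find the SSD you want to carve up (names below are examples)
fdisk -l | grep -i nvme

# create a small partition for the SLOG; fdisk is interactive, so inside it:
#   n  (new partition), accept the defaults for start, +32G for the size
#   w  (write the table and exit)
fdisk /dev/nvme1n1

# the new partition then shows up as /dev/nvme1n1p1 (p2, p3, ... for extras)
fdisk -l /dev/nvme1n1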

- The command you are looking for is zpool add [pool name] log [device name].
You can use zpool list to get the pool names, or my favorite, zpool status -v (which shows the layout, scrub info, etc.).

e.g. zpool add zpool2 log /dev/nvme1n1p1

You should then see it via zpool status -v

You can remove a log at any time with zpool remove [pool] /dev/nvme1n1p1.
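Putting that together, the whole cycle looks something like this (pool and device names are just the examples from above):

Code:
# attach the partition to the pool as a SLOG (separate log device)
zpool add zpool2 log /dev/nvme1n1p1

# confirm it shows up under a "logs" section for that pool
zpool status -v zpool2

# remove it again later if needed (e.g. before adding or moving drives)
zpool remove zpool2 /dev/nvme1n1p1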


One command I love is zpool iostat -v 1, which shows the actual pool reads and writes refreshed every second, so you can see how it's hitting the disks.

The QNAP way of doing sync is broken, or perhaps VMware is. Even if an NFS share is set to sync, it just ignores it and does an async write, which defeats the purpose, so if you really want to be sure, force the "ZIL synchronized I/O Mode" to always. This could be because I am doing things outside of the UI.
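Since you're in the shell anyway, the UI's "ZIL synchronized I/O Mode = always" should map to the standard ZFS sync property (assuming QuTS doesn't override it behind the scenes):

Code:
# check the current setting on the pool's root dataset
zfs get sync zpool2

# force synchronous writes for everything under the pool (inherited by children)
zfs set sync=always zpool2

# or target just the share's dataset (dataset name is only an example)
zfs set sync=always zpool2/vmstore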

Another thing to note: since it doesn't have /dev/disk/by-id, adding a drive can cause the log to break, as the device IDs can change (nvme1 to nvme0, etc.), so before you do that I suggest removing the logs, making the change, then adding them back.

QNAP gets around this by faking /dev/disk/by-id via the /dev/qzfs/enc_#/ folder, which holds name-to-device mappings, but you can't control those outside of the UI.
 

Entz

Well-known member
Joined
Jul 17, 2011
Messages
1,814
Location
Kelowna
JD, with your cache drive can you do a zpool status -v? I am curious if the "Cache Only" you add via the UI is actually L2ARC or if it's some form of their existing caching tech.
 

JD

Moderator
Staff member
Joined
Jul 16, 2007
Messages
10,477
Location
Toronto, ON
JD, with your cache drive can you do a zpool status -v? I am curious if the "Cache Only" you add via the UI is actually L2ARC or if it's some form of their existing caching tech.
Yeah, I figured it was zpool commands in my brief look last night. From what I can deduce...
zpool1 is the "system" pool that is 2x480GB IronWolf 110's.
zpool2 is my "storage" pool of 5x10TB "RAID6" (though I guess actually RAID-Z2).
zpool256 looks like maybe it's the built-in DOM? It's only 512MB.

I would have expected to see the 983 DCT as "cache" under zpool2, right? So it must be using QNAP's caching rather than ZFS? Might explain why the writes seem better too.

Not sure what "features" are missing on the pools either...

Code:
zpool list
NAME       SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zpool1     413G  38.5G   375G     9%  1.17x  ONLINE  -
zpool2    45.2T  14.7T  30.5T    32%  1.00x  ONLINE  -
zpool256   512M  1.07M   511M     0%  1.00x  ONLINE  -

Code:
zpool status -v
  pool: zpool1
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: none requested
 prune: last pruned 0 entries, 0 entries are pruned ever
        total pruning count #1, avg. pruning rate = 0 (entry/sec)
config:

        NAME                                        STATE     READ WRITE CKSUM
        zpool1                                      ONLINE       0     0     0
          mirror-0                                  ONLINE       0     0     0
            qzfs/enc_0/disk_0x9_5000C5003EA1A40F_3  ONLINE       0     0     0
            qzfs/enc_0/disk_0x8_5000C5003EA1C2B9_3  ONLINE       0     0     0

errors: No known data errors

  pool: zpool2
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: none requested
 prune: last pruned 0 entries, 0 entries are pruned ever
        total pruning count #1, avg. pruning rate = 0 (entry/sec)
config:

        NAME                                        STATE     READ WRITE CKSUM
        zpool2                                      ONLINE       0     0     0
          raidz2-0                                  ONLINE       0     0     0
            qzfs/enc_0/disk_0x1_5000C500B5FFA6D3_3  ONLINE       0     0     0
            qzfs/enc_0/disk_0x2_5000C500B681573D_3  ONLINE       0     0     0
            qzfs/enc_0/disk_0x3_5000C500B67DDF65_3  ONLINE       0     0     0
            qzfs/enc_0/disk_0x4_5000C500B59BEFC0_3  ONLINE       0     0     0
            qzfs/enc_0/disk_0x5_5000C500C44ABDD8_3  ONLINE       0     0     0

errors: No known data errors

  pool: zpool256
 state: ONLINE
  scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        zpool256                                 ONLINE       0     0     0
          qzfs/enc_0/disk_0x6_S48CNC0N400373B_2  ONLINE       0     0     0
        cache
          qzfs/enc_0/disk_0x6_S48CNC0N400373B_3  ONLINE       0     0     0

errors: No known data errors
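If it helps, I can watch whether that cache device actually gets read from with the iostat command you mentioned (assuming those per-vdev stats are trustworthy on QuTS):

Code:
# per-vdev I/O for the mystery cache pool, refreshed every second;
# reads against the cache device would mean the L2ARC is actually being hit
zpool iostat -v zpool256 1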
 

Entz

Well-known member
Joined
Jul 17, 2011
Messages
1,814
Location
Kelowna
Interesting, the DOM doesn't show up for me under ZFS, but that could just be a model difference. The odd thing is there is no cache attached to zpool1 or zpool2, which means it's not L2ARC on your data pools, so they must be doing something Q-Tier style with it. Maybe zpool256 is their "cache" tier or something.

I can pull my SLOG off and add it as a cache and see if it suddenly adds a zpool256.
 

Entz

Well-known member
Joined
Jul 17, 2011
Messages
1,814
Location
Kelowna
Yeah, adding a cache created a zpool256, instead of adding L2ARC (a cache vdev) to an existing pool.

Code:
zpool256   512M  1.07M   511M     0%  1.00x  ONLINE  -

pool: zpool256
state: ONLINE
  scan: none requested
config:

        NAME                                              STATE     READ WRITE CKSUM
        zpool256                                          ONLINE       0     0     0
          qzfs/enc_23/disk_0x170001_CVFT73630050400PGN_2  ONLINE       0     0     0
        cache
          qzfs/enc_23/disk_0x170001_CVFT73630050400PGN_3  ONLINE       0     0     0

So I am not exactly sure what they are doing ...
 