Curious: just how fast can you get? NVMe RAID 0?

CMetaphor

Quadfather
Joined
May 5, 2007
Messages
6,817
Location
Montreal, Canada
So while casually looking into low-cost NVMe drives that offer decent performance (for the thin clients that will become my HTPCs), I noticed a few lots of matched NVMe drives (one new on Amazon, and several used on eBay), and the idea of getting 10x or so NVMe drives brought up a question I've always wondered about: how fast, and how feasible? Does NVMe software RAID 0 scale really well, as long as you don't get into multiplexing PCIe lanes? Is it linear, or close to it? I'm really wondering because I may or may not already have parts for Behemoth's bigger brother (Project Odyssey) laying about, including a motherboard with PCIe bifurcation. I could easily get 8+ NVMe drives running in RAID 0 with very little effort... if it's worth it. Hence the topic question: if it scales well, should I do it?

Further details:
Sadly(?), since even my wildest projects are done in a "frugal" manner, Odyssey's mobo only has PCIe 3.0. So roughly 1GB/s of theoretical bandwidth per PCIe lane. Even so, a PCIe 3.0 NVMe drive using 4x lanes should have about 4GB/s of bandwidth, which is actually faster than PCIe 3.0 NVMe drives typically go. Yes, it'll take two drives in RAID 0 to match one PCIe 4.0 x4 NVMe drive (comparing max theoretical numbers), but again, I could do as many as 8x (likely more, as Odyssey's GPU will probably be on a riser) PCIe 3.0 x4 NVMe drives, so it might scale to be quite fast indeed... rough math sketched below.
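Quick napkin math to sanity-check that. The ~0.985 GB/s per PCIe 3.0 lane figure is the theoretical 8 GT/s link rate after 128b/130b encoding; real drives land below this, and perfect linear scaling is the optimistic assumption here:

```python
# Theoretical PCIe 3.0 link ceilings - protocol overhead and the drives'
# own controllers will keep real numbers below these.
PCIE3_PER_LANE = 0.985  # GB/s per lane: 8 GT/s with 128b/130b encoding

for drives in (1, 2, 4, 8):
    ceiling = drives * 4 * PCIE3_PER_LANE  # each drive on its own x4 link
    print(f"{drives} x PCIe 3.0 x4: up to ~{ceiling:.1f} GB/s if scaling were perfectly linear")
# 1 -> ~3.9, 2 -> ~7.9 (one PCIe 4.0 x4 link), 4 -> ~15.8, 8 -> ~31.5 GB/s
```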

Hmmm 🤔🤷‍♂️
 

JD

Moderator
Staff member
Joined
Jul 16, 2007
Messages
12,637
Location
Toronto, ON
Doesn't look like they sell this without the drives, but I suppose it's an example nonetheless: https://www.highpoint-tech.com/rocketaic/ra7505hw

A PCIe 5.0 drive technically can do 16,000MB/s by itself... so it boils down to costs, I'd say. You'd need 3-4 drives just to match that, but on the plus side you'd likely end up at a higher capacity.
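A rough sketch of that drive-count math (the 3,500 MB/s figure is just an assumed "realistic fast Gen3 drive" number, not a spec from any particular model):

```python
import math

gen5_target = 16_000              # MB/s, the single Gen5 figure above
for per_drive in (4_000, 3_500):  # x4 link ceiling vs a realistic fast Gen3 drive
    n = math.ceil(gen5_target / per_drive)
    print(f"At {per_drive} MB/s per drive: {n} drives in RAID 0 to match one Gen5 drive")
# 4 drives at the theoretical x4 ceiling, 5 at realistic Gen3 speeds
```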

The other question would probably be: do you have a workload that actually requires such speeds, and can the CPU and RAM keep up?
 

CMetaphor

Quadfather
Joined
May 5, 2007
Messages
6,817
Location
Montreal, Canada
@JD There are many x16 PCIe cards that can hold 4x PCIe NVMe drives, 4x lanes each, but they require bifurcation to work. Some have heatsinks/a fan that cover all four at once, too.

Thinking RAID 0 might actually have advantages beyond just space; surely having soooo many lanes for parallel writes would be faster than a single, really fast drive could write, right?
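A toy sketch of why striping fans writes out across drives (the stripe size and drive count here are made-up illustrative values, not from any real array):

```python
# Toy RAID 0 mapping: logical byte offset -> (drive, offset on that drive).
STRIPE = 128 * 1024   # 128 KiB stripe size (illustrative)
DRIVES = 4

def locate(offset: int) -> tuple[int, int]:
    stripe_no = offset // STRIPE
    drive = stripe_no % DRIVES                            # stripes rotate across drives
    local = (stripe_no // DRIVES) * STRIPE + offset % STRIPE
    return drive, local

# A big sequential transfer touches every drive, so they all work in parallel:
for chunk in range(8):
    print(f"chunk {chunk}: drive {locate(chunk * STRIPE)[0]}")
# ...but any single small read still lands on exactly one drive, so tiny
# random I/O doesn't get faster just because there are more lanes.
```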

Thinking about it.
 

Izerous

Well-known member
Folding Team
Joined
Feb 7, 2019
Messages
4,737
Location
Edmonton
RAID 0 scales pretty well, especially on SSDs. However, the last time I saw someone really test RAID 0 with NVMe drives:
1 => 2 wasn't quite linear gains, but was really good
2 => 3 was 0% gains.

Turned out 2 drives had already fully saturated things. That was done with older drives and older PCIe versions, but newer drives are also faster to match the higher bandwidth on newer motherboards, so you might still cap out on any gains fairly early on.
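A one-liner version of that saturation effect (all numbers made up for illustration):

```python
# Once the array saturates some shared link (chipset uplink, HBA, CPU),
# extra drives add nothing. Illustrative numbers only.
drive_bw, shared_cap = 3.5, 6.8   # GB/s per drive, GB/s shared ceiling

for n in (1, 2, 3, 4):
    print(f"{n} drive(s): ~{min(n * drive_bw, shared_cap):.1f} GB/s")
# 1 -> 3.5, 2 -> 6.8 (not quite 2x), 3 -> 6.8 (0% gain): same shape as above
```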
 

lowfat

Moderator
Staff member
Joined
Feb 12, 2007
Messages
13,663
Location
Grande Prairie, AB
For home use, it doesn't scale. Like zero real-world difference. Since SSDs store almost all data non-sequentially, what matters is random read performance. In desktop and home server applications, you'll never see more than a 1-3 queue depth. So pretty much the only relevant metric is low-queue-depth 4K reads.

RAID 0 just doesn't increase 4K small-queue-depth performance. The bottleneck is elsewhere. I know there is a CPU bottleneck, as there is a substantial difference depending on single-threaded CPU performance. Intel absolutely has an advantage here. The only time I've ever seen RAID 0 scale well was with FusionIO cards, and I honestly have no idea why.
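If you want to see that number for yourself, something like fio with --bs=4k --iodepth=1 is the proper tool. A crude Python sketch of the same idea, for Linux (the test file path is hypothetical, and the file must be far bigger than RAM or the page cache will inflate the results):

```python
import os, random, time

PATH = "testfile.bin"   # hypothetical pre-made file, much larger than RAM
BLOCK, READS = 4096, 10_000

fd = os.open(PATH, os.O_RDONLY)
size = os.path.getsize(PATH)
offsets = [random.randrange(size // BLOCK) * BLOCK for _ in range(READS)]

t0 = time.perf_counter()
for off in offsets:
    os.pread(fd, BLOCK, off)   # one outstanding read at a time = queue depth 1
dt = time.perf_counter() - t0
os.close(fd)

print(f"QD1 4K random reads: {READS / dt:,.0f} IOPS, {READS * BLOCK / dt / 1e6:.0f} MB/s")
```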


Now if you are hosting a huge database with 20 constant users accessing data, none of this is relevant, as you can keep a much deeper queue depth and you will see scaling improve. But you'll still hit a CPU bottleneck pretty quickly.
 

Izerous

Well-known member
Folding Team
Joined
Feb 7, 2019
Messages
4,737
Location
Edmonton
Around the 10 minute mark, the graphs for file copies and benchmarks show very close to linear gains: 2 drives, almost 2x performance. This isn't the original video I referred to, since he didn't do a 3-drive RAID 0, but I can't seem to find that one.


However, if you move a little further down the line to things like software load times, the gains are less visible. So the specific usage is important. If it's your game drive, the few seconds you might save here and there are probably not really noticeable. Copying large files around: more noticeable.
 

CMetaphor

Quadfather
Joined
May 5, 2007
Messages
6,817
Location
Montreal, Canada
I guess the other part of the equation is price, aka how I'm always trying to get the best bang for my buck. @12:08 in the video above... 500 euros... yeah, haha.

For those curious, the drives I'm considering (because they're so cheap) are these: 10x Timetec drives for $369. So, 3x to use in my 3x new thin clients, and 7x to play with? Sadly they're only rated for 2GB/s, which is only half of the ~4GB/s that PCIe 3.0 x4 should offer. However... I can't seem to find top-end PCIe 3.0 x4 NVMe drives anymore? Everything approaching 4GB/s is PCIe 4.0 x4... does nothing fully saturate PCIe 3.0 x4? Are most NVMe drives only really using 2x lanes? Hmm. Maybe I messed up my math somewhere... but when Gen4 drives are rated up to 7.7GB/s, it sounds like those are using most of what PCIe 4.0 x4 offers.

Will have to research more in general; no money spent yet. Small queue depths and such are beyond where I've researched so far. What I really like, and it's something I've only ever felt with Behemoth, is smooooothness. Behemoth, with "mere" 4x SSDs in RAID 0, felt smoother than any other PC I've ever used, including my current gamer*. When you've got 64 cores, you can open anything and everything at the same time. The second you see your Windows desktop, open 5-6 programs at once... and they're just There.

*Admittedly, it's been ages, and my current PC is a lowly 3700X with one PCIe 4.0 1TB drive (I think; might be a 3.0). But the impression of smoothness from B, especially (haha, oh man I'm old) ten-ish years ago, was totally unrivaled, and even toying around on a friend's much faster PC (faster than my current gamer, PS: I DO have better parts for my current gamer waiting... bleh) didn't feel as smooth. The first couple of programs start to open... but where are the other two? Why am I waiting an extra 10-ish seconds for them to open in 2025? Time is money (just a saying, I'm not serious).

So yeah, that's why I'm toying with this idea, possibly for both Behemoth's final upgrade (as storage drives, no UEFI for direct OS loading) and Odyssey's first build (as its primary OS + storage drive). It's an interesting idea to play with, and something I've always wanted to try.

Still thinking...
 

CMetaphor

Quadfather
Joined
May 5, 2007
Messages
6,817
Location
Montreal, Canada
Yeah, Amazon didn't have proper 3.0 x4 1TB drives, but Newegg still does... starting at around $120 per drive for around 3.3GB/s. So for around $500, one could test if 4x of them could match one $600-700 PCIe 5.0 x4 drive? Right around the 12GB/s speed mark. But 6x of the slow, $40 Timetec drives could theoretically match that - and have 50% more space to boot - though it would require 2 of those 4x4 bifurcation cards running together in one big RAID 0. And 6 of them is only around $240... might as well go for 8x at that point and fully use two such 4x4 cards. And if/when it doesn't scale well, especially between two cards, I can just use 1 card / 4x NVMe drives for Behemoth and the other card, with 4 more NVMe drives, for Odyssey? Give them both a really nice drive each?
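Putting those numbers through a quick cost-per-GB/s comparison (prices and rated speeds are the ones quoted above; perfect scaling is assumed, and the bifurcation cards themselves aren't counted):

```python
# Cost per theoretical GB/s for the options discussed in this thread.
# Real RAID 0 scaling will land below these rated figures.
options = {
    "1x PCIe 5.0 x4 drive":   (650, 12.0),     # ~$600-700, ~12 GB/s class
    "4x $120 Gen3 drives":    (480, 4 * 3.3),
    "6x $40 Timetec drives":  (240, 6 * 2.0),
    "8x $40 Timetec drives":  (320, 8 * 2.0),
}
for name, (cost, gbps) in options.items():
    print(f"{name}: ~{gbps:.1f} GB/s for ${cost} (~${cost / gbps:.0f} per GB/s)")
```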

Yup. More thinking... tomorrow. Not being able to sleep at 4:30am sucks. G'night for now, HWC.
 

Izerous

Well-known member
Folding Team
Joined
Feb 7, 2019
Messages
4,737
Location
Edmonton
Honestly, the last time I bothered with straight-up RAID 0 was when 120GB 7200rpm platters were $120 CAD each; short-stroking them + RAID 0 was a huge boost to boot times in the mid-2000s.

Actually got my hands on 2x 10k Raptor drives a while later and did the same.

The moment SSDs landed, I never bothered in a desktop again.

Since you mentioned these are for your mini PCs, I don't think you will see quite the same benefit, except in the initial copy of data.

Fun thing though: since you're doing it in triplicate and ordering 3 drives anyway... load the first up with RAID 0, load the second with a single drive, leave the third without a drive, and do some basic tests. That way you have a literal side-by-side with the software and use case you're planning.

If you like the RAID 0, order more drives; if not, you only have to disable the RAID and move a drive.
 

FreeKnight

Well-known member
Joined
Jul 8, 2009
Messages
5,039
Location
Edmonton, AB
I did RAID 0 with some Vertex SATA SSDs back in the day, but to be honest, it seemed to rapidly accelerate their failure rates and killed the drives.

Unless you're doing massive media creation, like rendering and saving IMAX footage at a professional level, I can't see the use case for most people. I think it's more likely that you're creating additional headaches for yourself than anything. If it's a "I want to do it because it's neat", go for it. With a more modern dual-CPU build, maybe you'd have enough CPU room to benefit, but I'd be skeptical.
 
