
Hardware RAID is dead, says Wendell from Level1Techs.

lcdguy

Well-known member
Folding Team
Joined
Mar 17, 2007
Messages
2,334
Location
An undisclosed location
Eh, yeah, saw that. Perhaps for the average consumer/SMB, but I don't agree with it for the true mid-range/enterprise solutions.
 

lowfat

Moderator
Staff member
Joined
Feb 12, 2007
Messages
12,080
Location
Grande Prairie, AB
Eh, yeah, saw that. Perhaps for the average consumer/SMB, but I don't agree with it for the true mid-range/enterprise solutions.
I thought it was this video that said that hardware RAID controllers just can't keep up with modern drives. They end up being a serious bottleneck.
 

lcdguy

Well-known member
Folding Team
Joined
Mar 17, 2007
Messages
2,334
Location
An undisclosed location
Yeah, hard drives (regardless of type: NVMe, SAS, NL-SAS, SATA, etc.) are the slowest part of a large array. But I'm also not talking about smaller solutions like EqualLogic, Tintri, Nimble, etc., but the large multi-PB solutions from companies like EMC, NetApp, Hitachi, etc. I don't see hardware RAID going anywhere any time soon on those solutions.

Having said that, I run a ZFS NAS at home using "hardware RAID" controllers in passthrough mode. :)
 

lcdguy

Well-known member
Folding Team
Joined
Mar 17, 2007
Messages
2,334
Location
An undisclosed location
Not really on these large systems. They also generally don't support things like RAID 50 or RAID 60; with so much redundancy and so many spare disks, you generally won't need anything beyond RAID 6, or possibly RAID 10 if an application requires it. You are also almost never writing data (from the client's perspective) directly to disk, and reads rarely come from the lowest tier; if that is happening, something is seriously wrong with the array (i.e. it's underscaled for the workload, or a client/host is hammering the array). But something lower end that doesn't have all of that fancy high-end tech would be a different story.

But before you even start to look at what to buy/build, you need to know what the expected workloads are going to be, and then add room for growth (say 15-30% as a healthy margin). There are a lot of things to account for beyond RAID level and capacity. But this is all based on something being used by a business. You also need to look at what protocols are going to be used (i.e. CIFS, NFS, iSCSI, FC, FCoE, etc.).

For home use I don't have high capacity or performance needs, so this is what I built.

I went with a RAID-Z2 ZFS pool.

5x 2TB SATA WD Reds in the data vdev
3x 500GB Samsung SSDs in the cache vdev
1x 2TB SATA WD Red in the spare vdev
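
A layout like that can be sketched with the standard `zpool` commands. This is just an illustration; the pool name `tank` and the `/dev/sdX` device names are placeholders, not my actual setup:

```shell
# Data vdev: 5 disks in RAID-Z2 (tolerates two simultaneous disk failures)
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Cache vdev: SSDs added as L2ARC read cache (not redundant, just cache)
zpool add tank cache /dev/sdg /dev/sdh /dev/sdi

# Spare vdev: a hot spare ZFS can pull in when a data disk fails
zpool add tank spare /dev/sdj

# Verify the resulting layout
zpool status tank
```

Note the cache and spare devices sit alongside the RAID-Z2 vdev rather than inside it; only the data vdev carries parity.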

Of course, this is all just my 2 cents on the matter; feel free to ignore the ramblings of a crazy person :)
 

gingerbee

Well-known member
Joined
Jan 22, 2009
Messages
9,340
Location
Orillia, Ontario
Hoping Unraid will be adding ZFS soon. I have heard they have a plugin, but I would like to see it running natively; then I may switch from my basic DrivePool + SnapRAID setup for parity, which is simple and works for me.

I use an archive server that only turns on maybe 4 times a month:
6x 4TB Seagate SMR drives (oh, I know they're not good for lots of writing, which I don't do). It's set up as WORM for my use and works well.
But I also have loads of backup drives for several different things, i.e. I have a one-to-one copy of every drive in the server, plus music/pics/data OS drives and stuff like that.

I just thought Wendell had some interesting things to say about hardware RAID. Some of it was over my head, but to be honest I never thought software RAID would be used as much as it seems to be; I guess it's all these huge-core-count CPUs we're seeing from AMD/Intel.

Thanks for everyone's info, always interested to hear from those who know more than me (so, you know, just about everyone 🤪😆)
 

lcdguy

Well-known member
Folding Team
Joined
Mar 17, 2007
Messages
2,334
Location
An undisclosed location
Yeah, and possibly my comparison with the large storage systems was not a great one, as they generally run highly customized hardware/software that is not in reach of the average person :). Pretty sure most of us can't afford multi-million dollar solutions to store all those "Linux Images" in our basements. :)

ZFS gives great value since you don't really need any fancy RAID controllers, just semi-reliable HBAs :)
 

Shadowarez

Well-known member
Folding Team
Joined
Oct 4, 2013
Messages
3,322
Location
Arctic Canada
Is data scrubbing a requirement each month if you only have 6x 18TB Exos drives with 2x 2TB NVMe cache drives? I have a separate external 18TB drive backing up the entire NAS.
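
(For context, assuming the pool is ZFS, running or scheduling a scrub looks roughly like this; the pool name `tank` is a placeholder:)

```shell
# Kick off a scrub by hand and watch its progress
zpool scrub tank
zpool status tank

# Or schedule one monthly via cron (here: 3 AM on the 1st of each month)
# 0 3 1 * * /sbin/zpool scrub tank
```

A scrub reads every block and verifies checksums, so it catches silent corruption that a backup would otherwise faithfully copy; monthly is a common convention rather than a hard requirement.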
 
