Not really on these large systems; they also generally don't support things like RAID 50 or RAID 60, since with that much redundancy and that many spare disks you rarely need anything beyond RAID 6 (possibly RAID 10 if an application requires it). You're also almost never writing data (from the client's perspective) directly to disk, and reads rarely come from the lowest tier; if that is happening, something is seriously wrong with the array (i.e. it's underscaled for the workload, or a client/host is hammering it). But something lower end that doesn't have all of that fancy high-end tech would be a different story.
But before you even start looking at what to buy/build, you need to know what the expected workloads are going to be, then add room for growth (say 15-30% as a healthy margin). There are a lot of things to account for beyond RAID level and capacity. But this is all based on something being used by a business. You also need to look at what protocols are going to be used (e.g. CIFS, NFS, iSCSI, FC, FCoE, etc.).
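The "workload plus growth margin" sizing above is just arithmetic; here's a minimal sketch of it. The function name and the 10 TB workload figure are made up for illustration, not from any real deployment:

```python
# Back-of-the-envelope capacity sizing: expected workload plus a growth margin.
# Names and numbers here are illustrative, not a real sizing tool.
def provisioned_tb(expected_workload_tb: float, growth_margin: float = 0.25) -> float:
    """Add a healthy growth margin (15-30%, default 25%) on top of the workload."""
    if not 0.0 <= growth_margin <= 1.0:
        raise ValueError("growth_margin should be a fraction, e.g. 0.15-0.30")
    return expected_workload_tb * (1 + growth_margin)

# e.g. a 10 TB expected workload with the default 25% margin:
print(provisioned_tb(10))  # 12.5
```

Crude, but it keeps you honest about buying for the workload you'll have, not the one you have today.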
For home use I don't have high capacity or performance needs, so this is what I built:
I went with a RAID-Z2 ZFS pool.
5x 2TB SATA WD Reds in the Data vdev
3x 500GB Samsung SSDs in the Cache vdev
1x 2TB SATA WD Red in the Spare vdev
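For anyone wanting to replicate a layout like the one above, it maps onto a single `zpool create` invocation. The pool name and device paths below are placeholders (your disks won't be at these names; use `/dev/disk/by-id/` paths in practice), so treat this as a sketch of the vdev layout, not a copy-paste command:

```shell
# Hypothetical pool name and device names -- substitute your own.
# 5-disk RAID-Z2 data vdev (~3 disks of usable space), 3 SSDs as L2ARC
# read cache, and 1 hot spare.
zpool create tank \
  raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
  cache  /dev/sdg /dev/sdh /dev/sdi \
  spare  /dev/sdj
```

Note the cache (L2ARC) devices only help reads; they are not a write log (that would be a separate `log`/SLOG vdev), and the spare only kicks in automatically if autoreplace is enabled on the pool.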
Of course, this is all just my 2 cents on the matter; feel free to ignore the ramblings of a crazy person.