
Poor RAID 5 Performance on Dedicated Hardware

EmeraldFlame (thread starter):
So this has been my first foray into RAID 5. I have done a lot of things in the past with RAID 0, 1, and 10, but I felt that RAID 5 was a better fit for the use case I was going for in this build.

I have built a small home server, mainly for media and storage, running Windows Server 2012 R2 Datacenter (got a free copy through school a few years back). For storage I am using 4x 3TB Western Digital Red hard drives, all connected to an LSI MegaRAID 9266-8i RAID controller.

The problem I have run into is that the RAID 5 performance has turned out to be horrid, and I know this hardware should be doing better than it is, so hopefully someone can point out what I have done wrong.

So first off, before building the RAID, I tested each drive individually by running CrystalDiskMark. Each hard drive gave me pretty close to identical performance numbers, so they all seemed to be in good working condition. Below is an image of an individual drive's performance.

[Attached image: 3TBWDRed-Perf.png]


As you can see, individually, these drives perform about as well as you can expect a mechanical drive to perform.

However, the RAID 5 has much worse numbers than I ever expected. The reads seem to be decent, but the writes are just abysmally slow.

[Attached image: RAID5-Perf.png]


I know that RAID 5 has a good bit of overhead when it comes to writes (each write also has to update parity, which can mean reading the old data and parity before writing the new), and that writes don't scale nearly as well as reads do in this setup, but to see writes drop from 150MB/s per drive to just 15MB/s for the entire array just can't be right.

So hopefully someone who is more familiar with LSI and their MegaRAID cards and software can help me out on how to improve performance. Below you will find a breakdown of what I did to set up the RAID, along with screenshots of its current settings.

So first off I installed the card into a PCIe x16 slot, booted up the server, logged in, installed the newest drivers, installed LSI's storcli command-line tool, and installed their MegaRAID Storage Manager software. Once that was installed, I started up MegaRAID Storage Manager and updated the card's firmware to the newest available, then rebooted; the software told me a reboot wasn't actually needed, but it made me feel better.
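(For what it's worth, you can also confirm the firmware level and general controller status from the command line with storcli; assuming the card shows up as controller 0, something like this should print the firmware package/version along with the drives and virtual drives:

  storcli64 /c0 show

and "storcli64 /c0 show all" gives the verbose version, including cache and BBU/CacheVault status. The executable may be named storcli or storcli64 depending on which build you installed.)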

After everything was booted back up, I opened the software again, clicked the "Create Virtual Drive" option, followed their simple setup wizard for my RAID 5, and told the software to start initializing the array.

The initialization took about 50 hours. After it finished, though, I was a bit confused, and I feel this may be the part where things went wrong, so hopefully someone will be able to tell me. I was under the impression that once the initialization finished, Windows would recognize the drive, assign it a drive letter, and let me access it. However, the initialization completed and the drive remained inaccessible. I opened the Windows Disk Management tool, and it told me I needed to initialize the disk with a GPT partition table before I could format it or assign it a drive letter, so I went ahead and let it. I then formatted it as NTFS and gave it a drive letter; that whole process took about 10 minutes. At this point I could access it, but performance was poor, as shown above.

So my questions: Should Windows have recognized the drive after MegaRAID Storage Manager's initialization, without me having to open Disk Management? Should Disk Management have asked me to initialize the drive even though MegaRAID just did? Did letting Disk Management initialize things effectively throw away the work the MegaRAID card had done over the past 50 hours? What can I do to improve performance? Should I start over on building the RAID, or just reinitialize it through the MegaRAID software?

Honestly, I would be happy just to hit 100MB/s, as most of the interaction with this server will be over Gigabit LAN; I will very rarely have more bandwidth to it than that anyway.

The current status information MegaRAID Storage Manager gives on the drive (note that at the bottom you can see the initialization does complete; about an hour after that time is when I opened the Windows Disk Management tool):
[Attached image: VirtualDrive-Status.PNG]


The current settings for the RAID:
[Attached image: VirtualDrive-Options.PNG]


And here is the current list of commands the software allows me to run on the virtual drive, from its right-click context menu:
[Attached image: VirtualDrive-Commands.PNG]


Any help would be much appreciated, and if you need any more information at all, I would be happy to get it to you. I have tried to be as thorough as possible above, but if I missed something, just let me know.
 
Isn't there an option to create the RAID 5 array directly from the card's integrated menu? Like the second-screen info that appears at boot, after the motherboard information? Why bother with the Windows-based software?
 
EmeraldFlame (thread starter):
Isn't there an option to create the RAID 5 array directly from the card's integrated menu? Like the second-screen info that appears at boot, after the motherboard information? Why bother with the Windows-based software?

Yes, it does have a kind of BIOS-style boot menu. From everything I read when researching RAID cards, though, people highly recommended the MegaRAID Storage Manager software, since it actually interfaces with the card and lets you do the same tasks in a much easier way.
 
MSM is pretty good, stick to it.

I just ran CDM on my RAID 6 array (7x 1TB REDs plus 1 hot spare, using a 3Ware 9650SE-8LPML card) and got 304 / 286 MB/s.
Your setup looks fine, but it is indeed weird that the OS asked to initialize the array again. When I set up my array using the BIOS menu, the array was recognized by the OS on reboot as soon as it was created. Granted, performance was abysmal with no initialization done, but it was 'seen'...

Maybe others can chime in, ones that use an LSI card (unlike me). Lowfat has good experience with RAID setups too.
 
ones that use an LSI card (unlike me).

Hasn't LSI bought 3Ware? I would say give it a try with the integrated menu of the card as well, and create the array from there. Also, I think there is an option that needs to be enabled for best performance, something like write-back caching; IIRC it also shows up in the Windows properties of the drive in Device Manager.
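If it helps, that write policy can also be checked and changed with storcli rather than hunting through Device Manager. Assuming the card is controller 0 and the array is virtual drive 0, something along these lines should do it:

  storcli64 /c0/v0 show all
  storcli64 /c0/v0 set wrcache=wb

On these MegaRAID cards the controller will usually only run write-back (WB) if a healthy BBU/CacheVault is fitted; forcing it with wrcache=awb works without one but risks losing in-flight writes on a power cut. Write-through (WT) caching on RAID 5 is a common cause of writes that slow.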
 
EmeraldFlame (thread starter):
MSM is pretty good, stick to it.

I just ran CDM on my RAID 6 array (7x 1TB REDs plus 1 hot spare, using a 3Ware 9650SE-8LPML card) and got 304 / 286 MB/s.
Your setup looks fine, but it is indeed weird that the OS asked to initialize the array again. When I set up my array using the BIOS menu, the array was recognized by the OS on reboot as soon as it was created. Granted, performance was abysmal with no initialization done, but it was 'seen'...

Maybe others can chime in, ones that use an LSI card (unlike me). Lowfat has good experience with RAID setups too.

Well, at least you confirmed my hunch that Windows not recognizing things wasn't normal. Like I said, I have never done a RAID 5 before, so I wasn't 100% sure, but it definitely seemed abnormal compared to the other RAIDs I have done (0, 1, and 10 using on-board chips). I may scrap the RAID and start over; it doesn't actually have any data on it yet, so it doesn't really matter, just a big time suck haha.

Maybe Lowfat will have some advice as you said too.
 
The only times I have seen slow speeds with RAID 5 over the years, it usually turned out that one of the drives in the array was going bad or was already bad.

I would suggest running the WD Data Lifeguard Diagnostic on each drive. I would do the long (extended) test, if I remember correctly.
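A quick first pass, without pulling drives out of the array, is to look at the per-drive error counters the controller keeps. Assuming controller 0, something like:

  storcli64 /c0/eall/sall show all

should list each drive's state along with its media error, other error, and predictive failure counts, and a single drive with non-zero counts would point the same direction as a failed Lifeguard test.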
 
So first off I installed the card into a PCIe x16 slot, booted up the server, logged in, installed the newest drivers, installed LSI's storcli command-line tool, and installed their MegaRAID Storage Manager software. Once that was installed, I started up MegaRAID Storage Manager and updated the card's firmware to the newest available, then rebooted; the software told me a reboot wasn't actually needed, but it made me feel better.

Which PCIe x16 slot? The first one on the board? If so, try moving it down or playing with the configuration. Running a dedicated RAID card on a desktop motherboard works fine most of the time, but it doesn't get validated at all by the manufacturer if it's a server RAID card, so you may have to play with UEFI, slot configuration, etc.

The initialization took about 50 hours. After it finished, though, I was a bit confused, and I feel this may be the part where things went wrong, so hopefully someone will be able to tell me. I was under the impression that once the initialization finished, Windows would recognize the drive, assign it a drive letter, and let me access it. However, the initialization completed and the drive remained inaccessible. I opened the Windows Disk Management tool, and it told me I needed to initialize the disk with a GPT partition table before I could format it or assign it a drive letter, so I went ahead and let it. I then formatted it as NTFS and gave it a drive letter; that whole process took about 10 minutes. At this point I could access it, but performance was poor, as shown above.

So my questions: Should Windows have recognized the drive after MegaRAID Storage Manager's initialization, without me having to open Disk Management? Should Disk Management have asked me to initialize the drive even though MegaRAID just did? Did letting Disk Management initialize things effectively throw away the work the MegaRAID card had done over the past 50 hours? What can I do to improve performance? Should I start over on building the RAID, or just reinitialize it through the MegaRAID software?

When you initialized the disk via the RAID controller, that was the RAID card building the array. When Windows wanted to initialize the "disk", it was really just writing a partition table so the volume could be formatted and given a drive letter. Both steps needed to be done; nothing bad there.
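For reference, what Disk Management did is roughly the equivalent of this in PowerShell (disk number 1 is just an example here; check Get-Disk for the number the array actually got):

  Get-Disk
  Initialize-Disk -Number 1 -PartitionStyle GPT
  New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS

None of that touches the parity initialization the card spent 50 hours on; the two "initializations" are separate things at separate layers.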

Make sure your motherboard is running the newest BIOS and that the RAID card's firmware is up to date. Play with the x16 slots if you have a few. Disable UEFI and run a traditional BIOS as a troubleshooting step. Also, if that is a server RAID card with a passive heat sink, get some airflow on it; a server case usually has much better airflow than a desktop.

I didn't check your RAID config as I wrote this, but if you changed any of the default settings in the array setup, I would go back and redo it with defaults, then deviate from there if it needs to be tweaked differently.

A 4-drive RAID 5 should probably give you ~250-300MB/s on writes and ~350-375MB/s on reads, give or take.
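(Back-of-the-envelope, using your single-drive numbers: large sequential writes can be done as full-stripe writes, so the ceiling is roughly (4 - 1) x ~150MB/s = ~450MB/s of data throughput, and 250-300MB/s after parity and controller overhead is in the right ballpark; sequential reads can pull from all four drives at once.)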

Edit

Step 1: check for compatibility with your motherboard: http://www.lsi.com/downloads/Public.../MegaRAID_Value_Feature_Interop_List_SAS2.pdf If it's not listed, you are dealing with fiddling and black magic to get it to work. It may also work fine even if it's not listed, but you want to double-check with other users that have that RAID card series and your motherboard.
 
EmeraldFlame (thread starter):
Sorry it's taken me so long to get back to everyone. But seriously, thanks for all the help so far.

Also, if that is a server RAID card with a passive heat sink, get some airflow on it; a server case usually has much better airflow than a desktop.

Anyway, I believe this is probably my problem. Thinking back on it, it's kind of a 'duh, I'm an idiot' moment; I know better. The heat sink on the card was physically very hot to the touch. I didn't grab my laser thermometer or anything, but it was hot enough to burn you if you touched it for more than a few seconds, so yeah...

Heck, I have done work on and maintained servers, and I know the torrential amount of air they push through them. I should have realized this when I was building this little desktop-case server, but for some reason it just kind of slipped my mind.

Anyway, I am fabricating a fan mount that will attach to an expansion slot, and I will put it in the slot right below the card. Hopefully that should solve my problems, since the card will be getting direct airflow then. I should be able to report back to you guys sometime later tonight on how things went.
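(As an aside, if anyone wants a number rather than the finger test, I believe newer storcli builds can report the controller (ROC) temperature with something like:

  storcli64 /c0 show temperature

though I haven't verified that on this particular card and firmware.)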

--FAN INSTALL UPDATE--

So I got the fan bracket done, and it seems to be making a huge difference in performance. However, I also ran into something quite odd: benchmarks like CrystalDiskMark and ATTO are no longer reporting transfer speeds that make any sense. My first set of benchmarks was close to what I was getting in real-world file transfers, so I assumed it was within the margin of error. After getting the fan on the card and running the benchmarks again, though, CrystalDiskMark jumped up to 721MB/s reads (which should be impossible, seeing as this is 4 drives each maxing out at 150MB/s) while its reported writes went DOWN to 12MB/s. Yet when I actually transfer files, it maxes out my gigabit network with 100MB/s transfers. Running ATTO was even funnier: at the larger chunk sizes (256-8192KB), ATTO reported a read speed of 2.7GB/s (that isn't a typo, I seriously mean 2700MB/s) and a write speed of between 50 and 100MB/s.

So I really don't know how fast the array is actually running, as nothing seems to be measuring it consistently, but it can max out my gigabit network in both uploads and downloads now, and you can't burn yourself on the heat sink anymore.

Thanks for all the help everyone.
 