Raid on test rig

Yup, card has 2 IDE connectors. So I'll connect the drives as Masters and Slaves, using the IDE cables.

I'm sorry, but I don't understand your last sentence.

John

When you have a mirror (raid 1), if a write was underway on one drive that didn't make it onto the other, the two will be out of sync.
Since there is no journal to record which sectors are out of sync, or perhaps also because the raid software hasn't marked the drives as in sync at shutdown, on restart the software will copy the most current drive to the least current.
With raid 0 turned on as well, that might mean copying 2 drives to the other 2.

The system will continue to run while the copying happens in the background, but you might notice a performance hit while it is underway.
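
To make the resync idea concrete, here's a minimal Python sketch of the decision a mirror driver might make at startup. The `generation` counter and `clean_shutdown` flag are made-up stand-ins for whatever metadata a real raid implementation keeps; this illustrates the logic, it isn't any actual driver.

```python
# Toy model of a RAID 1 resync decision at startup.
# 'generation' is a hypothetical write counter; real raid metadata differs.

def resync_mirror(drive_a, drive_b, clean_shutdown):
    """Copy the most current drive over the least current one, if needed."""
    if clean_shutdown and drive_a["generation"] == drive_b["generation"]:
        return "in sync, no copy needed"
    # The drive that saw the most recent completed write becomes the source.
    src, dst = ((drive_a, drive_b)
                if drive_a["generation"] >= drive_b["generation"]
                else (drive_b, drive_a))
    dst["data"] = src["data"][:]          # no journal, so copy every sector
    dst["generation"] = src["generation"]
    return f"rebuilt {dst['name']} from {src['name']}"

a = {"name": "hda", "data": [0, 1, 2], "generation": 7}
b = {"name": "hdc", "data": [0, 1, 9], "generation": 6}  # missed the last write
print(resync_mirror(a, b, clean_shutdown=False))  # rebuilt hdc from hda
```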
 
xentr_thread_starter
So unless they are in sync (software-related?), there might be a temporary slowdown in performance. I can live with that. As I wrote earlier, there won't be much data on the drives, so the mirroring of data should be fast.

Thanks for your help.

John
 
The mirroring is done at the lowest level ... so the raid software is unaware of how much data is on the drive. Drive rebuild time is a function of the size of the drive.

Since the master & slave will both be in use on both your ide interfaces, there will be some i/o contention. But your drives are fairly small, so hopefully it won't take that long.

The initial build of my raid 5 on 3 x 250GB drives took maybe half a day.
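
As a rough back-of-the-envelope, rebuild time is roughly drive size divided by sustained copy rate, stretched by any i/o contention. The rates in this little sketch are illustrative guesses for period hardware, not measurements:

```python
# Rough rebuild-time estimate: size / sustained rate, padded for contention.
# The transfer rates here are assumptions, not measured figures.

def rebuild_hours(size_gb, rate_mb_s, contention_factor=1.0):
    seconds = (size_gb * 1024) / rate_mb_s * contention_factor
    return seconds / 3600

# 250GB drive at ~15MB/s effective during a raid 5 build
print(f"{rebuild_hours(250, 15):.1f} h")      # ~4.7 h, a good chunk of a day
# 13GB drive at ~10MB/s, padded 1.5x for master/slave contention on one cable
print(f"{rebuild_hours(13, 10, 1.5):.2f} h")  # ~0.55 h, well under an hour
```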
 
xentr_thread_starter
Ok. We're talking 4x 13GB - a small 0+1 array. 1 hour maybe? I've got plenty of time.

Also, if the Raid card supports it, would it be better to go for Raid 1+0 instead? I just can't seem to understand the difference between the two.
 
afaik there is only one 0+1 mode ... 2 drives operate as raid 0 for performance and the other 2 mirror the first 2.

Depending on how sophisticated the raid implementation is, it can potentially give very fast read performance by picking the drive whose head is already positioned closest to the sector to be read. I expect this kind of optimization is only implemented on high end raid cards designed for servers, though. There are a lot of parameters in play that affect how different applications/access paths perform: caching levels, file layouts, etc.
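
To illustrate the 0+1 layout and that "closest head" read trick, here's a toy Python sketch of a 4-drive 0+1 array. The chunk size and head-position tracking are invented for the example; real controllers keep this state in firmware:

```python
# Toy RAID 0+1: logical blocks stripe across drives 0 and 1 (raid 0),
# while drives 2 and 3 hold sector-for-sector mirrors of 0 and 1.
CHUNK = 64            # hypothetical stripe chunk size, in sectors
head = [0, 0, 0, 0]   # pretend current head position per drive, in sectors

def locate(block):
    """Map a logical block to (drive, sector) in the striped half."""
    stripe, within = divmod(block, CHUNK)
    drive = stripe % 2                        # alternate chunks between drives
    sector = (stripe // 2) * CHUNK + within   # position on that drive
    return drive, sector

def read_pick(block):
    """Serve a read from whichever copy has its head nearest the sector."""
    drive, sector = locate(block)
    mirror = drive + 2
    pick = (drive if abs(head[drive] - sector) <= abs(head[mirror] - sector)
            else mirror)
    head[pick] = sector                       # head ends up at the read sector
    return pick, sector

head[0], head[2] = 100_000, 0                 # drive 0's head is far away...
print(read_pick(10))                          # ...so the mirror (drive 2) wins
```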

So the only way to really tell which is best is to measure, or feel, how it works for you in your normal usage patterns.

Raid 0 is usually gonna be best for perf, but I don't think I would care much about a slight perf improvement that may not even be noticeable to me if a single drive failure requires a reinstall or causes data loss. Raid 0+1 survives a single drive failure, but suffers a (slight?) perf penalty on writes since 2 drives need to be written. If I did a lot of benchies I might want raid 0.

Where I use raid 0 on my new setups is for a small volume used for temp files in ripping/encoding and for the pagefile, since I don't care if I lose them.

In your case, since you have a bunch of slower 5400 drives, and drive space isn't too important, 0+1 seems like a good idea.

For me, with large 7200 sata drives, raid 5 seems optimal since it only loses 1/3 of the space to redundancy vs 1/2 for raid 1, and I only need 3 drives vs 4 for raid 0+1.
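
The space arithmetic behind that is simple enough to sketch. Usable capacity per level, for n identical drives (the formulas follow the standard definitions of each level):

```python
# Usable capacity for common raid levels with n identical drives of size_gb.
def usable_gb(level, n, size_gb):
    if level == "raid0":   return n * size_gb         # no redundancy
    if level == "raid1":   return size_gb             # n-way mirror
    if level == "raid0+1": return (n // 2) * size_gb  # half the drives mirror
    if level == "raid5":   return (n - 1) * size_gb   # one drive's worth of parity
    raise ValueError(level)

print(usable_gb("raid5", 3, 250))   # 500 GB usable: 1/3 lost to parity
print(usable_gb("raid0+1", 4, 13))  # 26 GB usable: 1/2 lost to the mirror
```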

Older raid implementations may not let you convert from one raid level to another.

But if yours does, it might be an interesting test, if you have the time, to try a single drive first, convert it to raid 0, and then convert that to raid 0+1. That would let you see what kind of performance differences you get and whether the extra complexity of 0+1 is really worth it.

If you do some testing like that, also test out drive rebuild scenarios by unplugging a drive while running. Most ide setups won't support hot plug/unplug, but it is physically safe to just power off a drive; a system restart is required to plug the drive back in.

I did some of this kind of testing to get comfortable with raid when I first started using it. It is nice to have a test rig to do it on so you can do other things while reinstalling OSes and rebuilding raids. Much better to learn how to do it when you have time, rather than when a drive actually fails.
 
xentr_thread_starter
Sure, can do; I won't receive the Raid card for another week anyway. I'll probably boot the system this afternoon and start working on the tests.

Once the Raid "whatever" is set up, you can change it? I didn't know that.
 
Whether you can change it depends on the bios/management software for the card.

Intel's Matrix Storage Manager supports some upgrades of raid configs without data loss, but it partly depends on which raid chip is in use and which raid config you start with.

My experience with older cheap ide raids was that they did not allow raid changes without a complete os reinstall. I never used any of the high end ones.

So it depends on your raid card and also whether you are running your os on it.

Have fun :biggrin:
 
xentr_thread_starter
Well, the card is a cheap copy bought in HK. I should receive it in 1 week. I know that the ECS mobo supports Raid 0+1 using SATA, not IDE.

As for an OS, I'll be using Win2000, so re-installing should be fairly quick.
 
High end server raid cards usually demand you re-install your OS when making significant RAID changes as well... this is mostly because the partitioning of the drives is done at the hardware (raid) level, not in the network operating system (NOS). But who would want to convert a RAID 5 to a RAID 0 or RAID 1? heh
 
xentr_thread_starter
I should have given this thread another title... like "which HDDs for raid in a test rig".

I bought a lot of 5 Seagate 13GB drives for 10.99 with s/h. Well, to cut a long story short, all 5 turned out to be damaged. So I bought 2 WD Caviar SATA 80GB drives instead. I'll still install them as Raid 0.

As for the rig itself, here are a few pics:

[three photos of the test rig]


I added a few holes to help with cable management and get connectors exactly where I want them. It took me 3 hours to set it up. As for oc'ing, there aren't too many options on that ECS mobo. I'll have some fun once the WDs are delivered.

As a bonus, here's my folding W/C rig:

[photo of the folding W/C rig]


John
 
