IOMETER
IOMeter is heavily weighted towards the server end of things, and since we here at HWC are more end-user centric we will be setting and judging the results of IOMeter a little bit differently than most. To test each drive we ran 5 test runs per drive (queue depths of 1, 4, 16, 64 and 128), with each test having 8 parts and each part lasting 10 minutes with an additional 20 second ramp-up. The 8 subparts were set to run 100% random, 80% read / 20% write, testing 512B, 1K, 2K, 4K, 8K, 16K, 32K and 64K chunks of data. When each test is finished, IOMeter spits out a report in which each of the 8 subtests is given a score in I/Os per second. We then take these 8 numbers, add them together and divide by 8. This gives us an average score for that particular queue depth that is heavily weighted for single-user environments.
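For those who want to crunch the numbers themselves, here is a minimal Python sketch of the averaging we do for each queue depth. The IOPS figures below are made-up placeholders purely for illustration; the real values come from the I/Os-per-second scores in your own IOMeter report.

```python
def average_iops(subtest_iops):
    """Average the I/Os-per-second scores of the 8 subtests
    (512B through 64K chunks) for a single queue depth run."""
    assert len(subtest_iops) == 8, "expected one score per chunk size"
    return sum(subtest_iops) / len(subtest_iops)

# Placeholder numbers for illustration only -- substitute the 8 scores
# from an actual IOMeter report for a given queue depth.
qd1_scores = [5210.4, 5103.7, 4987.2, 4850.9, 4412.5, 3720.8, 2844.1, 1902.6]
print(f"QD1 average: {average_iops(qd1_scores):.1f} IOPS")
```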
<img src="http://images.hardwarecanucks.com/image/akg/Storage/Phoenix/IOM.jpg" border="0" alt="" />
To be honest, the lowered performance shown here was completely expected, and so too was the shallow queue depth “blip” which has the Phoenix doing better than the others. You can negate to a great extent any firmware limiters put in place by the controller maker by using faster NAND, but once the queue depths start to get deeper, IOMeter will find these differences in how the controller is handling things and highlight them. To be precise, what IOMeter is telling us is that while G.Skill has masked the reduced power of the SF1200 controller, the problem still exists. The fact of the matter is that the Vertex 2 is still the better drive for heavily stressed systems, where the queue depths are going to be deep.
IOMeter Controller Stress Test
In our usual IOMeter test we are trying to replicate real world use where reads severely outnumber writes. However, to get a good handle on how powerful the controller is, we have also run an additional test. This test is made up of a single section at a queue depth of 1, running 100% random, 100% write operations in 4K chunks of data. In the past we found this test was a great way to check and see if stuttering would occur. Since the introduction of ITGC and / or TRIM the chances of real world stuttering happening in a modern generation SSD are next to nil; rather, the main focus has shifted from predicting "stutter" to showing how powerful the controller used is. By running continuous small, random writes we can stress the controller to its maximum, while also removing its cache buffer from the equation (by overloading it) and showing exactly how powerful a given controller is. In the .csv file we then find the Maximum Write Response Time. This number, in milliseconds, is the worst example of how long a given operation took to complete. We consider anything higher than 350ms to be a good indicator that the controller is either relying heavily on its cache buffer to hide any limitations it possesses, or that the firmware of the controller is severely limiting it.
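For anyone who wants to automate that check, here is a rough Python sketch of how the worst-case figure can be pulled out of the results file. The "results.csv" filename and the "Maximum Write Response Time" column label are assumptions about how a given IOMeter build names things, so verify them against the header row in your own output before relying on this.

```python
import csv

STUTTER_THRESHOLD_MS = 350.0  # our cut-off: higher suggests the controller/firmware is limited

def max_write_response_ms(path, column="Maximum Write Response Time"):
    """Scan an IOMeter results .csv for the worst-case write latency.
    The column label is an assumption; adjust it to match the header
    row your IOMeter version actually writes out."""
    col_index = None
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if col_index is None:
                if column in row:          # locate the header row first
                    col_index = row.index(column)
            elif len(row) > col_index:
                try:
                    return float(row[col_index])  # first numeric value under the header
                except ValueError:
                    continue               # skip rows without a numeric value there
    raise ValueError(f"'{column}' not found in {path}")

worst = max_write_response_ms("results.csv")
verdict = "controller is being held back" if worst > STUTTER_THRESHOLD_MS else "looks healthy"
print(f"Maximum Write Response Time: {worst:.1f} ms -> {verdict}")
```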
<img src="http://images.hardwarecanucks.com/image/akg/Storage/Phoenix/stutter.jpg" border="0" alt="" />
According to this, the controller is certainly not working as efficiently as it could be. The difference may only be minor in appearance, but it really is enough to bring the results down as far as it did in our typical IOMeter suite of tests. What SandForce has done is basically lower the I/O performance of their drive by making the controller simply less efficient at handling things. This could be as simple as telling it to “pause” in between commands, or it could be a much more elegant solution (our guess is they simply patched older alpha code into the firmware to make it less efficient).
However, what G.Skill has done is quite interesting. They have a controller that is working at less than its optimum speed, so instead of giving up and paying a penalty for breaking a contract they have paired the controller with FASTER storage. The results tend to speak for themselves.