
OCZ Vertex 3 Max IOPS 240GB SSD Review

OCZ has been a dynamo in the SSD arena over the last few months. Alongside their recent acquisition of controller manufacturer Indilinx, drives like the Agility 3, Vertex 3 and Solid 3 have taken the SATA 6Gbps segment by storm. We’ve already looked at an engineering sample of the high performance Vertex 3 and followed up that review with a test of the retail version, but now OCZ has something new for us: a Vertex 3 on steroids dubbed the Max IOPS Edition.

At first glance there really aren’t all that many differences between the 240GB versions of the “standard” Vertex 3 and its new Max IOPS sibling. Most of the performance figures are the same as well, apart from one key aspect: the Max IOPS features better small file performance. Basically, the extra 5K write IOPS and 15K bump in read IOPS should translate into faster real world application load times and file transfers.

OCZ was able to increase the performance of the Vertex 3 Max IOPS 240GB by simply using different NAND. The typical SandForce SF2281 found in the Vertex 3 series (and other competing SSDs) is usually paired with Intel/Micron 25nm NAND since the 32nm modules are becoming harder and harder to source. OCZ on the other hand has used 32nm ultra high performance Toggle Mode NAND. This alone will make the Max IOPS tempting to many, as in one fell swoop OCZ has given their model a major advantage over the competition: increased life span.

While the Max IOPS version can be considered a “special edition”, it is just as readily available as the standard version. It is also a top bin product which carries a price premium over the standard Vertex 3, so it will be firmly outside the budget of many. All things considered though, its current price of around $590 isn’t all that bad considering its 240GB capacity, and it makes the Max IOPS version a mere $60 more than its slower sibling.

mfg.jpg

 
Specifications



specs1.jpg


specs2.jpg


specs3.jpg

 
Introducing the SandForce SF2000 Family


sf_lg.jpg


As you are probably well aware by now, there are actually many different models which make up the next generation of SandForce controllers. Much like Intel’s socket 1155 i3/i5/i7 series of processors, all these different SandForce numbers represent slightly different tweaks and features, but all are basically built upon the same SF2000 foundation.

In grand total there are eight SF2000 iterations, but we won't see most of them in the retail channel. Take for example the SF2141: this is a cut down 4 channel, 24-bit RS ECC, SATA 3Gb/s controller which probably won't see much fanfare outside of truly budget SSDs. The easiest way to think about this one is as the low end of the SF2000 line. Stepping up a level to 8 channels (and 55-bit BCH ECC) but still SATA 3Gb/s only is the SF2181, which you can consider the mid range of this generation. It will probably be featured in more mid-tier next generation SSDs as it has better error correction abilities, yet it cannot directly compete with the true star of the SF2000 consumer line: the SF2281.

The only difference between the two “real” consumer grade SF2000 SATA 6Gb/s controllers most likely to be seen (the SF2281 and SF2282) is that the latter is a physically larger chip intended for extra large drives of 512GB and up (though the SF2281 can itself handle 512GB of NAND). These are the two flagship products and as such have received all the features and tweaks which are going to become synonymous with the SF2000 consumer class controllers.

The other four controllers are for enterprise environments and boast features such as eMLC compatibility, Military Erase, SAS and super capacitor capabilities.


Features


enhance_lg.jpg


The SF2000 controller series is built upon the same architecture as the original SF1000 series. You get DuraWrite, RAISE and all the other features but these have all undergone enhancements and tweaking.

rs.jpg


The original SF1000 series corrected up to 24 bits of errors per 512-byte sector, whereas the new controller can handle 55 bits. The type of ECC has changed as well: the original used the more simplistic Reed-Solomon (aka “RS”) code, which is probably best known from its use in CDs.

bch.jpg


The new controller, by contrast, uses a Bose-Chaudhuri-Hocquenghem (aka “BCH”) code for its ECC, a more elegant scheme that targets individual bit errors. It is also faster and easier for the controller to correct these errors, making for a lower performance impact. AES encryption has also doubled, from 128-bit to 256-bit.
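For those curious about what this kind of error correction looks like in practice, here is a minimal sketch using the third-party Python reedsolo library. It only illustrates the Reed-Solomon concept described above; SandForce's actual on-controller BCH implementation is proprietary and undocumented.

```python
# pip install reedsolo
# Illustration only: shows how Reed-Solomon style ECC recovers corrupted
# bytes. This is NOT SandForce's BCH implementation, which is not public.
from reedsolo import RSCodec

rsc = RSCodec(10)             # 10 ECC symbols: corrects up to 5 byte errors

sector = bytes(range(64))     # stand-in for a 512-byte sector (shortened)
encoded = rsc.encode(sector)  # data with ECC symbols appended

# Simulate a couple of failing NAND cells by flipping bytes
corrupted = bytearray(encoded)
corrupted[3] ^= 0xFF
corrupted[20] ^= 0xFF

decoded, _, _ = rsc.decode(corrupted)
assert bytes(decoded) == sector   # original data recovered intact
```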

sata.jpg

The most important of these new features for consumers is of course the new SATA 6Gb/s capability. This larger bus instantly translates into much higher sequential performance. The second generation of flagship SandForce controllers has also received a boost in small file performance, thanks in no small part to a 20% increase in IOPS: the first generation SF1200 was rated for up to 50,000 IOPS whereas the new controller family is rated for 60,000 IOPS.

The other interesting feature which all but the most basic of the SF2000 models boast is SLC NAND support. In the past, a manufacturer had to step up to the enterprise SF1500 to get SLC compatibility, but now they don't have to. Add in lowered power consumption and you can see that while the SF2000 series builds upon the same basic foundation as the previous generation, the two are not all that similar when you take a closer look.
 
A Look at DuraWrite, RAISE and More


Corsair_Force_sandforce_logi.jpg

Let’s start with the elephant in the room and explain why this 240GB drive is in reality a 256GB drive. The OCZ Vertex 3 MI has sixteen 16GB NAND chips onboard, which gives it a raw capacity of 256GB, but it is seen by the OS as a 240GB drive. Manufacturers set spare area aside like this to increase IOPS performance, to extend life via wear leveling (as there are always free cells even when the drive is reported as “full”), and to improve durability, since the drive has cells in reserve that it can reassign sectors to as the “older” cells die. While 16GB worth of cells set aside is not that much for a SandForce drive compared to some previous models, it is still a lot of space.
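The arithmetic behind those capacities is simple enough to show directly. A quick sketch using the figures quoted above:

```python
# Over-provisioning math for the Vertex 3 Max IOPS 240GB, using the figures
# quoted above (sixteen 16GB NAND packages, 240GB visible to the OS).
raw_capacity = 16 * 16     # GB of NAND physically on the PCB
user_capacity = 240        # GB exposed to the operating system

spare_area = raw_capacity - user_capacity
op_percent = spare_area / raw_capacity * 100

print(f"Raw: {raw_capacity}GB, spare: {spare_area}GB "
      f"({op_percent:.2f}% of the NAND held in reserve)")
# -> Raw: 256GB, spare: 16GB (6.25% of the NAND held in reserve)
```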

dura.jpg


As we said, over-provisioning is usually for wear leveling and ITGC (idle time garbage collection), as it gives the controller extra cells to work with in keeping all the cells at about the same level of wear. However, this is actually not the main reason SandForce sets aside so much. Wear leveling is at best a secondary reason or even just a “bonus”, as this over-provisioning is mainly for the DuraWrite and RAISE technology.

Unlike other solid state drives which do not compress the data that is written to them, the SandForce controller does real-time lossless compression. The upside to this is not only smaller lookup tables (and thus no need for off-chip cache) but also fewer writes to the cells. Lowering how much data is written means fewer cells have to be used to perform a given task, which should also result in longer life and fewer controller cycles being taken up with internal housecleaning (via TRIM or ITGC).
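To illustrate the principle, here is a minimal sketch using zlib as a stand-in compressor; the actual DuraWrite algorithm is proprietary and undocumented. Notice how repetitive data shrinks dramatically while random (already compressed) data does not:

```python
# Why transparent lossless compression reduces NAND writes: compressible
# data shrinks before hitting the cells, incompressible data does not.
# zlib is a stand-in here; DuraWrite's real algorithm is not public.
import os
import zlib

compressible = b"The quick brown fox. " * 1000    # repetitive text
incompressible = os.urandom(len(compressible))    # random bytes, like media files

for name, payload in (("text", compressible), ("random", incompressible)):
    written = len(zlib.compress(payload))
    print(f"{name}: {len(payload)} bytes from host -> {written} bytes written "
          f"({written / len(payload):.2f}x)")
```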

Corsair_Force_Fact5.jpg


Longevity may be a nice side effect, but the real purpose of this compression is so the controller has to use fewer cells to store a given amount of data and thus has to read from fewer cells than any other drive out there (SandForce claims only 0.5x of the host's data is written on average). The benefit is that even at the NAND level, storage itself is the bottleneck for any controller: no matter how fast the NAND is, the controller is faster. Cycles are wasted waiting for data retrieval, and the fewer cycles wasted, the faster an SSD will be.

Compressing data and thus hopefully getting a nice little speed boost is all well and good, but as anyone who has ever lost data to corruption in a compressed file knows, reliability is much more important. Compression means that any potential loss to a bad or dying cell (or cells) will be magnified on these drives, so SandForce needed to ensure that the data was kept as secure as possible. While all drives use ECC, to further ensure data protection SandForce implemented another layer of security.

Corsair_Force_Fact4.jpg


Data protection is where RAISE (Redundant Array of Independent Silicon Elements) comes into the equation. All modern SSDs use various error correction concepts such as ECC. This is because, as with any mass produced item, there are going to be bad cells, while even good cells are going to die off as time goes by. Yet data cannot be lost or the end user’s experience will go from positive to negative. SandForce likes to compare RAISE to RAID 5, but unlike RAID 5, RAISE does not use a parity stripe. SandForce does not explicitly say how it does what it does, but what they do say is that on top of ECC, redundant data is striped across the array. However, since it is NOT parity data, there is no added overhead incurred by calculating a parity stripe.

Corsair_Force_Fact2.jpg


According to SandForce’s documentation, not only can individual bits or even pages of data be recovered, but entire BLOCKS of data can be as well. So if a cell dies or passes on bad data, the controller can compensate, pass on GOOD data, mark the cell as defective and, if necessary, swap out the entire block for a spare from the over-provisioning area. As we said, SandForce does not get into the nitty-gritty details of how DuraWrite or RAISE works, but the fact that it CAN do all this means that it is most likely writing a hash table along with the data.
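Since SandForce themselves draw the RAID 5 comparison, a simple XOR redundancy sketch is the closest public analogy we can offer. To be clear, this is purely conceptual; SandForce has never disclosed RAISE's actual mechanism:

```python
# Conceptual analogy only: one redundant block XORed from the data blocks
# lets any single lost block be rebuilt, RAID-style. RAISE's real internals
# are undocumented; this just illustrates block-level recovery.
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

data_blocks = [b"AAAA", b"BBBB", b"CCCC"]     # stand-ins for NAND blocks

redundant = data_blocks[0]
for blk in data_blocks[1:]:
    redundant = xor_blocks(redundant, blk)    # extra block kept in spare area

# Block 1 "dies"; rebuild it from the survivors plus the redundant block
rebuilt = redundant
for i, blk in enumerate(data_blocks):
    if i != 1:
        rebuilt = xor_blocks(rebuilt, blk)

assert rebuilt == data_blocks[1]              # lost block fully recovered
```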

SandForce is so sure of their controller’s abilities that they state the chances of data corruption are not only lower than on other manufacturers’ drives, but actually approach ZERO. This is a very bold statement, but only time will tell if their estimates are correct. In the meantime, we are willing to give the benefit of the doubt and say that at the very least data corruption is as unlikely with one of these products as it is on any modern MLC drive.
 
OCZ’s Vertex 3 Max IOPS Under the Microscope


OCZ_Vertex3_MaxIOPS_box_f_sm.jpg
OCZ_Vertex3_MaxIOPS_box_b_sm.jpg

The box which houses the Vertex 3 Max IOPS certainly looks good, but there really isn’t much (other than a small logo) which distinguishes it from the standard Vertex 3's. On the positive side, it does come with a 2.5” to 3.5” adapter which is a nice bonus.

OCZ_Vertex3_MaxIOPS_top_sm.jpg
OCZ_Vertex3_MaxIOPS_bottom_sm.jpg

As with the packaging there is nothing besides a small “Max IOPS” moniker to hint at the extra performance hidden within this drive. Since the drive will most likely be housed within your case, we don’t really care about its outside appearance but it is still good to see OCZ using a metal enclosure.

OCZ_Vertex3_MaxIOPS_open_sm.jpg

At first glance the PCB and its components look like typical standard fare, with sixteen 16GB NAND flash chips and one SandForce SF2281 controller chip. But befitting the Max IOPS moniker, there are some telltale differences.

OCZ_Vertex3_MaxIOPS_nand_sm.jpg

Instead of using 25nm Intel/Micron NAND, OCZ has implemented Toshiba-branded asynchronous Toggle Mode 1.0 MLC NAND on the PCB. While Toggle Mode 1.0 NAND has a slightly slower interface speed of 133MB/s compared to 200MB/s for ONFi 2.x NAND, this is still more than enough to saturate the 8 channel SandForce controller, which tops out in the low 500MB/s range on reads (or about half of what this NAND's aggregate interface bandwidth can provide). As such, the Max IOPS does lose out (on paper at least) in sequential speed to the ONFi NAND typically found in other SandForce drives.
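The back-of-envelope math looks like this; note that the controller ceiling is our rough real-world figure, not an official specification:

```python
# Aggregate NAND interface bandwidth vs what the SF2281 can actually push.
# The ~500MB/s controller ceiling is our rough figure, not an official spec.
channels = 8
toggle_mb_s = 133       # per-channel interface speed, Toggle Mode 1.0
controller_mb_s = 500   # approximate SF2281 read ceiling over SATA 6Gb/s

aggregate = channels * toggle_mb_s
print(f"NAND aggregate: {aggregate}MB/s vs controller: ~{controller_mb_s}MB/s")
# -> NAND aggregate: 1064MB/s, roughly double what the controller can use
```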

Toggle Mode NAND does have some advantages though. Toshiba claims that it is much faster at writes than some SLC NAND, which should give it an edge in small file write performance. Toggle Mode 1.0 NAND also has lower power requirements than typical ONFi MLC modules. If this is indeed the case, this NAND should help make compressible data transfers faster and net a huge performance increase with incompressible data, an area where SandForce drives usually take a large hit.
 
Testing Methodology


Testing a drive is not as simple as putting together a bunch of files, dragging them onto a folder on the drive in Windows and using a stopwatch to time how long the transfer takes. Rather, there are factors such as read/write speed and data burst speed to take into account. There is also the SATA controller on your motherboard and how well it works with SSDs and HDDs to think about. For the best results you really need a dedicated hardware RAID controller with dedicated RAM for drives to shine. Unfortunately, most people do not have the time, inclination or funds to do this. For this reason our testbed is a more standard motherboard with no mods or high end gear added to it. This helps replicate what you, the end user, will actually experience.

Even when the hardware issues are taken care of, the software itself will have a negative or positive impact on the results. As with the hardware end of things, to obtain the absolute best results you do need to tweak your OS setup; however, just like with the hardware, most people are not going to do this. For this reason our standard OS setup is used. However, except for the Vista load time test, we have done our best to eliminate this issue by testing each drive as a secondary drive, with the main drive being a Phoenix Pro 120GB solid state drive.

For synthetic tests we used a combination of ATTO Disk Benchmark, HDTach, HD Tune, Crystal DiskMark, IOMeter, AS-SSD and PCMark Vantage.

For real world benchmarks we timed how long a single 10GB rar file took to copy to and then from the devices. We also used 10GB of small files (from 100KB to 200MB) with a total of 12,000 files in 400 subfolders.
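For reference, the copy test boils down to something like the following sketch; the paths here are placeholders rather than our actual test locations:

```python
# A minimal sketch of the real-world copy test: time how long a folder of
# test files takes to copy onto the drive under test. Paths are placeholders.
import shutil
import time
from pathlib import Path

src = Path("C:/testdata")    # hypothetical folder holding the test files
dst = Path("E:/copytest")    # hypothetical folder on the drive under test

start = time.perf_counter()
shutil.copytree(src, dst)    # dst must not already exist
elapsed = time.perf_counter() - start

total = sum(f.stat().st_size for f in dst.rglob("*") if f.is_file())
print(f"{total / 1e9:.1f}GB in {elapsed:.1f}s ({total / 1e6 / elapsed:.0f}MB/s)")
```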


For all testing an Asus P8P67 Deluxe motherboard was used, running Windows 7 Ultimate 64-bit (or Vista for the boot time test). All drives were tested in AHCI mode using Intel RST 10 drivers.

All tests were run four times and the averaged results are presented.

Between test suites (with the exception of IOMeter, where this was done after every run), the drives were cleaned with either HDDErase, SaniErase or the OCZ SSD Toolbox and then quick formatted to make sure they were in optimum condition for the next test suite.


Processor: Core i5 2400
Motherboard: Asus P8P67 Deluxe
Memory: 8GB Mushkin DDR3 1300
Graphics card: Asus 5550 passive
Hard Drive: 1x Seagate 3TB XT, OCZ 120GB RevoDrive
Power Supply: XFX 850


SSD FIRMWARE (unless otherwise noted):

OCZ Vertex: 1.6
OCZ Vertex 2 100GB: 1.33
OCZ Vertex 3 240GB: 2.09
Mushkin Callisto Deluxe 40GB: 3.4.0
Corsair Force F90: 2.0
OCZ Vertex 3 MI 240GB: 2.09
Crucial C300 128GB: 006
 
Read Bandwidth


For this benchmark, HDTach was used. It shows the potential read speed which you are likely to experience with these drives. The long test was run to give a slightly more accurate picture. We don’t put much stock in burst speed readings and thus no longer include them. The most important figure is the average speed; it will tell you what to expect from a given drive in normal, day to day operations. The higher the average, the faster your entire system will seem.

read.jpg


With an average read speed of nearly 420MB/s this drive is basically the same speed as the standard Vertex 3 240GB. It may not be noticeably faster, but considering the Max IOPS edition’s main claim to fame is not sequential speed but small file random performance it was nice to see it keeping pace here.


Write Performance


For this benchmark HD Tune Pro was used. To run the write benchmark on a drive you must first remove all partitions from it; only then will the program allow you to run the test. Unlike some other benchmarking utilities, HD Tune Pro writes across the full area of the drive, so it easily exposes any weakness a drive may have.

write.jpg


While the Max IOPS’ average write speed is in line with expectations, its minimum write speed is a shade better than the standard Vertex 3’s. This to us is a more accurate gauge of a given drive’s real world performance.
 
Crystal DiskMark


Crystal DiskMark is designed to quickly test the performance of your drives. Currently, the program allows you to measure sequential and random read/write speeds, and lets you set the number of test iterations to run. We left the number of tests at 5 and the size at 100MB.

cdm_r.jpg


cdm_w.jpg


On the write performance side of things the Max IOPS is simply better in everything but sequential write performance. The 4K results at the higher queue depth are noticeably better, but so too is the single queue depth 4K write performance.

In terms of read performance, the single queue depth 4K numbers are only slightly better than the standard version’s, but the higher queue depth numbers are simply in another league. Remember, Crystal DiskMark uses incompressible data and this is where the Toggle Mode NAND in the Max IOPS edition can really shine.



PCMark Vantage



While there are numerous suites of tests that make up PCMark Vantage, only one is pertinent here: the HDD Suite. It consists of 8 tests that try to replicate real world drive usage, covering everything from how long a simulated virus scan takes to complete, to MS Vista start up time, to game load time. However, we do not consider this anything other than just another suite of synthetic tests. For this reason, while each test is scored individually, we have opted to include only the overall score.

pcm.jpg


As with Crystal DiskMark, the PC Mark Vantage numbers are impressive to say the least. This drive really is a great example of what can be accomplished when you couple a high performance controller with high performance NAND.
 
AS-SSD


AS-SSD is designed to quickly test the performance of your drives. Currently, the program allows you to measure sequential and small 4K read/write speeds, as well as 4K speed at a queue depth of 6. While its primary goal is to accurately test solid state drives, it does equally well on all storage mediums; it just takes longer to run each test, as each one reads or writes 1GB of data.

asd_r.jpg


asd_w.jpg


As with the other synthetic test results, the Max IOPS simply laughs at compressible and incompressible data alike. With the exception of sequential write speed, this drive simply outclasses all competitors.


Access Time


To obtain an accurate reading of the read and write latency of a given drive, AS-SSD was used for this benchmark. A low number means that the drive’s data can be accessed quickly, while a high number means more time is taken trying to access different parts of the drive.

lat.jpg

Much like Crystal DiskMark, AS-SSD uses data which cannot really be compressed, thus any difference between two SF2281 controller based drives is due solely to the NAND used in them. This Toshiba NAND seems to be the stuff dreams are made of.
 
IOMeter


IOMeter is heavily weighted towards the server end of things, and since we here at HWC are more end user centric, we set up and judge the results of IOMeter a little differently than most. To test each drive we ran 5 test runs per drive (queue depths of 1, 4, 16, 64 and 128), with each run having 8 parts and each part lasting 10 minutes with an additional 20 second ramp up. The 8 subparts were set to run 100% random, 80% read / 20% write, testing 512B, 1K, 2K, 4K, 8K, 16K, 32K and 64K chunks of data. When each run is finished, IOMeter produces a report in which each of the 8 subtests is given a score in I/Os per second. We then take these 8 numbers, add them together and divide by 8. This gives us an average score for that particular queue depth that is heavily weighted for single user environments.
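Expressed as code, our scoring method is simply this; the IOPS figures below are placeholders rather than measured results:

```python
# Our IOMeter score for a given queue depth: the simple mean of the eight
# subtest IOPS results. The numbers below are placeholders, not real data.
subtest_iops = {
    "512B": 7800, "1K": 7600, "2K": 7400, "4K": 7200,
    "8K": 6400, "16K": 5100, "32K": 3800, "64K": 2500,
}

score = sum(subtest_iops.values()) / len(subtest_iops)
print(f"Average score for this queue depth: {score:.0f} IOPS")
```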

iom.jpg


At lower queue depths the Vertex 3 Max IOPS varies between being only slightly faster and much, much better than its sibling. When the queue depths start to get deeper, the difference settles down to a minor yet still noticeable improvement.
 