
OCZ Vertex 3 240GB Solid State Drive Review

If there is one area of this industry which seems to be evolving at a breakneck pace, it is the storage market and its new poster child: Solid State Drives. In the SSD arena, yesterday’s champion can be today’s also-ran. In fact, this niche is evolving so quickly that unless you are right on top of things, it is easy to become dazed and confused by terms like over-provisioning, TRIM, cell wear and write life. Unfortunately, this also means there are few, if any, “safe” choices out there that can protect your investment in the months ahead.

With that being said, one company’s flagship model proved to be a well-placed choice, though many didn’t know it at the time. The Vertex series of drives from OCZ continues to weather the storm of quickly evolving SSD technology and still offers some of the best performance around. Today we are going to look at the brand new OCZ Vertex 3 240GB to see if this new iteration lives up to the “Vertex” brand name.

Due to past experiences, our expectations for this drive are naturally quite high. After all, it sports the latest 25nm NAND and SandForce’s newest SATA 6 Gb/s controller so the potential for groundbreaking speeds is certainly there. Unfortunately, there are some concerns bumping around the market regarding the lifespan afforded by 25nm NAND but we’ll get to that a bit later.

Helping balance out concerns over the NAND is the fact that your hard-earned dollar goes further with this drive than it ever has before. The 240GB version of the Vertex 2 was very expensive, and while a real-world online price of about $450 for the Vertex 3 is not exactly “cheap”, it is now low enough to be within the realm of feasibility for many.


mfg.jpg
 
Specifications



specs.jpg


specs2.jpg


specs3.jpg


specs4.jpg

 
Introducing the SandForce SF2000 Family


sf_lg.jpg


As you are probably well aware by now, there are actually many different models which make up the next generation of SandForce controllers. Much like Intel’s socket 1155 i3/i5/i7 series of processors, all these different SandForce numbers represent slightly different tweaks and features, but all are basically built upon the same SF2000 foundation.

In total there are eight SF2000 iterations, but we won't see most of them in the retail channel. Take for example the SF2141: this is a cut-down 4-channel, 24-bit RS ECC, SATA 3Gb/s controller which probably won't see much fanfare outside of truly budget SSDs. The easiest way to think about this one is as the low end of the SF2000 line. Stepping up a level to 8 channels (and 55-bit BCH ECC), but still SATA 3Gb/s only, is the SF2181, which you can consider the mid-range of this generation. It will probably be featured in more mid-tier next-generation SSDs as it has better error correction abilities, yet it cannot directly compete with the true star of the SF2000 consumer line: the SF2281.

The only difference between the two “real” consumer-grade SF2000 SATA 6Gb/s controllers you are likely to see, the SF2281 and SF2282, is that the SF2282 is a physically larger chip intended only for extra-large drives of 512GB and above (though the SF2281 can itself handle 512GB of NAND). These are the two flagship products and, as such, have received all the features and tweaks which are going to become synonymous with the SF2000 consumer-class controllers.

The other four controllers are for enterprise environments and boast features such as eMLC compatibility, Military Erase, SAS and super capacitor capabilities.
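For quick reference, the consumer side of the lineup described above can be boiled down to a short table. The sketch below simply encodes the figures quoted in this article as a Python dictionary; anything SandForce has not publicly detailed is left out.

```python
# Quick-reference summary of the consumer SF2000 variants discussed above.
# Values are taken from the figures quoted in this article.
SF2000_CONSUMER = {
    "SF2141": {"channels": 4, "ecc": "24-bit RS",  "interface": "SATA 3Gb/s", "tier": "budget"},
    "SF2181": {"channels": 8, "ecc": "55-bit BCH", "interface": "SATA 3Gb/s", "tier": "mid-range"},
    "SF2281": {"channels": 8, "ecc": "55-bit BCH", "interface": "SATA 6Gb/s", "tier": "flagship (up to 512GB)"},
    "SF2282": {"channels": 8, "ecc": "55-bit BCH", "interface": "SATA 6Gb/s", "tier": "flagship (512GB+, larger package)"},
}

if __name__ == "__main__":
    for name, spec in SF2000_CONSUMER.items():
        print(f"{name}: {spec['channels']}-channel, {spec['ecc']}, {spec['interface']} ({spec['tier']})")
```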


Features


enhance_lg.jpg


The SF2000 controller series is built upon the same architecture as the original SF1000 series. You get DuraWrite, RAISE and all the other features but these have all undergone enhancements and tweaking.

rs.jpg


The original SF1000 series provided 24 bits of ECC per 512-byte sector, whereas the new controller provides 55 bits. The type of ECC has changed as well: the original used the simpler Reed-Solomon (aka “RS”) code, which is probably best known for its use in CDs.

bch.jpg


The new controller, by contrast, uses Bose-Chaudhuri-Hocquenghem (aka “BCH”) code for its ECC, a more elegant scheme that targets individual errors. It is also faster and easier for the controller to correct these errors, making for a lower performance impact. AES encryption has also doubled from 128-bit to 256-bit.

sata.jpg

The most important of these new features for consumers is, of course, the new SATA 6Gb/s capability. This faster interface translates directly into much higher sequential performance. The second generation of flagship SandForce controllers has also received a boost in small-file performance thanks in no small part to a 20% increase in IOPS: the first-generation SF1200 was rated for up to 50,000 IOPS, whereas the new controller family is rated for 60,000 IOPS.

The other interesting feature which all but the most basic of the SF2000 models boast is SLC NAND support. In the past, a manufacturer had to step up to the enterprise SF1500 to get SLC compatibility, but now they don't have to. Add in lowered power consumption and you can see that while the SF2000 series builds upon the same basic foundation as the previous generation, the two are not all that similar when you take a closer look.
 
A Look at DuraWrite, RAISE and More


Corsair_Force_sandforce_logi.jpg

Let’s start with the elephant in the room and explain why this 240GB drive is in reality a 256GB drive. The OCZ Vertex 3 has sixteen 16GB NAND chips onboard, which gives it a raw capacity of 256GB, but it is seen by the OS as 240GB. Manufacturers use this over-provisioning to help increase IOPS performance, to extend life via wear leveling (as there are always free cells even when the drive is reported as “full”) and to improve durability, since the drive has cells in reserve that it can reassign sectors to as older cells die. While 16GB worth of cells set aside is not that much compared to some previous SandForce models, it is still a lot of space.
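To put rough numbers on that reserve, here is a quick back-of-the-envelope calculation using the figures above (it deliberately ignores the decimal-GB versus binary-GiB accounting that also eats into the capacity the OS reports).

```python
# Rough over-provisioning math for the Vertex 3 240GB, using the figures above.
NAND_CHIPS = 16          # physical NAND packages on the PCB
CHIP_CAPACITY_GB = 16    # 128Gbit = 16GB per package
USER_CAPACITY_GB = 240   # capacity exposed to the operating system

raw_capacity_gb = NAND_CHIPS * CHIP_CAPACITY_GB          # 256GB of raw NAND
reserved_gb = raw_capacity_gb - USER_CAPACITY_GB         # 16GB held back
op_percent = 100.0 * reserved_gb / raw_capacity_gb       # ~6.25% of raw NAND

print(f"Raw NAND: {raw_capacity_gb}GB, reserved: {reserved_gb}GB "
      f"({op_percent:.2f}% over-provisioning)")
```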

dura.jpg

As we said, over-provisioning is usually for wear leveling and ITGC (idle-time garbage collection), as it gives the controller extra cells to work with in keeping all the cells at about the same level of wear. However, this is actually not the main reason SandForce sets aside so much. Wear leveling is at best a secondary reason, or even just a “bonus”, as this over-provisioning is mainly for the DuraWrite and RAISE technologies.

Unlike other solid state drives, which do not compress the data written to them, the SandForce controller does real-time lossless compression. The upside is not only smaller lookup tables (and thus no need for an off-chip cache) but also fewer writes to the cells. Lowering how much data is written means fewer cells have to be used to perform a given task, which should also result in longer life and even fewer controller cycles being taken up with internal housecleaning (via TRIM or ITGC).

Corsair_Force_Fact5.jpg


Longevity may be a nice side effect, but the real purpose of this compression is that the controller needs fewer cells to store a given amount of data and thus has to read from fewer cells than any other drive out there (SandForce claims only 0.5x is written on average). The benefit is that even at the NAND level, the storage itself is the bottleneck for any controller: no matter how fast the NAND is, the controller is faster. Cycles are wasted waiting for data retrieval, and the fewer cycles you waste, the faster an SSD will be.
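DuraWrite itself is proprietary and SandForce does not document the algorithm, but the general idea of writing less to the NAND than the host sends is easy to illustrate. The sketch below uses zlib purely as a stand-in compressor of our own choosing, not anything the controller actually uses, to show how the effective write ratio drops well below 1.0 for compressible data yet stays at roughly 1.0 for data that is already random or compressed.

```python
import os
import zlib

def effective_write_ratio(host_data: bytes) -> float:
    """Bytes physically 'written' divided by bytes the host sent,
    using zlib as a stand-in for the controller's real-time compression."""
    compressed = zlib.compress(host_data, level=1)  # fast, light compression
    return len(compressed) / len(host_data)

# Highly compressible data (think logs, documents, OS files)
text_like = b"The quick brown fox jumps over the lazy dog. " * 2048

# Incompressible data (think video, JPEGs, encrypted files)
random_like = os.urandom(len(text_like))

print(f"Compressible payload  : {effective_write_ratio(text_like):.2f}x written")
print(f"Incompressible payload: {effective_write_ratio(random_like):.2f}x written")
# The first ratio lands well below 1.0, the second at roughly 1.0, which is why
# SandForce drives shine on 'typical' desktop data but lose their advantage on
# already-compressed files.
```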

Compressing data and thus hopefully getting a nice little speed boost is all well and good, but as anyone who has ever lost data to corruption in a compressed file knows, reliability is far more important. Compression means that any loss to a bad or dying cell (or cells) will be magnified on these drives, so SandForce needed to ensure that the data was kept as secure as possible. While all drives use ECC, SandForce implemented another layer of security to further protect data.

Corsair_Force_Fact4.jpg


Data protection is where RAISE (Redundant Array of Independent Silicon Elements) comes into the equation. All modern SSDs use error correction schemes such as ECC, because as with any mass-produced item there are going to be bad cells, while even good cells will die off as time goes by. Yet data cannot be lost, or the end user’s experience will go from positive to negative. SandForce likes to compare RAISE to RAID 5, but unlike RAID 5, RAISE does not use a parity stripe. SandForce does not explicitly say how it does what it does, but what they do say is that, on top of ECC, redundant data is striped across the array. However, since it is NOT parity data, there is no added overhead incurred by calculating a parity stripe.

Corsair_Force_Fact2.jpg


According to SandForce’s documentation, not only individual bits and even pages of data can be recovered; entire BLOCKS of data can be as well. So if a cell dies or passes on bad data, the controller can compensate, pass on GOOD data, mark the cell as defective and, if necessary, swap out the entire block for a spare from the over-provisioning area. As we said, SandForce does not get into the nitty-gritty details of how DuraWrite or RAISE work, but the fact that they CAN do all this means the controller is most likely writing a hash table along with the data.
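To be clear, SandForce does not publish how RAISE actually works, so the toy model below is purely our own illustration of the behaviour described above, not the real mechanism: data integrity is checked on every read, and when a block returns bad data the GOOD copy is rebuilt from redundant information, the block is retired and a spare from the over-provisioned pool takes its place. Here the redundancy is modelled as a simple spare copy, which real RAISE almost certainly does not do.

```python
import hashlib

BLOCK_SIZE = 4096

class ToyRaiseArray:
    """Toy model of the detect / recover / remap flow described above.
    This is NOT how RAISE is implemented; it only mimics the externally
    visible behaviour the documentation describes."""

    def __init__(self, num_blocks: int, num_spares: int):
        self.blocks = [bytes(BLOCK_SIZE) for _ in range(num_blocks)]
        self.spares = num_spares                 # blocks held back in over-provisioned space
        self.checksums = [None] * num_blocks     # per-block integrity information
        self.recovery = [None] * num_blocks      # stand-in redundant recovery data
        self.retired = 0                         # count of blocks marked defective

    def write(self, idx: int, data: bytes) -> None:
        assert len(data) == BLOCK_SIZE
        self.blocks[idx] = data
        self.checksums[idx] = hashlib.sha256(data).digest()
        self.recovery[idx] = data                # redundant copy kept in reserved space

    def read(self, idx: int) -> bytes:
        data = self.blocks[idx]
        if hashlib.sha256(data).digest() == self.checksums[idx]:
            return data                          # block is healthy, pass data straight through
        # Corruption detected: rebuild good data from the redundancy,
        # retire the bad block and consume a spare from the reserved pool.
        good = self.recovery[idx]
        self.retired += 1
        self.spares -= 1
        self.blocks[idx] = good                  # the remapped block now holds good data
        return good

# Usage: corrupt a block behind the array's back and watch the good data come back.
array = ToyRaiseArray(num_blocks=8, num_spares=2)
array.write(3, b"\xab" * BLOCK_SIZE)
array.blocks[3] = b"\x00" * BLOCK_SIZE           # simulate a failing block returning bad data
assert array.read(3) == b"\xab" * BLOCK_SIZE     # GOOD data is still returned to the host
print(f"blocks retired: {array.retired}, spares left: {array.spares}")
```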

SandForce is so sure of their controller’s abilities that they state the chances of data corruption are not only lower than on other manufacturers’ drives, but actually approach ZERO. This is a very bold claim, and only time will tell if their estimates are correct. In the meantime, we are willing to give them the benefit of the doubt and say that, at the very least, data corruption is as unlikely with one of these products as it is on any modern MLC drive.
 
Wading Into The 25nm Endurance Debate


When 25nm NAND was first introduced, we all heard the doom and gloom about the decreased write / erase endurance of the NAND cells. It certainly made for some nail-biting reading material, but the truth is far less dire than it first seemed.

For understandable reasons, manufacturers like Micron don’t seem to be all that open when it comes to discussing the average write / erase cycle endurance of their NAND. That is putting it mildly, since most refuse to publish actual average cycle endurance and instead rely on somewhat shady MTBF (mean time between failures) figures. However, with a number of publications discussing potential endurance limitations of 25nm NAND without citing any sources, we began wondering whether the numbers being thrown around were accurate and if they should really matter to the end user.

25nm.jpg

Let’s begin with endurance. Generally speaking, the industry-standard endurance for 34nm MLC NAND seems to be somewhere around the 5,000 to 7,000 write / erase cycle mark, after which the cells refuse to change states and essentially die, leaving them as read-only cells. 54nm MLC NAND, on the other hand, is normally listed as having an endurance of around 10,000 w/e cycles. See the pattern emerging? As density increases and cell size decreases, write endurance tends to degrade.

Most of the time these seemingly low figures were used as a rallying cry by SSD detractors, but there is an excellent white paper from Toshiba that outlines the techniques which can be implemented to augment the life expectancy of MLC SSDs. Many of these same techniques are currently in use on modern drives and tend to be quite beneficial in prolonging the life of an SSD.

Let’s take the numbers Toshiba has published, since they are definitely a worst-case scenario at a mere 1,400 cycles before wear leveling is taken into account. Even today’s 25nm MLC drives supposedly have an endurance of about 3,000 cycles.

1.jpg

1,400 cycles may not seem like much, but when over-provisioning and wear leveling are taken into account, even an SSD with such low endurance can have its life extended dramatically. In the chart above (compiled by Toshiba), an average user will likely write around 3GB per day to their primary drive, while heavier users average around 9GB even with hibernation built into the equation. Meanwhile, the daily write limit of a 128GB SSD in this example works out to 44GB over the course of five years. To put this into context, you would have to write upwards of 44GB to the drive every day for five years in order to hit its maximum wear level.
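For those curious where a figure like 44GB per day comes from, the back-of-the-envelope version of that calculation looks like this. The write-amplification factor is an assumption on our part (Toshiba's white paper folds wear leveling and usage patterns into its own model), so treat this as an illustration of the method rather than a reproduction of their exact numbers.

```python
# Back-of-envelope endurance math for a 128GB drive with 1,400 P/E cycle NAND.
CAPACITY_GB = 128          # usable capacity
PE_CYCLES = 1400           # worst-case program/erase cycles per cell
WRITE_AMPLIFICATION = 2.2  # assumed: NAND bytes written per host byte written
LIFESPAN_DAYS = 5 * 365    # five-year service life

total_nand_writes_gb = CAPACITY_GB * PE_CYCLES                   # raw NAND endurance
total_host_writes_gb = total_nand_writes_gb / WRITE_AMPLIFICATION
daily_budget_gb = total_host_writes_gb / LIFESPAN_DAYS

print(f"Host writes available over 5 years: {total_host_writes_gb:,.0f}GB")
print(f"Daily write budget: {daily_budget_gb:.0f}GB/day")  # ~45GB/day, in line with
                                                           # the ~44GB figure above
```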

Most home and office users won’t get anywhere near this level of usage unless they start using the SSD in a file server, NAS or some other high usage scenario.

2.jpg

Since more cells are available on larger drives, the total amount of data that can be written before cells start to fail scales up with capacity. Unfortunately, the opposite is true for smaller capacity products, which will hit their endurance limits much more quickly.

So after all of this you may be wondering what it has to do with the question that started this long-winded explanation: does 25nm NAND lower the overall life expectancy of a drive? In theory it does, but in the grand scheme of things few, if any, gamers, home users and enthusiasts will even come close to testing the limits of these drives’ endurance. In addition, higher capacity drives are granted an expanded endurance overhead, so there is even less to worry about. So breathe easy folks and buy with confidence.
 
A Closer Look at the Vertex 3 240GB


Since OCZ was gracious enough to send us a pre-production sample of this drive, it doesn’t come with the usual packaging and accessories seen on retail products. However, we have been told that the overall design intent and performance will be reflected in the mass market retail drives as they become available.

OCZ_Vertex3_case_sm.jpg

Our unit came in a classic OCZ two-tone chassis which may or may not be carried over to the retail drives. A black chassis with a silver lid is pretty much standard fare for OCZ and while it is not the flashiest SSD we have seen, no one should really care since it will be stowed away inside a computer case anyway.

OCZ_Vertex3_board_sm.jpg
OCZ_Vertex3_board2_sm.jpg

What we can tell you is that our particular sample has sixteen 128Gbit (16GB) 25nm Micron-branded NAND flash chips and the latest SandForce SF2281 controller.

This 8-channel, second-generation SandForce controller is SATA 6Gb/s enabled (rated for a maximum transfer speed of 550MB/s and upwards of 60,000 IOPS). As with the first-generation controller, it requires no external RAM cache. Also, since it isn’t meant for the enterprise market, it does not support a power-fail circuit and thus ships without a super capacitor.
 
Testing Methodology


Testing a drive is not as simple as putting together a bunch of files, dragging them onto a folder on the drive in Windows and using a stopwatch to time how long the transfer takes. Rather, there are factors such as read / write speed and data burst speed to take into account. There is also the SATA controller on your motherboard and how well it works with SSDs and HDDs to think about. For the best possible results you really need a dedicated hardware RAID controller with its own RAM for drives to shine. Unfortunately, most people do not have the time, inclination or funds to do this. For this reason our testbed is a more standard motherboard with no mods or high-end gear added to it. This helps replicate what your experience as an end user will be like.

Even when the hardware side is taken care of, the software itself will have a positive or negative impact on the results. As with the hardware, to obtain the absolute best results you do need to tweak your OS setup; however, just as with the hardware, most people are not going to do this, so our standard OS setup is used. Except for the Vista load time test, we have done our best to eliminate this issue by testing each drive as a secondary drive, with the main drive being a Phoenix Pro 120GB Solid State Drive.

For synthetic tests we used a combination of ATTO Disk Benchmark, HDTach, HD Tune, Crystal DiskMark, IOMeter, AS-SSD and PCMark Vantage.

For real-world benchmarks we timed how long a single 10GB rar file took to copy to and then from the device, as sketched below. We also used 10GB of small files (ranging from 100KB to 200MB), totalling 12,000 files in 400 subfolders.
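At its core that real-world test is nothing more exotic than copying the files and timing how long it takes. A minimal sketch of the idea is shown below; the paths are placeholders, and our actual runs also repeat each copy and average the results.

```python
import shutil
import time
from pathlib import Path

def timed_copy(src: Path, dst_dir: Path) -> float:
    """Copy a single file and return the elapsed wall-clock time in seconds."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    start = time.perf_counter()
    shutil.copy2(src, dst_dir / src.name)
    return time.perf_counter() - start

# Placeholder paths for illustration only.
src_file = Path("D:/testdata/archive_10GB.rar")   # large single-file test
dst_folder = Path("E:/copytest")                  # the drive being reviewed

elapsed = timed_copy(src_file, dst_folder)
size_gb = src_file.stat().st_size / 1e9
print(f"Copied {size_gb:.1f}GB in {elapsed:.1f}s "
      f"({size_gb * 1000 / elapsed:.0f}MB/s average)")
```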


For all testing an Asus P8P67 Deluxe motherboard was used, running Windows 7 Ultimate 64-bit (or Vista for the boot time test). All drives were tested in AHCI mode using Intel RST 10 drivers.

All tests were run four times and the averaged results are presented.

In between test suites (with the exception of IOMeter, where this was done after every run), the drives were cleaned with HDDErase, SaniErase or the OCZ SSD Toolbox and then quick formatted to make sure they were in optimum condition for the next test suite.


Processor: Core i5 2400
Motherboard: Asus P8P67 Deluxe
Memory: 8GB Mushkin DDR3 1300
Graphics card: Asus 5550 passive
Hard Drive: 1x Seagate 3TB XT, OCZ 120GB RevoDrive
Power Supply: XFX 850


SSD FIRMWARE (unless otherwise noted):

OCZ Vertex: 1.6
OCZ Vertex 2 100GB: 1.33
Mushkin Callisto Deluxe 40GB: 3.4.0
Corsair Force F90: 2.0
OCZ Vertex 3 240GB: 1.11
Crucial C300 128GB: 006
 
Read Bandwidth / Write Performance

Read Bandwidth


For this benchmark, HDTach was used. It shows the potential read speed you are likely to experience with these drives. The long test was run to give a slightly more accurate picture. We don’t put much stock in burst speed readings, so we no longer include them. The most important figure is the average speed; it tells you what to expect from a given drive in normal, day-to-day operations. The higher the average, the faster your entire system will feel.

read.jpg

Since this is a SATA 6Gb/s enabled solid state drive we were expecting some pretty impressive numbers, and the OCZ Vertex 3 does not fail to impress. With an average sequential read speed of nearly 420MB/s, this Solid State Drive is not only much faster than its predecessor the Vertex 2, but also noticeably faster than the Crucial C300 128GB, an SSD which also sports a SATA 6Gb/s interface.



Write Performance


For this benchmark HD Tune Pro was used. To run the write benchmark on a drive you must first remove all of its partitions; only then will the test run. Unlike some other benchmarking utilities, HD Tune Pro writes across the full area of the drive, so it easily exposes any weaknesses a drive may have.

write.jpg


While the OCZ Vertex 3 is unable to post unreal sequential write numbers like it can with sequential reads, an average of 375MB/s is still impressively fast. Also impressive is the fact that this SSD posted a minimum speed which is actually faster (nearly 13% faster, to be precise) than the Vertex 2’s average speed!
 
Crystal DiskMark / PCMark Vantage

Crystal DiskMark


Crystal DiskMark is designed to quickly test the performance of your drives. The program measures sequential and random read/write speeds and allows you to set the number of test iterations to run. We left the number of runs at 5 and the test size at 100MB.

cd_w.jpg

cd_r.jpg

While the single queue depth 4K numbers are only moderately higher than those of some other drives, the true difference in performance comes to light when you look at the queue depth 32 numbers. On the read side of things, the OCZ Vertex 3 posted a 4K QD32 result which is actually higher than the Vertex 2’s Crystal DiskMark sequential read numbers.



PCMark Vantage


While there are numerous suites of tests that make up PCMark Vantage, only one is pertinent here: the HDD Suite. It consists of 8 tests that try to replicate real-world drive usage, covering everything from how long a simulated virus scan takes to complete, to Windows Vista start-up time, to game load times. However, we do not consider this anything other than another suite of synthetic tests, so while each test is scored individually we have opted to include only the overall score.

vantage.jpg

Well, it appears that even PCMark Vantage loves this SSD. This also makes the OCZ Vertex 3 “four for four” in synthetic tests, as it has outright dominated every other solid state drive we have looked at. Colour us impressed, to say the least.
 
AS-SSD / Access Time

AS-SSD


AS-SSD is designed to quickly test the performance of your drives. The program measures sequential and small 4K read/write speeds as well as 4K speeds at a queue depth of 64. While its primary goal is to accurately test Solid State Drives, it works equally well on all storage mediums; it just takes longer to run each test, as each one reads or writes 1GB of data.

asd_w.jpg


asd_r.jpg

This certainly is interesting. While once again the power of this drive is simply awe-inspiring, it is not a clean sweep. At an ultra-deep queue depth of 64 (very deep for a home user environment), the Vertex 3 is only able to come in second place. It seems the Marvell controller inside the Crucial C300 is more favourably inclined towards deep queue depths. This is most likely because of the added overhead the SandForce controller has to deal with (e.g. data encryption) and the fact that it does not have an external RAM cache to fall back on. With that being said, the write speed of this drive is simply amazing and the reads are nothing to be ashamed of either.


Access Time


To obtain an accurate reading on the read and write latency of a given drive, AS-SSD was used for this benchmark. A low number means the drive’s data can be accessed quickly, while a high number means more time is spent trying to access different parts of the drive.
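For those wondering how a latency figure like this is produced, the basic approach is to issue many small reads at random offsets and average how long each one takes. AS-SSD's internals are its own, so the sketch below is only a rough stand-in, and it will understate real latency unless the OS page cache is bypassed.

```python
import os
import random
import time

def average_read_latency(path: str, samples: int = 1000, block: int = 4096) -> float:
    """Average time (ms) to read one 4KB block at a random offset.
    Note: without direct I/O the OS page cache will absorb repeat hits,
    so real benchmarks bypass the cache; this is only a sketch."""
    size = os.path.getsize(path)
    total = 0.0
    with open(path, "rb", buffering=0) as f:      # unbuffered at the Python level
        for _ in range(samples):
            offset = random.randrange(0, max(size - block, 1))
            start = time.perf_counter()
            f.seek(offset)
            f.read(block)
            total += time.perf_counter() - start
    return 1000.0 * total / samples

# Example: point it at any large file on the drive under test (placeholder path).
print(f"~{average_read_latency('E:/testdata/largefile.bin'):.3f} ms per 4K read")
```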

random.jpg

While the write access time of the Vertex 3 has not improved much over its predecessor the Vertex 2, the same cannot be said of the read latency. With a read latency less than HALF that of the Vertex 2, this is easily the best result we have ever seen. It seems the second-generation SandForce controller has a lot to offer, and not just in the raw horsepower department.
 