
OWC Mercury Extreme Pro 120GB Solid State Drive Review


AkG

Well-known member
Joined
Oct 24, 2007
Messages
5,284


OWC Mercury Extreme Pro 120GB Solid State Drive Review




Manufacturer’s Product Page: Click Here
Part Number: OWCSSDMX120
Warranty: 3 Years
Price: about $320 US Directly from OWC



*PLEASE NOTE: The day this review went live, new information was given to us which changes certain aspects of the Mercury. We appreciate your understanding, and any changes will be listed at the top of every page. Pay special attention to the pricing, which we listed as $380 but is actually $320 at retail.


The juggernaut known as SandForce’s SF-1222 controller has continued its drive to world domination as it picks up new manufacturers on an almost daily basis. Naturally, some of the companies that have picked up this controller are not exactly household names here in North America outside of certain circles (Apple users, for example), yet many are now trying to make a splash in the lucrative SSD market. One such company is Other World Computing (OWC for short), which is best known for catering to that highly demanding, yet extremely finicky crowd of alternative computing enthusiasts. Today we are going to be looking at their brand new Mercury Extreme Pro 120GB solid state drive.

As we said, OWC is not exactly well known here in North America outside of Apple circles, but their presence in this market is nonetheless established. Even though OWC isn’t a name on everyone’s lips, they were actually the first company to release a SandForce drive, beating even OCZ to the marketplace. That has to count for something, and in our minds it shows OWC’s commitment to their customers.

As the 120GB in the name suggests, this is not just any ordinary SandForce drive: it is the first of the so-called “extended” products we will be looking at. In a nutshell, you get an extra 20GB of space over the first generation of SF-1222 products for your hard-earned money, with supposedly no downsides. In addition, since this is an extended version we fully expect the Mercury to ship with the latest and greatest 310 production firmware. Needless to say, it will be interesting to see how this new firmware stacks up against the others we have looked at. Of course, OCZ has had the 310 firmware out for a while now, while drives loaded with the 301 and 305 firmware also litter the market from the likes of Corsair, G.Skill, Patriot and others.

OWC seems to have mainly Apple customers in mind here, since the price of their drive stands at a low $320 US. This is actually the lowest price we have seen for a SandForce SF-1222 drive, so it seems OWC could make a splash in what is shaping up to be a very cluttered high-end SSD marketplace.


 


Specifications



<img src="http://images.hardwarecanucks.com/image/akg/Storage/OWC/Mercury_Extreme_Pro_specs.jpg" border="0" alt="" />
<img src="http://images.hardwarecanucks.com/image/akg/Storage/OWC/Mercury_Extreme_Pro_specs2.jpg" border="0" alt="" />
 


A Closer Look at the OWC Mercury Extreme


Since our drive was sent in a non-retail package, we’ll skip right to the money shots of the Mercury.


The OWC Mercury is actually quite a stunning looking drive, but all those looks will likely be for naught simply because it will spend most of its life within the dark confines of your case. We were disappointed by the lack of performance and power consumption figures on the drive itself, but if all else fails you can find this information on OWC’s website.


The connector plate shows us basically what we have seen in the past from other SF-1222 drives: a SATA power connector as well as a single SATA II data port. There are none of the jumper pins we used to see on some other SSDs from the likes of Indilinx and JMicron.


At first glance this drive’s PCB looks very similar to all the previous SandForce units we have looked at: 16 flash chips, with 8 per side laid out in a C configuration, and one centrally located SandForce controller chip. However, first impressions can be deceiving, for if you look closely you will notice that this is not one PCB but in fact two. The giveaway, besides the screws holding the two PCBs together, is that the power and data ports of the Mercury Extreme Pro are not on the side of the board as they usually are on other SandForce drives. Each PCB is a one-sided affair, carrying either 8 NAND chips and the controller chip or 8 NAND chips and the SATA power and data ports.


As we have mentioned more than a dozen times already, the Mercury Extreme Pro uses the SandForce SF1200 (full model name SF-1222TA3-SBH) controller and not the SF1500 or SF1500 “lite” which were originally specified and used in the Mercury Extreme line. The heart of the SF1200 is a licensed Tensilica Diamond Core 570T CPU, a 32-bit RISC processor. While this is just a guess, we would say that at the very least there are probably a few MB of cache on-die as well. Keeping the cache on the chip allows the controller to be much more efficient, as it wastes fewer cycles waiting for data from an external chip.

Just like the majority of SandForce drives we have looked at in the past, the Mercury Extreme Pro uses sixteen Intel-branded 29F64G08CAMDB chips. To be precise, these are the exact same chips we found in the high-end Vertex 2 and the Patriot Inferno. They are also the same mid-tier NAND chips we first set eyes on in the G.Skill Falcon 2, a more budget-oriented drive. In other words, except for the different PCB(s), the OWC drive is basically the same as all those other SandForce drives at the component level.

While Intel is not as forthcoming with their specifications as some other manufacturers, what we do know is that these MLC NAND flash chips are 34nm, 64 gigabit (8GB) units. Since there are 16 of them, the Mercury is in fact a 128GB drive, with 8GB set aside for over-provisioning.
 


A Look at DuraWrite, RAISE and More



Let’s start with the elephant in the room and explain why this 120GB drive is in reality a 128GB drive. The drive has sixteen 8GB NAND chips onboard, which gives it a raw capacity of 128GB, yet it is seen by the OS as 120GB. This is called “over-provisioning” and happens when a manufacturer has their drive consistently under-report its size. Manufacturers use this to increase IOPS performance, to extend life via wear levelling (as there are always free cells even when the drive is reported as “full”) and even to improve durability, since the drive has cells in reserve it can reassign sectors to as “older” cells die. Giving up 8GB of capacity to this buffer is much less extreme than the 28GB the original non-“extended” versions give up, and is more in line with what is expected in the consumer-oriented niche of the market.
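The capacity arithmetic above can be sketched in a few lines. This is just the figures given in the text, with the decimal GB accounting the review itself uses:

```python
# Over-provisioning arithmetic for the Mercury Extreme Pro,
# using the figures from the text (16 x 8GB NAND chips, 120GB visible).
RAW_GB = 16 * 8          # total NAND on board: 128GB
USABLE_GB = 120          # capacity reported to the OS

reserved_gb = RAW_GB - USABLE_GB
op_percent = 100 * reserved_gb / RAW_GB

print(f"Raw: {RAW_GB}GB, usable: {USABLE_GB}GB")
print(f"Over-provisioned: {reserved_gb}GB ({op_percent:.2f}% of raw NAND)")
# For comparison, the original non-extended 100GB drives set aside
# 28GB of the same 128GB of NAND (about 21.9%).
```

Same NAND, very different reserve: the extended firmware simply hands more of the 128GB over to the user.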


As we said, over-provisioning is usually for wear levelling and ITGC, as it gives the controller extra cells to work with for keeping all the cells at about the same level of wear. However, this is actually not the main reason SandForce sets aside so much. Wear levelling is at best a secondary reason, or even just a “bonus”: this over-provisioning is mainly for the DuraWrite and RAISE technologies.

Unlike other solid state drives, which do not compress the data written to them, the SandForce controller performs real-time lossless compression. The upside is not only smaller lookup tables (and thus no need for off-chip cache) but also fewer writes to the cells. Lowering how much data is written means fewer cells have to be used to perform a given task, which should result in longer life and even fewer controller cycles being taken up with internal housecleaning (via TRIM or ITGC).


Longevity may be a nice side effect, but the real purpose of this compression is that the controller has to use fewer cells to store a given amount of data and thus reads from fewer cells than any other drive out there (SandForce claims only 0.5x is written on average). The key point is that the NAND-level storage itself is the bottleneck for any controller: no matter how fast the NAND is, the controller is faster. Cycles are wasted waiting for data retrieval, and the fewer cycles wasted, the faster an SSD will be.
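To see how compression shrinks NAND traffic, here is a rough illustration. DuraWrite’s actual algorithm is proprietary and undocumented, so we use zlib purely as a stand-in; the point is only that compressible host data turns into fewer bytes hitting the flash:

```python
import zlib

# Illustrative only: DuraWrite's real algorithm is proprietary. zlib here
# just demonstrates how compressible host data shrinks the NAND writes.
host_data = b"typical OS and application files repeat a great deal " * 200

nand_bytes = len(zlib.compress(host_data))
write_amp = nand_bytes / len(host_data)

print(f"Host writes: {len(host_data)} bytes")
print(f"NAND writes: {nand_bytes} bytes ({write_amp:.2f}x)")
# SandForce claims ~0.5x written on average across real workloads.
# Incompressible data (video, archives) would approach 1.0x instead.
```

The second comment is the catch: the benefit evaporates on already-compressed files, which is worth keeping in mind when reading the benchmarks later.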

Compressing data and hopefully getting a nice little speed boost is all well and fine, but as anyone who has ever lost data to corruption in a compressed file knows, reliability is much more important. Compression means any loss to a bad or dying cell (or cells) will be magnified on these drives, so SandForce needed to ensure that the data was kept as secure as possible. While all drives use ECC, to further ensure data protection SandForce implemented another layer of security.



Data protection is where RAISE (Redundant Array of Independent Silicon Elements) comes into the equation. All modern SSDs use error correction such as ECC because the simple fact of the matter is that with any mass-produced item there are going to be bad cells, and even good cells are going to die off as time goes by; yet data cannot be lost or the end user’s experience will go from positive to negative. SandForce likes to compare RAISE to RAID 5, but unlike RAID 5, RAISE does not use a parity stripe. SandForce does not explicitly say how it works, but what they do say is that on top of ECC, redundant data is striped across the array. However, since it is NOT parity data, there is no added overhead incurred by calculating a parity stripe.


According to SandForce’s documentation, not only can individual bits or even pages of data be recovered, but entire BLOCKS of data can be as well. So if a cell dies or passes on bad data, the controller can compensate, pass on GOOD data, mark the cell as defective and, if necessary, swap out the entire block for a spare from the over-provisioning area. As we said, SandForce does not get into the nitty-gritty details of how DuraWrite or RAISE works, but the fact that it CAN do all this means it is most likely writing a hash table along with the data.
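The detect-then-recover flow the text speculates about can be modelled in miniature. To be clear, this is a toy: SandForce documents none of RAISE’s internals, so the checksum choice, the layout, and the “redundant copy” are all our own stand-ins for whatever the controller really does:

```python
import hashlib

# Toy model of the idea described above: each block is stored with a
# checksum, and redundant data elsewhere lets a failed block be rebuilt.
# Purely illustrative; RAISE's actual mechanism is undocumented.

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

block = b"user data page"
stored = {
    "data": bytearray(block),        # the "live" copy in one NAND block
    "sum": checksum(block),          # integrity record written alongside it
    "redundant_copy": bytes(block),  # stand-in for data striped elsewhere
}

# Simulate a dying cell flipping a byte.
stored["data"][0] ^= 0xFF

recovered = None
if checksum(bytes(stored["data"])) != stored["sum"]:
    # Corruption detected: serve the redundant copy and retire the block.
    recovered = stored["redundant_copy"]
    print("bad block detected, recovered intact:", recovered == block)
```

The real controller does this at line speed, per block, and then maps a spare in from the over-provisioned pool, but the detect/verify/substitute sequence is the same shape.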

SandForce is so sure of their controller’s abilities that they state the chances of data corruption are not only lower than on other manufacturers’ drives, but actually approach ZERO. This is a very bold statement, and only time will tell if their estimates are correct. In the meantime, we are willing to give them the benefit of the doubt and say that at the very least data corruption is as unlikely on the Mercury as it is on any modern MLC drive.
 


Firmware, Trim & Self Maintenance



As you can see, the firmware which comes preloaded on the Mercury is labelled “310A13F0”, making it the latest “mass production” firmware, commonly referred to simply as “310”. This means it has all the tweaks and bug fixes that go along with later firmware revisions, but also the same performance-limiting issues we have mentioned in the past. As discussed in earlier reviews, SandForce hobbled the small-file IO/s performance of all but the Vertex 2 starting with firmware 305.

What all this means is that in theory this drive’s competition is more along the lines of the Agility 2 Extended and not the Vertex 2 Extended. However, since this is the only drive we have tested with the newer 310 firmware, it will be interesting to see if SandForce eased up on their chokehold and allowed non-OCZ customers a bit of breathing room on the performance end of things.

This is not the first time we have seen this firmware revision as OCZ has had their own firmware 1.1 out for a little while now. Since we have already retested the OCZ (expect to see some new numbers in the charts!) and know what the potential of this firmware is, any variation will be readily apparent.

For anyone who has a 100GB OWC (or other 100GB SandForce drive for that matter), please don’t get your hopes up. Firmware 310, like 309 before it, will not turn your 100GB drive into a 120GB product (or a 50 into a 60, etc.). It simply allows both “standard” and “extended” drives to run the same firmware, making for less confusion and less chance of bricking your drive by flashing an “incorrect” firmware. SandForce literally just rolled two slightly different firmwares into ONE package.

<img src="http://images.hardwarecanucks.com/image/akg/Storage/OWC/ocz-toolbox.jpg" border="0" alt="" />

For anyone wondering whether the upcoming OCZ Toolbox (0.6 beta) works on the Mercury Extreme Pro, the answer is yes. One of the nicest features we have found in this program, besides its ability to stop SMART monitoring programs from reporting the drive as bad via a secure erase, is that it shows exactly how many free cells are left in reserve to replace dead ones.


One Note of Concern (And Some Really Technical Stuff)


In OCZ’s ToolBox, we have yet to see a drive showing less than 100% replacement blocks “out of the box”, but this drive is the very first which showed a mere 98%. Worse still, by the end of testing it had sunk to 88%, with 2048 bad blocks replaced. We may be hard on these drives in testing, but losing an additional 10% of reserved cells (as new bad ones die an early death) is disquieting to say the least.

This is a direct result of the extended firmware scheme, and as with all things in life there is no free lunch. To get the “extra” 20GB of space you have to give something up, and it appears the disconcerting downside is a severe curtailing of this drive’s potential lifespan. We say this because, while in theory you get 10,000 write cycles per cell, not all cells are going to last that long. This is where the replacement blocks come in handy, as they replace the bad ones seamlessly and usually before catastrophic failure and loss of data. Once you run out of replacement blocks, the drive’s lifespan starts decreasing exponentially.

Based on an admittedly small sample size, it appears SandForce has cut the over-provisioned pool of replacement blocks to roughly a third of its previous size. For example, an OCZ Vertex 2 100GB with 1728 bad blocks shows 96% of its free cells still available, which works out to approximately 49,152 (48K) to 51,200 (50K) replacement blocks set aside, assuming the 1728 figure could climb as high as 2048 (2K) before the percentage drops further. Whereas 2048 bad blocks consuming 12% of the OWC’s over-provisioned blocks works out to roughly 16,384 (16K) replacement blocks. We are not going to get into the possible reduction of the error correction / checksum tables that make up RAISE since, unlike bad block replacement and allocation, it is conceivable that this data is now being written amongst the 120GB of user space.
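The back-of-envelope estimate above is easy to reproduce. Keep in mind the ToolBox percentage is presumably rounded, which is why the review pads its Vertex 2 range up toward binary (Ki) boundaries; the naive division below is the raw version of the same arithmetic:

```python
# Rough reserve-size estimate from ToolBox readings:
#   reserve ~= bad_blocks / fraction_of_reserve_consumed
# The displayed percentage is likely rounded, so treat results as ballpark.

def estimate_reserve(bad_blocks: int, percent_remaining: float) -> float:
    consumed = (100.0 - percent_remaining) / 100.0
    return bad_blocks / consumed

# OCZ Vertex 2 100GB: 1728 bad blocks with 96% of the reserve still free.
print(f"Vertex 2 reserve: ~{estimate_reserve(1728, 96):,.0f} blocks")  # ~43,200

# OWC Mercury 120GB: 2048 bad blocks with 88% of the reserve still free.
print(f"Mercury reserve:  ~{estimate_reserve(2048, 88):,.0f} blocks")  # ~17,067
```

However you round it, the Mercury’s pool comes out around a third the size of the 100GB drive’s, which is exactly the trade the extra 20GB buys.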

To put this into perspective, 8GB of free cells just waiting to jump into action to replace early-death cells is still pretty darn good for a consumer-oriented solid state drive. It’s just nowhere near as good as the number of cells the 100GB drives keep in reserve.
 


Testing Methodology


Testing a drive is not as simple as putting together a bunch of files, dragging them onto a folder on the drive in Windows and using a stopwatch to time how long the transfer takes. Rather, there are factors such as read / write speed and data burst speed to take into account. There is also the SATA controller on your motherboard and how well it works with SSDs to think about. For the best results you really need a dedicated hardware RAID controller with dedicated RAM for SSDs to shine. Unfortunately, most people do not have the time, inclination or funds to do this. For this reason our testbed is a more standard motherboard with no mods or high-end gear added to it. This helps replicate what your experience as an end user will be like.

Even when the hardware issues are taken care of, the software itself will have a negative or positive impact on the results. As with the hardware, to obtain the absolute best results you need to tweak your OS setup; however, just as on the hardware side, most people are not going to do this, so our standard OS setup is used. Except for the XP load time tests, we have done our best to eliminate this issue by testing each drive as a secondary drive, with the main drive being a WD 320GB single-platter drive.

For the synthetic benchmarks we used a combination of the ATTO Disk Benchmark, HDTach, HDTune, Crystal Disk Benchmark, h2benchw, the SiSoft Sandra Removable Storage benchmark and IOMeter.

For real world benchmarks we timed how long XP startup took, how long Adobe CS3 (with an enormous number of custom brushes installed) took to load, and how long a single 4GB rar file took to copy to and then from the drive, then to itself. We also used 1GB of small files (from 1KB to 20MB), totalling 2108 files in 49 subfolders.

For the temperature testing, readings are taken directly from the hottest part of the drive case using a digital infrared thermometer. The thermometer used has a 9:1 distance-to-spot ratio, meaning that at 9cm it takes its reading from an area 1cm across. To obtain the numbers used in this review, the thermometer was held approximately 3cm away from the drive and only the hottest number obtained was used.
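For anyone unfamiliar with distance-to-spot ratios, the measured area scales linearly with distance, which is why the 3cm standoff matters:

```python
# A 9:1 distance-to-spot infrared thermometer reads a circle whose
# diameter is 1/9th of the distance to the target.
DS_RATIO = 9.0

def spot_diameter_cm(distance_cm: float) -> float:
    return distance_cm / DS_RATIO

# At the ~3cm used in this review, the measured spot is roughly a third
# of a centimetre across: small enough to isolate the hottest point on
# the drive casing rather than averaging over the whole case.
print(f"{spot_diameter_cm(3.0):.2f} cm")
```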


Please note that to reduce variables the same XP OS image was used for all drives.

For all testing a Gigabyte P35-DS4 motherboard was used, with the drives attached to its ICH9 controller.

All tests were run 4 times and average results are represented.

Processor: Q6600 @ 2.4GHz
Motherboard: Gigabyte P35-DS4
Memory: 4GB G.Skill PC2-6400
Graphics card: Asus 8800GT TOP
Hard Drive: 1x WD 320
Power Supply: Seasonic S12 600W

SSD FIRMWARE (unless otherwise noted):
G. Skill Titan: 0955
G.Skill Falcon: 1571 (AKA FW 1.3)
OCZ Apex: 955
OCZ Vertex: 1.3 (AKA FW 1571)
Patriot Torqx: 1571 (AKA FW 1.3)
Corsair P64: 18C1Q
OCZ Summit: 1801Q
A-Data S592: 1279 (AKA PRE 1.1 FW)
OCZ Agility EX 60GB: 1.3 (AKA 1.4 for MLC Indilinx Drives)
Kingston SSDNow V 40GB: 02G9
G.Skill Falcon 2: 1881 (AKA 1.4)
Kingston SSDNow V+ 128GB: AGYA0201
Corsair Nova: 1.0 (AKA 1916/1.5 for most other MLC Indilinx Drives)
Corsair Force F100: 0.2 (AKA bug fixed / modified 3.0.1)
OCZ Vertex 2: 1.1 (custom “full speed” SandForce 310 firmware)
G.Skill Phoenix: 305 (standard “mass production” firmware)
Patriot Inferno: 305 (standard “mass production” firmware)
OWC Mercury Extreme Pro: 310 (standard 310 firmware)
 


Read Bandwidth


For this benchmark, HDTach was used. It shows the potential read speed you are likely to experience with these drives. The long test was run to give a slightly more accurate picture.

We don’t put much stock in burst speed readings, and this goes double for SSDs. The main reason we include them is to show what a given drive is capable of under perfect conditions; the more important figure is the average speed. That number tells you what to expect from a given drive in normal, day-to-day operations: the higher the average, the faster your entire system will seem.



As you can see, the read speed of this SandForce drive is about average for an SF-1222 product. Yes, there is a small difference between the Mercury and its competition, but it really isn't anything to be overly concerned about.


Write Performance


For this benchmark HD Tune Pro was used. To run the write benchmark on a drive you must first remove all partitions from it; only then will the program allow you to run the test. Unlike some other benchmarking utilities, HD Tune Pro writes across the full area of the drive and thus easily exposes any weaknesses a drive may have.


Unfortunately, what we are seeing here are some slightly disappointing minimum numbers, while the average numbers look to be right within the margin of error for a SandForce-based drive.
 


Crystal DiskMark


Crystal DiskMark is designed to quickly test the performance of your drives. The program measures sequential and random read/write speeds and allows you to set the number of test iterations to run. We left the number of tests at 5; when all 5 runs for a given section are complete, Crystal DiskMark averages the 5 numbers to give a result for that section.

Read Performance


<img src="http://images.hardwarecanucks.com/image/akg/Storage/OWC/cdm_r.jpg" border="0" alt="" />​

We’re once again seeing the Mercury in line with a first-generation, pre-mass-production-firmware drive: the Corsair F100. This was pretty much expected, but we would have liked to have seen at least some difference in performance with the latest firmware.


Write Performance


<img src="http://images.hardwarecanucks.com/image/akg/Storage/OWC/cdm_w.jpg" border="0" alt="" />​

Here we’re seeing performance that is in line with other SandForce drives, and the Mercury is actually able to narrowly beat out the excellent G.Skill Phoenix in two of the tests. Not bad at all.
 


Random Access Time


To obtain the most accurate random access time, h2benchw was used for this benchmark. It tests how quickly different areas of the drive can be accessed: a low number means the drive can be accessed quickly, while a high number means more time is taken trying to access different parts of the drive. To run this program, one must use a DOS prompt and tell it which sections of the test to run. While one could use “h2benchw 1 -english -s -tt "harddisk test" -w test”, for example, and run just the seek tests, we took the more complete approach and ran the full gamut of tests, then extracted the necessary information from the resulting text file. The command line argument we used was “h2benchw 1 -a -! -tt "harddisk drivetest" -w drivetest”. This tells the program to write all the results in English, save them to a drivetest text file, perform the write and read tests, and do it all on drive 1 (the second drive found, with 0 being the OS drive).

<img src="http://images.hardwarecanucks.com/image/akg/Storage/OWC/random.jpg" border="0" alt="" />​

Like most other SSDs, the Mercury’s random access time is good when compared to a standard hard drive. Unfortunately, it is one of the “slower” SSDs on this list.



ATTO Disk Benchmark


The ATTO Disk Benchmark tests the drive's read and write speeds using progressively larger files. For these tests, ATTO was set to run from its smallest to its largest value (0.5KB to 8192KB) and the total length was set to 256MB. The program then spits out an extrapolated performance figure in megabytes per second.

Read


<img src="http://images.hardwarecanucks.com/image/akg/Storage/OWC/atto_r.jpg" border="0" alt="" />​

While the read power curve of this drive is decent, it is once again slightly off the pace set by the other SandForce drives.


Write


<img src="http://images.hardwarecanucks.com/image/akg/Storage/OWC/atto_w.jpg" border="0" alt="" />​

The power curve is once again OK, but it is still lower than that of every other SandForce drive we have looked at. This means there has really been no change in performance when going from one firmware to the next.
 


IOMETER


IOMeter is heavily weighted towards the server end of things, and since we here at HWC are more end-user centric we set up and judge the results of IOMeter a little differently than most. To test each drive we ran 5 test runs per drive (queue depths of 1, 4, 16, 64 and 128), each test having 8 parts, each part lasting 10 minutes with an additional 20-second ramp-up. The 8 subparts were set to 100% random, 80% read / 20% write, testing 512B, 1K, 2K, 4K, 8K, 16K, 32K and 64K chunks of data. When each test finishes, IOMeter spits out a report in which each of the 8 subtests is given a score in I/Os per second. We then take these 8 numbers, add them together and divide by 8. This gives us an average score for that particular queue depth that is heavily weighted towards single-user environments.
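The scoring step described above boils down to a simple mean over the eight block-size subtests. The IOPS figures below are placeholders, not measured results, just to show the shape of the calculation:

```python
# How the single-user IOMeter score is derived: each queue depth runs
# eight 100%-random, 80/20 read/write subtests at different block sizes,
# and the eight IOPS figures are averaged into one score.
BLOCK_SIZES = ["512b", "1k", "2k", "4k", "8k", "16k", "32k", "64k"]

def queue_depth_score(iops_per_subtest: list) -> float:
    assert len(iops_per_subtest) == len(BLOCK_SIZES)
    return sum(iops_per_subtest) / len(iops_per_subtest)

# Hypothetical QD1 results, one IOPS figure per block size:
example = [9000, 8800, 8500, 8200, 7400, 6100, 4500, 2900]
print(f"QD1 score: {queue_depth_score(example):.0f} IO/s")  # 6925 IO/s
```

Because small-block results dominate the list, a firmware that throttles small-file IO/s (as 305/310 does on non-OCZ drives) drags the whole score down, which is exactly what the chart below shows.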

<img src="http://images.hardwarecanucks.com/image/akg/Storage/OWC/IOM.jpg" border="0" alt="" />​

We knew from our past experience with SandForce drives running the mundane version of their firmware that as soon as the queue depth gets deeper than 1 the Mercury’s numbers would tank; and tank they did. All in all, this is a less than impressive result for a drive at this price point.


IOMeter Controller Stress Test


In our usual IOMeter test we try to replicate real world use, where reads severely outnumber writes. However, to get a good handle on how powerful the controller is, we also run an additional test. This test consists of a single section at a queue depth of 1, running 100% random, 100% write operations in 4K chunks. In the past we found this test was a great way to check whether stuttering would occur. Since the introduction of ITGC and TRIM, the chances of real-world stuttering on a modern SSD are next to nil; the main focus has shifted from predicting “stutter” to showing how powerful the controller is. By running continuous small random writes we can stress the controller to its maximum, while also removing its cache buffer from the equation (by overloading it) and showing exactly how powerful a given controller is. In the .csv file we then find the Maximum Write Response Time. This, in milliseconds, is the worst example of how long a given operation took to complete. We consider anything higher than 350ms a good indicator that the controller is either relying heavily on its cache buffer to hide its limitations or is severely limited by its firmware.
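Pulling that figure out of the results file is a one-loop job. The CSV below is a simplified stand-in (a real IOMeter export has far more columns, and the exact header text can vary by version), but the threshold logic is the one the review applies:

```python
import csv
import io

# Sketch of the post-processing step: read "Maximum Write Response Time"
# from an IOMeter-style results .csv and flag the 350ms threshold used in
# this review. Column names here are simplified stand-ins.
THRESHOLD_MS = 350.0

sample_csv = """Target,Maximum Write Response Time (ms)
drive1,412.7
"""

for row in csv.DictReader(io.StringIO(sample_csv)):
    max_write_ms = float(row["Maximum Write Response Time (ms)"])
    verdict = ("cache-reliant or firmware-limited"
               if max_write_ms > THRESHOLD_MS else "ok")
    print(f"{row['Target']}: {max_write_ms}ms -> {verdict}")
```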

<img src="http://images.hardwarecanucks.com/image/akg/Storage/OWC/stutter.jpg" border="0" alt="" />​

As with the random access times, we weren’t expecting much and got what we expected. The numbers we see here aren’t all that great, especially when you consider the performance of several last-gen drives we are comparing the Mercury to. On the bright side, Patriot’s Inferno isn’t too far in front.
 