
Corsair Force 120GB Solid State Drive Review


AkG

Well-known member
Joined
Oct 24, 2007
Messages
5,274
The world of Solid State Drives is a fast-paced one, where things change at a breakneck pace as we have seen with the current crop of drives. While it seems like only yesterday that we reviewed the Corsair Force F100 (which happened to be the first SandForce-based drive here at HWC), things have actually changed a lot since then. The SandForce market has fleshed out, presenting several highly regarded drives, and there has even been some talk of more value-oriented drives using the SF-1222 controller as well. Therefore, in some ways it does seem fitting that we go back to where we began and look at Corsair’s other Force-series offering: the F120.

As the name suggests and unlike the F100, the F120 is a 120GB SandForce SF1200-based drive. Basically, SandForce recently provided another option for their clients: BIGGER drives, which reduce the amount of over-provisioning in order to increase usable capacity. The result is the newer “extended” drives which promise all the great performance of their smaller brethren but with more room for all of you who need the space. This should also prove to be a boon for marketing departments, as many customers had some difficulty justifying a 100GB SSD when 128GB drives from competitors were readily available. As such, the new F120 is for all intents and purposes supposed to be the exact same drive as the F100, just with more room “available”.

While it is relatively new on the market, the F120 is already quite widely available throughout North America at retailers and e-tailers alike. However, since this IS a SandForce drive and thus a flagship model, it tends to come with a flagship price of about $350 US. This is actually less than what the F100 goes for, and considering you get more room, the newest Force-series drive may just make for one potent bang-for-your-buck product.


 
Specifications





 
A Closer Look at the Corsair F120



As expected, the Corsair Force F120 comes in the exact same compact grey and white cardboard box as the Force F100. It has a generic look to it but there’s more than enough protection within to get the job done. While not nearly as good as the foam book-like boxes which accompany some other premier drives, the clamshell protection scheme is more than good enough for this kind of kit.


When it comes to accessories, the Force F120 comes with a 2.5” to 3.5” adapter plate and the necessary mounting screws. While the adapter plate is pretty much par for the course with SandForce drives, it is nonetheless a nice feature to include.


Unlike the F100, which had a dark grey case, the F120’s metal case is an all-black affair. Otherwise, there really isn’t all that much to distinguish it from many other SSDs currently on the market, other than the fact that its label really does only list the bare minimum of information.


Since this is literally your typical SandForce-based drive, the PCB and the layout of the chips are not very different from others we have seen in the past. To be precise, this is the exact same PCB as found on the F100, since the “extended” SandForce drives are really just the standard drives with custom controller firmware running on them.

There are 16 flash chips (8 per side, laid out in a C configuration) and one centrally located SandForce controller chip. It is interesting to note that just like other consumer solid state drives we have looked at, the Force does not have an onboard super capacitor to ensure all data is written to the NAND in case of power loss (there is a spot on the PCB for one though). This is par for the course, as only true enterprise-class drives (whether hard disk or solid state based) get onboard super capacitors.


As we already mentioned, the Force drive uses the standard SandForce SF1200 (full model name SF-1222TA3-SBH) controller and not the SF1500. This SandForce controller is a SATA revision 2, 3Gb/s controller which supports native command queuing (NCQ), TRIM, S.M.A.R.T. monitoring and MLC NAND. It is unfortunate that SandForce is not very open about the specifications of their controller, as it is unclear how much onboard cache this controller has. Keeping the cache on the chip allows the controller to be much more efficient, as it wastes fewer cycles waiting for data from an external chip.

While the F100 used Micron MT29F64G08CFABA-WB NAND chips, our F120 uses sixteen Intel-branded 29F64G08CAMDB chips. While the NAND chips in the F100 may say “Micron” on them and these say Intel, they are for all intents and purposes the same chip, just coming from different factories. Since there are sixteen 8GB chips, this drive is also in fact a 128GB drive, with 8GB set aside for over-provisioning.
 
A Look at DuraWrite, RAISE and More




Let’s start with the elephant in the room and explain why this 120GB drive is in reality a 128GB drive. The drive has sixteen 8GB NAND chips onboard, which gives it a raw capacity of 128GB, but it is seen by the OS as 120GB. This is called “over-provisioning” and happens when a manufacturer has their drive consistently under-report its size. Manufacturers use this to help increase IOPS performance, to extend life via wear leveling (as there are always free cells even when the drive is reported as “full”) and even to improve durability, since the drive has cells in reserve that it can reassign sectors to as “older” cells die. Giving up 8GB of capacity to this “buffer” is much less extreme than the 28GB the original non-“extended” versions give up, and is more in line with what is expected in the consumer-oriented niche of the market.
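To put the capacity math in more concrete terms, here is a quick back-of-the-envelope sketch (in Python, purely illustrative):

# Back-of-the-envelope over-provisioning math (illustrative only).
def overprovisioning_pct(raw_gb, user_gb):
    """Percentage of raw NAND held back from the user."""
    return (raw_gb - user_gb) / raw_gb * 100

# F120: sixteen 8GB chips = 128GB raw, 120GB visible -> 8GB reserved
print(overprovisioning_pct(128, 120))  # 6.25
# Original F100: same 128GB raw, only 100GB visible -> 28GB reserved
print(overprovisioning_pct(128, 100))  # 21.875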



As we said, over-provisioning is usually for wear leveling and ITGC (idle-time garbage collection), as it gives the controller extra cells to work with for keeping all the cells at about the same level of wear. However, this is actually not the main reason SandForce sets aside so much. Wear leveling is at best a secondary reason or even just a “bonus”, as this over-provisioning is mainly for the DuraWrite and RAISE technologies.

Unlike other solid state drives, which do not compress the data that is written to them, the SandForce controller does real-time lossless compression. The upside to this is not only smaller lookup tables (and thus no need for off-chip cache) but also that fewer writes will occur to the cells. Lowering how much data is written means that fewer cells have to be used to perform a given task, and this should also result in longer life and even fewer controller cycles being taken up with internal housecleaning (via TRIM or ITGC).



Longevity may be a nice side effect, but the real purpose of this compression is that the controller has to use fewer cells to store a given amount of data and thus has to read from fewer cells than any other drive out there (SandForce claims only 0.5x is written on average). The reasoning is that even at the NAND level, storage itself is the bottleneck: no matter how fast the NAND is, the controller is faster. Cycles are wasted waiting for data retrieval, and the more of those wasted cycles you can eliminate, the faster an SSD will be.
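For anyone who likes the logic spelled out, here is a rough sketch of why compression reduces NAND wear; the 0.5x figure is SandForce’s own average claim, while the 1.1x housekeeping overhead for a non-compressing controller is purely our assumption:

# Illustrative-only sketch of compression's effect on NAND wear.
def nand_gb_written(host_gb, compression_ratio, housekeeping_overhead=1.0):
    """NAND writes = host writes x compression ratio x controller overhead."""
    return host_gb * compression_ratio * housekeeping_overhead

print(nand_gb_written(100, 0.5))       # SandForce-style drive: ~50GB hits the NAND
print(nand_gb_written(100, 1.0, 1.1))  # non-compressing drive: ~110GB hits the NAND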

Compressing data and thus hopefully getting a nice little speed boost is all well and fine, but as anyone who has ever lost data to corruption in a compressed file knows, reliability is much more important. Compression means that any potential loss to a bad or dying cell (or cells) will be magnified on these drives, so SandForce needed to ensure that the data was kept as secure as possible. While all drives use ECC, to further ensure data protection SandForce implemented another layer of security.



Data protection is where RAISE (Redundant Array of Independent Silicon Elements) comes into the equation. All modern SSDs use various error correction concepts such as ECC. This is because, as with any mass-produced item, there are going to be bad cells, and even good cells are going to die off as time goes by. Yet data cannot be lost or the end user’s experience will go from positive to negative. SandForce likes to compare RAISE to RAID 5, but unlike RAID 5, RAISE does not use a parity stripe. SandForce does not explicitly say how it does what it does, but what they do say is that on top of ECC, redundant data is striped across the array. However, since it is NOT parity data, there is no added overhead incurred by calculating a parity stripe.



According to SandForce’s documentation, not only individual bits or even pages of data can be recovered, but entire BLOCKS of data can be as well. So if a cell dies or passes on bad data, the controller can compensate, pass on GOOD data, mark the cell as defective and, if necessary, swap out the entire block for a spare from the over-provisioning area. As we said, SandForce does not get into the nitty-gritty details of how DuraWrite or RAISE work, but the fact that it CAN do all this means that it is most likely writing a hash table along with the data.
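Since SandForce keeps the details to themselves, the following is nothing more than a conceptual sketch of the verify-and-remap flow described above, with the hash check standing in for whatever table the controller actually writes alongside the data; every detail here is an assumption:

import hashlib

def read_with_raise(data, stored_digest, rebuild_from_redundancy, retire_block):
    """Return verified data, recovering and remapping on a bad read."""
    if hashlib.sha256(data).digest() == stored_digest:
        return data                    # common case: data checks out
    good = rebuild_from_redundancy()   # redundant data striped across the NAND
    retire_block()                     # mark the block bad, pull a spare from the OP area
    return good                        # the host only ever sees GOOD data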

SandForce is so sure of their controller’s abilities that they state the chances of data corruption are not only lower than on other manufacturers’ drives, but actually approach ZERO. This is a very bold statement, and only time will tell if their estimates are correct. In the meantime, we are willing to give them the benefit of the doubt and say that at the very least data corruption is as unlikely with one of these products as it is on any modern MLC drive.
 
DuraWrite, Firmware & Trim


[Image: http://images.hardwarecanucks.com/image/akg/Storage/F120/cdinfo.jpg]

As you can see, our drive shipped with the “30CA13F0” firmware. In more common nomenclature this is the 310 firmware, which we first saw with the OWC Mercury Extreme Pro 120GB SSD. The reason Corsair has updated this firmware beyond the 301 “release candidate” which ships with their F100 drive is simple: SandForce never made a 301 RC firmware capable of running on extended drives. Corsair literally had to either go with 309, which was the first extended RC firmware, or go for the latest version. Since both are dumbed-down firmware in comparison to the original 301 firmware, it made no sense NOT to go with the latest and greatest.

On the upside, this means that every firmware revision which comes down the pipeline from SandForce should eventually be available for the F120 without any decrease in performance. The downside, of course, is that the F120 is most likely going to be slower than its smaller brother, the F100. We will see later in the testing stage if this trade-off was worth it. It also goes without saying that this drive ships with TRIM-enabled firmware right out of the box.

[Image: http://images.hardwarecanucks.com/image/akg/Storage/F120/toolbox.jpg]

For anyone interested: yes, the OCZ Toolbox does work on this drive. Also worth noting, this was the very first SandForce drive we have received which did NOT need a sanitary erase before its SMART numbers would report anything besides “BAD”.
 
Testing Methodology


Testing a drive is not as simple as putting together a bunch of files, dragging them onto a folder on the drive in Windows and using a stopwatch to time how long the transfer takes. Rather, there are factors such as read/write speed and data burst speed to take into account. There is also the SATA controller on your motherboard and how well it works with SSDs to think about as well. For the best results you really need a dedicated hardware RAID controller with dedicated RAM for SSDs to shine. Unfortunately, most people do not have the time, inclination or funds to do this. For this reason our testbed is a more standard motherboard with no mods or high-end gear added to it. This is to help replicate what your experience as an end user will be like.

Even when the hardware issues are taken care of, the software itself will have a negative or positive impact on the results. As with the hardware end of things, to obtain the absolute best results you do need to tweak your OS setup; however, just as with the hardware, most people are not going to do this. For this reason our standard OS setup is used. However, except for the XP load time test, we have done our best to eliminate this issue by testing the drive as a secondary drive, with the main drive being a WD 320 single-platter drive.

For synthetic benchmarks we used a combination of ATTO Disk Benchmark, HD Tach, HD Tune, Crystal Disk Benchmark, h2benchw, the SiSoftware Sandra Removable Storage benchmark, and IOMeter.

For real-world benchmarks we timed how long XP startup took, how long Adobe CS3 (with enormous amounts of custom brushes installed) took to load, and how long a single 4GB RAR file took to copy to and then from the drive, and then to copy to itself. We also used 1GB of small files (from 1KB to 20MB, a total of 2,108 files in 49 subfolders).
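For those curious, a timed copy run like this can also be scripted; a minimal sketch follows (our actual runs were timed by hand, and the paths shown are hypothetical stand-ins):

import shutil, time

def timed_copy(src, dst):
    """Time a single file copy in seconds."""
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    return time.perf_counter() - start

print(timed_copy("D:/testdata/single_4gb.rar", "E:/single_4gb.rar"))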

For the temperature testing, readings are taken directly from the hottest part of the drive case using a digital infrared thermometer. The infrared thermometer used has a 9 to 1 distance-to-spot ratio, meaning that at 9cm it takes its reading from a 1 square cm area. To obtain the numbers used in this review the thermometer was held approximately 3cm away from the drive and only the hottest number obtained was used.
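The distance-to-spot arithmetic is straightforward if you want to sanity check it:

# Distance-to-spot arithmetic for a 9:1 infrared thermometer.
def spot_diameter_cm(distance_cm, ratio=9.0):
    return distance_cm / ratio

print(spot_diameter_cm(9))  # 1.0cm spot at 9cm away
print(spot_diameter_cm(3))  # ~0.33cm spot at our ~3cm reading distance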


Please note: to reduce variables, the same XP OS image was used for all drives.

For all testing a Gigabyte P35-DS4 motherboard was used, with the drives connected to its ICH9 controller.

All tests were run 4 times and the averaged results are presented.

Processor: Q6600 @ 2.4GHz
Motherboard: Gigabyte P35-DS4
Memory: 4GB G.Skill PC2-6400
Graphics card: Asus 8800GT TOP
Hard Drive: 1x WD 320
Power Supply: Seasonic S12 600W

SSD FIRMWARE (unless otherwise noted):
G. Skill Titan: 0955
G.Skill Falcon: 1571 (AKA FW 1.3)
OCZ Apex: 955
OCZ Vertex: 1.3 (AKA FW 1571)
Patriot Torqx: 1571 (AKA FW 1.3)
Corsair P64: 18C1Q
OCZ Summit: 1801Q
A-Data S592: 1279 (AKA PRE 1.1 FW)
OCZ Agility EX 60GB: 1.3 (AKA 1.4 for MLC Indilinx Drives)
Kingston SSDNow V 40GB: 02G9
G.Skill Falcon 2: 1881 (AKA 1.4)
Kingston SSDNow V+ 128GB: AGYA0201
Corsair Nova: 1.0 (AKA 1916/1.5 for most other MLC Indilinx Drives)
Corsair Force F100: 0.2 (AKA bug fixed / modified 3.0.1)
OCZ Vertex 2: 1.1 (custom “full speed” SandForce 310 firmware)
G.Skill Phoenix: 305 (standard “mass production” firmware)
Patriot Inferno: 305 (standard “mass production” firmware)
OWC Mercury Extreme Pro: 310 (standard 310 firmware)
Corsair Force F120: 30CA13F0 (aka standard 310 firmware)
 
Read Bandwidth


For this benchmark, HD Tach was used. It shows the potential read speed which you are likely to experience with these drives. The long test was run to give a slightly more accurate picture.

We don’t put much stock in burst speed readings, and this goes double for SSDs. The main reason we include it is to show what a given drive is capable of under perfect conditions; the more important number is the average speed. This number will tell you what to expect from a given drive in normal, day-to-day operations. The higher the average, the faster your entire system will seem.



What we see from the F120 is right in line with the performance of the other SandForce-based drives here, which was expected.


Write Performance


For this benchmark HD Tune Pro was used. To run the write benchmark on a drive you must first remove all partitions from it; then and only then will it allow you to run this test. Unlike some other benchmarking utilities, HD Tune Pro writes across the full area of the drive, so it easily shows any weaknesses a drive may have.


This right here is a perfect example of why pre-production firmware usually doesn’t see the light of day. Sure, the write speed of the F120 is lower than that of other drives running the same or slightly earlier – but still mass-production ready – firmware, but it just destroys the F100’s write abilities.
 
Crystal DiskMark


Crystal DiskMark is designed to quickly test the performance of your drives. Currently, the program allows you to measure sequential and random read/write speeds, and allows you to set the number of test iterations to run. We left the number of tests at 5. Once all 5 runs for a given section are complete, Crystal DiskMark averages the 5 numbers to give a result for that section.

Read


[Image: http://images.hardwarecanucks.com/image/akg/Storage/F120/cdm_r.jpg]

While the power curve is slightly lower than that of the F100, the F120 is still right in line with most other SandForce drives.


Write


[Image: http://images.hardwarecanucks.com/image/akg/Storage/F120/cdm_w.jpg]

Again we see more of the same as with the other tests. Basically, unless serious modifications are made, as in G.Skill’s case, there really won’t be that much of a difference between SandForce drives.


Random Access Time


To obtain the most accurate random access time, h2benchw was used for this benchmark. This benchmark tests how quickly different areas of the drive can be accessed. A low number means that the drive space can be accessed quickly, while a high number means that more time is taken trying to access different parts of the drive. To run this program, one must use a DOS prompt and tell it what sections of the test to run. While one could use “h2benchw 1 -english -s -tt "harddisk test" -w test”, for example, and just run the seek tests, we took the more complete approach, ran the full gamut of tests and then extracted the necessary information from the resulting text file. This is the command line we used: “h2benchw 1 -a -! -tt "harddisk drivetest" -w drivetest”. This tells the program to write all results in English, save them in the drivetest text file, do both write and read tests, and do it all on drive 1 (the second drive found, with 0 being the OS drive).

[Image: http://images.hardwarecanucks.com/image/akg/Storage/F120/random.jpg]

This really tells us nothing new and pretty much cements our opinion that the F120 really doesn’t lack any performance with its production firmware.
 
ATTO Disk Benchmark


The ATTO Disk Benchmark tests the drive’s read and write speeds using progressively larger file sizes. For these tests, ATTO was set to run from its smallest to its largest value (0.5KB to 8192KB) and the total length was set to 256MB. The test program then spits out an extrapolated performance figure in megabytes per second.

Read


[Image: http://images.hardwarecanucks.com/image/akg/Storage/F120/atto_r.jpg]

For all intents and purposes the read performance of this drive in all three tests is the same as that of the F100.


Write


[Image: http://images.hardwarecanucks.com/image/akg/Storage/F120/atto_w.jpg]

Once again the difference in performance between the F100 and F120 is so small as to be all but a tie. Twenty percent more room with no loss of power is still looking like a good trade-off to us.
 
IOMeter


IOMeter is heavily weighted towards the server end of things, and since we here at HWC are more end-user centric, we set up and judge the results of IOMeter a little differently than most. To test each drive we ran 5 test runs per drive (queue depths of 1, 4, 16, 64 and 128), each test having 8 parts, each part lasting 10 minutes with an additional 20-second ramp-up. The 8 subparts were set to run 100% random, 80% read / 20% write, testing 512B, 1KB, 2KB, 4KB, 8KB, 16KB, 32KB and 64KB chunks of data. When each test is finished, IOMeter spits out a report in which each of the 8 subtests is given a score in I/Os per second. We then take these 8 numbers, add them together and divide by 8. This gives us an average score for that particular queue depth that is heavily weighted towards single-user environments.
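In other words, the per-queue-depth score is just a plain average of the eight subtest results; a quick sketch (the IOPS values below are made-up placeholders, not measured data):

# Reduce one IOMeter run to a single score by averaging the 8 subtests.
subtest_iops = {"512B": 5200, "1KB": 5100, "2KB": 5000, "4KB": 4900,
                "8KB": 4400, "16KB": 3600, "32KB": 2500, "64KB": 1500}
score = sum(subtest_iops.values()) / len(subtest_iops)
print("Average IOPS for this queue depth: %.0f" % score)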

[Image: http://images.hardwarecanucks.com/image/akg/Storage/F120/IOM.jpg]

Here we see the difference between the pre-production and post-production firmware when comparing the F100 to the F120.

Honestly though, this drop in performance was completely expected and is not that big of a deal. The F120 really doesn’t belong in a server setting, and while we have tweaked this test to be more representative of a home environment (or at the very least a small office or workstation environment), the fact of the matter is IOMeter has a strong preference for non-hobbled drives. Honestly, we would never even consider an MLC NAND-based drive for a server environment, especially one with lowered error-correction abilities and fewer spare cells. However, in a home environment we can seriously see ourselves opting for the additional room and freedom the F120 brings to the table compared to the F100.


IOMeter Controller Stress Test


In our usual IOMeter test we try to replicate real-world use, where reads severely outnumber writes. However, to get a good handle on how powerful the controller is, we have also run an additional test. This test is made up of one section at a queue depth of 1, running 100% random, 100% write operations with 4KB chunks of data. In the past we found this test was a great way to check whether stuttering would occur. Since the introduction of ITGC and/or TRIM, the chances of real-world stuttering happening on a modern generation SSD are next to nil; rather, the main focus has shifted from predicting “stutter” to showing how powerful the controller is. By running continuous small, random writes we can stress the controller to its maximum, while also removing its cache buffer from the equation (by overloading it), showing exactly how powerful a given controller is. In the .csv file we then find the Maximum Write Response Time. This, in milliseconds, is the worst example of how long a given operation took to complete. We consider anything higher than 350ms to be a good indicator that the controller is either relying heavily on its cache buffer to hide limitations it possesses, or that its firmware is severely limiting it.
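If you want to replicate this at home, a small sketch of pulling that number out of the report follows; note that the column name and single-table layout are assumptions, so adjust for the report format your IOMeter build actually produces:

import csv

def max_write_response_ms(path, column="Maximum Write Response Time"):
    """Return the first value found in the assumed response-time column."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if column in row and row[column]:
                return float(row[column])

result = max_write_response_ms("stress_test.csv")  # hypothetical file name
if result is not None:
    print("Likely cache-masking or firmware limits" if result > 350 else "Controller keeping up")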

[Image: http://images.hardwarecanucks.com/image/akg/Storage/F120/stutter.jpg]

While still higher than we would like to see, these are pretty decent results in our books. They really go to show how far SSD technology has come in the last year or so.
 