Patriot Wildfire 120GB SSD Review

Along with a long list of competitors, Patriot has decided to jump into the high end SATA 6Gb/s SSD market by using the SandForce SF2281 controller. In keeping with their nomenclature, and their fondness for all things fire related, they dubbed their latest creation the Wildfire. It is meant to be the direct successor to the "wildly" popular Inferno series while acting as a more expensive, higher performance alternative to the Pyro series. Today we are going to put the 120GB version of the Wildfire under the microscope to see what makes this drive special, and make no mistake, it really is special.

When the Wildfire was first announced, many rumors began swirling and the enthusiast community was abuzz with speculation. As with nearly every rumor, most of them, including the possibility that Patriot was going to disable RAISE, were quickly proven incorrect. However, one rumor was spot on: unlike many manufacturers, Patriot has taken a slightly different approach to the internals of this drive. Most SSD manufacturers have been opting for 25nm ONFi 2 and 25nm ONFi 1 NAND to distinguish their "high performance" from their "mid-tier" SandForce SF2281-based drives. Patriot, on the other hand, has equipped the Wildfire with eight chips meeting the DDR Toggle Mode 1.0 NAND specification.


The last Toggle Mode NAND based drive we looked at, OCZ's Vertex 3 MaxIOPS, simply blew us away with bleeding edge performance numbers. To this day it is still the gold standard by which all other SSDs are judged, so this bodes well for the Wildfire.

Patriot has taken an interesting approach to pricing as well. Currently, the Wildfire 120GB can be found for as little as $280. That may sound like a bitter pill to swallow, but with the MaxIOPS 120GB going for somewhere north of $285 before rebates, this drive's price is actually quite aggressive.

Patriot_Wildfire_120GB_board_sm.jpg
Patriot_Wildfire_120GB_board2_sm.jpg

Opening up the Wildfire we can indeed see that the interior isn't laid out like your typical SF2281 based 120GB drive. One entire side of the PCB is completely bereft of NAND modules: all eight NAND chips reside on one side, while the SF2281 controller itself is housed on the other side of the board. This means the eight NAND chips are the same capacity and similar type as the ones found in the 240GB Wildfire and the aforementioned MaxIOPS.

The eight ICs are Toshiba-branded asynchronous Toggle Mode 1.0 MLC NAND. Toggle Mode 1.0 NAND has a slightly slower interface speed of 133MB/s compared to the 200MB/s of ONFi 2.x NAND, but this is still more than enough to saturate the 8 channel SandForce controller, whose reads top out in the mid 500MB/s range (or about half of what this NAND's combined interfaces can provide). As such, having "only" eight chips may not hurt the sequential read speed of this device, but small file performance may be another thing altogether.
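
To put those numbers in perspective, here is the back-of-the-envelope math behind that "about half" figure. The per-channel speed is the one quoted above; the ~550MB/s read ceiling is simply an assumed round number standing in for the controller's "mid 500MB/s" range:

```python
# Back-of-the-envelope math for the figures quoted above. The ~550MB/s read
# ceiling is the controller's (an assumed round number), not the NAND's.
channels = 8                    # the SF2281 is an 8 channel design
per_channel_mb_s = 133          # Toggle Mode 1.0 interface speed per channel
controller_read_ceiling = 550   # roughly where SF2281 sequential reads top out

aggregate = channels * per_channel_mb_s   # 1064 MB/s of raw NAND interface
print(f"Aggregate NAND interface: {aggregate} MB/s")
print(f"Controller ceiling: ~{controller_read_ceiling} MB/s "
      f"({controller_read_ceiling / aggregate:.0%} of the interface, i.e. about half)")
```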

access_sm.jpg


As with the previous Inferno model, Patriot includes a 2.5” to 3.5” adapter.
 
Introducing the SandForce SF2000 Family


sf_lg.jpg


As you are probably well aware by now, there are actually many different models which make up the next generation of SandForce controllers. Much like Intel’s socket 1155 i3/i5/i7 series of processors, all these different SandForce numbers represent slightly different tweaks and features, but all are basically built upon the same SF2000 foundation.

In total there are eight SF2000 iterations, but we won't see most of them in the retail channel. Take for example the SF2141: this is a cut-down 4 channel, 24-bit RS ECC, SATA 3Gb/s controller which probably won't see much fanfare outside of truly budget SSDs. The easiest way to think about this one is as the low end of the SF2000 line. Stepping up a level to 8 channels (and 55-bit BCH ECC) but still limited to SATA 3Gb/s is the SF2181, which you can consider the mid range of this generation. It will probably be featured in mid-tier next generation SSDs as it has better error correction abilities, yet it cannot directly compete with the true stars of the SF2000 consumer line: the SF2281 and SF2282.

The only real difference between the two consumer grade SF2000 SATA 6Gb/s controllers most likely to be seen (the SF2281 and SF2282) is that the SF2282 is meant for extra large 512GB and higher drives (though the SF2281 can also handle 512GB of NAND) and is a physically larger chip. These are the two flagship products and as such have received all the features and tweaks which are going to become synonymous with the SF2000 consumer class controllers.

The other four controllers are for enterprise environments and boast features such as eMLC compatibility, Military Erase, SAS and super capacitor capabilities.


Features


enhance_lg.jpg


The SF2000 controller series is built upon the same architecture as the original SF1000 series. You get DuraWrite, RAISE and all the other features but these have all undergone enhancements and tweaking.

rs.jpg


The original SF1000 series provided 24 bits of ECC per 512-byte sector, whereas the new controller provides 55 bits. The type of ECC has changed as well: the original used the simpler Reed-Solomon (aka "RS") code, which is probably best known from its use in CDs.

bch.jpg


The new controller, by contrast, uses Bose-Chaudhuri-Hocquenghem (aka "BCH") code for its ECC, which is a more elegant approach that targets individual bit errors. It is also faster and easier for the controller to correct these errors, making for a lower performance impact. AES encryption has also doubled from 128-bit to 256-bit.

sata.jpg

The most important of these new features for consumers is of course SATA 6Gb/s support. This faster interface instantly translates into much higher sequential performance. The second generation of flagship SandForce controllers has also received a boost on the small file performance end of things thanks in no small part to a 20% increase in IOPS. The first generation SF1200 was rated for up to 50,000 IOPS whereas the new controller family is rated for 60,000 IOPS.

The other interesting feature which all but the most basic SF2000 models boast is SLC NAND support. In the past, a manufacturer had to step up to the enterprise SF1500 to get SLC compatibility, but now they don't have to. Add in lowered power consumption and you can see that while the SF2000 series builds upon the same basic foundation as the previous generation, the two are not all that similar when you take a closer look.
 
A Look at DuraWrite, RAISE and More


Corsair_Force_sandforce_logi.jpg

Let’s start with the elephant in the room and explain why this 128GB drive is in reality a 120GB drive. The Wildfire has eight 16GB NAND chips onboard, which gives it a raw capacity of 128GB, yet it is seen by the OS as 120GB. Manufacturers use this over-provisioning to help increase IOPS performance, extend life via wear leveling (as there are always free cells even when the drive is reported as “full”) and even improve durability, since the drive has cells in reserve that it can reassign sectors to as “older” cells die. While 8GB worth of cells set aside is not that much for a SandForce drive compared to some previous models, it is still a lot of space.
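
Working purely from the figures above (and ignoring the GB versus GiB wrinkle), the over-provisioning math looks like this:

```python
# Over-provisioning, using only the capacities quoted above.
raw_gb   = 8 * 16            # eight 16GB NAND packages = 128GB of flash
user_gb  = 120               # what the OS actually sees
spare_gb = raw_gb - user_gb  # 8GB held back by the controller

print(f"{spare_gb}GB spare = {spare_gb / raw_gb:.1%} of the raw flash, "
      f"or {spare_gb / user_gb:.1%} of the user-visible capacity")
```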

dura.jpg


As we said, over-provisioning is usually for wear leveling and ITGC (idle time garbage collection), as it gives the controller extra cells to work with for keeping all the cells at about the same level of wear. However, this is actually not the main reason SandForce sets aside so much. Wear leveling is at best a secondary reason or even just a “bonus”, as this over-provisioning is mainly for the DuraWrite and RAISE technology.

Unlike other solid state drives, which do not compress the data written to them, the SandForce controller performs real time lossless compression. The upside is not only smaller lookup tables (and thus no need for an off-chip cache) but also fewer writes to the cells. Lowering how much data is written means fewer cells have to be used to perform a given task, which should result in longer life and fewer controller cycles being taken up with internal housekeeping (via TRIM or ITGC).
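
SandForce does not disclose how DuraWrite's compressor works, but the basic idea of "compress first, write less" can be illustrated with any off-the-shelf lossless compressor. A minimal sketch, using zlib purely as a stand-in for whatever the controller actually does in hardware:

```python
import os
import zlib

def flash_bytes_written(payload: bytes) -> int:
    """Bytes that would actually hit the flash if the controller stored a
    losslessly compressed copy of the payload. zlib is purely a stand-in;
    SandForce does not disclose what DuraWrite really uses."""
    return len(zlib.compress(payload))

compressible   = b"The quick brown fox jumps over the lazy dog. " * 20_000
incompressible = os.urandom(len(compressible))   # already-random data barely shrinks

for name, data in (("compressible", compressible), ("incompressible", incompressible)):
    written = flash_bytes_written(data)
    print(f"{name:>14}: host wrote {len(data):,} B, flash would see ~{written:,} B "
          f"({written / len(data):.0%})")
```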

Corsair_Force_Fact5.jpg


Longevity may be a nice side effect, but the real purpose of this compression is that the controller has to use fewer cells to store a given amount of data and thus has to read from fewer cells than any other drive out there (SandForce claims only 0.5x is written on average). The benefit is that even at the NAND level, the storage itself is the bottleneck: no matter how fast the NAND is, the controller is faster. Cycles are wasted waiting for data retrieval, and the fewer cycles you waste, the faster an SSD will be.
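
That 0.5x figure is simply a claim about write amplification. Taking SandForce's number at face value:

```python
# SandForce's "0.5x written on average" claim expressed as write amplification.
# Their figure; actual results depend entirely on how compressible your data is.
host_writes_gb = 100          # hypothetical amount the OS asks to write
write_amplification = 0.5     # SandForce's claimed average
print(f"{host_writes_gb}GB of host writes -> "
      f"~{host_writes_gb * write_amplification:.0f}GB actually written to NAND")
```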

Compressing data and thus hopefully getting a nice little speed boost is all well and good, but as anyone who has ever lost data to corruption in a compressed file knows, reliability is much more important. Compression means that any loss to a bad or dying cell (or cells) will be magnified on these drives, so SandForce needed to ensure that data is kept as secure as possible. While all drives use ECC, SandForce implemented another layer of security to further ensure data protection.

Corsair_Force_Fact4.jpg


Data protection is where RAISE (Redundant Array of Independent Silicon Elements) comes into the equation. All modern SSDs use various error correction concepts such as ECC. This is because, as with any mass produced item, there are going to be bad cells, while even good cells will die off as time goes by. Yet data cannot be lost, or the end user’s experience will go from positive to negative. SandForce likes to compare RAISE to RAID 5, but unlike RAID 5, RAISE does not use a parity stripe. SandForce does not explicitly say how it does what it does, but what they do say is that on top of ECC, redundant data is striped across the array. However, since it is NOT parity data, there is no added overhead incurred by calculating a parity stripe.

Corsair_Force_Fact2.jpg


According to SandForce’s documentation, not only can individual bits or even pages of data be recovered, but entire BLOCKS of data can be as well. So if a cell dies or passes on bad data, the controller can compensate, pass on GOOD data, mark the cell as defective and, if necessary, swap out the entire block for a spare from the over-provisioning area. As we said, SandForce does not get into the nitty-gritty details of how DuraWrite or RAISE work, but the fact that it CAN do all this means it is most likely writing a hash table along with the data.
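
Since SandForce keeps the mechanics to itself, the following is nothing more than a conceptual toy: detect a bad block via a stored hash, then rebuild it from redundant data kept elsewhere in the array. None of the names or structures below come from SandForce documentation:

```python
import hashlib

class ToyRaise:
    """Conceptual illustration only. Detect a corrupted block via a stored hash,
    then serve and repair it from redundant data kept elsewhere in the 'array'.
    This is NOT how RAISE is implemented; SandForce does not say how it works."""

    def __init__(self) -> None:
        self.blocks = {}     # block_id -> (data, sha256 digest stored alongside it)
        self.redundant = {}  # block_id -> redundant copy striped "elsewhere"

    def write(self, block_id: int, data: bytes) -> None:
        self.blocks[block_id] = (data, hashlib.sha256(data).digest())
        self.redundant[block_id] = data          # simplistic mirror; RAISE is cleverer

    def read(self, block_id: int) -> bytes:
        data, digest = self.blocks[block_id]
        if hashlib.sha256(data).digest() != digest:   # the block went bad
            data = self.redundant[block_id]           # recover the good data
            self.write(block_id, data)                # remap it into a fresh block
        return data

drive = ToyRaise()
drive.write(7, b"important user data")
drive.blocks[7] = (b"important user dGta", drive.blocks[7][1])   # simulate a flipped bit
assert drive.read(7) == b"important user data"                   # good data still comes back
```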

SandForce is so sure of their controller’s abilities that they state the chance of data corruption is not only lower than that of other manufacturers’ drives, but actually approaches ZERO. This is a very bold statement, but only time will tell if their estimates are correct. In the meantime, we are willing to give them the benefit of the doubt and say that, at the very least, data corruption is as unlikely with one of these products as it is on any modern MLC drive.
 
Testing Methodology


Testing a drive is not as simple as putting together a bunch of files, dragging them onto a folder on the drive in Windows and using a stopwatch to time how long the transfer takes. Rather, there are factors such as read/write speed and burst speed to take into account. There is also the SATA controller on your motherboard and how well it works with SSDs and HDDs to think about. For drives to truly shine you really need a dedicated hardware RAID controller with its own RAM; unfortunately, most people do not have the time, inclination or funds for this. For this reason our testbed is a standard motherboard with no mods or high end gear added to it. This helps replicate what your experience as an end user will be like.

Even when the hardware issues are taken care of, the software itself will have a negative or positive impact on the results. As with the hardware, obtaining the absolute best results requires tweaking your OS setup; however, just as with the hardware, most people are not going to do this, so our standard OS setup is used. Except for the Vista load time test, we have done our best to eliminate this issue by testing each drive as a secondary drive, with the main drive being a Phoenix Pro 120GB solid state drive.

For synthetic tests we used a combination of ATTO Disk Benchmark, HDTach, HD Tune, Crystal DiskMark, IOMeter, AS-SSD and PCMark Vantage.

For real world benchmarks we timed how long a single 10GB rar file took to copy to and then from the devices. We also used 10GB of small files (ranging from 100KB to 200MB), totaling 12,000 files in 400 subfolders.
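
For readers who want to run something similar at home, the copy can be scripted rather than stopwatched. The sketch below is only an illustration of the idea, not the exact procedure used for the numbers in this review, and the paths are placeholders:

```python
import shutil
import time
from pathlib import Path

def timed_copy(src: Path, dst: Path) -> float:
    """Copy a file or a directory tree and return the elapsed time in seconds."""
    start = time.perf_counter()
    if src.is_dir():
        shutil.copytree(src, dst)
    else:
        shutil.copy2(src, dst)
    return time.perf_counter() - start

# Placeholder paths: a single large archive and a folder full of small files.
large_file  = Path("D:/testdata/single_10GB.rar")
small_files = Path("D:/testdata/small_files_10GB")
target      = Path("E:/")                      # the drive being tested

for src in (large_file, small_files):
    print(f"{src.name}: {timed_copy(src, target / src.name):.1f}s")
```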


For all testing, an Asus P8P67 Deluxe motherboard was used, running Windows 7 Ultimate 64-bit (or Vista for the boot time test). All drives were tested in AHCI mode using Intel RST 10 drivers.

All tests were run 4 times and the averaged results are presented.

Between test suite runs (with the exception of IOMeter, where this was done after every run) the drives were cleaned with HDDerase, SaniErase or the OCZ SSDToolbox and then quick formatted to make sure they were in optimum condition for the next test suite.


Processor: Core i5 2400
Motherboard: Asus P8P67 Deluxe
Memory: 8GB Mushkin DDR3 1300
Graphics card: Asus 5550 passive
Hard Drive: 1x Seagate 3TB XT, OCZ 120GB RevoDrive
Power Supply: XFX 850


SSD FIRMWARE (unless otherwise noted):

OCZ Vertex: 1.6
OCZ Vertex 2 100GB: 1.33
OCZ Vertex 3 240GB: 2.09
Mushkin Callisto Deluxe 40GB: 3.4.0
Corsair Force F90: 2.0
OCZ Vertex 3 MI 240GB: 2.09
Crucial C300 128GB: 006
Corsair Force 3 GT 120GB: 1.2
Corsair Force 3 120GB: 1.2
Patriot Pyro 120GB: 3.1.9
Corsair Performance 3 256GB: 1.1
Patriot Wildfire 120GB: 3.1.9
 
Read Bandwidth


For this benchmark, HDTach was used. It shows the potential read speed you are likely to experience with these drives. The long test was run to give a slightly more accurate picture. We don’t put much stock in burst speed readings and thus no longer include them. The most important number is the Average Speed number; it tells you what to expect from a given drive in normal, day to day operations. The higher the average, the faster your entire system will seem.

read.jpg


As in our 240GB ONFi 2 vs. Toggle Mode NAND comparison, the 120GB Patriot Wildfire is slightly slower than direct competitors like the Force 3 GT 120GB which rely on ONFi 2 NAND. This small reduction in sequential read speed was fully expected; we were actually expecting it to be a touch worse. After all, the Wildfire has only 8 NAND chips versus the 16 found in the Force 3 GT, but so far it seems that having only eight chips is not much of a handicap.


Write Performance


For this benchmark, HD Tune Pro was used. To run the write benchmark on a drive you must first remove all partitions from it; only then will the program allow you to run this test. Unlike some other benchmarking utilities, HD Tune Pro writes across the full area of the drive, so it easily shows any weaknesses a drive may have.

write.jpg


These results certainly come as a very pleasant surprise! Even though there are only eight Toggle Mode 1.0 NAND chips powering the Patriot Wildfire, it is simply in a different league from any other 120GB SSD we have looked at. In fact, its minimum write performance is better than that of an ONFi 2 wielding OCZ Vertex 3.
 
ATTO Disk Benchmark


The ATTO Disk Benchmark tests the drive’s read and write speeds using progressively larger file sizes. For these tests, ATTO was set to run from its smallest to largest transfer size (0.5KB to 8192KB) with the total length set to 256MB. The program then spits out an extrapolated performance figure in megabytes per second.


atto_w.jpg


atto_r.jpg


As with the sequential read and write performance numbers, the ATTO power curves of this drive are very decent on reads and quite impressive on writes. Overall, the Wildfire really does act more like a 240GB drive than a 120GB one, and the minor difference on reads is more than offset by the increased write performance.
 
Crystal DiskMark


Crystal DiskMark is designed to quickly test the performance of your drives. The program measures sequential and random read/write speeds and allows you to set the number of test iterations to run. We left the number of tests at 5 and the size at 100MB.

cdm_r.jpg


cdm_w.jpg


With the exception of the single queue depth 4K read results, we are once again looking at numbers you would expect from a 240GB SF2281 drive rather than a 120GB model. There really doesn’t seem to be much of a handicap from having only 8 of the 16 NAND positions populated.


PCMark 7


While there are numerous suites of tests that make up PCMark 7, only one is pertinent here: the HDD Suite. The HDD Suite consists of numerous tests that try to replicate real world drive usage. Everything from how long a simulated virus scan takes to complete, to MS Vista start up time, to game load time is tested in these core tests; however, we consider this nothing more than another suite of synthetic tests. For this reason, while each test is scored individually, we have opted to include only the overall score.

vantage.jpg


Interestingly enough, while every other test so far has painted the 120GB Wildfire as a monster performer, the same can't be said of PCMark 7. These numbers are good, downright impressive even, but we were hoping to see it top the 5300 mark.
 
AS-SSD


AS-SSD is designed to quickly test the performance of your drives. The program measures sequential and small 4K read/write speeds as well as 4K speed at a queue depth of 6. While its primary goal is to accurately test solid state drives, it works equally well on all storage media; it just takes longer to run, since each test reads or writes 1GB of data.

asd_r.jpg


asd_w.jpg


As with the Crystal DiskMark results, the AS-SSD results (with the exception of the single queue depth 4K read results) are very, very good.


Access Time


To obtain an accurate reading on the read and write latency of a given drive, AS-SSD was used for this benchmark. A low number means that the drive’s data can be accessed quickly, while a high number means that more time is taken trying to access different parts of the drive.

random.jpg


As expected there is a slight increase in latency due to the controller layout, but this increase is minor enough to be negligible.
 
Anvil Storage Utilities Pro


Much like AS-SSD, Anvil Pro was created to quickly, easily and accurately test your drives. While it is still in beta, it is a very versatile and powerful little program. It can test numerous read/write scenarios, but two stand out as most relevant to our needs: 4K at a queue depth of 4 and 4K at a queue depth of 16. For most users a queue depth of four is about as deep as you will ever experience, while a depth of 16 will only be encountered by power users and the like. We have also included the 4K queue depth 1 results to help put the other two numbers in perspective. All settings were left at their defaults and the test size was set to 1GB.
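
If "queue depth" is an unfamiliar term, it is simply how many I/O requests are kept in flight at once. The toy sketch below illustrates the concept on a Unix-like system; it is not a calibrated benchmark (the OS page cache alone makes the absolute numbers meaningless), and the file path is a placeholder:

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

PATH, BLOCK, IOS = "testfile.bin", 4096, 2000   # pre-existing large file on the drive under test

def random_read(fd: int, filesize: int) -> None:
    # Read one 4K block from a random, block-aligned offset.
    offset = random.randrange(0, filesize - BLOCK) // BLOCK * BLOCK
    os.pread(fd, BLOCK, offset)

def rough_iops(queue_depth: int) -> float:
    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    start = time.perf_counter()
    # Queue depth here is simply how many reads we keep in flight at once.
    with ThreadPoolExecutor(max_workers=queue_depth) as pool:
        for _ in range(IOS):
            pool.submit(random_read, fd, size)
    elapsed = time.perf_counter() - start   # the 'with' block waits for every read to finish
    os.close(fd)
    return IOS / elapsed

for qd in (1, 4, 16):
    print(f"QD{qd}: ~{rough_iops(qd):,.0f} IOPS")
```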

anvil_w.jpg


anvil_r.jpg


While there appears to be a small bottleneck occurring here, it only seems to happen at a single queue depth. Once the queue depth gets a bit deeper, this bottlenecking disappears.
 
IOMeter


IOMeter is heavily weighted towards the server end of things, and since we here at HWC are more end-user centric, we set up and judge IOMeter results a little differently than most. To test each drive we ran 5 test runs per drive (queue depths of 1, 4, 16, 64 and 128), each run having 8 parts and each part lasting 10 minutes with an additional 20 second ramp up. The 8 subparts were set to run 100% random, 80% read / 20% write, testing 512B, 1K, 2K, 4K, 8K, 16K, 32K and 64K chunks of data. When each test is finished IOMeter spits out a report; in that report each of the 8 subtests is given a score in I/Os per second. We then take these 8 numbers, add them together and divide by 8. This gives us an average score for that particular queue depth that is heavily weighted for single user environments.
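
In other words, the score we chart for each queue depth is just the arithmetic mean of the eight transfer-size subtests. A trivial sketch of that scoring, with invented IOPS values:

```python
# The charted score for one queue depth: the plain average of the eight
# transfer-size subtests. The IOPS values below are invented for illustration.
subtest_iops = {
    "512B": 7200, "1K": 7100, "2K": 6900, "4K": 6600,
    "8K":  5900, "16K": 4800, "32K": 3500, "64K": 2300,
}
score = sum(subtest_iops.values()) / len(subtest_iops)
print(f"Average IOPS for this queue depth: {score:,.1f}")
```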

iom.jpg


Once again we are seeing a very nice bump in performance compared to any SF2281 120GB drive we have tested to date. The Patriot Wildfire may not be able to compete with the 240GB drives in IOMeter but it still posts some pretty impressive numbers.
 