
Intel DC S3700 200GB & 800GB, Enterprise SSD Review

AkG
Although it may seem odd to some, there was actually a time when Intel didn't consider SSDs ‘ready for primetime’ and was rather slow to embrace the possibilities this new technology had to offer. Luckily for the industry, Intel is not a company to do things by half measures. Once the potential of this new technology was realized, they quickly became a major player.

This review will be a bit of a departure from our usual mass-market focused articles. Instead of looking at an SSD which will appeal to gamers and the like, it will focus on Intel’s DC S3700, an enterprise class drive which is supposed to shake this industry to its core.

The DC S3700 is unique for a number of reasons but its primary claim to fame is the new X25 controller which lies at its heart. Intel’s first and second generation X25 controllers brought groundbreaking levels of performance and reliability when they were first released five years ago. Unfortunately, five years is a heck of a long time in the SSD market and Intel soon turned to the likes of Marvell, LSI and even Hitachi to power their later models. The Intel 3-series, 5-series and – most importantly – 9-series SSDs certainly proved the merit of this approach, but they were only stopgap measures.

Like all stopgap measures, the outsourced controllers were meant to be short term solutions. Today we will be looking at the first iteration of Intel’s long term solution: the third generation X25 controller. It is supposed to dramatically change any preconceptions large business administrators have about enterprise storage solutions while also helping to drive down the overall cost of high end SSDs.

chart.jpg

What makes this controller unique is not the significant increase in peak performance it offers. Rather, what makes the X25 a standout is the lowered latency and sustained performance it can offer the new DC S3700. Recently, OCZ’s Barefoot 3 offered home consumers a glimpse of how the next generation of controllers would not be just about short term ‘burst’ performance but rather about long term capabilities across the entire spectrum. In this area, Intel touts the new X25 Gen.3 as the controller that makes this philosophy a reality.

In the Enterprise arena, there can potentially be hundreds of users simultaneously requesting data at any given time on a 24/7/365 basis. Because of these unique demands, there is simply no better environment to test and promote the high sustained performance of a controller. Hence, Intel has released their new controller inside an Enterprise grade product before it gets cascaded down to the enthusiast level.

Even when taking I/O performance ratings at face value, the new DC S3700 is certainly an impressive step forward. Compared to the previous generation 710 series of drives, it offers a massive increase in performance and write endurance, and boosts error correction capabilities to unheard of levels. Equally impressive is the massive reduction in price per gigabyte. The DC S3700 actually has a lower price per gigabyte than the mass market Intel 520 had when it was first released.

The DC S3700’s price also scales in a linear fashion, with each capacity step receiving the exact same $2.35 per GB ratio. In other words, the 100GB model will have a 1,000 unit purchase price of $235, the 200GB will be $470, the 400GB will be $940 and the largest capacity 800GB will be $1,880.

name.jpg

While it may seem counter-intuitive, the new naming scheme for Intel’s DC S3700 does make perfect sense and is actually in keeping with their latest nomenclature. Much like an Intel Core i7 3770K, the entire model name can be decoded to tell experienced users exactly what the new device is and where it resides in Intel’s current lineup. This is a much more rational naming scheme than previous generations which were rather simplistic in nature.

In the case of the DC S3700, the ‘DC’ moniker tells us it is part of Intel’s new “Data Center” family of solid state drives. The “DC family” is Intel’s shorthand for Data Center, Server, Storage and Embedded SSD Solutions and is quite a bit more inclusive than simply saying ‘Enterprise’ as was the case before the DC S3700. The ‘S’ stands for the SATA interface, the ‘3’ stands for third generation and the ‘700’ tells us where in Intel’s lineup – second from the top, with 900 being the flagship – this particular device resides. We fully expect future Enterprise grade Intel solid state drives to come with comparably easy to decode – yet seemingly complex – naming schemes.
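
To make the decoding concrete, here is a quick illustrative sketch (our own, not an official Intel tool) that breaks a model name down along the lines described above:

```python
def decode_model(name: str) -> dict:
    """Decode an Intel DC-series SSD model name, e.g. 'DC S3700'."""
    family, code = name.split()                 # "DC", "S3700"
    return {
        "family": {"DC": "Data Center"}.get(family, family),
        "interface": {"S": "SATA"}.get(code[0], code[0]),
        "generation": int(code[1]),             # "3" -> third generation
        "tier": code[2:],                       # "700" -> second from the top
    }

print(decode_model("DC S3700"))
# {'family': 'Data Center', 'interface': 'SATA', 'generation': 3, 'tier': '700'}
```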
 
Closer Look at the DC S3700


top1.jpg

Since the DC S3700 is a 700-series device designed around standard form factor specifications, it uses a 7mm high, 2.5” layout. Intel has always taken a durable – if rather utilitarian – approach to solid state drive cases and the DC S3700 is no exception. The only difference between the 200GB and 800GB models is the sticker itself, as both versions share the exact same housing.

board_200GB_2.jpg
board_800GB.jpg

Opening up the case on the 200GB and 800GB drives, we can see both share the exact same internal architecture. The only difference between the various sizes is the density and number of RAM ICs, and the capacity of the NAND itself. The 200GB only has 256MB of DDR3-1600 RAM whereas the 800GB version has 1GB of onboard memory.

The capacitors housed on the PCB’s edge are for Flush in Flight, providing more than enough reserve power to allow the DC S3700 to flush its buffers and complete any outstanding writes in the event of unexpected power loss. This is a critical feature for datacenter clients as it protects sensitive data at the most critical times.

Unlike most 7mm form factor drives we have seen, Intel has also included plastic ‘stiffeners’ to ensure that there is no flexing or movement of the PCB.

het.jpg

Standard 25nm MLC NAND has an erase cycle life of between 3,000 and 5,000 cycles, which is very low for use in constant high demand environments, so Intel couldn’t use it in their enterprise-focused product. By that same token, the traditional choice of SLC – which has a much longer life – would have significantly increased the upfront cost of these storage devices. Rather than opting for one of these two extremes, Intel has sidestepped the issue by using what they call HET MLC NAND, or what the rest of the industry simply calls e-MLC or “enterprise MLC” NAND. In essence, the DC S3700 uses the exact same NAND we found on Intel’s 910 PCI-E SSD.

High Endurance Technology MLC NAND is approximately thirty times more durable than standard MLC NAND but not nearly as costly to manufacture as SLC NAND. This in turn allows Intel to offer such a massive drive for a reasonable price, while still being able to guarantee it for 5 years at 10 full drive writes every day for those five years. To put that in more practical terms, Intel guarantees the 800GB version’s NAND for over 14.6 Petabytes of writes and the 200GB model’s for over 3.6 Petabytes. Intel firmly states this is the pessimistic estimate and in all likelihood the DC S3700 series could double this amount under optimal conditions.
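
Those warranty figures check out with some quick back-of-the-envelope math (a sketch of our own, assuming decimal units):

```python
# 10 full drive writes per day, every day, for the 5 year warranty period.
for capacity_gb in (200, 800):
    petabytes = capacity_gb * 10 * 365 * 5 / 1_000_000
    print(f"{capacity_gb}GB: {petabytes:.2f} PB written")
# 200GB: 3.65 PB written
# 800GB: 14.60 PB written
```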

The only negative to using such durable MLC NAND is that it does sacrifice some small file performance to gain longevity. However, the DC S3700 isn’t meant to be used as a standalone device. Rather, it is meant to be used in RAID array environments. As we saw with the 910, distributing the load across multiple drives does more than make up for the small loss of performance.
 
Introducing the Intel X25 (Gen3) Controller


Even though it has been five years since their last in-house controller was released, Intel has been hard at work improving upon previous designs. The end result is the all-new X25 which is used inside the DC S3700 line of solid state drives.

Very little is actually known about the physical layout and design of the controller itself. All Intel is willing to share about the architecture is that it is a 100% in-house proprietary design, using custom in-house firmware. However, while Intel may not be willing to give specifics on the core’s primary components, they were more than willing to share feature-specific details.

controller.jpg

The most obvious departure from previous designs is the new X25’s use of 8 channels rather than the previous generation’s 10-channel layout. 8 channel designs have become the de facto standard for the industry and in all likelihood Intel went with slightly fewer pathways to help reduce latency. Much like OCZ’s Barefoot 3 controller, latency, long term sustained performance and durability are the three central domains which Intel wanted to improve upon.

On the latency front, Intel states that 99.9% of the time average latency will be less than 0.5ms. This is a 33% reduction in read latency compared to the 710’s specification of 0.75ms. Even when compared against the impressive 910 PCI-E SSD’s specification of 0.65ms, it is still 23% faster.

The IOPS performance of this controller has also been greatly increased. The Intel 910 800GB requires a quartet of controllers to hit its peak 180K read / 75K write IOPS specification, whereas a single DC S3700 800GB delivers 75K read / 35K write IOPS. Just as importantly, this maximum 75,000 / 35,000 IOPS rating is much closer to the DC S3700’s sustained real world performance than that of any other device available. In fact, while most other enterprise-class drives claim potential IOPS performance variation of 20% from one drive to the next, Intel has reduced that number to a mere 15% or less. This allows server and storage administrators to more accurately judge how many devices they will require to meet expected demand.
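
For example, a hypothetical sizing helper (our own illustration, not an Intel-published formula) might derate each drive’s rated IOPS by its worst-case drive-to-drive variation before dividing it into the target load:

```python
import math

def drives_needed(target_iops: int, rated_iops: int, variation: float) -> int:
    """Drives required if every unit performs at the bottom of its spec."""
    worst_case = rated_iops * (1 - variation)
    return math.ceil(target_iops / worst_case)

# 200K sustained random read IOPS from DC S3700 800GB drives (75K rated, 15%):
print(drives_needed(200_000, 75_000, 0.15))   # -> 4
```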

het2.jpg

Increased performance with decreased latency is all well and fine, but what really makes the DC S3700 distinctive is the fail safes it has in place to ensure data safety. All controllers implement a certain level of checks and balances to ensure data integrity, but the new Intel X25 and the DC S3700 take this approach to entirely new levels. The Uncorrectable Bit Error Rate (UBER) is actually an unheard of 1 bit in 10^17, which is 10 times better than the 1 in 10^16 UBER of the previous generation 710 or even the Intel 910. To put this into understandable terminology, Intel expects there to be a single unrecoverable bit error in every 12.5 Petabytes of data read.
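
That 12.5 Petabyte figure falls straight out of the spec (decimal petabytes assumed):

```python
bits_per_error = 10**17                       # one expected bit error per 10^17 bits read
petabytes_read = bits_per_error / 8 / 1e15    # bits -> bytes -> petabytes
print(f"{petabytes_read:.1f} PB")             # -> 12.5 PB
```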

The X25 is able to offer such high levels of dependability due to the comprehensive data protection routines it uses under the auspices of Intel’s Stress Free Protection. At its most basic, this philosophy of end to end protection starts with over-provisioning. Much like the original SandForce series, Intel has chosen ~20% over-provisioning for their new controller. On the 200GB model this amounts to 40GB, while the 800GB model sets aside a whopping 160GB to ensure a failsafe partition is maintained.

This spare area can be used for everything from bad block replacement to garbage collection. It also allows for more consistent long term performance as the controller will always have access to free blocks to use for wear leveling even when the drive is nearing full capacity.

aes.jpg

Like the previous generation X25 controller, this new iteration makes use of auto-encryption with AES routines which have now been upgraded from 128-bit to 256-bit. By default, Intel has disabled AES encryption but it can be initiated via software on a case-by-case basis. This added flexibility allows the controller to be more adaptable to the needs of individual clients. Using the built-in AES encryption routines will impart a certain amount of performance loss due to increased overhead, but there are scenarios where this is a trade-off well worth making.

The X25 G3 also uses BCH error correction algorithms. However, unlike any other controller, the X25 G3 writes parity data to the NAND and performs ECC on the onboard memory rather than solely focusing upon the primary storage environment.

All these features are fairly typical for modern controllers, albeit at the higher end of the scale. However, Intel has not stopped at basic levels of data safety. They’ve stepped up the game by double and triple checking nearly every segment of data, so much so that a ‘simple’ write request to the NAND looks like this: a CRC is created along with the data, the data + CRC is then written to the NAND, an LBA tag is then created, and a parity bit is also written and checked. Finally, during read I/O requests the additional data protections are all read back – including the ECC on the memory – to ensure the data is safe and reliable.
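
To visualize that write/read path, here is a deliberately simplified toy model (our own sketch, not Intel firmware; zlib’s CRC32 and a single parity bit stand in for whatever the real controller uses):

```python
import zlib

def protected_write(nand: dict, lba: int, data: bytes) -> None:
    crc = zlib.crc32(data)                     # CRC created along with the data
    parity = bin(crc).count("1") % 2           # toy parity bit over the CRC
    nand[lba] = {"data": data, "crc": crc, "lba": lba, "parity": parity}

def protected_read(nand: dict, lba: int) -> bytes:
    rec = nand[lba]
    # Every protection layer is re-checked on the way back out.
    assert rec["lba"] == lba, "LBA tag mismatch: wrong block returned"
    assert zlib.crc32(rec["data"]) == rec["crc"], "CRC mismatch: data corrupted"
    assert bin(rec["crc"]).count("1") % 2 == rec["parity"], "parity mismatch"
    return rec["data"]

nand: dict = {}
protected_write(nand, 42, b"hello, datacenter")
print(protected_read(nand, 42))
```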

If at any time a check fails, the controller will notify the system host controller to recover the data via its own ECC. Should this happen, the administrator is instructed to remove the drive from the RAID array, replace it with a new one and RMA the failing drive. Meanwhile, all of the data would be saved on other segments of the array.

On top of all this, at standard intervals – during low I/O periods, boot up, etc. – the controller will also routinely check the status of all onboard capacitors. If they fail the self-test, the controller will automatically disable the write buffer and notify the system’s host controller of the issue.

It is worth noting that the controller does not treat its internal NAND as an array. Rather, it treats the ICs as “one” logical unit, so there is no equivalent of the LSI SF2281’s RAISE (Redundant Array of Independent Silicon Elements) being implemented; that lack of internal redundancy is exactly what merits such extreme error recovery procedures.

Taken as a whole, this high performance controller is highly adaptable and much more capable than any of its predecessors or any other controller on the market today. It certainly should make a great addition to Intel’s current stable of high performance RAID-oriented drives.
 
Testing Methodology


Testing a drive is not as simple as putting together a bunch of files, dragging them onto a folder on the drive in Windows and using a stopwatch to time how long the transfer takes. Rather, there are factors such as read / write speed and data burst speed to take into account. There is also the SATA controller on your motherboard and how well it works with SSDs and HDDs to think about. For best results you really need a dedicated hardware RAID controller with dedicated RAM for drives to shine. Unfortunately, most people do not have the time, inclination or funds to do this. For this reason our testbed is a more standard motherboard with no mods or high end gear added to it. This helps replicate what the end user’s experience will be like.

Even when the hardware issues are taken care of, the software itself will have a negative or positive impact on the results. As with the hardware end of things, to obtain the absolute best results you do need to tweak your OS setup; however, just as on the hardware side, most people are not going to do this. For this reason our standard OS setup is used. However, except for the Vista load time test, we have done our best to eliminate this issue by testing each drive as a secondary drive, with the main drive being a Phoenix Pro 120GB solid state drive.

For synthetic tests we used a combination of ATTO Disk Benchmark, HDTach, HD Tune, Crystal Disk Benchmark, IOMeter, AS-SSD and PCMark Vantage.

For real world benchmarks we timed how long a single 10GB rar file took to copy to and then from the devices. We also used 10GB of small files (from 100KB to 200MB) for a total of 12,000 files in 400 subfolders.
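
For anyone who wants to script rather than stopwatch this kind of copy test, a minimal sketch might look like the following (paths are placeholders, not our actual test files):

```python
import shutil, time
from pathlib import Path

def timed_copy(src: Path, dst: Path) -> float:
    """Copy a file or folder tree and return the elapsed wall-clock seconds."""
    start = time.perf_counter()
    if src.is_dir():
        shutil.copytree(src, dst)   # small-file test: 12,000 files in 400 folders
    else:
        shutil.copy2(src, dst)      # large-file test: single 10GB rar archive
    return time.perf_counter() - start

elapsed = timed_copy(Path("testdata/10GB.rar"), Path("E:/10GB.rar"))
print(f"copy took {elapsed:.1f} s")
```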

For all testing an Asus P8P67 Deluxe motherboard was used, running Windows 7 Ultimate 64-bit (or Vista for the boot time test). All drives were tested in AHCI mode using Intel RST 10 drivers.

All tests were run 4 times and the averaged results are presented.

In between each test suite run (with the exception of IOMeter, which was done after every run) the drives were cleaned with either HDDerase, SaniErase, OCZ SSDToolbox or Intel Toolbox and then quick formatted to make sure they were in optimum condition for the next test suite.


Steady-State Testing

While optimum condition performance is important, knowing exactly how a given device will perform after days, weeks and even months of usage is actually more important for most consumers. For home user and workstation consumers our non-TRIM performance test is more than good enough. Sadly, it is not up to par for Enterprise solid state storage devices and these most demanding of consumers.

Enterprise administrators are more concerned with the realistic long term performance of any device than its brand new performance, as downtime is simply not an option. Even though an Enterprise device will have many techniques for obfuscating and alleviating a degraded state (e.g. idle time garbage collection, multiple controllers, etc.) there does come a point where these techniques fail to counteract the negative results of long term usage in an obviously non-TRIM environment. The point at which performance falls and then plateaus at a lower level is known as the “steady state” performance, or as the “degraded state” in the consumer arena.

To help all consumers gain a better understanding of how much performance degradation there is between “optimal” and “steady state”, we have included not only optimal results but have also rerun the tests after first degrading each drive until it plateaus and reaches its steady state performance level. These tests are labelled as “Steady State” results and can be considered as such.

While the standard for steady state testing is actually 8 hours, we feel this is not quite pessimistic enough and have extended the pre-test run to a full ten hours before testing actually commences. The pre-test or “torture test” consists of our standard non-TRIM performance test: to quickly induce a steady state, we ran ten hours of IOMeter set to 100% random, 100% write, 4K chunks of data at a queue depth of 64 across the entire array’s capacity. At the end of this test, the IOMeter file is deleted and the device was then tested using a given test section’s unique configuration.
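
For readers who would rather reproduce a similar torture phase with free tools, a rough equivalent can be launched from Python using fio (the flag set below is our approximation of the workload described above, not the review’s actual method, and it will destroy all data on the target device):

```python
import subprocess

subprocess.run([
    "fio",
    "--name=steady-state-torture",
    "--filename=/dev/sdX",               # placeholder: the device under test
    "--rw=randwrite",                    # 100% random, 100% write
    "--bs=4k",                           # 4K chunks of data
    "--iodepth=64",                      # queue depth of 64
    "--ioengine=libaio",
    "--direct=1",                        # bypass the page cache
    "--time_based", "--runtime=36000",   # ten hours, as in the review
], check=True)
```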


Processor: Core i5 2500
Motherboard: Asus P8P67 Deluxe
Memory: 8GB Corsair Vengeance LP “blue”
Graphics card: Asus 5550 passive
Primary Hard Drive: Intel 520 240GB
Power Supply: XFX 850

Below is a description of each SSD configuration we tested for this review:

Intel 910 800GB (Single Drive) HP mode: A single LUN of the Intel 910 800GB in its High Performance Mode

Intel 910 800GB (Raid 0 x2) std mode: Two of the Intel 910 800GB SSD LUNs in Standard Mode Configured in RAID 0

Intel 910 800GB (Raid 0 x2) HP mode: Two of the Intel 910 800GB SSD LUNs in High Performance Mode Configured in RAID 0

Intel 910 800GB (Raid 0 x4) std mode: All four of the Intel 910 800GB SSD LUNs in Standard Mode Configured in RAID 0

Intel 910 800GB (Raid 0 x4) HP mode: All four of the Intel 910 800GB SSD LUNs in High Performance Mode Configured in RAID 0

Intel DC S3700 200GB: A single DC S3700 200GB drive

Intel DC S3700 800GB: A single DC S3700 800GB drive

Intel DC S3700 200GB (RAID 0): Two DC S3700 200GB drives Configured in RAID 0

Intel DC S3700 800GB (RAID 0): Two DC S3700 800GB drives Configured in RAID 0

Intel 710 200GB (RAID 0): Two 710 200GB drives Configured in RAID 0
 
ATTO Disk Benchmark


The ATTO disk benchmark tests the drive’s read and write speeds using gradually larger files. For these tests, the ATTO program was set to run from its smallest to largest value (0.5KB to 8192KB) and the total length was set to 256MB. The test program then spits out an extrapolated performance figure in megabytes per second.


atto_w.jpg


atto_r.jpg



Both the read and write performance curves of the new DC S3700 drives are very impressive. They may not be the absolute fastest Enterprise devices we have tested, but the small file performance in both single and simple two drive RAID 0 configurations is amazing.

The 200GB model may be a touch slower on reads, but it appears that the largest difference between the 200GB and 800GB models is write performance. Simply put, the 800GB is noticeably faster throughout the performance curve. Most likely, the massive difference in onboard cache is the reason for the 200GB model’s lower write performance.
 

Crystal DiskMark


Crystal DiskMark is designed to quickly test the performance of your drives. The program measures sequential and random read/write speeds and allows you to set the number of test iterations to run. We left the number of tests at 5 and the size at 100MB.


cdm_w.jpg


cdm_r.jpg



AS-SSD


AS-SSD is designed to quickly test the performance of your drives. The program measures sequential and small 4K read/write speeds, as well as 4K speed at a queue depth of 6. While its primary goal is to accurately test solid state drives, it does equally well on all storage mediums; it just takes longer to run each test, as each one reads or writes 1GB of data.

asd_w.jpg


asd_r.jpg



Once again the overall read and write performance of this new model is very impressive. When it comes to read performance the difference between both sizes appears to be minor, but the 800GB does noticeably pull ahead of the 200GB model once write performance enters the equation. Depending on the intended environment, capacity and usage requirements, opting for the 200GB may actually make perfect sense. All sizes may have the same price per GB, but the 200GB does cost less overall yet will net nearly identical read performance.
 


IOMETER: Standard Test


IOMeter is heavily weighted towards the server end of things, and since we here at HWC are more end-user centric, we set and judge the results of IOMeter a little differently than most. To test each drive we ran 5 test runs per device (1, 4, 16, 64, 128 queue depth), each test having 8 parts and each part lasting 10 minutes with an additional 20 second ramp up. The 8 subparts were set to run 100% random, 80% read 20% write, testing 512B, 1K, 2K, 4K, 8K, 16K, 32K and 64K chunks of data. When each test is finished, IOMeter produces a report in which each of the 8 subtests is given a score in I/Os per second. We then take these 8 numbers, add them together and divide by 8. This gives us an average score for that particular queue depth that is heavily weighted for single user and workstation environments.
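
To make the scoring method explicit, here is how a single queue depth’s score is derived (illustrative numbers only, not actual results):

```python
subtest_iops = {  # chunk size -> I/Os per second reported by IOMeter
    "512B": 41000, "1K": 39000, "2K": 36000, "4K": 33000,
    "8K": 27000, "16K": 20000, "32K": 13000, "64K": 8000,
}
score = sum(subtest_iops.values()) / len(subtest_iops)   # plain 8-way average
print(f"queue depth score: {score:.0f} IOPS")
```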

iom_std.jpg

Because of its strong performance on both the read and write sides, the 800GB version certainly does pull ahead of the 200GB model. This becomes very evident once the DC S3700 is placed within its intended environment: RAID.

Even in a simple 2 drive RAID 0 array, the 800GB makes for one potent storage solution. It can actually hit nearly 70% of what our 4-way RAIDed Intel 910 800GB was capable of at deeper queue depths and it easily outperforms the 910 in shallow queue depth testing.

In fact, even the much less expensive 2-way RAID 0 200GB outperformed the 910’s 2 LUN configuration. When compared to the previous 710, even a single 200GB was able to outperform a pair of them. Needless to say, the new X25 Generation 3 controller is full of potential.
 

IOMETER: File Server Test


To test each drive we ran 6 test runs per device (1, 4, 16, 64, 128, 256 queue depth), with each test having 6 parts and lasting 10 minutes with an additional 20 second ramp up. The 6 subparts were set to run 100% random, 75% read 25% write, testing 512B, 4K, 8K, 16K, 32K and 64K chunks of data.

When each test is finished, IOMeter gives a report of the 6 subtests with scores in I/Os per second. We then take these 6 numbers, add them together and divide by 6. This gives us an average score for that particular queue depth that is heavily weighted for file server usage.


iom_fserve.jpg

At lower queue depths the difference between the 200GB and 800GB in both single and RAID environments is minimal. Both nearly outperform a pair of 710 series drives and both - drive for drive / LUN for LUN - outperform a similarly configured Intel 910 at nearly all queue depths.

However, once the queue depths start to ramp up, the extra onboard cache of the 800GB model does start to make a difference. At ultra deep queue depths the 800GB continues to stay strong while the 200GB does start to falter ever so slightly.

The reduction in performance at the 256 mark is not large, but it is something to take into consideration when deciding which capacity would be a better fit for a given build’s requirements.


IOMETER: Web Server Test


The goal of our IOMeter Web Server configuration is to help reproduce a typical heavily accessed web server. The majority of the typical web server’s workload consists of dealing with random small file size read requests.

To replicate such an environment we ran 6 test runs per device (1, 4, 16, 64, 128, 256 queue depth) with 8 parts each. Every one of these sections lasted 10 minutes with an additional 20 second ramp up. The 8 subparts were set to run 100% random (95% read 5% write), testing 512B, 1K, 2K, 4K, 8K, 16K, 32K and 64K chunks of data.

When each test is finished IOMeter gives a report of the 8 subtests with scores in I/Os per second. We then take these 8 numbers add them together and divide by 8. This gives us an average score for that particular queue depth which is heavily weighted for web server environments.


iom_web.jpg

Since this test is nearly all reads, the results are identical between both DC S3700 capacities. As Intel claims, these SSDs really do make excellent web server storage devices.

Due to the lack of a ‘dip’ at the end of the 200GB line’s results, it appears that the small fall-off we noticed in the other test results is the result of its reduced write performance and amount of onboard cache. However, both capacities flat-line at the 128 queue depth mark and at 64 or lower both perform well within tolerances of each other. It is only at ultra deep queue depths that the 200GB’s smaller amount of onboard cache becomes saturated.

This saturation point could of course be compensated for by simply using more drives in the array or even a more powerful RAID controller than our test bed’s onboard controller.


IOMETER: Email Server Test


The goal of our IOMeter Email Server configuration is to help reproduce a typical corporate email server. Unlike most servers, the typical email server’s workload is split evenly between random small file size read and write requests.

To replicate such an environment we ran 5 test runs per drive (1, 4, 16, 64, 128 queue depth), each test having 3 parts and lasting 10 minutes with an additional 20 second ramp up. The 3 subparts were set to run 100% random, 50% read 50% write, testing 2K, 4K and 8K chunks of data.

When each test is finished, IOMeter gives a report of the 3 subtests with scores in I/Os per second. We then take these numbers, add them together and divide by 3. This gives us an average score for that particular queue depth which is heavily weighted for email server environments.


iom_email.jpg

Once again the difference between the last generation 710 and the new DC S3700 is significant. At all but deep queue depths, a single 200GB DC S3700 outperforms a pair of 710 200GB drives. It was only when the queue depths deepened that the 200GB faltered, and even then a single 800GB DC S3700 was still able to meet and beat two Intel 710 200GB drives all the way to completion. That is an impressive performance increase from one generation to the next.
 

IOMETER: Our Standard Steady State Test


iom_std_ss.jpg

Our ‘standard’ IOMeter configuration tends to replicate a workstation-esque environment rather than a true server situation, but these results do point out one thing: Intel’s effort to improve long term performance and decrease latency is obviously paying dividends.

The new DC S3700 is not only relatively faster – LUN for LUN – than an Intel 910 under perfect scenarios, but it is also relatively faster when both are in a steady state. The difference actually gets greater when all devices are within real world conditions rather than in pristine shape.

Even though it may ‘only’ have eight channels instead of 10, this new Intel controller is also a massive upgrade over the past X25 G2 and the Intel 710 SSD.


IOMETER: File Server Steady State Test


iom_fserve_ss.jpg

Now that we turn our attention back to server oriented real world scenarios, we can see that the new DC S3700 drives are a lot more stable than anything we have seen to date. They have the ability to keep up their top-level performance even after countless hours of operation. The Intel 710 is not even in the same league; to find a competitor we have to turn to the 9-series.

Under pristine conditions, a dual LUN configured Intel 910 800GB was able to outperform the 200GB model, but when both are in a used state (as they will be in the real world) a very different picture emerges.


IOMETER: Web Server Steady State Test


iom_web_ss.jpg


As with the File Server results, the DC S3700 steady state performance actually improves in comparison to both the last generation Intel 710 series and the Intel 910 series. This new controller really has been designed with real world long term storage scenarios in mind.

As with the non-steady state File Server results, the 800GB model does handle ultra deep file queue depths better than the 200GB model. However, at anything less than 64 queue depths the difference is minimal and even at 128 queue depth the drop-off is not all that drastic.

The 200GB would make a great option for both less demanding environments and budget-oriented builds, whereas the 800GB would be perfect for situations where performance and capacity are the two top priorities.


IOMETER: Email Server Steady State Test


iom_email_ss.jpg


Unlike under optimum conditions, the RAID 0 710s are simply unable to outperform even a single 200GB DC S3700. At best, two of the last generation drives were only able to match a single DC S3700 200GB’s performance, and only then for a small portion of the entire testing suite. Simply put, these new drives start fast and stay fast.
 

Adobe CS5 Steady State Load Time


Photoshop is a notoriously slow loading program under the best of circumstances, and while the latest version is actually pretty decent, when you add in a bunch of extra brushes and such you get a really great torture test which can bring even the best of the best to their knees. To make things even more difficult, we have first placed the devices into a steady state so as to help recreate the absolute worst case scenario possible.

adobe.jpg

Thanks to this new controller’s great low queue depth performance these results come as no surprise. The DC S3700 800GB drive is an outstanding performer and the 200GB is almost as good.


Firefox Portable Offline Steady State Performance


Firefox is notorious for being slow to load tabs in offline mode once the number of pages to be opened grows larger than a dozen or so. We can think of few worse scenarios than having 100 tabs set to reload in offline mode upon Firefox startup, but this is exactly what we have done here.

By having 100 pages open in Firefox Portable, setting Firefox to reload the last session upon next start and then setting it to offline mode, we are able to easily recreate a worst case scenario. Since we are using Firefox Portable, all files are conveniently located in one place, making it simple to repeat the test as necessary. To ensure repeatability, before touching the Firefox Portable files we backed them up into a .rar file and only extracted a copy of it to the test device.

As with the Adobe test, we have first placed the devices into a steady state.


ff.jpg

Once again both capacities perform very well, but thanks to its extra onboard cache the 800GB does post noticeably better numbers.
 