
Crucial MX300 750GB SSD Review

SKYMTL

HardwareCanuck Review Editor
Staff member
Joined
Feb 26, 2007
Messages
12,840
Location
Montreal
If there is one constant in the SSD marketplace it is <i>change</i>. This entire storage subclass was created from the very desire for change – as hard drives weren’t meeting anyone's need for speed. Needless to say this market is extremely volatile and its very foundation can change seemingly overnight as technology evolves and new drives are launched. In this crucible, innovation is the key to staying relevant, and those that don’t innovate don’t last all that long.

One company known for a constant desire to innovate is Crucial, and that desire has served them extremely well over the years. This is not all that surprising, as Crucial is the consumer 'arm' of Micron and as such has insider knowledge of what’s on the horizon before many of their competitors. This insight has rarely led Crucial astray, and when combined with their focus on offering only a few models, their lineup usually has excellent staying power in a market known for mayfly-lifespan product cycles.

For the past few years their top of the line model has been the MX series, and it is still arguably the linchpin which keeps Crucial at the forefront of many consumers' minds when making an SSD purchasing decision. More importantly, this line is the very epitome of 'change'. The original MX100 introduced 16nm 128Gbit NAND, while the second generation MX200 made Drive Write Acceleration a household name outside of TLC based models. For the all new third generation MX300 Limited Edition, Crucial has continued this tradition of massive changes, altering the very way in which mainstream solid state drives are designed. This time Crucial is bringing 3D NAND to the masses. Equally noteworthy, this is also the first 'M' series (let alone 'MX') model to be based upon TLC rather than MLC class NAND.

<div align="center"><img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/intro.jpg" border="0" alt="" /></div>
Crucial’s MX300 is an interesting addition since the plan is to initially offer only the SATA-based 750GB version we’re reviewing here today, then roll the model out to other capacities and additional form factors like M.2 throughout the remainder of 2016. Indeed, just this week the rest of the lineup was announced.

Other than the odd staggering of SKU release dates, the MX300 really doesn’t represent an improvement over the MX200 on paper, and that may prove problematic. As a matter of fact, from a raw specifications standpoint this new drive has lower read / write throughput numbers, consumes slightly more power when in use and even costs a bit more per GB than its predecessor. The only area of improvement we can detect is standby power, which has been reduced from 100mW to 75mW.
<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/top_sm.jpg" border="0" alt="" />
</div>

This radical double change of NAND type and manufacturing technology does require a bit of explanation – one that we will go over in detail on the next page. For the time being, suffice it to say that these massive changes completely transform the very nature of this series. So much so that it is rather difficult to compare the new MX300 to its predecessor – though this will not stop us from doing precisely that. As such we will be comparing it not only to its predecessor the MX200 but also to a wide variety of entry and mainstream options that do not use 3D NAND. This way the differences should be accentuated and come to the forefront – where they belong.
<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/board2_sm.jpg" border="0" alt="" />
</div>

Beyond the physical NAND, Crucial has also swapped out the MX200’s controller. This is not all that unexpected since the MX-series has historically adopted the newest Marvell controller with every successive generation. In the past, MX models were powered by 8-channel Marvell 88SS9189 controllers - the MX100 used an earlier revision, the MX200 a more refined one. The MX300 on the other hand takes a page from the BX series and uses a <i>four</i> channel design.

While the newer Marvell 88SS1074 SoC is technically faster many consider it to be hobbled by its four-channel nature. That is a significant reduction that will become rather apparent in deeper queue depth scenarios. On the positive side at least the MX300 uses the same amount of RAM cache – in the case of the Limited Edition 750GB MX300 this means a single 512MB DRAM IC.
<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/board_sm.jpg" border="0" alt="" />
</div>

This combination of a different NAND type (and technology) with a different controller is certainly going to give the MX300 unique performance characteristics. On the positive side, the overall endurance rating has not changed all that much. The last generation MX200 500GB was rated for 160TB of drive writes and the MX200 1TB for 320TB, whereas the new MX300 750GB LE is rated for 220TB. At worst this is a 20TB reduction compared to a (non-existent) MX200 750GB's interpolated ~240TB, which is bloody amazing for a TLC NAND based drive. Equally impressive, this number can easily be increased by modifying the over-provisioning amount in Crucial's free Storage Executive application.
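To put these endurance ratings in more familiar terms, a TBW figure can be converted into drive writes per day. This is a rough illustrative sketch; the three-year window used below is our assumption, not a figure from Crucial's spec sheet.

```python
# Rough sanity check: convert a TBW (terabytes written) endurance rating
# into full drive writes and drive-writes-per-day (DWPD).
# NOTE: the 3-year window is an assumed figure for illustration only.

def dwpd(tbw_rating_tb: float, capacity_gb: float, years: float = 3.0) -> float:
    """Drive writes per day implied by a TBW rating over a given window."""
    full_drive_writes = (tbw_rating_tb * 1000) / capacity_gb
    return full_drive_writes / (years * 365)

# MX300 750GB LE: rated for 220TB of writes
print(round(dwpd(220, 750), 2))   # roughly 0.27 drive writes per day
```

Even a fraction of a full drive write per day works out to roughly 200GB of writes daily, which is far beyond what a typical home system generates.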

While the MX300 may use an entirely new PCB and different hardware it does come with the same level of data loss protection as its predecessors. There are onboard capacitors to ensure data integrity which will continue to be a major selling feature for this series.

The other thing that has not changed all that much is the asking price. With an MSRP of $200 it is almost to the cent what an MX200 costs per gigabyte. This is actually the most critical part of the equation, as the MX series has always been the more mainstream <i>performance</i> oriented option, and yet this MX300 appears to be – at best – no better than its predecessor. It also relies upon unproven 3D TLC NAND.

All of these factors combine to place a pretty significant handicap upon the shoulders of Crucial’s new MX300 LE. Only with stellar short and long term performance in a broad range of categories will this SSD live up to its predecessors’ sterling reputation. Otherwise this 'Limited Edition' may indeed live up to its <i>limited</i> name.
<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/mfg_sm.jpg" border="0" alt="" />
</div>
 

Introducing IMFT's 3D NAND



In the past, 2D or 'planar' NAND was laid down in a single layer of cells, each consisting of the actual storage substrate atop a 'bottom' control substrate. In the case of MLC NAND each cell held 2 bits of data, while with TLC it holds 3 bits, and total capacity came down to how many cells wide and long the die was. Since making the actual footprint of the chip larger wasn’t efficient, manufacturers had only two options when increasing densities: utilize a finer-grain manufacturing process or stack these two dimensional dies upon each other to increase the overall density of each NAND IC.

Neither of these solutions is without its own challenges, and switching up the manufacturing process was - at best - a stopgap measure. Every node shrink made the NAND transistors more fragile, and every die stacked made cooling the NAND cells more and more difficult. As such, combining the two has long been considered a dead-end design route that was quickly becoming harder and harder to improve upon.

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/3d_2.jpg" border="0" alt="" />
</div>

3D NAND, on the other hand, is not laid down in separate layers like carpet; rather, it is built as a three dimensional, cube-like structure. How each NAND manufacturer goes about the actual design differs, but IMFT does things in a very interesting way. Instead of opting for a radically new manufacturing process they have taken the best parts of planar NAND and applied them to their first generation 3D NAND design. The end result is 3D NAND that looks somewhat like an apartment building.

Much like an apartment building, there are individual cells laid out in a grid pattern throughout the chip’s length and width as well as above and below. To command and control each cell there are hallways (horizontal pathways) and elevators (vertical pathways) built in between the cells that connect not only the cells but also join up to the CMOS control circuits.

These pathways not only allow for command and control but also enhance cooling, so that the cells packed in at the chip’s center don’t have the heat limitations they would face in a 2D NAND chip of such massive height. To further help keep temperatures in check, this 'apartment' has a metal 'roof' with integrated cooling fins. This built-in cooling feature does decrease the density per 'floor'. However, any capacity reduction is a minor concern since instead of including a control substrate at each floor of this 32 story high-rise (as would be required in a 32 layer planar NAND die package), Micron has only needed to include one at the base of the 'building'.

In theory there really is no limit to how high this building-type structure can get, but 32 seems to be the point where things get tricky from both a latency and heat standpoint. As such we fully expect 'higher' IMFT 3D NAND chips to require either a new underlying design or new engineering feats before being ready for primetime.

Limiting this first generation to only 32 layers still represents a very nice increase in data density. When combined with TLC (rather than MLC) storage transistors, a single 3D NAND die can theoretically pack in 384Gbit of data – or 48 Gigabytes. For reference purposes, remember that IMFT's planar NAND maxed out at 128Gbit, though once again that was MLC and not TLC.

To further boost total capacity per 'chip', 3D NAND has another ace up its sleeve in the form of die stacking. As with 2D planar NAND, multiple dies can be stacked within a single package; Crucial needs just <i>two</i> 3D NAND dies per package to create the 96GB die package required for the MX300 LE 750GB. It is this combination of 3D design with stacking that will allow future Crucial solid state drives to hit multi-Terabyte levels without requiring a massive PCB or complicated controller. This will be of special interest to M.2 enthusiasts, as the lack of room for more than four NAND die packages ('ICs') on an M.2 2280's PCB was one of the main bottlenecks to that form-factor's acceptance with mainstream consumers.
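The capacity arithmetic above works out as follows; the 768GB raw figure behind the 750GB usable capacity is our assumption for illustration, not a number Crucial quotes:

```python
# Capacity arithmetic for IMFT's first-generation 3D TLC NAND,
# based on the figures quoted above (384Gbit dies, two dies per package).
# NOTE: the 768GB raw capacity behind the 750GB usable drive is an assumption.

GBIT_PER_DIE = 384                              # 32-layer 3D TLC die
GB_PER_DIE = GBIT_PER_DIE / 8                   # 48.0 GB per die
DIES_PER_PACKAGE = 2
GB_PER_PACKAGE = GB_PER_DIE * DIES_PER_PACKAGE  # 96.0 GB die package

ASSUMED_RAW_GB = 768                            # assumed raw NAND for a 750GB usable drive
packages_needed = ASSUMED_RAW_GB / GB_PER_PACKAGE

print(GB_PER_DIE, GB_PER_PACKAGE, packages_needed)   # 48.0 96.0 8.0
```

The takeaway: with 96GB per package, multi-hundred-gigabyte capacities now need only a handful of packages.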

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/3d.jpg" border="0" alt="" />
</div>

In the meantime, 750GB drives may not sound like such a massive leap over the densest planar NAND designs, but reliability and performance are paramount with this new generation rather than just increased data densities. To help alleviate reliability concerns IMFT hasn’t radically changed the underlying NAND transistor storage technology like Samsung has. Instead of opting for the technically superior Charge Trap design they have carried over the proven Floating Gate transistor of previous planar designs. This not only makes manufacturing less costly with more consistent output, but it also removes one more variable from the equation when coding the controller firmware.

As an added bonus it also grants the firmware team the luxury of carrying over much of their previous work to this new generation without too much tweaking. This means Drive Write Acceleration, wear-leveling algorithms, and the like do not have to be radically altered just because the underlying NAND layout has changed. As time passes more tweaking will obviously occur, but even launch day firmware will be fairly adept at handling the new NAND. It also means that tracking down and fixing issues is a much more straightforward affair without the learning curve associated with a major transistor change – one that may or may not perform in exactly the same manner as floating gates do.

This is why, when a rather large performance bug was discovered late in the product testing cycle, it only pushed the MX300’s release back from early Q2 to late Q2. In this short time span Crucial not only diagnosed and fixed the issue but also had the <i>luxury</i> of a full quality control / quality assurance testing cycle before launching this series. Compare and contrast that with when TLC first came out and the issues that plagued Samsung's Evo line (issues that were never entirely fixed in the 840 Evo) and there is a lot to be said for taking the conservative approach rather than transitioning from Floating Gate to Charge Trap transistor storage at the same time.

Regardless of opinions on which is the more optimal approach, there is no denying that even though Crucial has gone not only from tried and true MLC to TLC but also from 2D to 3D NAND, they have been able to keep the durability of the NAND extremely high. This increased durability is of course mainly due to Drive Write Acceleration and its ability to 'transform' TLC into quasi-SLC NAND, but it does point toward this 3D NAND not being as fragile as what usually accompanies a new generation of NAND.
 

Test System and Testing Methodology

Testing Methodology


Testing a drive is not as simple as putting together a bunch of files, dragging them onto a folder on the drive in Windows and using a stopwatch to time how long the transfer takes. Rather, there are factors such as read / write speed and data burst speed to take into account. There is also the SATA controller on your motherboard and how well it works with SSDs and HDDs to think about. For the absolute best results you really need a dedicated hardware RAID controller with its own dedicated RAM for drives to shine. Unfortunately, most people do not have the time, inclination or funds to do this. For this reason our test-bed is a more standard motherboard with no mods or high end gear added to it, to help replicate what the end user’s experience will be like.

Even when the hardware issues are taken care of, the software itself will have a negative or positive impact on the results. As with the hardware end of things, obtaining the absolute best results requires tweaking your OS setup; however, just like with the hardware, most people are not going to do this, so our standard OS setup is used. Except for the Windows load time tests, we have done our best to eliminate this issue by testing each drive as a secondary drive, with the main drive being an Intel DC S3700 800GB solid state drive.

For synthetic tests we used a combination of the ATTO Disk Benchmark, HDTach, HD Tune, Crystal Disk Benchmark, IOMeter, AS-SSD, Anvil Storage Utilities and PCMark 7.

For real world benchmarks we timed how long a single 10GB rar file took to copy to and then from the devices. We also used 10GB of small files (from 100KB to 200MB) totalling 12,000 files in 400 subfolders.

For all testing an Asus Sabretooth TUF X99 LGA 2011-v3 motherboard was used, running Windows 7 64bit Ultimate edition. All drives were tested in either AHCI mode using Intel RST 10 drivers, or NVMHCI mode using Intel NVMe drivers.

All tests were run 4 times and the averaged results are presented.

Between test suite runs (with the exception of IOMeter, which was done after every run) the drives were cleaned with HDDerase, SaniErase or a manufacturer's 'Toolbox' and then quick formatted to make sure they were in optimum condition for the next test suite.

Processor: Core i7 5930K
Motherboard: Asus Sabretooth TUF X99
Memory: 32GB Crucial Ballistix Elite DDR4-2666
Graphics card: NVIDIA GeForce GTX 780
Hard Drive: Intel DC S3700 800GB, Intel P3700 800GB
Power Supply: XFX 850

SSD FIRMWARE (unless otherwise noted):

OCZ Vertex 2 100GB: 1.33
Vertex 460 240GB: 1.0
Intel 730 240GB: L2010400
AMD R7 240GB: 1.0
Crucial MX200: MU01
Intel 750: 8EV10135
Kingston HyperX Predator 480GB: 0C34L5TA
OCZ Trion 480GB & 960GB: SAFM11.1
AData XPG SX930 240GB: 5.9E
AData SP550 240GB: O0730A
PNY CS2211: CS221016
PNY CS1311: CS131122
ZOTAC Premium Edition: SAFM01.6
Apacer AS720: PLD1130
Apacer AS330: AP121PD0
Crucial MX300: M0CR011

Toshiba TC58 controller:
OCZ Trion 480GB & 960GB - Custom firmware w/ 19nm Toggle Mode TLC NAND

Samsung MDX controller:
Samsung 840 Pro 256GB- Custom firmware w/ 21nm Toggle Mode NAND

SandForce SF1200 controller:
OCZ Vertex 2 - ONFi 2 NAND

Marvell 9183 controller:
Plextor M6e 256GB- Custom firmware w/ 21nm Toggle Mode NAND

Marvell 1074 controller:
Crucial MX300 - Custom firmware w/ IMFT 384Gbit TLC 3D NAND

Marvell 9293 controller:
Kingston HyperX Predator - Custom firmware w/ 19nm Toggle Mode NAND


Intel X25 G3 controller:
Intel 730 - Custom firmware w/ ONFi 2 NAND

Intel NVMe G1 Controller:
Intel 750 - Custom firmware w/ MLC 20nm NAND

Phison PS3110 Controller:
Kingston HyperX Savage 240GB - 19nm Toggle Mode NAND
PNY CS2211: 15nm Toggle Mode NAND
PNY CS1311: 19nm TLC NAND
ZOTAC Premium Edition: 19nm MLC
Apacer AS330 - TLC NAND

JMicron JMF670H Controller:
AData XPG SX930 240GB - 128Gbit MLC NAND
Apacer AS720 - 128Gbit MLC NAND

SMI SM2256 Controller:
AData SP550 240GB - TLC NAND

Special Thanks to Crucial for providing the memory for this testbed.
 

Read / Write Bandwidth

Read Bandwidth


<i>For this benchmark, HDTach was used. It shows the potential read speed you are likely to experience with these drives. The long test was run to give a slightly more accurate picture. We don’t put much stock in burst speed readings and thus no longer include them. The most important number is the Average Speed number, which will tell you what to expect from a given drive in normal, day to day operations. The higher the average, the faster your entire system will seem.</i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/read.jpg" border="0" alt="" />
</div>


Write Performance


<i>For this benchmark HD Tune Pro was used. To run the write benchmark on a drive you must first remove all partitions from it; then and only then will it allow you to run this test. Unlike some other benchmarking utilities, HD Tune Pro writes across the full area of the drive and thus easily exposes any weakness a drive may have.</i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/write.jpg" border="0" alt="" />
</div>


We must admit to being extremely curious to see what this new NAND could do in sequential performance testing. We wanted to see precisely how the DWA has been implemented. As we all know the smaller MX200 series had a non-fixed / floating DWA cache amount that varied depending on how much free space was available. This did boost performance but made the smaller MX200's write performance a touch more variable than some would like.

We can say with near certainty that this floating DWA cache algorithm has not been carried over to the MX300. Instead there is a fixed amount. In testing, this drive consistently started out near its write specification of over 500MB/s, but quickly plummeted to the 275-290MB/s range just past the 5% mark. Put another way, this drive's cache is approximately 40GB to 50GB in size, which should be more than enough for home users' needs. More importantly, it was very consistent and did not vary based on the amount of data on the drive – or at least did not until there was less than 120GB of free space. This is because it takes 3 bits' worth of TLC space to make 1 bit of 'SLC', as each TLC NAND cell is transformed from a 3-bit storage container into a 1-bit one. Luckily the over-provisioning area is being used for DWA, and as such consumers can expect peppy short-burst write performance right up until near full capacity is reached.
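The cache-size estimate above can be reproduced from the drop-off point; the 5% figure is simply where we observed write speed falling, and the 3:1 conversion follows from treating each 3-bit TLC cell as a 1-bit 'SLC' cell:

```python
# Estimating the fixed DWA (pseudo-SLC) cache from the observed drop-off.
# The 0.05 figure is where we measured write speed falling from ~500MB/s
# down to the 275-290MB/s range on the 750GB drive.

USABLE_GB = 750
DROPOFF_FRACTION = 0.05

slc_cache_gb = USABLE_GB * DROPOFF_FRACTION
print(slc_cache_gb)          # 37.5 -> consistent with our ~40-50GB estimate

# Each TLC cell used as 'SLC' stores 1 bit instead of 3, so the cache
# consumes three times its size in underlying TLC capacity.
tlc_consumed_gb = slc_cache_gb * 3
print(tlc_consumed_gb)       # 112.5 -> why behaviour changes under ~120GB free
```
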

On the other hand, the write numbers really aren't all that great, to be honest. This new SSD actually falls behind the BX200 in this respect. Ouch.
 

ATTO Disk Benchmark



<i>The ATTO disk benchmark tests a drive's read and write speeds using gradually larger files. For these tests, the ATTO program was set to run from its smallest to largest value (.5KB to 8192KB) and the total length was set to 256MB. The test program then spits out an extrapolated performance figure in megabytes per second.</i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/atto_w.jpg" border="0" alt="" />

<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/atto_r.jpg" border="0" alt="" />
</div>

We highly doubt many will be displeased with these results, as the MX300 750GB acts more like an MLC based model than a TLC based one. Obviously this new 3D NAND has a lot to offer; it is just a shame that it is not being fully harnessed by this model.
 

Crystal DiskMark / PCMark 7

Crystal DiskMark


<i>Crystal DiskMark is designed to quickly test the performance of your drives. The program measures sequential and random read/write speeds, and allows you to set the number of test iterations to run. We left the number of tests at 5 and the size at 100MB.</i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/cdm_w.jpg" border="0" alt="" />
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/cdm_r.jpg" border="0" alt="" />
</div>


PCMark 7


<i>While there are numerous suites of tests that make up PCMark 7, only one is pertinent: the HDD Suite. The HDD Suite consists of numerous tests that try to replicate real world drive usage: everything from how long a simulated virus scan takes to complete, to OS start up time, to game load time. However, we do not consider this anything other than just another suite of synthetic tests. For this reason, while each test is scored individually, we have opted to include only the overall score.</i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/pcm.jpg" border="0" alt="" />
</div>


At least now we know why this is called a 'Limited Edition', as its performance is rather limited. We are especially disappointed with the overall read performance; obviously the MX300's DWA settings need to be tweaked so they don't chew up as many processor cycles during read IO requests. Even so, this drive is still quick, and we doubt the average user would ever guess it is a TLC based model; most will simply assume it uses ONFi MLC NAND. That certainly is noteworthy, even if the MX300 LE is unable to match even an MX200 1TB's performance level.
 

AS-SSD / Anvil Storage Utilities Pro

AS-SSD


<i>AS-SSD is designed to quickly test the performance of your drives. The program measures sequential and small 4K read/write speeds as well as 4K speed at a queue depth of 6. While its primary goal is to accurately test Solid State Drives, it works equally well on all storage mediums; each test simply takes longer to run since it reads or writes 1GB of data.</i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/asd_w.jpg" border="0" alt="" />
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/asd_r.jpg" border="0" alt="" /></div>



Anvil Storage Utilities Pro


<i>Much like AS-SSD, Anvil Pro was created to quickly and easily – yet accurately – test your drives. While it is still in the beta stages it is a versatile and powerful little program. It can test numerous read / write scenarios, but two in particular stand out for us: 4K at a queue depth of 4 and 4K at a queue depth of 16. A queue depth of four with 4K sectors equates to what most users will experience in an OS scenario, while a depth of 16 will be encountered only by power users and the like. We have also included the 4K queue depth 1 results to help put these two other numbers in their proper perspective. All settings were left in their default states and the test size was set to 1GB.</i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/anvil_w.jpg" border="0" alt="" />
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/anvil_r.jpg" border="0" alt="" /></div>


Once again we are seeing a drive that has been designed to accentuate overall write performance, even at the expense of read throughput. Hopefully this decision will not noticeably impact performance in real world scenarios – as Solid State Drives in home computers spend much more time dealing with <i>read</i> requests than with <i>write</i> requests.

Also noteworthy is that this drive has been optimized for shallow queue depth scenarios. This is a lot less controversial since home systems rarely have I/O request queue depths that go beyond single digits. Put another way this is a decent drive, but one that is arguably no better a performer than the MX200 it replaces.
 

IOMeter Results

IOMeter


<i>IOMeter is heavily weighted towards the server end of things, and since we here at HWC are more end-user centric we set up and judge IOMeter's results a little differently than most. To test each drive we ran 5 test runs per drive (queue depths of 1, 4, 16, 64 and 128), each run having 8 parts, each part lasting 10 minutes with an additional 20 second ramp up. The 8 subparts were set to run 100% random, 80% read / 20% write, testing 512B, 1K, 2K, 4K, 8K, 16K, 32K and 64K chunks of data. When each test is finished IOMeter spits out a report in which each of the 8 subtests is given a score in I/Os per second. We then take these 8 numbers, add them together and divide by 8. This gives us an average score for that particular queue depth that is heavily weighted for single user environments.</i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/iom.jpg" border="0" alt="" />
</div>
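Our scoring method reduces each queue depth's report to a single number; a minimal sketch of the arithmetic, using made-up placeholder results rather than anything we measured:

```python
# Condensing one IOMeter run into a single score: average the I/Os-per-second
# results of the 8 chunk-size subtests (512B through 64K) for a queue depth.
# The sample numbers below are made-up placeholders, not measured results.

def queue_depth_score(subtest_iops):
    """Average IOPS across the 8 subtests for one queue depth."""
    assert len(subtest_iops) == 8, "expected one result per chunk size"
    return sum(subtest_iops) / len(subtest_iops)

hypothetical_run = [9500, 9400, 9100, 8800, 7600, 6200, 4900, 3300]
print(round(queue_depth_score(hypothetical_run)))   # 7350
```
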

There is no way to sugar coat these results. This drive is slow for an MX model. It is slow for a mainstream late Q2 2016 model. The MX300 is, however, decent bordering on great for an entry level model. Unlike the mainstream marketplace, the entry level market is filled to capacity with planar TLC NAND based models, and this SSD would have been a veritable shark amongst a school of tasty tuna. Sadly, it is not marketed nor priced like a BX model, and as such it will never find its way into entry level workstation systems – like the MX200 did.
 

Windows 8 / Adobe CS5 Load Time

Windows 8.1 Start Up w/ Boot Time A/V Scan Performance



<i>When it comes to hard drive performance there is one area that even the most oblivious user notices: how long it takes to load the Operating System. We have chosen Windows 8.1 64bit Pro as our Operating System with all 'fast boot' options disabled in the BIOS. In previous load time tests we would use the Anti-Virus splash screen as our finish line; this however is no longer the case. We have not only added in a secondary Anti-Virus to load on startup, but also an anti-malware program. We have set Super Anti-Spyware to initiate a quick scan on Windows start-up and the completion of the quick scan will be our new finish line. </i>


<div align="center"><img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/boot.jpg" border="0" alt="" /></div>


Adobe CS5 Load Time


<i>Photoshop is a notoriously slow loading program under the best of circumstances, and while the latest version is actually pretty decent, when you add in a bunch of extra brushes and such you get a really great torture test which can bring even the best of the best to their knees. Let’s see how our review unit fared in the newly updated Adobe crucible!</i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/adobe.jpg" border="0" alt="" /></div>


As you can see the MX300 Limited Edition is far from being a poor drive with terrible performance. Instead this is a decent entry level / mainstream crossover model that offers more than enough performance for the typical home user. By that same token this model is priced more in line with the <i>upper</i>-mainstream marketplace and unlike the previous MX200 it is outclassed by similarly priced models.

We do have to question Crucial's motivations behind making this a TLC NAND based drive and labeling it as an 'MX' series model. IMFT is manufacturing next generation 32-layer 3D MLC NAND, and it would have been a much more optimal fit for the 'MX' moniker. Conversely, this TLC NAND would have potentially made for the best 'BX' model to be released to date. Hopefully the upcoming TX line makes up for the inevitable sales loss.
 

Firefox Performance / Real World Data Transfers

Firefox Portable Offline Performance


<i>Firefox is notorious for being slow to load tabs in offline mode once the number of pages grows beyond a dozen or so. We can think of few worse scenarios than having 100 tabs set to reload in offline mode upon Firefox startup, but this is exactly what we have done here.

By having 100 pages open in Firefox portable, setting Firefox to reload the last session upon next session start and then setting it to offline mode, we are able to easily recreate a worst case scenario. Since we are using Firefox portable all files are easily positioned in one location, making it simple to repeat the test as necessary. In order to ensure repetition, before touching the Firefox portable files, we have backed them up into a .rar file and only extracted a copy of it to the test device.</i>


<div align="center"><img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/ff.jpg" border="0" alt="" /></div>


Real World Data Transfers


<i>No matter how good a synthetic benchmark like IOMeter or PCMark is, it cannot really tell you how your drive will perform in “real world” situations. All of us here at Hardware Canucks strive to give you the best, most complete picture of a review item’s true capabilities, and to this end we run timed data transfers to give you a general idea of how its performance relates to real life use. To help replicate worst-case scenarios we transfer a 10.00GB contiguous file and a folder containing 400 subfolders with a total of 12,000 files varying in size from 100KB to 200MB (10.00GB total).

Testing includes transferring to and from the devices using MS RichCopy while logging the performance of the drive. Here is what we found.</i>

<div align="center">
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/copy_sm.jpg" border="0" alt="" />
<img src="http://images.hardwarecanucks.com/image/akg/Storage/MX300/copy_lg.jpg" border="0" alt="" />
</div>
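For readers who want to turn timed transfers like these into throughput figures, the conversion is straightforward; the elapsed time below is a hypothetical placeholder, not one of our measured results:

```python
# Converting a timed transfer into average throughput.
# NOTE: the 25-second elapsed time is a hypothetical placeholder.

def throughput_mb_s(size_gb: float, elapsed_s: float) -> float:
    """Average MB/s for a transfer of size_gb gigabytes taking elapsed_s seconds."""
    return (size_gb * 1000) / elapsed_s

print(throughput_mb_s(10.0, 25.0))   # 400.0 MB/s for a 10GB file in 25s
```
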

 
