
Intel DC S3500 480GB SSD Review (Single & RAID)

Intel’s DC S3500 has been created to rewrite many market-driven preconceptions about enterprise-level SSDs. Unlike the Intel 910 or even the DC S3700 series, which both focus on competing against other enterprise SSDs, it has been designed to displace high performance 15,000 RPM hard drives from the server rack. At first glance this may not seem all that significant, but the DC S3500 is one of the first enhanced-endurance SSDs available at an accessible price point.

The mature and ubiquitous hard drive has always been the enterprise environment’s de facto standard. Displacing it as the gold standard in a market that requires value, capacity and long-term endurance is a titanic undertaking, and many other manufacturers have deftly avoided this battle.

When purchasing managers or even enthusiasts think about server-grade SSD solutions, stratospheric pricing has often caused them to look elsewhere. Less expensive, capacity-focused mainstream alternatives have always been available, while SSDs represented very much an unknown factor. Now that solid state technology has matured, NAND prices are falling and longevity, a cornerstone of enterprise storage, is finally reaching a point of viability. This has allowed Intel to create the new DC S3500, a lineup of drives which would have been impossible to design a year ago.

TCO.jpg

While there is no denying most high end SSDs outperform even the fastest 15K hard drive, benchmark numbers typically fall on deaf ears outside of the mass market. However, SSDs have always offered some tangible secondary benefits as well: power consumption, space requirements and heat output are all significantly reduced. As data centers and server farms expand, these three factors are becoming increasingly important, so there is an emerging interest in something other than traditional spindle-based storage media. This is especially true for the read-heavy scenarios the DC S3500 has been designed to excel at. Intel states that read-only tasks which would have required an entire six foot / 42U rack of five hundred power hungry 15K RPM hard drives can now be accomplished with a mere twelve DC S3500 drives.

Intel already has a firm foothold in the enterprise SSD market. The DC S3700 series uses ultra-expensive 25nm HET MLC NAND, while the impressive 910 series uses a PCI-E interface to reach dizzying levels of performance. Each of those solutions will easily set a company or individual back thousands of dollars, but the DC S3500 has slightly more humble goals in mind. It uses heavily binned, highly screened ONFi 2 20nm enterprise-certified MLC NAND similar to that found in other recently released Intel drives.

chart1.jpg

This aggressive screening process, in conjunction with artificially capped write performance, allows the DC S3500 to offer an endurance rating of up to 450TB instead of the 22TB of the mass market Intel 335. To put this in different terms, the DC S3500 is rated for about 0.3 Drive Writes Per Day versus the 335’s 0.05 DWPD, and complies with JEDEC’s JESD218 standard. This is, however, far lower write endurance than the 10 DWPD the DC S3700 is rated for.

Using non-HET NAND has a major impact on write endurance and write performance, but the cumulative effect on the DC S3500’s upfront cost and Total Cost of Ownership is significant. Thanks to the 20nm NAND, these new drives will come with a price per gigabyte not that much higher than the average consumer grade SSD, or about $1.20 per GB for the 480GB model. That’s right, the DC S3500 480GB drive we’re reviewing here goes for just $579.
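For reference, the endurance and pricing figures above reduce to simple arithmetic. Here is a quick sketch, assuming the five-year warranty window Intel typically attaches to its DC-class drives and using the 275TB rating quoted later for the 480GB model:

```python
# Back-of-the-envelope endurance and cost math for the figures quoted above.
# The 5-year warranty window and the 335's 240GB capacity are our assumptions.

def dwpd(tbw_tb: float, capacity_gb: float, warranty_years: float = 5) -> float:
    """Drive Writes Per Day implied by a total-bytes-written rating."""
    total_writes_gb = tbw_tb * 1000          # TB -> GB (decimal)
    days = warranty_years * 365
    return total_writes_gb / (capacity_gb * days)

print(f"DC S3500 480GB: {dwpd(275, 480):.2f} DWPD")  # ~0.31, the ~0.3 cited above
print(f"Intel 335 240GB: {dwpd(22, 240):.2f} DWPD")  # ~0.05
print(f"DC S3500 480GB: ${579 / 480:.2f} per GB")    # ~$1.21
```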

With lower tier enterprise-grade SSDs reaching more affordable levels, one might think the DC S3500 would be a prime candidate for enthusiasts who want some peace of mind. Not so fast. As we’ll see in the coming pages, these drives are laser-targeted at a specific usage pattern, one which favors their deployment in high level storage applications rather than gaming systems.

mfg.jpg
 

A Closer Look at the Intel DC S3500


top_sm.jpg

Intel has always taken a durable, if rather unrefined, approach to solid state drive cases and the DC S3500 is no exception. The only external difference between the DC S3700 and DC S3500 models is the sticker itself, as both share the exact same 7mm 2.5” form factor housing.

bottom_sm.jpg

As with the DC S3700 before it, both of our DC S3500 480GB samples show rather extensive machining and tool markings. Intel doesn’t waste money on making their SSDs look ‘pretty’; rather, they are durable, robust and utilitarian in appearance.

board1_sm.jpg

Opening up the case, we can see the DC S3500 480GB shares the exact same architecture as the DC S3700 series. The only differences between the two series are the type of NAND, the over-provisioning level and the number of RAM ICs. Much like the Intel DC S3700 800GB, the DC S3500 480GB has two 512MB DDR3-1600 Micron RAM ICs, or four times the amount of RAM found on the 200GB DC S3700.

Both models use the same highly capable Intel X25 Gen 3 controller with two capacitors on the edge of the PCB. These capacitors enable Flush in Flight and provide more than enough reserve power to allow the DC S3500 to flush its buffers and complete any outstanding writes in the event of unexpected power loss. Unlike most 7mm form factor drives we have seen, Intel has also included plastic ‘stiffeners’ to ensure that there is no flexing or movement of the PCB.

board2_sm.jpg

Just like the DC S3700, the DC S3500 has 16 NAND ICs filling up every slot on the PCB. The NAND itself is actually the main difference between the DC S3700 and DC S3500 series; instead of HET MLC NAND, Intel has opted for standard 20nm ONFi 2 MLC NAND.

Standard 20nm MLC NAND has an erase cycle life of between 3,000 and 5,000 cycles, which is quite low for constant, high-demand environments like the typical server. However, the binning process and optimized performance curves should result in drastically increased longevity. There is 512GB worth of NAND onboard, which means a mere ~7% of over-provisioning is being used instead of the ~20% of the DC S3700 series.
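The math behind that over-provisioning figure is straightforward; a minimal sketch, ignoring the GiB/GB distinction:

```python
# Raw NAND minus user capacity, expressed against user capacity.
# Decimal GB throughout; the GiB/GB distinction is ignored for simplicity.
raw_nand_gb, user_gb = 512, 480
op_percent = (raw_nand_gb - user_gb) / user_gb * 100
print(f"DC S3500 480GB over-provisioning: ~{op_percent:.1f}%")  # ~6.7%, the ~7% cited
```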

This less durable NAND explains the difference in each drive series’ rated write lifespan. Instead of an SSD rated for petabytes’ worth of writes, the DC S3500 is rated in terabytes; in the 480GB model’s case, 275TB.
 

Introducing the Intel X25 (Gen3) Controller


controller.jpg

Even though it has been five years since their last in-house controller was released, Intel has been hard at work improving upon previous designs. The end result is the X25 generation 3 controller, which was first used inside the DC S3700 and now powers the DC S3500 series of solid state drives.

Very little is actually known about the physical layout and design of the controller itself. All Intel is willing to share about the architecture is that it is a 100% in-house proprietary design using custom built firmware. However, while Intel may not be willing to give specifics on the core’s primary components, they were more than willing to share feature-specific details.

The most obvious departure from previous designs is the new X25’s use of 8 channels rather than the last generation’s 10-channel layout. 8-channel designs have become the de facto standard for the industry, and in all likelihood Intel went with slightly fewer pathways to help reduce latency. Much like OCZ’s Barefoot 3 controller, latency, long term sustained performance and durability are the three central domains Intel wanted to improve upon.

FRE.jpg

On the latency front, Intel states that 99.9% of the time average latency will be less than 0.5ms. This is the same level of performance as when this controller is paired with Intel’s more expensive DC S3700 series. Unfortunately, while the IOPS of this controller have been greatly improved over previous Intel branded controllers, the X25 Gen 3 has been somewhat handicapped by Intel’s choice of NAND. Instead of the DC S3700’s 75K read / 35K write IOPS rating, the Intel DC S3500 is rated for 75K read / 11.5K write. Obviously, the firmware of this controller has been toned down to reduce undue stress on the less durable MLC NAND; however, with the same read performance rating this will only be a cause for concern in certain write-centric scenarios.

Increased performance with decreased latency is all well and good, but what really makes the DC S3500 distinctive is the fail-safes it has in place to ensure data safety. All controllers implement a certain level of checks and balances to ensure data integrity, but the new Intel X25 and the DC S3500 take this approach to entirely new levels. The Uncorrectable Bit Error Rate (UBER) is an almost unheard of 1 bit in 10^17, which is the same as the DC S3700’s rating. To put this into understandable terminology, Intel expects there to be a single unrecoverable bit error for every 12.5 petabytes of data read, and for this error to happen at the same expected interval as it would on the more costly DC S3700 series.
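For those who want to check the math, here is how that UBER rating converts into the 12.5 petabyte figure:

```python
# Converting the 1-in-10^17 UBER rating into the ~12.5PB figure quoted above.
bits_between_errors = 1e17                       # one unrecoverable error per 10^17 bits read
petabytes_between_errors = bits_between_errors / 8 / 1e15
print(f"~{petabytes_between_errors:.1f} PB read per unrecoverable bit error")  # ~12.5 PB
```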

The X25 is able to offer such high levels of dependability thanks to the comprehensive data protection routines it uses under the auspices of Intel’s Stress Free Protection. At its most basic, this philosophy of end to end protection starts with over-provisioning. Much like the later generation of SandForce controllers, Intel has chosen ~7% of over-provisioning for their new controller: the 480GB model starts out with 512GB of NAND and subtracts ~32GB of space to ensure a fail-safe partition is maintained. Unfortunately, this is a lot less than the DC S3700’s 20% OP level, and when combined with the less durable nature of the 20nm MLC NAND used, it becomes apparent that the DC S3500 isn't meant for heavy write tasks.

This spare area can be used for everything from bad block replacement to garbage collection. It also allows for more consistent long term performance as the controller will always have access to free blocks to use for wear leveling even when the drive is nearing full capacity.

aes.jpg

Like the previous generation X25 controller, this new iteration makes use of auto-encryption with AES routines which have now been upgraded from 128-bit to 256-bit. By default, Intel has disabled AES encryption but it can be initiated via software on a case by case basis. This added flexibility allows the controller to be more adaptable to the needs of individual clients. Using the built in AES encryption routines will impart a certain amount of performance loss due to increased overhead, but there are scenarios where this is a trade-off well worth making.

The X25 G3 also uses BCH error correction algorithms. However, unlike any other controller, the X25 G3 writes parity data to the NAND and performs ECC on the onboard memory as well, rather than focusing solely on the primary storage environment.

All these features are fairly typical for modern controllers, albeit at the higher end of the scale. However, Intel has not stopped at basic levels of data safety. They’ve stepped up the game by double and triple checking nearly every segment of data, so much so that a ‘simple’ write request to the NAND looks like this: a CRC is created along with the data, the data + CRC is written to the NAND, an LBA tag is created, and then a parity bit is also written and checked. Finally, during read IO requests all of these additional data protections are read back, including the ECC on the memory, to ensure the data is safe and reliable.
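To make that write sequence more concrete, here is a loose sketch of the logic in Python. The names and data structures are purely illustrative, not Intel’s firmware:

```python
import zlib

# Illustrative only: the function and field names below are ours, not Intel's,
# and real firmware performs these steps in hardware on a per-page basis.

def protected_write(nand: dict, lba: int, data: bytes) -> None:
    crc = zlib.crc32(data)                        # 1. CRC created along with the data
    record = {
        "data": data, "crc": crc,                 # 2. data + CRC written to the NAND
        "lba": lba,                               # 3. LBA tag created and stored
        "parity": bin(crc).count("1") & 1,        # 4. parity bit written and checked
    }
    nand[lba] = record

def protected_read(nand: dict, lba: int) -> bytes:
    # 5. On reads, every protection field is re-verified before data is returned.
    record = nand[lba]
    assert record["lba"] == lba, "misdirected read detected"
    assert zlib.crc32(record["data"]) == record["crc"], "data corruption detected"
    return record["data"]

nand = {}
protected_write(nand, 42, b"mission critical data")
print(protected_read(nand, 42))
```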

If at any time a check fails, the controller will notify the system host controller to recover the data via its own ECC. Should this happen, the administrator is instructed to remove the drive from the RAID array, replace it with a new one and RMA the failing drive. Meanwhile, all of the data remains safe on other segments of the array.

On top of all this, at standard intervals (during low IO periods, at boot up, etc.) the controller will routinely check the status of all onboard capacitors. If they fail the self-test, the controller will automatically disable the write buffer and notify the system’s host controller of the issue.

It is worth noting that the controller does not treat its internal NAND as an array. Rather, it treats the ICs as one logical unit, so there is no equivalent of the LSI SF2281’s RAISE (Redundant Array of Independent Silicon Elements) being implemented, which is precisely why such extreme error recovery procedures are warranted. Taken as a whole, this high performance controller is highly adaptable and more capable than any of its predecessors, or arguably any other controller on the market today.
 

Testing Methodology


Testing a drive is not as simple as putting together a bunch of files, dragging them onto a folder on the drive in Windows and using a stopwatch to time how long the transfer takes. Rather, there are factors such as read / write speed and data burst speed to take into account. There is also the SATA controller on your motherboard, and how well it works with SSDs and HDDs, to think about. For best results you really need a dedicated hardware RAID controller with dedicated RAM for drives to shine. Unfortunately, most people do not have the time, inclination or funds to do this. For this reason our testbed is a more standard motherboard with no mods or high end gear added to it. This helps replicate what the end user’s experience will be like.

Even when the hardware issues are taken care of, the software itself will have a negative or positive impact on the results. As with the hardware side of things, to obtain the absolute best results you do need to tweak your OS setup; however, just as with the hardware, most people are not going to do this, so our standard OS setup is used. Except for the Vista load time tests, we have done our best to eliminate this issue by testing each drive as a secondary drive, with the main drive being a Phoenix Pro 120GB solid state drive.

For synthetic tests we used a combination of ATTO Disk Benchmark, HDTach, HD Tune, Crystal Disk Benchmark, IOMeter, AS-SSD and PCMark Vantage.

For real world benchmarks we timed how long a single 10GB rar file took to copy to and then from the devices. We also used 10GB of small files (from 100KB to 200MB), with a total of 12,000 files in 400 subfolders.

For all testing an Asus P8P67 Deluxe motherboard was used, running Windows 7 Ultimate 64-bit (or Vista for the boot time test). All drives were tested in AHCI mode using Intel RST 10 drivers.

All tests were run 4 times and the averaged results are presented.

In between test suite runs (with the exception of IOMeter, where this was done after every run), the drives were cleaned with either HDDErase, SaniErase, the OCZ SSD Toolbox or the Intel SSD Toolbox and then quick formatted to make sure they were in optimum condition for the next test suite.
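For readers who want to replicate this step on Linux rather than with the Windows tools above, an ATA Secure Erase via hdparm achieves the same clean-slate result; a rough sketch:

```python
import subprocess

# Device path and password are placeholders; the drive must not be in a
# security-frozen state, and the erase destroys ALL data on it.
dev, pw = "/dev/sdX", "p"
subprocess.run(["hdparm", "--user-master", "u", "--security-set-pass", pw, dev], check=True)
subprocess.run(["hdparm", "--user-master", "u", "--security-erase", pw, dev], check=True)
```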


Steady-State Testing

While optimum condition performance is important, knowing exactly how a given device will perform after days, weeks and even months of usage is actually more important for most consumers. For home users and workstation owners, our non-TRIM performance test is more than good enough. Sadly, it is not up to par for enterprise solid state storage devices and these most demanding of consumers.

Enterprise administrators are more concerned with the realistic long term performance of any device than with its brand new performance, as downtime is simply not an option. Even though an enterprise device will have many techniques for obfuscating and alleviating a degraded state (e.g. idle time garbage collection, multiple controllers, etc.), there does come a point where these techniques fail to counteract the negative results of long term usage in what is obviously a non-TRIM environment. The point at which performance falls and then plateaus at a lower level is known as the “steady state”, or the “degraded state” in the consumer arena.

To help all consumers gain a better understanding of how much performance degradation there is between “optimal” and “steady state”, we have included optimal results alongside tests conducted after first degrading a drive until its performance plateaus. These tests are labelled as “Steady State” results.

While the standard for steady state testing is 8 hours, we feel this is not quite pessimistic enough and have extended the pre-test run to a full ten hours before testing actually commences. The pre-test or “torture test” consists of our standard non-TRIM performance test: to quickly induce a steady state, we ran ten hours of IOMeter set to 100% random, 100% write, 4K chunks of data at a queue depth of 64 across the entire array’s capacity. At the end of this run, the IOMeter file is deleted and the device is then tested using a given test section’s unique configuration.
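For those looking to reproduce this preconditioning run on Linux, a roughly equivalent torture run can be scripted with fio instead of IOMeter; a sketch under those assumptions:

```python
import subprocess

# Destructive: writes raw 4K random data to the whole device for ten hours.
# /dev/sdX is a placeholder for the drive under test.
subprocess.run([
    "fio", "--name=steady-state-precondition",
    "--filename=/dev/sdX", "--direct=1", "--ioengine=libaio",
    "--rw=randwrite", "--bs=4k", "--iodepth=64",   # 100% random 4K writes at QD64
    "--time_based", "--runtime=36000",             # ten hours, matching the text
], check=True)
```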


Processor: Core i5 2500
Motherboard: Asus P8P67 Deluxe
Memory: 8GB Corsair Vengeance LP “blue”
Graphics card: Asus 5550 passive
Primary Hard Drive: Intel 520 240GB
Power Supply: XFX 850

Below is a description of each SSD configuration we tested for this review:

Intel 910 800GB (Single Drive) HP mode: A single LUN of the Intel 910 800GB in its High Performance Mode

Intel 910 800GB (RAID 0 x2) HP mode: Two of the Intel 910 800GB SSD LUNs in High Performance Mode Configured in RAID 0

Intel 910 800GB (RAID 0 x4) HP mode: All four of the Intel 910 800GB SSD LUNs in High Performance Mode Configured in RAID 0

Intel DC S3500 480GB: A single DC S3500 480GB drive

Intel DC S3500 480GB (RAID 0): Two DC S3500 480GB drives Configured in RAID 0

Intel DC S3700 200GB: A single DC S3700 200GB drive

Intel DC S3700 800GB: A single DC S3700 800GB drive

Intel DC S3700 200GB (RAID 0): Two DC S3700 200GB drives Configured in RAID 0

Intel DC S3700 800GB (RAID 0): Two DC S3700 800GB drives Configured in RAID 0

Intel 710 200GB (RAID 0): Two 710 200GB drives Configured in RAID 0
 

ATTO Disk Benchmark


The ATTO disk benchmark tests the drive’s read and write speeds using progressively larger file sizes. For these tests, ATTO was set to run from its smallest to largest value (0.5KB to 8192KB) and the total length was set to 256MB. The test program then spits out an extrapolated performance figure in megabytes per second.

atto_w.jpg

atto_r.jpg


Much like the DC S3700 series we previously tested, the read performance curves in both single drive and simple two-drive RAID 0 configurations are simply amazing. For all intents and purposes a 480GB DC S3500 will provide read performance nearly identical to that of a similarly sized DC S3700.

Unfortunately, the write performance, in and out of RAID configurations, is not nearly as impressive. This is a known weakness of 20nm MLC NAND, and it is not overly surprising to see it plague the DC S3500 just as it does consumer grade drives.

It is worth keeping in mind, however, that this newer and more reasonably priced model is not meant to compete against high performance solid state drives. Rather, it is meant to compete against hard drives, and compared against those it is simply in a different league. If typical HDDs were included here, they would show up as a flat smudge along the bottom axis.
 

Crystal DiskMark


Crystal DiskMark is designed to quickly test the performance of your drives. Currently, the program allows you to measure sequential and random read/write speeds, and allows you to set the number of test iterations to run. We left the number of tests at 5 and the size at 100MB.

cdm_w.jpg

cdm_r.jpg


AS-SSD


AS-SSD is designed to quickly test the performance of your drives. Currently, the program allows you to measure sequential and small 4K read/write speeds, as well as 4K speed at a queue depth of 6. While its primary goal is to accurately test solid state drives, it does equally well on all storage media; each test simply takes longer to run since it reads or writes 1GB of data.

asd_w.jpg

asd_r.jpg

Once again the overall read performance of this new model is impressive, but the write performance isn't quite up to the level of other, more expensive enterprise-class SSDs. Depending on the intended environment, capacity and usage requirements, opting for a larger DC S3500 over a smaller DC S3700 may actually make perfect sense. These drives cost half of what the DC S3700 series goes for, yet net nearly identical read performance. This affords enterprise consumers the luxury of either using half the number of drives to hit a target storage capacity or using twice as many DC S3500s to further boost overall performance.
 

IOMETER: Our Standard Test


IOMeter is heavily weighted towards the server end of things, and since we here at HWC are more end user centric, we set up and judge the results of IOMeter a little differently than most. To test each drive we ran 5 test runs per device (1, 4, 16, 64 and 128 queue depth), each test having 8 parts, each part lasting 10 minutes with an additional 20 second ramp up. The 8 subparts were set to run 100% random, 80% read / 20% write, testing 512B, 1K, 2K, 4K, 8K, 16K, 32K and 64K chunks of data. When each test is finished, IOMeter produces a report in which each of the 8 subtests is given a score in I/Os per second. We then take these 8 numbers, add them together and divide by 8. This gives us an average score for that particular queue depth that is heavily weighted for single user and workstation environments.
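To make the scoring method concrete, here is the averaging step in miniature:

```python
# The IOPS values here are placeholders, not measured results.
subtest_iops = {"512B": 9200, "1K": 9100, "2K": 8900, "4K": 8700,
                "8K": 7900, "16K": 6800, "32K": 5200, "64K": 3600}
score = sum(subtest_iops.values()) / len(subtest_iops)
print(f"Composite score at this queue depth: {score:.0f} IOPS")
```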

iom.jpg

Because of this test’s 80/20 read/write split, the 800GB DC S3700 easily outperforms the DC S3500 series in single drive and RAID scenarios. However, at lower queue depths the extra onboard cache does afford the 480GB version performance levels that approach the DC S3700 200GB. It is only at deeper queue depths that the difference in NAND becomes apparent.
 

IOMETER: File Server Test


To test each drive we ran 6 test runs per device (1, 4, 16, 64, 128 and 256 queue depth), each test having 6 parts, each part lasting 10 minutes with an additional 20 second ramp up. The 6 subparts were set to run 100% random, 75% read / 25% write, testing 512B, 4K, 8K, 16K, 32K and 64K chunks of data. When each test is finished, IOMeter produces a report in which each of the 6 subtests is given a score in I/Os per second. We then take these 6 numbers, add them together and divide by 6. This gives us an average score for that particular queue depth that is heavily weighted for file server usage.


iom_f.jpg

Thanks to the 75/25 read/write split of this typical file server test, you would have a hard time telling a 480GB DC S3500 from a DC S3700 800GB in single drive configurations. Even in simple RAID configurations, the similar read performance results in rather small differences at lower queue depths, but once the test expands into deeper queue depths the onboard cache is unable to hide the DC S3500's performance dropoff.

Whether or not this difference is significant enough to warrant the cost discrepancy between Intel's enterprise class drives is open to debate. In all likelihood it would have to be decided on a case by case basis and would greatly depend on the estimated average total writes per day. Simply put, if your budget only allows for 10 DC S3700 400GB drives or 32 15K RPM HDDs, sixteen DC S3500 480GB SSDs would not only provide a lot more performance, but also a nice boost in overall capacity as well as less wear and tear on individual NAND cells.
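The capacity side of that trade-off is easy to quantify; a quick sketch using the drive counts from the example above:

```python
# Drive counts from the hypothetical budget above; capacities in decimal GB.
s3700_capacity_gb = 10 * 400   # ten DC S3700 400GB drives     -> 4,000 GB
s3500_capacity_gb = 16 * 480   # sixteen DC S3500 480GB drives -> 7,680 GB
print(f"The DC S3500 array offers {s3500_capacity_gb / s3700_capacity_gb:.1f}x the capacity")  # ~1.9x
```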


IOMETER: Web Server Test


The goal of our IOMeter Web Server configuration is to help reproduce a typical heavily accessed web server. The majority of the typical web server’s workload consists of dealing with random small file size read requests.

To replicate such an environment we ran 6 test runs per device (1, 4, 16, 64, 128 and 256 queue depth), each test having 8 parts, each part lasting 10 minutes with an additional 20 second ramp up. The 8 subparts were set to run 100% random, 95% read / 5% write, testing 512B, 1K, 2K, 4K, 8K, 16K, 32K and 64K chunks of data. When each test is finished, IOMeter produces a report in which each of the 8 subtests is given a score in I/Os per second. We then take these 8 numbers, add them together and divide by 8. This gives us an average score for that particular queue depth that is heavily weighted for web server environments.


iom_w.jpg

Since this test is nearly all reads, the DC S3700 and DC S3500 draw even. This new SSD series really does make an excellent web server storage medium, and there is very little reason to opt for either inexpensive 15K hard drives or the expensive DC S3700.



IOMETER: Email Server Test


The goal of our IOMeter Email Server configuration is to help reproduce a typical corporate email server. Unlike most servers, the typical email server’s workload is split evenly between random small file size read and write requests.

To replicate such an environment we ran 5 test runs per drive (1, 4, 16, 64 and 128 queue depth), each test having 3 parts, each part lasting 10 minutes with an additional 20 second ramp up. The 3 subparts were set to run 100% random, 50% read / 50% write, testing 2K, 4K and 8K chunks of data. When each test is finished, IOMeter produces a report in which each of the subtests is given a score in I/Os per second. We then take these numbers, add them together and divide by 3. This gives us an average score for that particular queue depth that is heavily weighted for email server environments.


iom_e.jpg

In the email server test, the DC S3500's handicapped write throughput results in performance levels noticeably worse than the DC S3700's. This can be mitigated to a great extent by putting more DC S3500 SSDs into the array, and while more DC S3500s can indeed fit into a given budget, the roughly 30X reduction in NAND lifespan does not make this a great trade-off. Only in scenarios where the additional capacity would be worthwhile would we opt for the less expensive DC S3500 series.
 

IOMETER: Our Standard Steady State Test


iom_s.jpg


IOMETER: File Server Steady State Test


iom_f_s.jpg


IOMETER: Web Server Steady State Test


iom_w_s.jpg


IOMETER: Email Server Steady State Test


iom_e_s.jpg

The steady-state results simply back up everything we already know: the DC S3500 is a good enterprise SSD for certain environments. Usually price is only a secondary concern for enterprise consumers, but we come back to it again and again, as this level of performance per dollar is simply amazing.

For read-centric scenarios these new DC S3500 480GB drives are excellent and can nearly equal the much more expensive DC S3700 series' performance. Unfortunately, as tasks become more and more write heavy, they do start to fall behind.
 

ADOBE CS5 Steady State Load Time


Photoshop is a notoriously slow loading program under the best of circumstances, and while the latest version is actually pretty decent, once you add in a bunch of extra brushes and the like you get a great torture test which can bring even the best of the best to their knees. To make things even more difficult, we first placed the devices into a steady state so as to recreate the absolute worst case scenario possible.

adobe.jpg



Firefox Portable Offline Steady State Performance


Firefox is notorious for being slow to load tabs in offline mode once the number of pages to be opened grows beyond a dozen or so. We can think of few worse scenarios than having 100 tabs set to reload in offline mode upon Firefox startup, but this is exactly what we have done here.

By opening 100 pages in Firefox Portable, setting Firefox to reload the last session upon next start and then setting it to offline mode, we are able to easily recreate a worst case scenario. Since we are using Firefox Portable, all files are conveniently located in one place, making it simple to repeat the test as necessary. To ensure repeatability, before touching the Firefox Portable files we backed them up into a .rar file and only extracted a copy of it to the test device.

As with the Adobe test, we have first placed the devices into a steady state.


ff.jpg

Because both of these tests are heavily weighted towards read performance, these results are right where we expect them to be. The 480GB DC S3500 is an excellent performer even though this isn't the environment it was intended for.
 