We (and by we I mean Jeff) are looking into the possibility of using Consumer MLC SSD disks in our backup data center.
We want to keep costs down and usable space up – so the Intel X25-E's are pretty much out at about $700 each for 64GB of capacity.
What we are thinking of doing is buying some of the lower-end SSDs that offer more capacity at a lower price point. My boss doesn't think spending about $5k on disks for servers running out of the backup data center is worth the investment.
These drives would be used in a 6 drive RAID array on a Lenovo RD120. The RAID controller is an Adaptec 8k (rebranded Lenovo).
Just how dangerous of an approach is this and what can be done to mitigate these dangers?
A few thoughts:
- SSDs have 'overcommit' (spare) memory. This is the memory used in place of cells 'damaged' by writing. Low-end SSDs may have only 7% of overcommit space; mid-range around 28%; and enterprise disks as much as 400%. Consider this factor.
- How much will you be writing to them per day? Even mid-range SSDs such as those based on SandForce's 1200 chips are rarely rated for more than around 35GB of writes per day before seriously cutting into the overcommitted memory.
- Usually, day 1 of a new SSD is full of writing, whether that's OS or data. If you have significantly more than 35GB of writes on day one, consider copying the data across in batches to give the SSD some 'tidy-up time' between batches.
- Without TRIM support, random write performance can drop by up to 75% within weeks if there's a lot of writing during that period – if you can, use an OS that supports TRIM.
- The internal garbage collection processes that modern SSDs perform are very deliberately run during quiet periods, and they stop on activity. This isn't a problem for a desktop PC, where the disk could be quiet for 60% of its usual 8-hour duty cycle, but you run a 24-hour service… when will this process get a chance to run?
- It’s usually buried deep in specs but like cheapo ‘regular’ disks, inexpensive SSDs are also only expected to have a duty cycle of around 30%. You’ll be using them for almost 100% of the time – this will affect your MTBF rate.
- While SSDs don’t suffer the same mechanical problems regular disks do, they do have single and multiple-bit errors – so strongly consider RAIDing them even though the instinct is not to. Obviously it’ll impact on all that lovely random write speed you just bought but consider it anyway.
- It’s still SATA not SAS, so your queue management won’t be as good in a server environment, but then again the extra performance boost will be quite dramatic.
Good luck – just don’t ‘fry’ them with writes
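To put the write-budget point in concrete terms, here's a rough back-of-the-envelope sketch. All the figures are assumptions for illustration – the P/E cycle count, write amplification factor, and capacity should come from your vendor's datasheet and your own measurements, not from this snippet:

```python
# Rough endurance estimate for a consumer MLC SSD.
# pe_cycles and write_amplification are illustrative assumptions --
# substitute the real numbers from the vendor's datasheet.
def ssd_lifetime_years(capacity_gb, write_gb_per_day,
                       pe_cycles=3000, write_amplification=2.0):
    """Estimate years until the NAND's program/erase budget is exhausted."""
    total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
    return total_host_writes_gb / write_gb_per_day / 365

# e.g. a 160GB consumer drive at 35GB of host writes per day:
print(round(ssd_lifetime_years(160, 35), 1))  # -> 18.8
```

Note how sensitive the answer is to write amplification: with no TRIM and a full drive, amplification climbs and the lifetime shrinks proportionally.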
I did find this link, which has an interesting and thorough analysis of MLC vs SLC SSDs in servers.
In my view using an MLC flash SSD array for an enterprise application without at least using the (claimed) wear-out mitigating effects of a technology like Easyco’s MFT is like jumping out of a plane without a parachute.
Note that some MLC SSD vendors claim that their drives are “enterprisey” enough to survive the writes:
SandForce aims to be the first company with a controller supporting multi-level cell flash chips for solid-state drives used in servers. By using MLC chips, the SF-1500 paves the way to lower cost and higher density drives servers makers want.
To date flash drives for servers have used single-level cell flash chips. That’s because the endurance and reliability for MLC chips have generally not been up to the requirements of servers.
There is further analysis of these claims at AnandTech.
Additionally, now Intel has gone on the record saying that SLC might be overkill in servers 90% of the time:
“We believed SLC [single-level cell] was required, but what we found through studies with Microsoft and even Seagate is these high-compute-intensive applications really don’t write as much as they thought,” Winslow said. “Ninety percent of data center applications can utilize this MLC [multilevel cell] drive.”
.. over the past year or so, vendors have come to recognize that by using special software in the drive controllers, they’re able to boost the reliability and resiliency of their consumer-class MLC SSDs to the point where enterprises have embraced them for high-performance data center servers and storage arrays. SSD vendors have begun using the term eMLC (enterprise MLC) NAND flash to describe those SSDs.
“From a volume perspective, we do see there are really high-write-intensive, high-performance computing environments that may still need SLC, but that’s in the top 10% of even the enterprise data center requirements,” Winslow said.
Intel is feeding that upper 10% of the enterprise data center market through its joint venture with Hitachi Global Storage Technologies. Hitachi is producing the SSD400S line of Serial Attached SCSI SSDs, which has 6Gbit/sec. throughput — twice that of its MLC-based SATA SSDs.
Intel, even for their server oriented SSD drives, has migrated away from SLC to MLC with very high “overprovisioning” space with the new Intel SSD 710 series. These drives allocate up to 20% of overall storage for redundancy internally:
Performance is not top priority for the SSD 710. Instead, Intel is aiming to provide SLC-level endurance at a reasonable price by using cheaper eMLC HET NAND. The SSD 710 also supports user-configurable overprovisioning (20%), which increases drive endurance significantly. The SSD 710's warranty is 3 years or until a wear indicator reaches a certain level, whichever comes first. This is the first time we've seen SSD warranty limited in this manner.
Always base these sorts of decisions on facts rather than supposition. In this case, collecting facts is easy: record longish-term read/write IOPS profiles of your production systems, then figure out what you can live with in a disaster recovery scenario. Use something like the 99th percentile as your measurement. Do not use averages when measuring IOPS capacity – the peaks are all that matter! Then buy the capacity and IOPS you need for your DR site. SSDs may be the best way to do that, or maybe not.
So, for example, if your production applications require 7500 IOPS at the 99th percentile, you might decide you can live with 5000 IOPS in a disaster. But that’s at least 25 15K disks required right there at your DR site, so SSD might be a better choice if your capacity needs are small (sounds like they are). But if you only measure that you do 400 IOPS in production, just buy 6 SATA drives, save yourself some coin, and use the extra space for storing more backup snapshots at the DR site. You can also separate reads and writes in your data collection to figure out just how long non-enterprise SSDs will last for your workload based on their specifications.
Also remember that DR systems might have smaller memory than production, which means more IOPS are needed (more swapping and less filesystem cache).
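To illustrate why the average is the wrong number, here's a minimal sketch of a 99th-percentile calculation. The `samples` list is hypothetical per-minute IOPS readings – a mostly quiet workload with a handful of spikes, which is exactly the shape that makes averages misleading:

```python
# Sketch: size DR storage from a 99th-percentile IOPS profile, not the mean.
def percentile(samples, pct):
    """Nearest-rank percentile of a list of numeric samples."""
    ranked = sorted(samples)
    k = max(0, int(round(pct / 100.0 * len(ranked))) - 1)
    return ranked[k]

# Hypothetical per-minute IOPS readings: mostly idle, rare heavy spikes.
samples = [400] * 95 + [4800, 5200, 6900, 7100, 7500]

print(sum(samples) / len(samples))   # 695.0 -- misleadingly low
print(percentile(samples, 99))       # 7100  -- what you must size for
```

Sizing from the 695 average would leave the DR site an order of magnitude short during the peaks that actually matter.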
Even if the MLC SSDs only lasted for one year, in a year's time the replacements will be a lot cheaper. So can you cope with having to replace the MLC SSDs when they wear out?
If we set the write-quantity problem aside (or prove that consumer-level SSDs can handle it), I think SSDs are a good thing to add to enterprise-level environments. You will probably be using the SSDs in a RAID array – RAID 5 or RAID 6. The problem with these is that after a single drive failure, the array becomes increasingly vulnerable to a second failure, and the time to rebuild it depends heavily on the volume of the array. A several-TB array can take days to rebuild while being constantly accessed. In the case of SSDs, the RAID arrays will a) inevitably be smaller and b) rebuild drastically faster.
A whitepaper on the differences between SLC and MLC from SuperTalent puts the endurance of MLC at a tenth of the endurance of an SLC SSD, but the chances are the MLC SSDs will outlive the hardware you are putting them into anyway. I'm not sure how reliable those statistics/facts from SuperTalent are, though.
Assuming you get a similar level of support from the supplier of the MLC SSDs, the lower price point makes it worth a shot.
You should just calculate the amount of daily writes you have with your current set-up and compare that with what the manufacturer guarantees their SSD drives can sustain. Intel seems to be the most up-front about this – for example, take a look at their mainstream SSD drive datasheets: http://www.intel.com/design/flash/nand/mainstream/technicaldocuments.htm
Section 3.5 (3.5.4, specifically) of the specs document says that you’re guaranteed to have your drive last at least 5 years with 20GB of writes per day. I assume that’s being calculated when using the entire drive capacity and not provisioning any free space for writes yourself.
Also interesting is the datasheet regarding using mainstream SSDs in an enterprise environment.
I deployed a couple of 32GB SLC drives a couple of years ago as a buffer for some hideously poorly designed app we were using.
The application was 90% small writes (< 4k) and was running consistently (24/7) at 14k w/s once on the SSD drives. They were configured RAID 1, everything was rosy, latency was low!
However, roughly one month in, the first drive packed up; literally within 3 hours, the second drive had died as well. RAID 1 was not such a good plan after all.
I would agree with the other posters on some sort of RAID 6 – if nothing else, it spreads those writes out across more drives.
Now bear in mind this was a couple of years ago and these things are much more reliable now and you may not have a similar I/O profile.
The app has been re-engineered; however, as a stop-gap which may or may not help you, we created a large RAM disk, wrote some scripts to rebuild/back up the RAM disk, and took the hit of an hour or so of loss on data/recovery time.
Again, the life cycle of your data may be different.