Hard Drive Reliability Review for 2015

Backblaze Hard Drive Stats

For the most recent hard drive reliability statistics, as well as the raw hard drive test data, visit Hard Drive Data and Stats.

By the end of 2015, the Backblaze data center had 56,224 spinning hard drives containing customer data. These hard drives reside in 1,249 Backblaze Storage Pods. By comparison, 2015 began with 39,690 drives running in 882 Storage Pods. We added 65PB of storage in 2015, give or take a petabyte or two. Not only was 2015 a year of growth, it was also a year of drive upgrades and replacements. Let’s start with the current state of the hard drives in our data center as of the end of 2015 and then dig into the rest later on.

Hard Drive Statistics for 2015

The table below contains the statistics for the 18 different models of hard drives in our data center as of 31 December 2015. These are the hard drives used in our Storage Pods to store customer data. The failure rates and confidence intervals are cumulative from Q2 2013 through Q4 2015. The drive count is the number of drives reporting as operational on 31 December 2015.
[Table: Hard Drive Reliability Statistics for 2015]

During 2015, five drive models were retired and removed from service. These are listed below. The cumulative failure rate is based on data from Q2 2013 through the date when the last drive was removed from service.

[Table: Hard Drives Removed From Service in 2015]

Note that drives retired and removed from service are not marked as “failed,” they just stop accumulating drive-hours when they are removed.

Computing Drive Failure Rates

This is a good point to review how we compute our drive failure rates. There are two ways to do this; either works.

For the first way, there are four things required:

  1. A defined group to observe, in our case a group of drives, usually by model.
  2. A period of observation, typically a year.
  3. The number of drive failures in the defined group over the period of observation.
  4. The number of hours the drives in the group are in operation over the period of observation.

Let’s use the example of 100 drives which over the course of 2015 accumulated a total of 750,000 power-on hours based on their SMART 9 RAW values. During 2015, our period of observation, five drives failed. We use the following formula to compute the failure rate:

(100*drive-failures)/(drive-hours/24/365)

(100*5)/(750,000/24/365)

This gives us a 5.84% annual failure rate for 2015.
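
If you’d rather let the computer do the math, here’s the same calculation as a small Python function (a quick sketch; the function name is ours):

```python
def annual_failure_rate(failures, drive_hours):
    """Annualized failure rate in percent: failures per 100 drive-years."""
    drive_years = drive_hours / 24 / 365  # convert power-on hours to drive-years
    return 100 * failures / drive_years

# The worked example from above: 5 failures over 750,000 power-on hours.
print(f"{annual_failure_rate(5, 750_000):.2f}%")  # -> 5.84%
```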

For the second method, the only change is how we count the time a group of drives is in service. For a given drive we simply count the number of days that drive shows up in our daily log files. Each day a drive is listed it counts as one drive-day for that hard drive. When a drive fails, it is removed from the list and its final count is used to compute the total number of drive-days for all the drives being observed.
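
Here’s a minimal sketch of that bookkeeping in Python, assuming the daily logs can be read as (date, serial number) pairs; the names and data shapes are ours for illustration:

```python
from collections import Counter

def annual_failure_rate_from_logs(daily_logs, failed_serials):
    """Annualized failure rate (%) using drive-days counted from daily logs.

    daily_logs: iterable of (date, serial_number) pairs, one per drive
        per day that drive appears in the logs.
    failed_serials: serial numbers of drives that failed during the period.
    """
    drive_days = Counter(serial for _, serial in daily_logs)  # one drive-day per appearance
    total_drive_days = sum(drive_days.values())
    return 100 * len(failed_serials) / (total_drive_days / 365)
```

A failed drive simply stops appearing in the logs, so its drive-day count freezes on its last day of service, which is exactly the behavior described above.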

Drives by Manufacturer

The drives in the data center come from four manufacturers. The following chart shows the cumulative hard drive failure rates by manufacturer for all drives:
[Chart: Hard Drive Failure Rates by Manufacturer]
Let’s take a look at the makeup of the drives in our data center.

[Charts: Hard Drive Count by Manufacturer 2015 (left); Drive-Days by Manufacturer 2015 (right)]

The chart on the left is the total number of drives and the chart on the right is the total number of drive-days for all the data drives, by manufacturer. Notice that there are more Seagate drives, but the HGST drives have more drive-days. The HGST drives are older and have therefore accumulated more drive-days, but most of our recent drive purchases have been Seagate drives. Case in point: nearly all of the 16,000+ drives purchased in 2015 were Seagate drives, and of those, over 85% were 4TB models.

Hard Drive Reliability by Drive Size

1TB Hard Drives

We removed the last of our 1TB drives during Q4 and ended the quarter with zero installed. We did this to increase storage density: by replacing the 1TB drives with 4TB drives (and sometimes 6TB drives), the same Storage Pod now holds four times as much data. The 1TB Western Digital drives performed well, with many exceeding six years in service and a handful reaching seven years before we replaced them. Their cumulative annual failure rate was 5.74% in our environment, a solid performance.

We actually didn’t retire these 1TB Western Digital drives; they just changed jobs. We now use many of them to “burn in” Storage Pods once they are done being assembled. The 1TB size means the process runs quickly but is still thorough. The burn-in process pounds the drives with reads and writes to exercise all the components of the system. In many ways this is much more taxing on the drives than life in an operational Storage Pod. Once the burn-in process is complete, the Western Digital 1TB drives are removed and we put 4TB or 6TB drives in the Pods for the cushy job of storing customer data. The workhorse 1TB Western Digital drives, meanwhile, are returned to the shelf, where they dutifully await the next burn-in session.

2TB Hard Drives

The Seagate 2TB drives were also removed from service in 2015. While their cumulative failure rate was somewhat high at 10.1%, the real reason was logistics: we had only 225 of these drives, and it was easier to upgrade their Storage Pods to 4TB drives than to buy and stock 2TB Seagate replacements.

On the other hand, we still have over 4,500 HGST 2TB drives in operation. Their average age is nearly five years (58.6 months) and their cumulative failure rate is a meager 1.55%. At some point we will want to upgrade the 100 Storage Pods they occupy to 4TB or 6TB drives, but for now the 2TB HGST drives are performing very well.

3TB Hard Drives

The last of the 3TB Seagate drives were removed from service in the data center during 2015. Below is a chart of all of our 3TB drives that were in our data center at any time from April 2013 through the end of Q4 2015.

[Chart: 3TB Hard Drive Review]

4TB Hard Drives

As of the end of 2015, 75% of the hard drives in use in our data center were 4TB in size. That represents 42,301 drives broken down as follows by manufacturer:

[Table: 4TB Hard Drive Stats by Manufacturer]

The cumulative failure rates of the 4TB drives to date are shown below:

[Chart: 4TB Hard Drive Reliability]

All of the 4TB drives have acceptable failure rates, but we’ve purchased primarily Seagate drives. Why? The HGST 4TB drives, while showing exceptionally low failure rates, are no longer available, having been replaced with higher-priced, higher-performing models. The ready availability and highly competitive price of the Seagate 4TB drives, along with their solid performance and respectable failure rates, have made them our drive of choice.

A relevant observation from our Operations team on the Seagate drives is that they generally signal their impending failure via their SMART stats. Since we monitor several SMART stats, we are often warned of trouble before a pending failure and can take appropriate action. Drive failures from the other manufacturers appear to be less predictable via SMART stats.
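
To illustrate the idea (this is not our actual monitoring code), here’s a hypothetical check in Python. The attribute list is an assumption for illustration: SMART 5, 187, 188, 197, and 198 are attributes commonly associated with impending drive failure.

```python
# SMART attributes commonly associated with impending failure; shown here
# for illustration, not as the exact set we monitor.
WATCHED_ATTRIBUTES = {
    5: "Reallocated Sectors Count",
    187: "Reported Uncorrectable Errors",
    188: "Command Timeout",
    197: "Current Pending Sector Count",
    198: "Offline Uncorrectable Sector Count",
}

def warning_flags(smart_raw):
    """Return the names of watched attributes with a nonzero raw value."""
    return [name for attr, name in WATCHED_ATTRIBUTES.items()
            if smart_raw.get(attr, 0) > 0]

# Example: a drive reporting pending sectors gets flagged for attention.
print(warning_flags({5: 0, 187: 0, 197: 8}))  # -> ['Current Pending Sector Count']
```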

6TB Hard Drives

We continued to add 6TB drives over the course of 2015, bringing the total to nearly 2,400 drives (1,882 Seagate, 485 Western Digital). Below are the Q4 and cumulative failure rates for each of these drives.

[Table: 6TB Hard Drive Stats]

The Seagate 6TB drives are performing very well, even better than the 4TB Seagate drives. So why do we continue to buy 4TB drives when quality 6TB drives are available? Three reasons:

  1. Based on current street prices, the cost per gigabyte of the 4TB drives ($0.028/GB) is lower than that of the 6TB drives ($0.044/GB), as the quick arithmetic after this list shows.
  2. The channels we purchase from are not flush with 6TB drives, often limiting sales to 50 or 100 units. There was a time during our drive-farming days when we would order 50 drives and be happy, but in 2015 we purchased over 16,000 new drives. It doesn’t make sense to spend the time and effort purchasing small lots of drives when we can purchase 5,000 4TB Seagate drives in one transaction.
  3. The 6TB drives like electricity. The average operating power of a Seagate 6TB drive is 9.0W, 60% more than the 5.6W used by the 4TB Seagate drives we buy. When you have a fixed amount of power per rack, this can be a problem. The easy answer would seem to be adding more power to the rack, but as anyone who designs data centers knows, it’s not that simple. Today, we mix 6TB-filled Storage Pods and 4TB-filled Storage Pods in the same rack to balance power consumption against storage space per square foot.
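
For the curious, here’s the back-of-the-envelope arithmetic behind reasons one and three, using the numbers above (street prices move around, so treat this as a sketch):

```python
# The figures quoted above; street prices as of this writing.
drives = {
    "Seagate 4TB": {"tb": 4, "usd_per_gb": 0.028, "watts": 5.6},
    "Seagate 6TB": {"tb": 6, "usd_per_gb": 0.044, "watts": 9.0},
}
for name, d in drives.items():
    price = d["usd_per_gb"] * d["tb"] * 1000  # approximate street price per drive
    print(f"{name}: ~${price:.0f} per drive, {d['watts'] / d['tb']:.2f} W per TB stored")
# -> Seagate 4TB: ~$112 per drive, 1.40 W per TB stored
# -> Seagate 6TB: ~$264 per drive, 1.50 W per TB stored
```

By these figures, the 6TB drives cost more per gigabyte and draw more power per terabyte stored, so the 4TB drives win on both counts.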

5TB and 8TB Hard Drives

We still have just 45 each of the 5TB Toshiba and 8TB HGST Helium drives. One 8TB HGST drive failed during Q4 2015. Over their lifetimes, the 5TB Toshiba drives have a 2.70% annual failure rate with one drive failure, and the 8TB HGST drives have a 4.90% annual failure rate with two drive failures. In both cases, there is not enough data to draw any conclusions about the failure rates of these drives in our environment.

Drive Stats Data

Each day, we record and store the drive statistics for every drive in our data center. This includes the operational status of the drive, along with the SMART statistics reported by each drive. We use the data collected to produce our drive stats reviews. We also take this raw data and make it freely available by publishing it on our website. We’ll be uploading the Q4 2015 data in the next few days. Once it’s up, you can download the data to reproduce our results, or dig in and see what other observations you can find. Let us know if you find anything interesting.
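
As a starting point, here’s a rough Python sketch of tallying drive-days and failures per model from one of the daily files. It assumes the files are CSVs with model and failure columns; the filename below is a placeholder:

```python
import csv
from collections import defaultdict

drive_days = defaultdict(int)
failures = defaultdict(int)
with open("2015-12-31.csv", newline="") as f:  # placeholder: one daily file
    for row in csv.DictReader(f):
        drive_days[row["model"]] += 1          # each row is one drive reporting for one day
        failures[row["model"]] += int(row["failure"])

for model in sorted(drive_days):
    print(model, drive_days[model], failures[model])
```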


About Andy Klein

Andy Klein is the Principal Cloud Storage Storyteller at Backblaze. He has over 25 years of experience in technology marketing and during that time, he has shared his expertise in cloud storage and computer security at events, symposiums, and panels at RSA, SNIA SDC, MIT, the Federal Trade Commission, and hundreds more. He currently writes and rants about drive stats, Storage Pods, cloud storage, and more.