Hard Drive Stats for Q3 2016: Less is More

November 15th, 2016

Hard Drive Stats - 8TB Drives

In our last report for Q2 2016, we counted 68,813 spinning hard drives in operation. For Q3 2016 we have 67,642 drives, or 1,171 fewer hard drives. Stop, put down that Twitter account; Backblaze is not shrinking. In fact, we’re growing very nicely and are approaching 300 petabytes of data under our management. We have fewer drives because over the last quarter we swapped out more than 3,500 2 terabyte (TB) HGST and WDC hard drives for 2,400 8 TB Seagate drives. So we have fewer drives, but more data. Lots more data! We’ll get into the specifics a little later on, but first, let’s take a look at our Q3 2016 drive stats.

Backblaze hard drive reliability stats for Q3 2016

Below is the hard drive failure data for Q3 2016. This chart is just for the period of Q3 2016. The hard drive models listed below are data drives, not boot drives. We only list drive models that have 45 or more of that model deployed.

Q3 2016 hard drive failure rate chart

A couple of comments on the chart:

  • The models that have an annualized failure rate of 0.00% had zero hard drive failures in Q3 2016.
  • The “annualized failure rate” is computed as follows: ((Failures)/(Drive Days/365)) * 100. Therefore, consider the number of “Failures” and “Drive Days” before reaching any conclusions about the failure rate.
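The formula above is simple enough to sketch in a few lines of Python. The failure and drive-day counts below are illustrative only, not figures from the chart:

```python
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """Annualized failure rate as a percentage:
    ((Failures) / (Drive Days / 365)) * 100
    """
    drive_years = drive_days / 365
    return (failures / drive_years) * 100

# Example: 5 failures over 120,000 drive days
# 120,000 / 365 ~= 328.8 drive years -> ~1.52% annualized
print(round(annualized_failure_rate(5, 120_000), 2))  # 1.52
```

Note how the same failure count produces a very different rate depending on drive days, which is why the chart’s “Failures” and “Drive Days” columns matter so much.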

Less is more: The move to 8 TB drives

In our Q2 2016 drive stats post we covered the beginning of our process to migrate the data on our aging 2 TB hard drives to new 8 TB hard drives. At the end of Q2, the migration was still in process. All of the 2 TB drives were still in operation, along with 2,720 of the new 8 TB drives – the migration target. In early Q3, that stage of the migration project was completed and the “empty” 2 TB hard drives were removed from service.

We then kicked off a second wave of migrations. This wave was smaller but continued the process of moving data from the remaining 2 TB hard drives to the 8 TB based systems. As each migration finished we decommissioned the 2 TB drives and they stopped reporting daily drive stats. By the end of Q3, we had only 180 of the 2 TB drives left – four Storage Pods with 45 drives each.

The following table summarizes the shift over the 2nd and 3rd quarters.

Migration from 2TB hard drives to 8TB hard drives

As you can see, during Q3 we “lost” over 1,100 hard drives from Q2, but we gained about 12 petabytes of storage. Over the entire migration project (Q2 and Q3) we added about 900 total drives while gaining 32 petabytes of storage.
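As a sanity check, the Q3 numbers work out arithmetically. This quick sketch uses the approximate drive counts quoted earlier in the post (roughly 3,500 2 TB drives swapped for 2,400 8 TB drives):

```python
# Q3 swap, using the approximate figures from this post
removed_drives, removed_size_tb = 3_500, 2   # 2 TB drives retired
added_drives, added_size_tb = 2_400, 8       # 8 TB drives deployed

net_drives = added_drives - removed_drives   # -1,100 drives
net_capacity_tb = (added_drives * added_size_tb
                   - removed_drives * removed_size_tb)  # +12,200 TB

print(net_drives)               # about 1,100 fewer drives
print(net_capacity_tb / 1_000)  # about 12 petabytes gained
```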

Drive migration and hard drive failure rates

A four-fold storage density increase takes care of much of the math in justifying the migration project. Even after factoring in drive cost, migration costs, drive recycling, electricity, and all the other incidentals, the migration still made economic sense. The only wildcard was the failure rate of the hard drives in question. Why? The 2 TB HGST drives had performed very well. Drive failure is to be expected, but our costs go up if the new drives fail at two or three times the rate of the 2 TB drives. With that in mind, let’s take a look at the failure rates of the drives involved in the migration project.

Comparing Drive Failure Rates

The Seagate 8 TB drives are doing very well. Their annualized failure rate compares favorably to the HGST 2 TB hard drives. With the average age of the HGST drives being 66 months, their failure rate was likely to rise, simply because of normal wear and tear. The average age of the Seagate 8 TB hard drives is just 3 months, but their 1.6% failure rate during the first few months bodes well for a continued low failure rate going forward.

What about the 60-drive Storage Pods?

In Q3 we deployed 2,400 8 TB drives into two Backblaze Vaults. We used 60-drive Storage Pods in each vault. In other words, each Backblaze Vault had 1,200 hard drives and each hard drive was 8 TB. That’s 9.6 petabytes of storage in one Backblaze Vault.
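The per-vault capacity follows directly from the numbers above, as this small Python sketch shows (20 Storage Pods per vault, 60 drives per pod, 8 TB per drive):

```python
pods_per_vault = 20   # Storage Pods grouped into one Backblaze Vault
drives_per_pod = 60   # the newer 60-drive Storage Pod design
drive_size_tb = 8     # 8 TB Seagate drives

drives_per_vault = pods_per_vault * drives_per_pod        # 1,200 drives
vault_capacity_pb = drives_per_vault * drive_size_tb / 1_000  # 9.6 PB

print(drives_per_vault, vault_capacity_pb)
```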

Each Backblaze Vault has 9.6 petabytes of storage

As a reminder, each Backblaze Vault consists of 20 Storage Pods logically grouped together to act as one storage system. Storage Pods are spread out across a data center in different cabinets, on different circuits and on different network switches to maximize data durability. Backblaze Vaults are the backbone that powers both our cloud backup and B2 cloud storage services.

60-drive Storage Pod

Our Q3 switch to 60-drive Storage Pods signals the end of the line for our 45-drive systems. They’ve had a good long run. We put together the history of our Storage Pods for anyone who is interested. Over the next couple of years, all of our 45-drive Storage Pods will be replaced by 60-drive systems. Most likely this will be done as we migrate from 3 TB and 4 TB drives to larger hard drives. I hear 60 TB HAMR drives are just around the corner, although we might have to wait for the price to drop a bit.

Cumulative hard drive failure rates by model

Regardless of the drive size or the Storage Pod used, we’ll continue to track and publish our data on our hard drive test data web page. If you’re not into wading through several million rows of hard drive data, the table below shows the annualized drive failure rate over the lifetime of each of the data drive models we currently have under management. This is based on data from April 2013 through September 2016 for all data drive models with active drives as of September 30, 2016.

Hard Drive Stats

Hard drive stats webinar: Join Us!

Want more details on our Q3 drive stats? View the webinar below in the Backblaze BrightTALK channel. You will need to subscribe to the Backblaze channel to view the webinar, but you’ll only have to do that once.

Recap

Less is more! The migrations are finished for the moment, although we are evaluating the migration from 3 TB drives to 10 TB drives. First though, we’d like to give our data center team a chance to catch their breath. The early returns on the Seagate 8 TB drives look good. The 1.6% failure rate at the 3-month point is the best we’ve seen from any Seagate drive we’ve used at the same average age. We’ll continue to track this going forward.

Next time we’ll cover our Q4 drive stats, along with a recap of the lifetime performance of every data drive we’ve used past and present. That should be fun.

Looking for the tables, charts, and images from this post? You can download them from Backblaze B2 as a ZIP file (2.3 MB).

Andy Klein

Andy has 20+ years of experience in technology marketing. He has shared his expertise in computer security and data backup at the Federal Trade Commission, Rootstech, RSA and over 100 other events. His current passion is to get everyone to back up their data before it's too late.

  • Seriously, folks, why don’t you post *normal* tables? Can’t copy HDD model names from damn images into Amazon or Newegg.

    • Damian:

      “Looking for the tables, charts, and images from this post? You can download them from Backblaze B2 as a ZIP file (2.3 MB).” The link is -> https://f001.backblazeb2.com/file/Backblaze_Blog/hard-drive-stats/Q3_2016_Drive_Stats_Materials.zip.

      • All I’m saying is build tables in HTML instead of images for everyone’s convenience.

        • inhumantsar

          Thank you for taking a bunch of time (you didn’t have to take) to generate a report (which your competitors could use) that I find so useful that I base my purchasing decisions on it.

          By the way, I’m too lazy to type out four hundred characters worth of model numbers so could you code it into HTML instead of just screenshotting your spreadsheets?

          FTFY. Asshat.

        • inhumantsar

          oh man, apparently i opened your profile in a tab and forgot about it. just rediscovered that now and saw that you posted the exact same entitled comment on another one of these reports.

          sorry for calling you an asshat, but really.

    • I’m sure they’d get more inbound organic search engine traffic if people Googling those part numbers stumbled on the site. This is a WordPress blog after all.

  • Earl Garber

    It would be interesting if failures were broken down into categories, i.e. mechanical, electronic, and media.

• How would you get this data from most failed hard drives? And each manufacturer's firmware certainly handles its faults differently.

      Would be nice to get the data if there was a practical way to get the failure mode.

  • Matt Viverette

    Are you selling the 45-drive pods at a discount (without the drives, of course)? I know you’ve shared some of the designs, but they’re probably still useful machines.

    • Andy Klein

We do have some empty chassis (V1 – V3?). Is that interesting? Or do you want the rest of the electronics?

      • ZeDestructor

I don’t know about Matt, but I’m personally perfectly fine with just the empty chassis (electronics, especially the PSUs, would be nice though).

Direct wire is a must though, ’cause I expect to use ’em with SAS HBAs and expanders. On that note, have you considered direct wiring to a custom port multiplier chassis instead of bolting the multipliers directly under the drives?

      • Matt Viverette

        Yes, that’s interesting to me. I’d want the backplanes, though. Seems like they are an integral part of the chassis.

  • Bill Waslo

    Now I see this. I just bought and installed 2 of the WD Red WD20EFX drives in a RAID. 8.2% annualized failure rate!?! Should have bought the cheaper HGST…..

    • You’re probably going to be fine! Our environment is different than most…homes. Just make sure you have a good backup strategy ;)

  • gcstang

    Not sure I’m reading the tables correctly but in the 8TB range is it saying the Seagate is more reliable than the HGST ? I’m looking at purchasing some drives for a personal NAS and don’t want to have to turn around and purchase more drives, I’d like them to last a while (reliable, etc…)

    Thank you for your posts

• At the moment, yes – but we have fewer of the HGST drives in production. In all likelihood you’re fine either way since our environment is much different than what you’d likely have at home.

  • Vince

Interestingly, we found the WD20 series so unreliable that almost all of them failed; we removed the last few before they did too. Whereas the WD10 series are ridiculously reliable and so few have failed we can’t recall it happening.

    So far experience with WD30 good, not enough of the WD40 or WD60 to know long term but the WD60 series has a high initial dead on arrival count, but if they survive a few weeks they seem OK.

    We are cautiously optimistic about trying the Seagate 8TB and 10TB series and your initial finds seem favourable so we might just give it a whirl – thanks for continuing to publish your data.

  • Brandon Kruse

    We have 3,024 8TB Seagate drives (non-SMR/non-archive) and have had a slightly higher overall failure rate than the 6TB. We’ve had them for about 9 months now. We’ve had a significantly higher INITIAL failure rate (failure within less than a week), but a lower annualized failure rate for the last 9-months of data that we have. This could have just been a batch problem with the drives we received, as they were very close to first off the line.

    I suspect as we continue to operate, that the annualized failure rate will win out past the 6TB drives, but the initial failures put it a little behind.

  • morganf

    In the last table, the 2TB WD drive has the upper AFR interval listed as “0.136” instead of “13.6%”

    • Andy Klein

      Formatting error – I’ll fix it shortly. Thanks.