What is the Best Hard Drive?

January 21st, 2015

It was one year ago that I first blogged about the failure rates of specific models of hard drives, so now is a good time for an update.

At Backblaze, as of December 31, 2014, we had 41,213 disk drives spinning in our data center, storing all of the data for our unlimited backup service. That is up from 27,134 at the end of 2013. This year, most of the new drives are 4 TB drives, and a few are the new 6 TB drives.

Hard Drive Failure Rates for 2014

Let’s get right to the heart of the post. The table below shows the annual failure rate through the year 2014. Only models where we have 45 or more drives are shown. I chose 45 because that’s the number of drives in a Backblaze Storage Pod and it’s usually enough drives to start getting a meaningful failure rate if they’ve been running for a while.

Backblaze Hard Drive Failure Rates Through December 31, 2014

Name/Model                                   | Size   | Number of Drives | Average Age (years) | Annual Failure Rate | 95% Confidence Interval
HGST Deskstar 7K2000 (HDS722020ALA330)       | 2.0 TB |  4,641 | 3.9 |  1.1% |  0.8% – 1.4%
HGST Deskstar 5K3000 (HDS5C3030ALA630)       | 3.0 TB |  4,595 | 2.6 |  0.6% |  0.4% – 0.9%
HGST Deskstar 7K3000 (HDS723030ALA640)       | 3.0 TB |  1,016 | 3.1 |  2.3% |  1.4% – 3.4%
HGST Deskstar 5K4000 (HDS5C4040ALE630)       | 4.0 TB |  2,598 | 1.8 |  0.9% |  0.6% – 1.4%
HGST Megascale 4000 (HGST HMS5C4040ALE640)   | 4.0 TB |  6,949 | 0.4 |  1.4% |  1.0% – 2.0%
HGST Megascale 4000.B (HGST HMS5C4040BLE640) | 4.0 TB |  3,103 | 0.7 |  0.5% |  0.2% – 1.0%
Seagate Barracuda 7200.11 (ST31500341AS)     | 1.5 TB |    306 | 4.7 | 23.5% | 18.9% – 28.9%
Seagate Barracuda LP (ST31500541AS)          | 1.5 TB |  1,505 | 4.9 |  9.5% |  8.1% – 11.1%
Seagate Barracuda 7200.14 (ST3000DM001)      | 3.0 TB |  1,163 | 2.2 | 43.1% | 40.8% – 45.4%
Seagate Barracuda XT (ST33000651AS)          | 3.0 TB |    279 | 2.9 |  4.8% |  2.6% – 8.0%
Seagate Barracuda XT (ST4000DX000)           | 4.0 TB |    177 | 1.7 |  1.1% |  0.1% – 4.1%
Seagate Desktop HDD.15 (ST4000DM000)         | 4.0 TB | 12,098 | 0.9 |  2.6% |  2.3% – 2.9%
Seagate 6 TB SATA 3.5 (ST6000DX000)          | 6.0 TB |     45 | 0.4 |  0.0% |  0.0% – 21.1%
Toshiba DT01ACA Series (TOSHIBA DT01ACA300)  | 3.0 TB |     47 | 1.7 |  3.7% |  0.4% – 13.3%
Western Digital Red 3 TB (WDC WD30EFRX)      | 3.0 TB |    859 | 0.9 |  6.9% |  5.0% – 9.3%
Western Digital 4 TB (WDC WD40EFRX)          | 4.0 TB |     45 | 0.8 |  0.0% |  0.0% – 10.0%
Western Digital Red 6 TB (WDC WD60EFRX)      | 6.0 TB |    270 | 0.1 |  3.1% |  0.1% – 17.1%

Notes:

  1. The total number of drives in this chart is 39,696. As noted, we removed from this chart any model of which we had fewer than 45 drives in service as of December 31, 2014. We also removed Storage Pod boot drives. When these are added back in, we have 41,213 spinning drives.
  2. Some of the HGST drives listed were manufactured under their previous brand, Hitachi. We’ve been asked to use the HGST name and we have honored that request.

[Chart: Hard drive failure rates by manufacturer]

What Is a Drive Failure for Backblaze?

A drive is recorded as failed when we remove it from a Storage Pod for one or more of the following reasons:

  1. The drive will not spin up or connect to the OS.
  2. The drive will not sync, or stay synced, in a RAID Array.
  3. The SMART stats we use show values above our thresholds (a sketch of this kind of check follows below).
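
Here's a minimal sketch of that kind of threshold check in Python. The SMART attribute IDs and limit values below are illustrative placeholders, not the exact criteria we apply in production.

```python
# Illustrative only: flag a drive when selected SMART raw values exceed
# per-attribute limits. The IDs and limits here are placeholder examples.
SMART_LIMITS = {
    5: 0,    # Reallocated Sectors Count
    187: 0,  # Reported Uncorrectable Errors
    197: 0,  # Current Pending Sector Count
}

def smart_flags(raw_values):
    """Return the SMART attribute IDs whose raw value exceeds its limit."""
    return [attr for attr, limit in SMART_LIMITS.items()
            if raw_values.get(attr, 0) > limit]

# A drive reporting 12 reallocated sectors would be pulled for review:
print(smart_flags({5: 12, 187: 0, 197: 0}))  # -> [5]
```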

Sometimes we’ll remove all of the drives in a Storage Pod after the data has been copied to other (usually higher-capacity) drives. This is called a migration. Some of the older pods with 1.5 TB drives have been migrated to 4 TB drives. In general, migrated drives don’t count as failures because the drives that were removed are still working fine and were returned to inventory to use as spares.

This past year, there were several pods where we replaced all the drives because the RAID storage was getting unstable, and we wanted to keep the data safe. After removing the drives, we ran each of them through a third-party drive tester. The tester takes about 20 minutes to check the drive; it doesn’t read or write the entire drive. Drives that failed this test were counted as failed and removed from service.

Takeaways: What Are the Best Hard Drives?

4 TB Drives Are Great

We like every one of the 4 TB drives we bought this year. For the price, you get a lot of storage, and the drive failure rates have been really low. The Seagate Desktop HDD.15 has had the best price, and we have a LOT of them: over 12,000. The failure rate is a nice low 2.6% per year. Low price and reliability are good for business.

The HGST drives, while priced a little higher, have an even lower failure rate, at 1.0% (for all HGST 4 TB models combined). It's not enough of a difference to be a big factor in our purchasing, but when there's a good price, we grab some. We have over 12 thousand of these drives.
[Chart: 4 TB hard drive failure rates]

Where Are the WD 4 TB Drives?

There is only one Storage Pod of Western Digital 4 TB drives. Why? The reason is simple: price. We purchase drives through various channel partners for each manufacturer. We'll put out an RFQ (Request for Quote) for, say, 2,000 4 TB drives, and list the brands and models we have validated for use in our Storage Pods. Over the course of the last year, Western Digital drives were often not quoted, and when they were, they were never the lowest price. Generally the WD drives were $15-$20 more per drive. That's too much of a premium to pay when the Seagate and HGST drives are performing so well.

3 TB Drives Are Not So Great

The HGST Deskstar 5K3000 3 TB drives have proven to be very reliable, but expensive relative to other models (including similar 4 TB drives by HGST). The Western Digital Red 3 TB drives' annual failure rate of 6.9% is a bit high but acceptable. The Seagate Barracuda 7200.14 3 TB drives are another story. We'll cover how we handled their failure rates in a future blog post.

Confidence in Seagate 4 TB Drives

You might ask why we think the 4 TB Seagate drives we have now will fare better than the 3 TB Seagate drives we bought a couple years ago. We wondered the same thing. When the 3 TB drives were new and in their first year of service, their annual failure rate was 9.3%. The 4 TB drives, in their first year of service, are showing a failure rate of only 2.6%. I’m quite optimistic that the 4 TB drives will continue to do better over time.

6 TB Drives and Beyond: Not Sure Yet

We’re beginning the transition from using 4 TB to using 6 TB drives. Currently we have 270 of the Western Digital Red 6 TB drives. The failure rate is 3.1%, but there have been only 3 failures. The statistics give a 95% confidence that the failure rate is somewhere between 0.1% and 17.1%. We need to run the drives longer, and see more failures, before we can get a better number.
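
For the statistically curious, here's a minimal sketch of one standard way to compute an interval like this: treat the failure count as a Poisson observation, take the exact chi-squared bounds on that count, and divide by the drive-years of exposure. The drive-years figure below is made up for illustration, and the interval published above may come from a slightly different method.

```python
# A sketch of an exact Poisson confidence interval on an annual failure
# rate. The drive-years of exposure below is a made-up example figure.
from scipy.stats import chi2

def afr_interval(failures, drive_years, conf=0.95):
    """Exact Poisson CI on the failure count, scaled to a yearly rate."""
    alpha = 1 - conf
    lower = 0.0 if failures == 0 else chi2.ppf(alpha / 2, 2 * failures) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * failures + 2) / 2
    return lower / drive_years, upper / drive_years

low, high = afr_interval(failures=3, drive_years=60.0)
print(f"95% CI: {low:.1%} - {high:.1%}")
```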

We have just 45 of the Seagate 6 TB SATA 3.5 drives, although more are on order. They’ve only been running a few months, and none have failed so far. When we have more drives, and some have failed, we can start to compute failure rates.

Which Hard Drive Should I Buy?

All hard drives will eventually fail, but based on our environment, if you are looking for a good drive at a good value, it's hard to beat the current crop of 4 TB drives from HGST and Seagate. As we get more data on the 6 TB drives, we'll let you know.

What About The Hard Drive Reliability Data?

We will publish the data underlying this study in the next couple of weeks. There are over 12 million records covering 2014, which were used to produce the failure data in this blog post. There are over 5 million records from 2013. Along with the data, I’ll explain step by step how to compute an annual failure rate.
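
As a preview, the core of that computation is straightforward: tally drive-days of service and failures per model. The sketch below assumes the raw data ships as one CSV row per drive per day with "model" and "failure" columns; the file name and column names are assumptions for illustration.

```python
# Sketch of the drive-days approach: each daily record contributes one
# drive-day of exposure, and AFR = failures / (drive-days / 365) * 100.
import csv
from collections import defaultdict

drive_days = defaultdict(int)
failures = defaultdict(int)

with open("2014_drive_stats.csv", newline="") as f:  # assumed file name
    for row in csv.DictReader(f):
        model = row["model"]
        drive_days[model] += 1
        failures[model] += int(row["failure"])

for model in sorted(drive_days):
    afr = failures[model] / (drive_days[model] / 365.0) * 100
    print(f"{model}: {afr:.1f}% annual failure rate")
```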

Brian Beach

Brian has been writing software for three decades at HP Labs, Silicon Graphics, Netscape, TiVo, and now Backblaze. His passion is building things that make life better, like the TiVo DVR and Backblaze Online Backup.
  • Chrispen Shumba

    Hi,

    I am a final-year student at Sheffield Hallam University in the UK. I have decided to do my final-year project on survival analysis using the hard drive failure data between June 2014 and June 2015. I realised that there was an error in the CSV file for the second of November 2014. Is it possible to get hold of a corrected file, as it would allow a more reliable analysis?

    Thank you
    Chris

  • Nightmare

    Hey, I just wanted to ask if this one is worth buying. I want a 1 TB drive because in my country they are expensive, and this is the one I found with the most reviews:
    WD Blue 1TB SATA-III 7200 RPM 64MB

  • chris

    Seagate Barracuda 3 TB life expectancy is 2 years, confirmed by http://www.seagate.com customer service. All the tools and applications developed by Seagate are made to detect failures or recover data. So they know the junk they have, and they make money on data recovery.

    This is how it went: I purchased the hard drive and downloaded their software to set the partitions; then, after about 2 years, my Windows 7 64-bit crashed trying to gather info from one of the partitions. I contacted their support, and even though the hard drive was still in warranty, they would not help me. They told me to contact their "data recovery" division. Well, I called them (1-800-Seagate) and they told me to send the hard drive to their lab, and the charge would be 550 pounds (approx. 700 USD). It is not admissible to have hardware that lasts only 2 years….
    Did some digging and found: "One of the most frequent FIRMWARE problems is that of the class of Seagate Barracuda 7200.11 hard drives, where after a certain operation period the hard drives become unavailable due to a problem in production; the most frequent models with this problem are ST3500320AS, ST31000340AS, ST3750330AS with firmware SD15". This is another example that Seagate's business strategy is based, again, on "data recovery".
    Keeping it short: they sell faulty hardware and make money from data recovery.

    If any of the Seagate team members reads this, please don't try to deny it. I promise you that I will make the world know the type of business you're doing.

  • Glen

    I’m planning on building a server soon and the motherboard that I will be using has SAS connectors as well as SATA. I am now debating whether to get 4TB SATA drives or 2TB SAS drives to build my RAID arrays – I can always upgrade them later (since space is not critical right now) so was leaning to the SAS drives – just wondered if anyone here who is more knowledgeable would be able to comment? The reason I ask is that the article makes no mention of SAS, so I can only assume that their drives are all SATA?

  • asajinx

    Hi Brian,

    Can you point me towards a model number for the Seagate Desktop HDD.15 you're using? I'm interested in buying some but I can't figure out which model you're referring to.

  • matthewelvey

    Umm… Brian, I see an error in the write-up where you give the wrong rate for HGST 4 TB drives. You give the 1.4% of the Megascale 4000 model, but you have thousands of two other HGST 4 TB drive models. It's about 1.1% (guesstimating) if you include the 3 HGST 4 TB models, appropriately weighted.

    Suggest you update the write-up.

    Perhaps the text just needs to be updated for 2014; I see 1.25% (for the matching 12,650 drives) in your graph for 2014, but 1.4% for 2013.

    • matthewelvey

      Ok, did the math. (2598*0.9+6949*1.4+3103*0.5)/(2598+6949+3103) = 1.0765… so yes, my guesstimate was right.

      • Andy Klein

        Your observation about the incorrect value for all of the HGST 4TB drives is correct. We've corrected the 1.4% value to 1.0%. Your guesstimate isn't quite right, though, as the failure rate is computed from the number of failures divided by the operational hours for the three models. This computation is explained in a follow-up post to this one, located here: https://www.backblaze.com/blog/hard-drive-data-feb2015.
        Still, kudos to you for identifying the issue in the first place. Thanks.
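
        To illustrate the difference between the two weightings, here's a rough sketch using the drive counts, average ages, and rates from the table above. Average age is only a crude proxy for the actual operational hours, so this won't reproduce the corrected 1.0% exactly.

        ```python
        # Count-weighted vs. exposure-weighted averages for the three
        # HGST 4 TB models: (drive count, average age in years, AFR %).
        models = [
            (2598, 1.8, 0.9),  # Deskstar 5K4000
            (6949, 0.4, 1.4),  # Megascale 4000
            (3103, 0.7, 0.5),  # Megascale 4000.B
        ]

        count_wt = sum(n * afr for n, _, afr in models) / sum(n for n, _, _ in models)
        years = [n * age for n, age, _ in models]  # rough drive-years per model
        time_wt = sum(y * afr for y, (_, _, afr) in zip(years, models)) / sum(years)

        print(f"count-weighted: {count_wt:.2f}%")  # ~1.08%, the guesstimate
        print(f"time-weighted: {time_wt:.2f}%")    # ~0.95% with this crude proxy
        ```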

  • Guest

    I also had 6 of those 3 TB Seagate drives in a RAID 5 array. All have failed by now. The warranty was 1 year, and the drives started failing when 1.5 years old. Good to know Backblaze confirms I was not alone with this problem.

  • David Adams

    Can you comment on why you’ve chosen not to use the HGST Deskstar NAS drives?

    • I’m not familiar with that specific drive, but likely the NAS variants were more expensive than the drives we tried out.

      • David Adams

        They do cost a bit more. I use them in my FreeNAS box at home; they are similar to WD Reds, but are 7200 RPM drives.

  • Eric Freudenthal

    It appears that the drives being identified as reliable are much newer than the ones that are unreliable. Might this explain most of the discrepancy? Perhaps it would be interesting to tally
    f/n for a specific lifetime L, where
    * f is the # of drives that failed before L
    * n is the sum of f and the number of drives that survived L

    for various values of L (e.g. 6 mo, 1 yr, 2 yr, 4 yr). (A quick sketch of that tally follows below.)
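
    A minimal sketch of that tally, assuming we know each drive's age (at failure, or current if still alive) and whether it failed; drives that are still alive but younger than L are excluded as censored:

    ```python
    # f = drives failed at or before lifetime L; n = f + drives that
    # survived past L. The sample lifetimes below are made up.
    def failed_fraction(drives, L):
        """drives: iterable of (age_years, failed) pairs."""
        f = sum(1 for age, failed in drives if failed and age <= L)
        survived = sum(1 for age, _ in drives if age > L)
        return f / (f + survived) if (f + survived) else float("nan")

    sample = [(0.4, True), (1.2, False), (2.5, True), (3.8, False), (4.5, False)]
    for L in (0.5, 1, 2, 4):
        print(f"L = {L} yr: f/n = {failed_fraction(sample, L):.2f}")
    ```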

  • Michael Smith

    Just to add some data – I purchased 3 Seagate 3TB ST3000DM001 drives less than 2 years ago. First one failed last night after 15626 hours. Fits perfectly with the 43.1% failure rate that Backblaze found. Looking at getting a different replacement drive.

    • Willy

      I also purchased 3 Seagate 3 TB ST3000DM001 drives less than 2 years ago, and all 3 of them are still going strong. I use HDDScan for Windows 3.3 (freeware) because it lets me create a batch file to automatically change the APM value to 240 each time after the firmware of the drive has changed it back to its default value of 128; I believe using an APM value of 240 helps prevent the excessive head parking that goes on with this particular drive model (same as with the older 2 TB ST2000DM001). Windows Task Scheduler launches the batch file for me upon each system startup / user logon / system wakeup event.

      Furthermore, to somewhat mitigate unnecessary increase of the load cycle count, and to save myself from having to wait for the spin-up sequence to finish all the time, I use KeepAliveHD to prevent automatic spindown of green drives that I know are going to be idle for no longer than one hour. (Going to the settings window of KeepAliveHD every once in a while is a whole lot less annoying than waiting for green drives to spin themselves up, one at a time, dozens of times per day, day after day.)

  • darkan9el

    I would class myself as a heavy computer user, and my system is on almost 24/7. I have a Hitachi drive as the main hard disk drive (HDD) in my hackintosh system, with several internal and external hard drives for storage. I have 2 Western Digital (WD) and 4 Seagate HDDs, with my latest purchase being a 4 TB Seagate external HDD, and they all perform well.

    Most people will stick to what they know, so the last drive brand that lasted a while (by "a while" I mean well out of guarantee, and by "well out of" I would say over 2 years) is the extent of their research. I generally read the term guarantee as "how much faith we have in our product".

    The average Joe will have 1, 2, maybe 3 drives from which to gain reliability statistics; compared to that, Backblaze's info is statistical nirvana. It's all relative, and about being informed. I have always found that manufacturers have an overly optimistic view of their products and can do many things to make sure that their view is provable, however convoluted the method. Take car manufacturers taping up every hole to make their cars more aerodynamic for advertising figures: hardly a real-world test, more like a dream-world test. You would be forgiven for taking stats from a prolific end user over those of the manufacturers.

    Maybe these statistics critics would like to get their heads together and formulate a coherent method for accurately measuring the statistical failure of hard drives, instead of pontificating about what they see as wrong with Backblaze's data and presentation, and help to create a positive contribution that can be used by all.

  • Priyank Gupta

    I was analysing the data, and Seagate (ST3000DM001) has more than 1,163 drives; some of these drives did not even fail and just disappeared from the log. What would be a reasonable assumption for these drives? It would help us get a better understanding of the data. Thanks.

  • sahil

    What about solid state drives???

    Are they also going to fail eventually?

  • Michał Sokołowski

    I have at least twenty ST3000DM001-9YN166 drives that have been running for at least a year in different locations and conditions, 24/7. No failures…

  • Gerasimos Simeonidis

    May I point out that HGST is a WD company?

  • Roy

    Why is there no Toshiba in the graph?

  • raveur

    Seagate is a disaster. Two 3TB drives in the past 1.5 years, two dead.

  • Martyn A Ford

    We concur with these results, but we see that in your article you did not rate the 2 TB Seagate Barracuda drives, one of the world's biggest sellers. We stopped supplying the 3 TB Seagates back in early 2014, although the problem seemed to be centred around one particular firmware version. It was no mistake that all of our customer failures were in RAID systems, because losing 3 TB of data is just unthinkable. In most cases we replaced the drive with a server-grade drive that was more than US$100 more, and we have not looked back.

  • Frank Cathey Jr.

    Doesn’t Western Digital now own HGST? Could they legally re-brand WD drives as HGST (possibly to get rid of the drives damaged in the great floods)?

  • Thanks for all the data. It helps me buy new HDDs for my NAS.

  • amcbeagle

    I'm looking to pick up HGST 8 TB drives over the next couple of months. How about some info on them, please?

  • David Whittall

    I had a Seagate Barracuda 7200 SATA 1500 GB ST31500341AS (Serial # 9JU138-302 9VS41A0B) that failed before the 3-year warranty had expired (but was replaced with a refurbished one under warranty). This replacement Seagate Barracuda 7200 SATA 1500 GB ST31500341AS is now starting to fail.

    I am reluctant to buy a Seagate Barracuda 7200.14 (ST3000DM001) 3 TB HDD, based on these reported failures.

    However, this report said "we think the 4 TB Seagate drives we have now will fare better than the 3 TB Seagate drives we bought a couple years ago".

    This suggests the Seagate Barracuda 7200.14 (ST3000DM001) 3 TB HDD might not use the new 1 TB/platter technology (like the 4 TB versions with 4 x 1 TB platters), but the older 0.67 TB/platter design?

    How many platters did these failed 3 TB Seagate Barracuda 7200.14 HDDs have: 5 platters, or just 3? (The "Power of 1.0", or the failure of 0.67?)

    "The Power of One

    Desktop hard drives from Seagate give you the Power of One. One terabyte per disc technology. One drive platform for every capacity need and every desktop storage application. One hard drive qualification effort. One hard drive platform to choose from. One hard drive with trusted performance, reliability, and simplicity to deliver the lowest total cost of ownership."

  • David Stevens

    With reference to the hard drive annual failure rates above: why are Seagate 3 TB drives so unreliable? I had one crash recently, with the loss of 2.7 TB of data that will take two years to recover.

  • David

    Have you considered purchasing refurbished enterprise drives and populating a few pods with those? They ought to be cheaper than anything new and – theoretically – should last as long as new drives (I’m basing that statement on the fact that all the HD manufacturers classify a manufacturer refurbished drive as “re-certified”, i.e. meets same specs as new drives).

    If they last as long as new drives, this could be a great way to save $.

    • Mark

      Enterprise drives typically lag capacity-wise compared to consumer drives. And for an outfit like Backblaze to acquire a sufficiently large quantity of reasonably modern drives probably would be problematic from a procurement point of view. No reason why they would want to deliberately escalate their TCO, for no ostensible benefit reliability-wise.

  • tpcock

    All well and fine for the specified drives, but for those who use smaller drives in servers, specifically at SMBs, typically in the 300-600 GB size range, Seagate HDs have proven time and again to be far superior to other drives. I note that when spin speeds are mentioned in this article, I see mostly 7200 RPM drives. I was taught by a very competent business owner I worked for that 10K-15K RPM drives are preferred, as read/write speed to a server is a critical component for users in business environments.
    While cloud storage is somewhat of a different horse, I would certainly think the same read/write speeds would be preferred, except that the price of the faster drives would drastically increase a company's investment in their infrastructure. My point is that the drives outlined in this study, because of their use by Backblaze, are drives I would purchase for use in a desktop system, but never for a server. The information gleaned from your well-written article is useful for those concerned with desktop systems, home computers, home NAS boxes, and large-scale storage such as is the focus of your business; such drives, when set up in a RAID array, can not only function as intended, but at a considerable savings, and offer the necessary redundancy when hot-swappable.
    Now, for the use I've indicated at the beginning of my post, Seagate drives in the 10K-15K RPM range are by far the finest drives one could want or ask for, hands down. Due to my experience with these drives I've always leaned towards that brand even for desktop drives in the 7200 RPM range, and have stayed away from Western Digital. I have replaced many HDs in my career and have personally witnessed WDs as the most replaced. Perhaps this is a numbers factor, as the majority of PCs I've worked on use WDs, perhaps due to the cost factor manufacturers face. I have had systems using WDs that last forever; I've seen drives from all manufacturers fail, just as I've seen drives from all of them last seemingly forever. Your 'test' platform is quite viable, as the drives are in use 24/7, and thus your results are solid. The comment regarding slander or whatever seems to be more of a fan-boy whose love of a particular model shades his acceptance of your verifiable results.
    I am curious what your results would be with the higher-RPM drives I spoke of, but I know from even a recent purchase of 4 300 GB drives that your costs for drives of the size in your study would equal a small fortune.

  • Atsushi Hayakawa

    I used this data set for estimating the scale and shape parameters ("m" and "eta") of a Weibull distribution.

    The result is here: http://blog.gepuro.net/archives/118

    You can read the page in English using Google Translate.

    Thank you.
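
    For anyone wanting to try something similar, here's a minimal sketch of a two-parameter Weibull fit with SciPy on made-up failure ages. Note that a proper survival analysis must also handle the censored drives that have not failed yet.

    ```python
    # Minimal two-parameter Weibull fit on made-up drive lifetimes
    # (years at failure); ignores censoring of still-healthy drives.
    from scipy.stats import weibull_min

    lifetimes = [0.8, 1.5, 2.1, 2.9, 3.4, 4.7]  # made-up ages at failure
    shape, loc, scale = weibull_min.fit(lifetimes, floc=0)
    print(f"shape = {shape:.2f}, scale = {scale:.2f} years")
    ```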

  • David

    Hello Brian & friends @ Blaze. Thanks so much for posting this valuable info. I’ve been following all your blog posts on this subject! This is BY FAR the #1 most useful group of posts publicly available on this subject. Hats off to you for your due diligence and posting what the drive manufacturers are unwilling to share.

    I am particularly curious about your thoughts on the WD Red drives. Publicly, there is a lot of noise in forums and store reviews complaining of inordinately high DOA Red drives. In particular, the 3TB and 4TB drives seem to be the worst. I don’t believe the 5/6TB versions have been around long enough to garner meaningful criticism (yet, anyway).

    I wonder if WD has had QA issues from specific batches of these drives (the most vocal complaints seem to be clustered together by date). I certainly don’t trust WD’s claimed AFR of 0.8%.

    Few questions:

    1. You mention your failure rate on the 6TB drives is 3.1%, but only 3 out of 270 drives have failed. Why isn’t that a failure rate of 1%? I don’t understand your math on this one.

    2. I am curious to hear your general opinion of the WD Red drives, overall.

    3. Based on their age, am I correct in presuming the issue you've had with the 6TB Reds is infant mortality on all of them?

  • Brad

    Hey guys!

    I believe that the conclusions were not complete. I am not contending the conclusions of the post, just extending them.

    Disk drive manufacturing is a very high-volume, low-margin industry. And this forces a pattern. The manufacturers are actually assemblers of sub-contracted parts.

    For example suspension assemblies (arm and head) are manufactured by Hutchinson Technology, TDK, and others. They sell into multiple end assemblers. The same is true for all the major mechanical components of drives.

    Comparing drive assembler models is sorta apples and maybe oranges. The design may have problems or the components of a manufacturing lot may have problems. When I was working for an (unnamed) SAN manufacturer there were a whole bunch of drive failures occurring in a single model of a single manufacturer’s drives. Unhappy customers. Much big freak-out, massive application of high-skilled labor, circles run into concrete. No one knew what was happening. It turned out that the failures were occurring in only a single manufacturing lot of the model. The motor sub-component, from a supplier, was throwing grease from the motor onto the platters, causing disk crashes. The model design was (and still is) good, but the motor seals were bad. The end assembler took a marketing hit for having ‘unreliable drives’ and the world moved on.

    When you are checking reliability, which is all we care about really, you have to split up the counts into comparable units – model and lot. If your models came from a single lot, you probably do not have a sufficient sample for talking about the design of a model.

    There is another little-known statistic about enterprise drive failures. 80%+ of the time that a drive is returned to a manufacturer, the manufacturer cannot find a problem with the drive. They take the pup and run it against the existing test protocols, and the drive simply does not fail. Whatever happened was transient. This is the source of re-manufactured/reconditioned drives.

    There will be failures, most transient, some hard. The only way to know is to have access to the manufacturer’s information. So, I take the commonly published end-user drive failure rates with a grain of salt.

    What we have been doing for 20+ years in IT is trying to make controllers and storage less unreliable – RAID, having spare drives in a box, solid-state, etc…

    I think that a better path to storage reliability is to look at whole-systems design that assumes that there will be transient and permanent storage failures (on spinning and non-spinning storage, another story). The design should focus on how to minimize the causes of failure and auto-recover from failures without human knowledge or intervention. These kinds of systems are commercially available.

    Brad

  • Dan Schwartz

    Two salient questions about your “pod” configuration:

    1) Are the drives in your "pods" oriented in the horizontal or vertical plane? This makes a major difference to bearing wear: when the drives are vertical, both the platter spindle and head arm shafts are horizontal and do not present an axial load on the thrust bearings, resulting in less friction and hence less heat. In addition, when the drive is oriented vertically, there is also no torque load along the head arm axis; excessive "play" in this bearing will result first in reduced head flying height (as reported by SMART), and then eventually a head crash into the platter;

    2) If the drives are oriented vertically, what is the orientation of the head arm with respect to the platter spindle, i.e. if the drive is oriented with the long edge down, is the force of gravity pulling the arm down or up? This is important as the i²R heat generated in the windings of the head arm servo is proportional to the square of the current pulse (i²t heating). With the drive on its long side down and the arm spindle above the platter spindle, gravity balances out the return spring force, requiring about equal current pulse energy levels to move the arm (1²+1²); while if the arm spindle is below the platter spindle, it will require twice the current pulse energy to overcome both the return spring and gravity (2²+0²).

    Dan Schwartz,
    Editor, The Hearing Blog
    http://www.TheHearingBlog.com

  • Great, but do you also have any data or info about the mobile external drives?

  • Kai

    I bought 12 Seagate 3 TB drives about 20 months ago for a personal NAS, and now only 5 survive. A week ago one more drive (571 days powered on, 62 start/stop count) developed bad sectors. I've never experienced this kind of high failure rate in my almost 20 years of computing life. I am planning to upgrade all the HDDs soon, and they will definitely not be Seagate products. Sorry for my bad English.

  • Dale

    I presume these are all SATA drives, do you have any statistics on SAS drives?

    • Sorry Dale, just SATA drives :-/

  • Even Linus Tech Tips thinks your stats are bogus (or just pure crap).

    http://youtu.be/e85aRCFH8gM?t=52m23s

    "Flawed list of failure rates."
    Take it with an ENORMOUS grain of salt.
    The methodology is horrible: no control, a poor and too-small sample size, different conditions, different temperatures, different amounts of vibration.

    Please make it obvious that these are COMPLETELY ANECDOTAL STATISTICS AND SHOULD NOT BE CONSIDERED IN ANY WAY.
    Otherwise, I really feel that Seagate should sue you for slander. (And you'd totally deserve it.)

    • Hey @drashna:disqus, we're certainly only reporting which drives work for our use-case. A lot of folks find that it mirrors what they see out in their own anecdotal set-ups as well. We're definitely putting a lot of load on these drives in our 24/7/365 environment, so home use and even other server use will not necessarily lead to the same results that we see. We'll be releasing some raw data in the coming weeks, which should be interesting to some folks that are looking for specifics. As for temperature and vibration, we have other blog posts that explain what we see in our environment -> https://www.backblaze.com/blog/hard-drive-temperature-does-it-matter/ and https://www.backblaze.com/blog/enterprise-drive-reliability/ if you're curious. Again though, all of that is for our environment, so mileage may vary for others!

  • Oak

    I asked this question in another post but I don’t believe it was answered (my apologies if it was!). What does Backblaze do with its failed drives? These would be drives that fail in production while still under warranty, and those at end of life. Are RMA drives returned to the manufacturer with data on them? Are they wiped and/or shredded?

    • Hi! Drives are wiped and recycled if there’s no RMA value to them!

  • Darkwiss

    So you're testing desktop Seagate HDDs against NAS-class WD HDDs on a server? Why not test NAS HDDs from every brand, which are designed to work 24/7, or enterprise drives? That would be more accurate.

    • jp

      Because they normally try out a pod or two from a given lot of a company's products, and depending on supply and cost they don't always buy the NAS ones.

  • Dahc Renrut

    You mention tools to test how well the drive is doing currently. What are those tools and where can we obtain them to check the health status of our drives?

    • calmdownbro

      Search for "SMART" in Google. Its raw data is extremely helpful (like raw error counts, spin-up failures, etc.). Other than that, you can surface-scan a hard drive: write the drive full, then check that the data is intact. (They mentioned a tool that does not write the drive full, yet can check health… dunno which tool that is, but it sure sounds magical.)

  • Do you have similar stats that only apply to external hard drives? I would love to see a comprehensive list of external HD failure rates…

    • LKJHyjniugd89fg

      External drives are just internal drives in an enclosure with the appropriate connection: USB, eSATA,…

      And AFAIK, BackBlaze doesn’t use external drives.

  • Mike Hawk

    One thing is for certain – I won’t be buying Seagate.

  • FollowTheORI

    Thank you! Keep up and see you next year!

  • EE

    It would be more interesting if you gave us the age of each hard drive, don't you think? It could have an effect on the failure rate…

    • Tony Dew

      If only they'd added a column titled "Average Age in years", we could have that info.

  • Quebrive

    Regarding the drives with high annual failure rates: I am surprised that no one has questioned whether you are using them as intended. The Seagate Barracuda 7200.14, for example, supports a workload rate limit of 55 TB/year in 8×5 environments; presumably, then, it is not meant for a 24×7 environment.

    • LKJHyjniugd89fg

      Of course they are using them out of spec: Backblaze is storing data, so their servers need to be available 24/7.

      • Quebrive

        That was my point: the results would be more pertinent if they used the drives as intended.

  • TalkinHorse

    Historically I'd formed the concern that pushing to four platters and beyond was begging for mechanical failure. I don't know how true that is today, or how true it ever was. Is this a factor in these statistics?

    • Mark

      Seagate built 12-platter 7200rpm (Barracuda) and even high platter count 10k rpm drives (Cheetah) in their 3.5″ enterprise SCSI line in the 1990s. Mechanical reliability was a non-issue. Electrical reliability of Seagate in the 1990s, unfortunately, was a whole different story, and I personally swapped out a few logic boards to do HDD data recovery.

  • This is useful, but I would also like to see the data about the distribution of failures.

  • dan

    Are you guys planning on transitioning to the new 8 TB shingled magnetic recording drives? http://www.slashgear.com/seagate-ships-8tb-shingled-magnetic-recording-hdd-09358830/
    They look like they will destroy the current price per GB… I'm holding out to see some real-world testing done before I populate my home server.

    • Hard to say! We are going to be playing and testing with those drives as soon as they become available, and we're excited to see how they perform!

  • Tim

    Does Backblaze use hdparm or another utility to disable APM (or at least relax it) on your drives? My guess is that one reason you have such high failure rates on the 1.5 TB and 3 TB Seagate drives is related to their APM value of 64, as opposed to, say, 255 (disabled), which would prevent head parking, aka self-destruction, in a 24/7 environment.

    • Shane

      We purchased a server around 4 years ago with Seagate drives from that same run (although they are only 1 TB), and can confirm that they are total crap. We don't change any settings, since we don't have time to muck with such things. Every single one of them has failed within 3 years, and we've now gone through 2 of the refurbs that we got as warranty replacements as well.

      • Bjorn Wikstrom

        I've never had a Seagate that didn't suck.
        Well, no, one lasted a good 2 years.

        • Alexandre Gauthier

          I have a 2008-2010 (I think) Seagate Barracuda 4-platter 1 TB 7200 RPM drive here that has been in about 4 motherboards since its life started, with 35,000 hours over about 4 years, and this thing is still going strong… Now I have no idea whether to get 2 WD Blues or 2 Barracudas for RAID 0 for gaming and recording, duh.

        • Tsais

          I have Samsung drives where not a single one has failed, and they're going on 10 years old…
          None of my HGST Touro (external Deskstar) drives has failed either, but they're less than 5 years old.

  • Rudi Halbright

    This is great information and valuable for all of us who are choosing hard drives; thank you for sharing it! I have a couple of questions: 1. Do you consider the power requirements of the different hard drives? For example, the HGST (Hitachi) drives take as much power at rest as the Western Digital Red drives use in operation. This has a cost impact as well, and thus may make the Western Digital drives the bargain choice, not to mention the additional benefit that cooler-running drives reduce cooling costs. 2. Seagate claims a given number of "power-on hours" for their drives, and the number seems quite low to me for drives used in a NAS setting. This seems to imply that while short-term failure rates may be acceptable, long-term failure may be premature as compared to other alternatives. Do you have any sense of the relevance of this issue?

    Finally, would you mind if I post an excerpt of this on my blog, with a link to this page and proper attribution to Backblaze? I'd also like to post it on some of the photographer-specific websites that I use, if that's OK.

    -Rudi

    http://www.halbrght.com

    • @rudihalbright:disqus -> Yes, you may absolutely post excerpts, or link to this on other sites!

      -> We do consider power, which is one of the reasons you'll mostly find 5400 RPM drives in our data center. Unfortunately, though, I can't really speculate on the power-on hours; it's a bit out of my expertise!

    • GP

      Buy HGST's helium-filled drives, with the lowest power consumption and the best watts-per-TB. In other words, the best TCO (Total Cost of Ownership) in the long run.

  • Daryl Sawatzky

    Great, so basically when I paid you $200 for a backup of a drive that failed, you sent my data on THE MOST UNRELIABLE hard drive on the list. Thanks a lot. Next time, can I send you an empty HGST and you can load it up for free?

    • Daryl, that drive is going to last a while! Our environment is pretty taxing on the hard drives we have in it. Plus, the vast majority of drives in this study are internal drives, which are different from the externals that we send for restores. You'll be OK! Plus, it's tough to find an external HGST nowadays ;-)

      • Daryl Sawatzky

        Coincidentally, or perhaps not, my drive that failed was also a just-out-of-warranty Seagate 3 TB 7200.

        • I assure you we’re not part of an anti-Daryl drive conspiracy :)

          • Peter Novák

            So far your data are in line with our (albeit very limited) experience. Two 24/7-running (moderately used) 1.5 TB Seagates have passed away due to extreme reallocated sector counts after a mere 2 years of service; the third is reaching the critical limit after 3 years.
            The 3 TB suddenly died within 1 year, and we'll see how the second one does. The 4 TB is working fine so far; hope it keeps up the good work.

            Please, could you publish "failure rate per year of service" statistics for each drive type?

          • Peter, we'll be publishing more raw data within the next few weeks; it's possible that you can pull what you need from there.

          • Nozomi Otori

            Ditto. One of my 1.5 TB Seagates (both ST31500341AS) failed with an extreme reallocated sector count (after a mere 2 years of service, too), while the other one has shown some warning signs.

      • Milk Manson

        What's the difference between internal and external? They look the same to me.

        • They more or less are the same, but they are built for different purposes. Internal drives are meant to go into computers or servers, while externals are meant to plug in to a computer.

          • Milk Manson

            Thank you, but I was after a slightly more technical explanation than "one goes on the inside and one goes on the outside".

          • Oh! Sorry! It has to do with how they are designed to run (sometimes this is done via the firmware of the drives). External drives are usually more prone to being jostled about, whereas internals are usually intended for static servers. We might publish a bit more data on our internal vs. external drives in the future!

    • Milk Manson

      Hey genius, they didn’t send you the only copy (so settle down).

      • Hectic Charmander

        This. More this!

        Up-vote x100

  • prepaidwirelessguy

    What test software do you recommend I use to avoid drive failure/data loss? I had a Seagate HomeFlex drive fail last year. It was replaced under warranty, but I lost a lot of data. I now have a Seagate Central drive and would like to replace it before it dies this time. As these drives are used as wireless network drives to stream data to a Boxee Box, they're not connected to my Backblaze PC and don't get backed up. Thank you!

  • SuperSmartHealth

    This may be a dumb question, but do the stats for these internal hard drives mirror the stats for external hard drives? I put a lot of videos on external hard drives and I need them to be reliable. Do you have stats on external hard drives?

    • Not a dumb question at all! It is a difficult one to answer, though. Our hard drives are all running 24/7 in a datacenter environment, in storage pods that we designed ourselves for our very particular use-case. It stands to reason that failure rates of external hard drives would be similar (at least by some percentage) based on model type, but there's no way for us to know for sure. The best recommendation we can give you is to have a good backup strategy in place (we recommend 3-2-1 -> 3 copies, 2 on-site but on different mediums, and 1 off-site) and you'll be covered in case of a disaster!

    • calmdownbro

      Of course it does. Check what brand of hard drive you have in the external storage. If it matches any of the drives posted above, you will either be happy (if it's WD/HGST) or you can start worrying.

      Aaanyway, do backups. We are on Backblaze's site. If you want to protect those videos, back them up online. If you just worry that one day you'll bring in a drive with no data to work, use two drives and just have the same data twice. (That is still not a proper backup.)

    • Josiah Carberry

      It surely would not be surprising if external drives typically had somewhat different levels of heat and vibration, and that this would be reflected in their levels of reliability.

    • A.J.

      Many of the external hard drives I've seen recently use "green drives". These drives stop spinning when not accessed, and thus are low-power and "green". My anecdotal experience with them has been a high failure rate. Ensure your data is backed up!

  • you guys…. you guys are so awesome!

  • Kandric

    Ack, I'm looking to buy hard drives as my old 1 TB Seagates are absolute rubbish, and I happened to stumble across this. Thanks for this awesome data! Much more helpful than a guy/gal saying "I used this drive for a week and I give it 5 stars!"

  • HenkPoley

    Are you sure the failure rate of these specific drives is not coming from, say, a common power supply batch? As I understand it, way back when, that used to be a problem.

  • I was looking for Hitachi drives last week. Didn’t realize they re-branded.

    • Daryl Sawatzky

      Hitachi didn’t rebrand… They sold their HDD division to Western Digital.

      • Bump77

        WD bought Hitachi but was forced to sell off part of the 3.5″ division to Toshiba to satisfy government anti-monopoly regulators,
        so the modern Toshiba 3.5″ drives are just re-badged old Hitachi Deskstars.

    • @sawatzky:disqus is correct: Hitachi sold to Western Digital; however, their enterprise-grade drives are run as a separate division known as HGST!

      • Dr.Madhav

        Actually, it’s like this:

        IBM HDD Division -> Hitachi -> WD. Yes, HGST retained some exclusive powers to keep its highly coveted market position (enterprise) before inking the deal with the WD folks.

        I use about 750 Hitachi/HGST drives (~2/3 are SAS) on various servers, workstations, NAS boxes, and in two hybrid clusters (HPC). I can vouch for their reliability (most are 4 TB), although they cost a little more and draw a bit more power. But I don't really mind paying more when considering the vital business data (R&D IP) and storage requirements (HA).

        Currently I look forward to building two custom hybrid NAS servers using FreeNAS & ZFS for personal use, each with 12 HDDs + 4 SSDs (ZIL + L2ARC in redundancy). I'm pondering whether to go for:

        24 x HGST Ultrastar 7K6000 Sata 6Gb/sec – 6TB or 5TB with 128MB cache (Ultrastar-7K6000-DS.pdf)
        —OR—
        24 x HGST Deskstar Sata 6Gb/sec – 6TB or 5TB with 128MB cache (DS_NAS_ds.pdf)

        The main differences between those two models: an additional 1 million hours of MTBF (in the case of the 7K6000), a longer warranty, and drive software (not to be confused with firmware).

        Q: do you have any performance data of the above mentioned drives?

        Thank you very much.

        • Hi! Unfortunately not; all the data that we have is in the stats above (except for the ones where we had too small a sample size, even by our standards). We'll release some more data soon though, so keep an eye on the blog!

          • Dr.Madhav

            Hello, YevP!

            I plan to build those machines by the end of next month. Perhaps I'll go ahead and take the plunge with the 5 TB ones, but I'm not very clear on which NAS drive model yet. One of my focus areas is the 'resilvering' scenario.

            Currently I'm in the process of hunting for two micro ZTX server boards with enough onboard SAS/SATA ports (no HBA with ZFS). Each will have 32 GB of DDR3 ECC 1600 MHz RAM.

            I intend to write a good blog post, and make short YouTube videos, on how to build personal NAS servers with ZFS using FreeNAS and OpenIndiana.

  • Hello! Can somebody explain to me why the failure rate of the 3 TB drives is so much higher? Is it something to do with the storage capacity, or something in the hardware?

  • Ruli Manurung

    In the raw data tables you show the average age in years of the drives, but you don't seem to take this into account when analyzing the data or drawing conclusions. Is there any specific reason for not doing this? I would imagine there would be a correlation between the two: older drives surely tend to fail more. A follow-up analysis would be great!

    • Hi Ruli! We'll be releasing raw data in the coming weeks that should give a bit more overall info! Subscribe or stay tuned! :)

    • Brian Beach

      I talked about age a little bit in a post last year. The failure rate does tend to go up some after about the 3 year mark. Take a look at the “Disk Drive Survival Rates” section here: https://www.backblaze.com/blog/how-long-do-disk-drives-last/

  • iwod

    I thought there were some fairly cheap 8 TB HDDs from Seagate available on request on Amazon. Did you test those out?

    • We’re currently playing with some 8TB drives, but don’t have enough for a good data set at the moment!

  • Nice to know that you BIAS the results by using external HDDs for Seagate but not the other brands, and fail to acknowledge that fact (or that it may significantly skew the results against Seagate).

    This is especially harmful, because a lot of people take your site as gospel and avoid drives based on your "recommendations". And I feel that Seagate has a good basis for a slander lawsuit against you, because you FAILED to disclose this properly.
    Or the fact that this is all anecdotal evidence and should NOT be taken as fact (despite people REPEATEDLY DOING SO).

    In fact, because you're about the only company that is willing to disclose this sort of information, you have a responsibility to be crystal clear about… well, everything. Failing to do so is harmful: to yourself (for failing to be honest and transparent), to customers (for giving them flawed information), and to companies (Seagate in particular, because you use subpar drives and then essentially blame them for a high failure rate).

    tl;dr: Nice lack of transparency.

    • Hi Drashna! Sorry if there was any confusion, but we purchase internal drives for our data center. We did have a period where we used externals (https://www.backblaze.com/blog/farming-hard-drives-2-years-and-1m-later/), but it was rather brief, it accounts for less than 6% of our entire fleet, and the stats from those mirror the internals of the same style. We're actually going to publish the raw data in a few weeks, as stated in the post! That should be pretty interesting :)

      • Bob Dickens

        "Guest" may not be exactly eloquent or full bottle on the facts, but it should be noted that your stats don't match any of the real storage vendors' own stats, and that your statistics gathering is hopelessly simple in nature. I could talk about the failure rate of Porsche 911s vs Aston Martin DB9s driving around a track, compare them to each other, and show how you should go for, say, the 911 because it's more reliable; but my test and my reputation would be trashed, and rightly so, if I didn't explain up front that my track was a corrugated gravel track through the Australian outback during the height of summer, and that the cars didn't even drive on the same track, with the DB9s consistently going over rougher terrain.

        • Phillip Remaker

          Backblaze has been pretty clear and transparent about their environment. They even publish specs on how to build your own Backblaze Storage Pod. They list exact model numbers. They have been clear about when they have 'shucked' external drives. Precisely what data are they not providing?

          They are very open about the fact that they use desktop-class drives in a server environment, and that they push the drives beyond their specifications. They design for drives to fail, and they publish their results.

          I don't understand the nature of the criticism here. I think the information is provided in good faith.

      • Useless Lazy Bum

        Hi, this might seem out of context, but do you have any 5K3000 failure rates for the 1.5 TB version? I tried to look for the 3 TB version and there is nothing available here Down Under. It would be nice if you could let me know.

        http://www.ebay.com.au/itm/like/331463621864?limghlpsr=true&hlpv=2&ops=true&viphx=1&hlpht=true&lpid=107&chn=ps

    • Uhmorphous

      The Backblaze staff is obligated to respond with its usual diplomacy; I, however, am under no such obligation.

      It might be helpful, before you spew vitriol all willy-nilly, to get your facts straight. An apology would certainly be civil, but given the level of pure bile you’re willing to smear around for no legitimate reason, I’d say that’s a big negatory.

      • Milk Manson

        You forgot the undiplomatic part.

      • Tim

        These statistics from Backblaze don't indicate the firmware version of the drives, and all through 2011 they purchased hundreds of external drives whose firmware was designed for the external chassis, with aggressive APM.

        I feel Backblaze should disclose this.

        They should also make it abundantly clear that their statistics reflect a very SPECIFIC scenario. Nobody using Backblaze is actually going to use their own drives like Backblaze does: in high-vibration environments.

        In fact, all you did was dismiss the comment as not factual, insult it as "vitriol all willy-nilly", and ask him/her to apologize, without providing a single constructive criticism of his (scathing) post.

    • Bill

      "And I feel that Seagate has a good basis for a slander lawsuit against you because you FAILED to disclose this properly."

      1: [Courts] distinguish between spoken defamation, called slander, and defamation in other media such as printed words or images, called libel.

      2: To constitute defamation, a claim must generally be false.

      • Tim

        I don't think this guy's comment is improper. Backblaze should disclose that they did in fact shuck a large portion of their drives over many months in 2011. All of these drives have different firmware than their internal counterparts, often with more aggressive power management parameters, causing violently increased load/unload counts if used in a 24/7 environment.

        It's also important that Backblaze disclose that most of their drive failures directly correlate to their environment. No consumer-class drive is built to withstand the vibration of dozens of drives. For example, I believe the WD RE4, Red, and Red Pro are rated for 8-drive enclosures. Since Backblaze doesn't use enterprise drives, to keep costs down (which is commendable), they must realize this will result in a substantial increase in failure rates. I just wish they would reiterate this more often throughout their reliability surveys.

        Backblaze's experience with drive reliability in no way reflects the real-world use of these drives in consumer applications. That's just simply a fact. These reliability surveys just don't apply to us, and really don't have a reason to exist other than pure interest in THEIR data center statistics. No other company in the world publishes drive reliability surveys of their data centers, for this very reason: they simply don't apply to anybody who isn't running a data center.

        • Bill

          Backblaze’s experience with drive reliability in no way will reflect the
          real-world use of these drives in consumer applications.

          Of course it does.

          • Tim

            Really? So how many 72-drive pods are you operating in your office?

          • Bill

            Zero. How many rocket-powered sharks do you have in your office?

          • Tim

            That doesn’t apply to the article.

          • Bill

            Correcto.

          • Tim

            You're getting ahead of yourself. You stated the data applies to consumer applications. Backblaze isn't a "consumer application."

            NAS drives are rated for "bays" in capacities of 4-12 drives, and enterprise drives are good for 16-24 bay storage arrangements.

            Backblaze pods are 72-bay. No consumer-class drive is engineered for that kind of environment; therefore a consumer looking at this data shouldn't consider it when purchasing a drive or two for their home PC or NAS.

          • Bill

            You stated the data applies to consumer applications.

            Yep.

            Backblaze isn’t a “consumer application.”

            Yep.

          • Milk Manson

            So how many of the tested Backblaze drives have their power cord yanked in the middle of a read/write, then literally get thrown in a book bag still spinning, then bang around in a hot or cold (or hot and cold) car for a while, then literally get dumped back out of the bag onto the floor and plugged into a different computer? Then the reverse. Every day.

        • Arduie

          > I don’t think this guys comment is improper. Backblaze should disclose they did in fact shuck a large portion of their drives over many months in 2011.

          They did disclose it. Approximately 6 PB of their drives were shucked:

          https://www.backblaze.com/blog/how-long-do-disk-drives-last/

          > All of these drives have different firmware than their internal counterparts and often have more aggressive power management parameters causing violently increased load/unload count if used in a 24/7 environments.

          [citation needed]

          > It’s also important that Backblaze disclose most of their drive failures directly correlate to their environment.

          They do. It’s pretty obvious that the data they gathered and presented results are from their data center, in their chassis. It’s up to the reader to decide if that data and conclusions are applicable elsewhere. You’re not being a very active reader if you didn’t understand that.

          > No consumer-class drive is built to withstand the vibration of dozens of drives.

          [citation needed]

          > Since Backblaze doesn’t use enterprise drives to keep costs down, which is commendable, they do realize this will result in a substantial increase in failure rates. I just wish they would reiterate this more often throughout their reliability surveys.

          They don’t need to reiterate it. They clearly demonstrated that in their environment, enterprise drives are not more reliable despite the extra cost. You just need to read up on it. https://www.backblaze.com/blog/enterprise-drive-reliability/

          > Backblaze’s experience with drive reliability in no way will reflect the real-world use of these drives in consumer applications. That’s just simply a fact.

          So what’s the alternative? Just blindly buy whatever drive?

          As far as I know (and please educate me if I’ve missed one), there’s no other mass studies of hard drives that have been released to the public, naming specific brand names and models. Google has a 2007 white paper on the topic, but like Backblaze’s, it’s based off of their data centers, plus they didn’t reveal names and models.

          While Backblaze’s data center doesn’t directly equate to your home PC’s usage, they have done one thing that’s super useful – gathered a statistically significant amount of data in an environment where the variables are relatively controlled.

          Do you buy a car based on EPA-reported mileage? You’re aware that the EPA does testing similar to Backblaze’s: controlled environments. The EPA puts a car on rollers and runs it through a pattern of acceleration/deceleration to determine mileage. That’s not 100% identical to driving on the street, so by your logic you should discard MPG ratings from the EPA…

          Realistically, you should use it to compare models. You can comfortably compare a Toyota Corolla’s MPG to a Chevrolet Spark’s MPG, since they were tested under the same conditions. Even though those conditions aren’t real world, they’re equivalent and relative to each other.

          Backblaze basically is saying the same thing. “Under the same conditions, HGST drives are more reliable than Seagate drives.” They’re not saying “In all conditions, these reliability statistics apply down to the exact percentage.” You’re a fool if you’ve thought otherwise.

          > These reliability surveys just don’t apply to us, and really don’t have a reason to exist other than for pure interest of THEIR datacenter statistics. No other company in the world does drive reliability surveys of their data centers for this very reason: they simply don’t apply to anybody that isn’t running a data center.

          Of course Backblaze has a very vested interest in determining hard drive reliability. Their entire business is built on data storage.

          You clearly have never worked on the tech side of a company that deals with hardware. Determining failure rates is a *huge* part of business for all such companies. Any company that deploys more than a few thousand hard drives should do this, and we know for a fact that Google does reliability surveys, as they released a white paper on the subject in 2007. I also know that most PC manufacturers do this, as I’ve worked in the sector since the mid 1980s.

          The unique thing about Backblaze is that they’re pulling back the curtain so we can see the results. They’re not afraid of stepping on the toes of the drive makers, like everyone else is. This should be commended, not discouraged by armchair quarterbacks like yourself.

          • Hectic Charmander

            Thank you, good sir, for mustering up the energy to rebut these ridiculous assertions. I wish I could up-vote this 100 times. Why people make these fantastical leaps about what Backblaze is supposedly conveying is beyond me.

            The data is what it is and nothing more.

          • Ulaganath krishnasamy

            I see that even an HDD needs to be maintained at an optimum temperature.

            I also see that the number of IOPS and optimal power are directly related to a disk’s lifespan.

            Any device is meant to run for a certain time frame, and by no means can we say it will die at such-and-such a time; there are cases where a drive exceeds its expected lifespan, and cases where it doesn’t live up to even half the estimated time frame.

            As far as I can see, it’s the type of device that determines the lifespan.

            Nowadays even an SSD can keep failing for no reason, and at the end of the story all the data is lost.

            I hope in the future we have reliable storage with options other than the magnetic drive – a tenfold increase in speed and lifespan, with theoretically no limit on storage.

          • Tim

            My problem isn’t with transparency, per se, but Backblaze should keep these statistics to themselves like everyone else in the industry does, because the data only applies to them and their competitors. The industry’s lack of transparency is earned when 99 out of 100 people look at these charts, laugh at Seagate, and go buy a competitor’s drive for their home PC or NAS based on them.

            The usage scenario applies to nobody on this forum unless you actually work at another data center that uses custom pods. Industry-standard drive cages range from 8 to 24 drives, and there are NAS- and enterprise-specific drives engineered to run in that type of environment.

            The two things I would like to see are the firmware/APM values of the drives in Backblaze’s tests, and a comparison to a cluster of enterprise drives that are more robust (dual-axis spindle bearings, larger loading ramps, or disabled head parking, like the Ultrastar, WD RE, or Seagate Constellation).

            Then the data would actually become meaningful for a home/business user to base a buying decision on.

          • Chris

            @Tim – I work at another datacenter, and love this data.

            @Backblaze – Thank you for sharing this information!

          • John Smith

            So your argument is: Seagate drives are built to a lower (not over-engineered) standard compared to other brands in the same category, but if used in the way, and ONLY the way, that the manufacturer specifies, the failure rate will not be so high?

            Your argument only reinforces the issues the data alludes to. The wheat separates from the chaff when thrashed.

            Across all brands they use a similar class of HDD; NONE of them are meant to run in this type of environment. The data is very valid and valuable. All have been tested outside their specifications and limits, all pushed past normal use. ALL of them are in the same environment.

            EVERY single data center should be releasing this information publicly, and it should be collated by a 3rd party with no vested interest in the outcome, with an addendum of real-world personal-use HDD failure rates/repairs/returns. If they started doing so, it would be a good bet that the manufacturers who consistently showed higher failure rates would rework their production lines to ensure better QC and deliver higher-quality, more consistent products to all customers.

            So Seagate might be good for what they are specifically rated for, but may not be the best in class under the conditions this data was collected from. If a drive will be used outside its specifications, then another option should be used. What’s wrong with that? Isn’t it backing up the claims Seagate themselves make with the specifications on their product? It’s not the competitors’ problem that their products perform above their stated specifications or might be over-engineered.

            It doesn’t mean Seagate won’t sell in the personal market. The majority are looking for the cheap daily driver to take to the shops and back twice a week, never going over 20.
            I hardly think that type of consumer is going to bother researching enough to find this blog, let alone read the data, for it to have an impact on what he or she buys for the home computer.
            I’ve got 4 Seagate external HDDs and they are fine for the use they get: plugged in for backups or to reload some old data every 3-6 months, then shelved again.
            My internals have always been Hitachi, with zero problems, up until getting talked into a Samsung SSD, which has just had a fit after only 9 months.

            Or maybe you would have us believe that the data quoted in this blog is a conspiracy against Seagate, and that all the other brands’ drives in the data sample were mounted in individual, vibration-dampened, air-conditioned enclosures while the Seagate ones were sandwiched together 50 deep and left out in the open in the parking lot to vibrate themselves to an early death?

            Or the reality: ALL the brands’ HDDs went through Top Gear for hard drives, repeatedly thrashed by The Stig (Backblaze), and now we get to see the leaderboard. To top it off, they didn’t just test 1 of each, but 1000s!

            As for the comparison with enterprise drives, GREAT idea!! Seagate should pony up some drives to run in the same center, in the same real-world conditions, alongside the existing drives Backblaze purchases for use. If Seagate really is that concerned about the findings, it would be money well spent on their part to set up this experimental comparison. It shouldn’t be an out-of-scope expense that Backblaze absorbs; after all, all they are doing is making public the data collected on their existing infrastructure.

          • Cephalopod

            The data is true, you don’t like it, so it should be censored? Unless further data is added to your personal satisfaction??

            I am so grateful for internet freedoms, and the inability of the likes of yourself to censor information or suppress discussions such as this one.

        • Milk Manson

          So you’re saying that on the average and in the real-world, the Seagate drives might swap places and in fact be the more reliable drive?

          Do tell.

          • Tim

            What I’m saying is that nobody should be surprised when a Seagate drive shipped in an external single-bay enclosure, running CC43 firmware with an APM value of 64, head-parks itself to death in a 24/7 environment or has its bearings shocked to death in a 72-bay enclosure. That’s all I’m trying to get at here. I was simply agreeing with “guest” on those principles.

          • Milk Manson

            Why would Seagate intentionally sacrifice durability, like ever?

        • In my case, the 3 TB Seagates took out a home RAID 1+0 after about two years, with 3/4 of the drives all failing together. Only anecdotal, of course, but that’s a consumer application. I’m betting it’s not a coincidence, and I will use different drives next time. Can you blame me?

          • Mark

            I’d suggest using drives from both manufacturers to avoid common failure modes in RAID implementations.

        • Phillip Remaker

          “No consumer-class drive is built to withstand the vibration of dozens of drives.” And yet, most of the consumer-class models that Backblaze uses do just fine!

          • Tim

            Considering that all of Backblaze’s drive failure rates are above 1%/year, yet manufacturers state the reliability should be an MTBF of 1.5 to 2.0 million hours, their failure rate should be closer to 0.08% per 10,000 hours (roughly 1 year).

          • Mark

            A 1-million-hour MTBF = 1 failure for every 1 million hours on a fleet. There are 24 * 365.25 = 8,766 hours per year. 8,766 hours/year / 1 million hours = 0.876%, which is fairly consistent with the numbers reported, i.e. the ~1%/year.
            The empirical data would appear to indicate that Seagate and WD (but not the former IBM/Hitachi-division drives) are slightly fibbing on the MTBF figure. Remember that MTBF is only valid over a drive’s specified “service life,” which is typically 3-5 years. Above that, all bets are off.
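
            For anyone who wants to replay this arithmetic, here is a minimal Python sketch; the constant-hazard assumption is mine, and the MTBF values are just the manufacturer figures quoted above:

                # Convert a quoted MTBF into an expected annual failure rate,
                # assuming a constant (exponential) hazard. For hours << MTBF,
                # 1 - exp(-t/MTBF) is approximately t/MTBF.
                import math

                HOURS_PER_YEAR = 24 * 365.25  # 8,766 hours

                def annual_failure_rate(mtbf_hours: float) -> float:
                    """Expected fraction of a fleet failing per year."""
                    return 1.0 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

                for mtbf in (1_000_000, 1_500_000, 2_000_000):
                    print(f"MTBF {mtbf:>9,} h -> {annual_failure_rate(mtbf):.2%}/year")
                # MTBF 1,000,000 h -> 0.87%/year (the linear approximation gives 0.876%)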

          • Considering the fact that these drives are not meant to run 24/7, they are working quite well.

    • Laz

      I’ve had experience with WD and Seagate in a data center. Well, to be honest, I had loads of “fun” cloning Seagate drives which were about to die… Compared with WD, yes, Seagate is not so great at all. But with or without this blog I’d go for Hitachi, as always. Samsung: no comment. If someone shares their experience with something, that does not mean they should be sued for it. By your point of view, Amazon should be sued if they don’t close comments/reviews on a bad product. Amazing! ;)

      • Rudi Halbright

        Laz, I feel the same way as you regarding Seagate. I like the HGST drives for price/performance, but what about their significantly greater power requirements compared to the WD Red drives? It seems that, with sufficient use, that would make them more expensive, in addition to potential issues from heat buildup.

        • Mosquitobait

          Funny. My HGST drives always run at least 10C cooler than my Seagate drives, despite being sandwiched between them. I haven’t tracked power usage, but if they do draw more, it definitely does not result in them running hotter.

          • Rudi Halbright

            My apologies, as my message wasn’t clear. I meant to compare the HGST drives to the WD Red ones, which run cooler and use less power than the HGST drives.

        • Laz

          Well, it’s true, but power consumption is not an issue at all from my point of view. I was surprised when I saw this mentioned by someone else too. Take a look at this site: http://www.tomshardware.com/reviews/4tb-3tb-hdd,3183-15.html

          The power usage difference between Seagate and Hitachi is about 2.5-3W, which means you can’t really see this amount on your monthly bill at all. Regarding the cooling of data centers: they charge you by the power used by the box, which depends on the CPU/motherboard/memory, and not significantly on the hard drives.
          E.g. you have 4 HGST HDDs at 9W each = 36W, or 4 Seagates at 7W each = 28W. You save 8W, but you could need to swap out the drives twice as often as you would with HGST.

          The full power consumption of an HDD is usually less than 9W, but a CPU is usually 80-95W. I got myself a Dell R610 server which eats up around 130W running 3 VMware VMs with 3 x 2.5″ HDDs.
          So the HDDs’ power consumption is about 20W and all the other stuff 110W.

          So I think the power-requirements angle was just brought up by the marketing guys at Seagate, if you know what I mean… ;)

          • Rudi Halbright

            Laz, the Western Digital Red 4 TB drive uses 4.5 watts, half the amount of the HGST. 4.5 watts is not a big deal, but multiply that by the 40k+ drives used by Backblaze and that could be as much as 180,000 watts. Assume that the drives run constantly (never go into sleep mode) and are used actively most of the time, and that’s 24*7*52 = 8,736 hours and a total of 1,572,480 kWh. I don’t know how much Backblaze pays per kWh, but let’s assume a discounted charge of $0.06/kWh, and that would be $94,348.80 per year. Even if the drives are only used half the time, this is a lot of money and energy!

            (Note: this is based on the assumption that all drives are HGST or similar in power use, which is not the case; it also does not take into account the annual growth in drive count that Backblaze will see.)

            Doing the same calculation for my Drobo 4-disk array shows that I could save up to $35/year for each drive that is a WD Red vs. an HGST. So I get to decide between cost and performance: my Drobo, used for onsite backup of my photo library (I’m a photographer), favors cost, while my Pegasus array, used for active online storage, would get the HGST drives for speed.

            http://www.halbright.com
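
            To make the arithmetic above easy to check or adapt, here is a short Python sketch; every number in it is an assumption from this thread (the per-drive wattage delta, the fleet size, the electricity rate), not a Backblaze figure:

                # Estimated annual cost of the assumed 4.5 W per-drive delta
                # (WD Red vs. HGST) across an assumed 40,000-drive fleet.
                WATT_DELTA = 4.5             # assumed per-drive saving, watts
                DRIVE_COUNT = 40_000         # assumed fleet size ("40k+ drives")
                HOURS_ON = 24 * 7 * 52       # 8,736 hours/year, always spinning
                PRICE_PER_KWH = 0.06         # assumed discounted rate, $/kWh

                kwh_per_year = WATT_DELTA * DRIVE_COUNT * HOURS_ON / 1000
                cost_per_year = kwh_per_year * PRICE_PER_KWH
                print(f"{kwh_per_year:,.0f} kWh/year -> ${cost_per_year:,.2f}/year")
                # 1,572,480 kWh/year -> $94,348.80/year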

          • Mark

            HGST offers their drives in both 7200 rpm and 5400 rpm versions. Most of the power delta between the HGST drives and the WD Reds is eliminated if you simply select the 5400 rpm HGST model instead. Higher rotational speed almost always comes with an energy-consumption penalty.

    • If you have any reading comprehension at all, it’s pretty obvious that they are reporting the failure rates for their particular use. It’s factual reporting. What’s more, every posted question about their usage has been answered by YevP. Therefore, your entire post is obtuse at best.

    • tpcock

      As a huge Seagate fan (see my post above) I have to say your comment is way off the mark. I have seen quite a few Seagate drives in the 7200 RPM family, of all capacities, fail, whether internal or external. What in the world is the difference, other than one comes with a case and a cable to connect to a USB port? Open that case and you will find the HDD inside is the exact same model number as what you can purchase sans case to put in your PC, NAS, or your own enclosure (for DIY externals).
      If anyone should be accused of slander or some other form of undue or unwarranted criticism, I would have to say your comment is ripe for it, Mr. Keyboard Ninja Guest, hiding behind your neutral shield of identity.
      Nice lack of transparency!

  • Arcest

    I guess external USB 2.5″ drives are not included here.

  • What about SSDs?

    • I suspect that SSDs don’t have the capacity, or at least the storage-to-cost ratio, that mechanical drives have, for BB to invest in a large number of them. And even if they are used for some purposes, like boot drives, there may not be enough of them to derive meaningful stats. Please prove me wrong, BB staff.

    • Hi Doug! We don’t actually have any SSDs in our environment so we have no stats on them. They are a bit cost-prohibitive for us, but we’re hoping they get inexpensive enough for us to test with soon!

  • Chris Williams

    Confirms what I’ve known for a while now – Hitachi drives are better.

    • Western Digital is also quite good, and with bigger capacities.

  • disqus_SV4kXKmYMH

    Have you guys considered installing the drives in a counter-rotating pattern to reduce vibration? One of the RAID vendors claims this significantly lowers their drive failure rates.

    I’m curious if there is actually any truth to it over a large sample size.

    • Unfortunately we don’t have any stats on that. If we ever did run that test it was likely on a single pod, so not good for statistical analysis :-/

      • Justin Alberts

        I notice that you guys are using desktop-class drives – have you ever had a server running with enterprise-class drives (e.g., Seagate Constellation)? From what I’ve read, going desktop in a NAS-type device seriously degrades performance and drive lifespan. I’m interested in your experience. Tx.

        • Tsais

          They talked about this somewhere else. Basically, enterprise-class drives would make it hard to turn a profit in their business, and since their setup can handle drive failures without losing data, the failure rates of non-enterprise drives were less costly than paying the price of enterprise-class drives.

  • Interesting. I built a home FreeNAS a few years ago with Seagate 3 TB drives, 7200.12 IIRC, and saw 3 of them (out of 12) fail in well under 2 years… at this point I’ve had so many issues, it’s pretty much sitting there. I put 4 x 4 TB WD Reds into my older Synology NAS and have been using that without issue.

  • JJ Geewax

    Is there any way you could share the raw data on something like Big Query? GitHub did this a while back and it’s pretty nifty: https://github.com/blog/1112-data-at-github

    • Andy Klein

      Good suggestion; we’ll take a look at this as we figure out the best way to get the data to people. Our internal version was 3.5 GB, so making it more portable and easy to update needs to be addressed.

  • Josiah Carberry

    Can you share data about the distribution of the failures, rather than MTBF? MTBF is only really interesting if the failure distribution over time is more or less normal, and that is very probably not the case for devices like disk drives. After initial burn-in, I suppose the failure distribution looks more like an exponential curve. The point is that knowing the approximate distribution helps in planning a replacement budget, where MTBF does not.
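
    To make the point concrete, here is a small sketch comparing a constant-hazard (exponential) model with a wear-out (Weibull) model; the parameters are invented for illustration, not fit to Backblaze’s data. The two look similar early on but imply very different replacement budgets by year five:

        # Two failure models: constant hazard vs. rising hazard (wear-out).
        # Parameters are illustrative only.
        import math

        def survival_constant(t_years: float, afr: float = 0.03) -> float:
            """Exponential model: a flat ~3%/year hazard, memoryless."""
            return math.exp(-afr * t_years)

        def survival_wearout(t_years: float, scale: float = 10.0, shape: float = 2.0) -> float:
            """Weibull model with shape > 1: the failure rate rises with age."""
            return math.exp(-((t_years / scale) ** shape))

        for t in range(1, 6):
            print(f"year {t}: exponential {survival_constant(t):.3f}, "
                  f"wear-out {survival_wearout(t):.3f}")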

    • I second this. Do you not worry much about buying entire batches, or do you deliberately spread new purchases across different distributors to avoid batch rot?

    • We’ll be posting some additional data in the next few weeks! You might be able to gather some more interesting facts from there ;-)

      • Kim3Kat

        Josiah Carberry is raising a very interesting observation about these types of data. A proper analysis should use probabilistic statistical distributions if there is any intent of going beyond descriptions of the historical data to estimation of future service life and remaining life characteristics. I work with a firm primarily involved in that area of estimation and would love to analyze the data, mainly for academic reasons.

    • GP

      Here is some data from Backblaze’s posting last year; look at the 36-month survival rate chart:

      https://www.backblaze.com/blog/what-hard-drive-should-i-buy/

      • Josiah Carberry

        Thanks for this, GP. That chart is described as a “cumulative survival rate” chart. Do I understand it correctly if I say that the failure rate at any moment is reflected in the (negative) slope of the line? In other words, if the slope is 0, there are no failures; if the slope is steeply negative, as for the Seagate drives, there were a good number of failures at that time. Is this the correct interpretation?

        What happened after 3 years?
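
        For what it’s worth, the usual relationship is h(t) = -S′(t)/S(t), where S is the survival curve and h the failure (hazard) rate. So a flat curve does mean no failures and a steep drop means many, but the instantaneous failure rate is the slope divided by the curve’s current height, not the slope alone. A tiny sketch, with made-up survival values purely for illustration:

            # Estimate an annual hazard rate from two points on a survival curve.
            def hazard(s_start: float, s_end: float, years: float = 1.0) -> float:
                """Approximate per-year failure rate among still-surviving drives."""
                return (s_start - s_end) / (s_start * years)

            # e.g. survival dropping from 0.90 to 0.75 over one year:
            print(f"{hazard(0.90, 0.75):.1%} of surviving drives failed that year")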

  • qA

    “Currently we have 270 of the Western Digital Red 6 TB drives. The failure rate is 3.1%, but there have been only 3 failures.”

    Ummm… 3.1% of 270 is 8.37.

    • Dan Neely

      3.1% is an annualized rate. 3/270 is 1.1% of drives failed, but they all failed in the few months the drives have been in use. 3.1% is the expected fraction of drives to go bad after a year if they keep failing at the same rate as they have over the last few months.
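
      Working backward from those figures makes the annualization concrete; note that the average-service-time number below is inferred from the quoted 3.1%, not a published Backblaze number:

          # Backblaze reports failures per drive-year of service, not per drive.
          failures = 3
          drives = 270
          quoted_afr = 0.031                        # 3.1% annualized

          drive_years = failures / quoted_afr       # ~96.8 drive-years of service
          avg_service_years = drive_years / drives  # ~0.36 years (~4.3 months)
          raw_fraction = failures / drives          # ~1.1% of drives failed so far

          print(f"{drive_years:.0f} drive-years, "
                f"{avg_service_years * 12:.1f} months average service, "
                f"raw fraction {raw_fraction:.1%}")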