Hard Drive Reliability Update – Sep 2014

By Brian Beach | September 23rd, 2014


At Backblaze we now have 34,881 drives and store over 100 petabytes of data. We continually track how our disk drives are doing, which ones are reliable, and which ones need to be replaced.

I did a blog post back in January, called “What Hard Drive Should I Buy?” It covered the reliability of each of the drive models that we use. This month I’m updating those numbers and sharing some surprising new findings.

Reliability of Hard Drive Brands

Losing a disk drive at Backblaze is not a big deal. Every file we back up is replicated across multiple drives in the data center. When a drive fails, it is promptly replaced, and its data is restored. Even so, we still try to avoid failing drives, because replacing them costs money.

We carefully track which drives are doing well and which are not, to help us when selecting new drives to buy.

The good news is that the chart today looks a lot like the one from January, and that most of the drives are continuing to perform well. It’s nice when things are stable.

The surprising (and bad) news is that Seagate 3.0TB drives are failing a lot more, with their failure rate jumping from 9% to 15%. The Western Digital 3TB drives have also failed more, with their rate going up from 4% to 7%.

In the chart below, the grey bars are the failure rates up through the end of 2013, and the colored bars are the failure rates including all of the data up through the end of June, 2014.

Hard Drive Failure Rates by Model


You can see that all the HGST (formerly Hitachi) drives, the Seagate 1.5 TB and 4.0 TB drives, and the Western Digital 1.0 TB drives are continuing to perform as well as they did before. But the failure rates of the Seagate and Western Digital 3.0 TB drives are up quite a bit.

What is the likely cause of this?

It may be that those drives are less well suited to the data center environment. Or it could be that acquiring them through drive farming, by removing them from external USB enclosures, caused problems. We’ll continue to monitor and report on how these drives perform in the future.

Should we switch to enterprise drives?

Assuming we continue to see a failure rate of 15% on these drives, would it make sense to switch to “enterprise” drives instead?

There are two answers to this question:

  1. Today on Amazon, a Seagate 3 TB “enterprise” drive costs $235, versus $102 for a Seagate 3 TB “desktop” drive. Most of the drives we get have a 3-year warranty, making failures a non-issue from a cost perspective for that period. Even if there were no warranty, though, with a 15% annual failure rate on the consumer “desktop” drive and a 0% failure rate on the “enterprise” drive, the breakeven would be 10 years, which is longer than we expect to run the drives.
  2. The assumption that “enterprise” drives would work better than “consumer” drives has not been true in our tests. I analyzed both of these types of drives in our system and found that their failure rates in our environment were very similar — with the “consumer” drives actually being slightly more reliable.
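The cost comparison in point 1 can be sketched as a back-of-the-envelope calculation (a rough model only; the prices and failure rates are the ones quoted above, and warranty coverage and replacement labor are ignored):

```python
# Rough breakeven model for "enterprise" vs. "desktop" drives,
# using the prices and failure rates quoted in the post.
DESKTOP_PRICE = 102.0     # Seagate 3 TB "desktop", USD
ENTERPRISE_PRICE = 235.0  # Seagate 3 TB "enterprise", USD
DESKTOP_AFR = 0.15        # assumed annual failure rate, desktop
ENTERPRISE_AFR = 0.00     # assumed annual failure rate, enterprise

# Expected yearly replacement cost per desktop drive slot.
desktop_yearly_cost = DESKTOP_AFR * DESKTOP_PRICE  # $15.30 per year

# Years until the enterprise price premium pays for itself.
breakeven_years = (ENTERPRISE_PRICE - DESKTOP_PRICE) / desktop_yearly_cost
print(f"Breakeven after roughly {breakeven_years:.1f} years")
```

This simple model lands in the neighborhood of 9 years; the 10-year figure above presumably folds in additional costs, but either way the breakeven is longer than the drives are expected to stay in service.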

Detailed Reliability of Hard Drive Models

This table shows the detailed breakdown: how many of each drive model we have, how old they are on average, and what their failure rate is. It includes every drive model of which we have at least 200 units. A couple of models are new to Backblaze and show a failure rate of “n/a” because there isn’t enough data yet for reliable numbers.

Number of Hard Drives by Model at Backblaze

Model                                           Size    Number     Average Age   Annual
                                                        of Drives  (years)       Failure Rate
Seagate Desktop HDD.15                          4.0TB       9619        0.6          3.0%
HGST Deskstar 7K2000 (HGST HDS722020ALA330)     2.0TB       4706        3.4          1.1%
HGST Deskstar 5K3000 (HGST HDS5C3030ALA630)     3.0TB       4593        2.1          0.7%
Seagate Barracuda 7200.14                       3.0TB       3846        1.9         15.7%
HGST Megascale 4000.B (HGST HMS5C4040BLE640)    4.0TB       2884        0.2           n/a
HGST Deskstar 5K4000 (HGST HDS5C4040ALE630)     4.0TB       2627        1.2          1.2%
Seagate Barracuda LP                            1.5TB       1699        4.3          9.6%
HGST Megascale 4000 (HGST HMS5C4040ALE640)      4.0TB       1305        0.1           n/a
HGST Deskstar 7K3000 (HGST HDS723030ALA640)     3.0TB       1022        2.6          1.4%
Western Digital Red                             3.0TB        776        0.5          8.8%
Western Digital Caviar Green                    1.0TB        476        4.6          3.8%
Seagate Barracuda 7200.11                       1.5TB        365        4.3         24.9%
Seagate Barracuda XT                            3.0TB        318        2.2          6.7%
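The post does not spell out how the annual failure rate column is computed. The usual definition, and as far as I can tell the one used here, is failures divided by accumulated drive-years of service, which keeps young and old fleets comparable. A minimal sketch with made-up numbers:

```python
def annual_failure_rate(failures: int, drive_days: float) -> float:
    """Failures per drive-year of service, expressed as a percentage.

    A drive that ran for half a year contributes 0.5 drive-years,
    so models of different average ages can share the same scale.
    """
    drive_years = drive_days / 365.0
    return 100.0 * failures / drive_years

# Hypothetical fleet: 1000 drives run 180 days each, and 20 of them fail.
rate = annual_failure_rate(failures=20, drive_days=1000 * 180)
print(f"{rate:.1f}% annual failure rate")  # roughly 4%
```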

We use two different models of Seagate 3TB drives. The Barracuda 7200.14 is having problems, but the Barracuda XT is doing well with less than half the failure rate.

There is a similar pattern with the Seagate 1.5TB drives. The Barracuda 7200.11 is having problems, but the Barracuda LP is doing well.


While the failure rates of the Seagate and Western Digital 3 TB hard drives have started to rise, most of the consumer-grade drives in the Backblaze data center are continuing to perform well, and remain a cost-effective way to provide unlimited online backup.


9-30-2014 – We were nicely asked by the folks at HGST to replace the name Hitachi with the name HGST, given that HGST is no longer a Hitachi company. To that end we have changed Hitachi to HGST in this post and in the graph.


Brian Beach


Brian has been writing software for three decades at HP Labs, Silicon Graphics, Netscape, TiVo, and now Backblaze. His passion is building things that make life better, like the TiVo DVR and Backblaze Online Backup.
Category:  Cloud Storage · Storage Pod
  • ceb322

    any reason why Toshiba drives are not listed?
    my experience (but I don’t have 34K drives): SG=D, WD=C, Hitachi=B and Toshiba (3TB)=A

  • Harry

    Any data on 2.5″ external hard drives? I had a Seagate 4TB FAST 2.5″ drive fail in less than a year – I get a message that I have to reformat the drive because it can’t be read. Used on Mac OS X. My other Seagate 2.5″ form factor drives have lasted for years, but it seems like they have certain models that are more failure-prone. I paid a premium for a “better” product and got burned in the end. 4TB of lost data and no offer to help recover it.

  • geky

    What is meant by “Average age in years”? Does it include the dead drives, or is it the average age of the still-working drives?
    How is the yearly failure rate calculated? Is it an even distribution of the number of failures over the duration of operation before the HDDs failed?
    How can you base your opinion on reliability on such calculations?
    I.e. if I have a fleet of drives with a 7-year MTBF, it would be normal during years 5 and 6 to see many failures, perhaps in the region of 30-35% per year. It would not be a safe assumption to calculate an average rate of failure per year and also distribute it in years 1-4, where in these years the failure rate might be only 1-2%.

    I think your data is very valuable but we need a bit more insight to judge the failure rates objectively across the board.


  • CBuntrock

    These blog posts are by far the most interesting articles I have read in the past years. Really.

  • epfvng


    Hitachi Global Storage Technologies was founded in 2003 as a merger of the hard disk drive businesses of IBM and Hitachi.[2] Hitachi paid IBM US$2.05 billion for its HDD business.[3]

    On March 8, 2012, Western Digital (WD) acquired Hitachi Global Storage Technologies for $3.9 billion in cash and 25 million shares of WD common stock valued at approximately $0.9 billion. The deal resulted in Hitachi, Ltd. owning approximately 10 percent of WD shares outstanding, and reserving the right to designate two individuals to the board of directors of WD. It was agreed that WD would operate with WD Technologies and HGST as wholly owned subsidiaries and they would compete in the marketplace with separate brands and product lines.[4][5][6]

    In May 2012, WD divested to Toshiba assets that enabled Toshiba to manufacture and sell 3.5-inch hard drives for the desktop and consumer electronics markets to address the requirements of regulatory agencies.[7][8]

    Hitachi Global Storage Technologies Japan, Ltd. is the Japanese branch of Hitachi Global Storage Technologies, Inc.

    In November 2013, the company announced a 6 TB capacity drive filled with helium.[9] In September 2014, the company announced a 10 TB helium drive, which uses shingled magnetic recording to improve density.[10]

  • This would appear to match what I’ve seen working with… somewhat smaller numbers of drives. Still, over the years and with many clients, thousands. At my home office I have literal stacks of dead Seagate drives on my shelves, a large number being external enclosure drives like the ones Backblaze “reclaimed” during the HDD drought. For comparison, my 8-drive times 3-array setup (24 HDDs total) hasn’t had a Western Digital GreenPower (mix of 1TB and 2TB) drive failure in almost five years. All four of the Seagate drives I bought for myself in fits of desperation in the last five years are sitting dead on the shelf.

  • psychok9

    Are WD Red drives less reliable than Green?
    I’m shocked. I’m looking for the most reliable *consumer* internal HDD for long-term backup.
    Red, Green, Blue… Black?!

  • wow

    I don’t have as many disks as you take care of, but I have a lot of Seagates and so far none have broken.
    For example:

    87k hours, 0 problems.

    http://i.imgur.com/fr8zjlg.png (Seagate model, renamed to Maxtor)

    41k hours, 0 problems.

  • Chris Dunn

    Oh, I guess I was right about this: you did get paid by HGST to erase my message and waste my time. Yeah, this proves my point, not to trust you at all now lol.

  • Milos Ivanovic

    First of all I want to thank you for taking the time to actually collate all this data and make it available for the general public at no additional cost. Few companies would bother going that far.

    It’s a long shot, and I would say unheard of in the industry, but it would be incredibly interesting to be able to get statistics at this magnitude on how failure rates correlate with the place of manufacture of the drives. For example, Seagate has “site codes” written clearly on the drive labels which specify the factory ID where their drives were manufactured. It would certainly mean something if it’s found that, for example, there is only a 1% annual failure rate for Seagate drives manufactured in Thailand (TK), with the remaining 15% in China (SU, WU).

    What do you think? If it’s not a total burden to add this additional piece of information, I think you may arrive at some surprising conclusions.

  • Chris Dunn

    How much did HGST pay this guy? I read his earlier review, before this one, and I was quite surprised with the results that Seagate and Western Digital have gotten even worse in failures, since I did suffer a T1 Seagate failure 5 years ago, got another one, and it has been fine for 5 years, knock on wood lol. But I did some researching, and found this HGST to be giving used drives to people buying new, and from what I found in the reviews, these had many problems, unlike what’s been shown and told in the review. I’m sorry, but even after this review, I will take my chances with Seagate and Western Digital; it just sounds too good to be true, and for many years I have read, listened, and been told to go the other way, only to find out I got screwed again because of a review that told me to buy the other guy. NO THANKS!

    • TenYearTexan

      CD – Thanks for your informed and enlightening opinion (though you’re a little light on exclamation points!!!) Based on your experience with two drives and your claims that BB takes bribes, I’ll certainly ignore backblaze’s meticulously tracked sample of tens of thousands of drives. I mean, it just makes statistical sense: two opposing opinions – to get the best result, split the difference. Thanks network news for showing me the way.

  • Dr Silicon

    I actually run a site that dwarfs Backblaze; you can’t give me anything but a Hitachi Enterprise class drive for ANYTHING anymore. Our failure rate across the board has been less than 1/2 of 1%. WD was always a c..pshoot on a specific drive and Seagates just always s…k. All our storage we run minimally for 8 years, just buying from the market after the 5-year warranty expires to keep all the RAID units operational.

    • TenYearTexan

      DrS – [site that dwarfs backblaze] … and yet, because your site doesn’t publish detailed disk failure numbers, Backblaze is the place people go to for information and (at least to me) your opinion is not worth much more than any other poster. As well as having no way to verify your stated claims as a heavy drive user, I also have no way to know how closely your use case relates to mine (despite the fact that I have a similar bias toward Hitachi, honestly. I also love WD RE SAS drives).

      Pity, that. If more companies published detailed info, the resulting flock of educated consumers would certainly, in the short run, drive up the prices of good drives. In the medium term, I believe that it would cause the hard drive industry to take note of the demand for more reliable drives and to move to fill this demand. The entire industry would benefit.

      Thanks BackBlaze.

  • A very insightful read!

  • Zachary P Nickey

    Here’s a question…why is there such a dramatic difference in failure rates of the same brand & model depending upon size? For example, I used tons of 500 GB Western Digital RE4 enterprise drives…they consistently would give a 5 year service life despite daily usage and tons of data moving about them. Then I purchased some 1tb models and the reliability was dramatically less. I did some research and found that I was the only one to observe this. Why are the differences so pronounced in this area?

  • Bill

    An equally important part of this matrix is where the drives are being purchased from. My experience has been that some vendors have a much higher drive failure rate than others, for the exact same model. I attribute this to improper storage, shipping, and handling of the drive. Another useful bit of information would be to separate out the short-term failures vs. long-term failures. Imagine, for example, if the drive was dropped in the warehouse before shipping. Likely, that will result in a drive that is either DOA or fails within a few months. As such, it would make sense to separate out the statistics for short-term vs. long-term, e.g. What are the odds my drive will fail in the first 3 months? What is the annual failure rate for drives older than three months? I’m not sure if 3 months is the right cutoff to separate long-term vs. short-term, but it seems a reasonable guess.

  • Donald

    So based on this data and the price of drives, which drive is giving you the best bang for your buck?

  • Peter Novák

    Mr. Beach, thank you for very valuable information.

    However, the test as it stands favors new drives that have not even had a chance to show their durability, and penalizes old drives that served well and eventually died after many years of service.

    Please, could you produce charts that would show a probability of failure per year of service? That would give a good picture of what durability pattern can be expected of any drive tested.

    For example, some users might buy a new drive every 3 years, according to their growing data storage demands. They seek a big drive that will serve reliably for that period, and don’t care about a longer lifespan. Others might need the most reliable drive for a 5-year period, and size is not so important for them.

    Some drives have been in your service just a few months, and this would be immediately visible in such a chart too. Users would be aware that the durability of the drive is not yet tested enough. In contrast, the chart you show here might lead to the conclusion that the 4TB Seagate drive is excellent, and might be a reason for some users to buy the drive. However, the data you list below mentions that the drive has been in use only 0.6 years on average. So there is no guarantee that results will not “explode” next year or so. Based on your data, I have a hard time choosing the right drive for me…

  • Ben

    I’ve read that HGST still has not completed the merger with WD yet as of Dec 2014 and exists as a completely independent entity at present. While I wouldn’t say that things would go downhill after the completion of the merger, I suppose it doesn’t hurt to buy the HGST drives right now before the completion.

    According to Forbes http://www.forbes.com/sites/greatspeculations/2014/12/29/how-will-hgst-integration-impact-western-digital/2/

    WD even decided to eat the $100k fine for not meeting certain regulatory requirements (I’m assuming that these are anti-trust related) in order to hurry the merger up.

    • MemphisIsaac

      Yes, as stated in that article “HGST still hasn’t completely merged with Western Digital”. But that just means the 2 entities haven’t yet actually integrated. However, the purchase was indeed completed back in 2012. In other words, WD does indeed own HGST. Furthermore, per the following WD press release ( http://www.wdc.com/en/company/pressroom/releases/?release=abccf3e5-0d6a-4e4c-879c-b7889f91f84d ) they confirmed: “May 15, 2012 – Western Digital Corp. (NYSE: WDC) today announced that it has completed its divestiture of certain 3.5-inch hard drive assets to Toshiba Corporation, as required by regulatory agencies that conditionally approved the company’s completed acquisition of Viviti Technologies Ltd. (formerly Hitachi Global Storage Technologies).”

      Therefore, it would seem the issue still stands related to WD acquiring HGST’s 2.5 inch drive business but divesting HGST’s 3.5 inch drive assets to Toshiba back in 2012. Thus, there still needs to be clarification from BackBlaze regarding the statistics being reported for HGST drives. (see my post from Nov. 29th, 2014 for additional details)

      • Ben

        Very interesting, I suppose the exact nature of any changes in the manufacturing process for HGST drives would probably be opaque to us; i.e. we won’t ever know if the merger has or has not affected the quality of manufacturing for HGST drives until we get the actual statistics reporting of HGST drives manufactured before and after the merger.

  • Bill McKenzie

    If you are looking for the HGST 2TB drive above (HGST HDS722020ALA330), don’t waste your time on eBay and Amazon. I’ve been through 2 and they were either refurbs or well used, but advertised as NEW.

    • Where do you buy new HGST drives that are truly new?

  • fUji MaNia

    @Backblaze : Thank you for sharing this information. No regular end-user or even manufacturers will be able to test over 25k different (brand and size wise) drives to generate these great statistics.

  • MemphisIsaac

    What is meant by “HGST (formerly Hitachi)” needs clarifying. It’s my understanding that Hitachi sold their 2.5 inch drive business to Western Digital and their 3.5 inch business to Toshiba. Based on the superior results in your January reliability post for Hitachi, I thought that was why you were also showing some new Toshibas within the mix. I therefore expected to see many more in this September update. However, it shows none. Furthermore, if I’m understanding the above sale correctly, more recent HGST drives would actually be WD mechanisms. As such, wouldn’t this essentially mean the HGST results are a hodge-podge of older Hitachi-based drives and newer WD-based ones? So, can you please clarify the HGST results? And also hopefully explain why you’re no longer showing any Toshiba (which I thought would more truly be “formerly Hitachi” rather than HGST) 3.5 inch drives? Thanks!

    • Me Be Square

      I was wondering the same thing, based on reports of people claiming “Toshiba took over Hitachi’s 3,5″ business.” I don’t think this is correct.

      Two things. Wikipedia states “To address the requirements of regulatory agencies, in May 2012 WD divested to Toshiba assets that enabled Toshiba to manufacture and sell 3.5-inch hard drives for the desktop and consumer electronics markets.[7][8]”

      This probably just means Toshiba got one factory or other. From what I heard, this was a factory hit by the big floods of a while ago.

      Second, HGST, which is the WD-owned company, also produces 3,5″ Deskstars. This means they didn’t sell the division, nor the IP I’d guess.

      • CBuntrock

        This topic also really confused me. So as far as I understand you correctly, the HGST drives above are still sold under HGST brand and the company HGST is still WD owned. ONLY a few factories have been given to Toshiba? Because some articles say the 3,5-Business(!) has been given to Toshiba (not just factories!)… Confusing! Business includes Research and Knowledge etc. for me. Important factor! Beside this, I still buy Toshiba drives in order to keep some competition.

    • Ryan

      Perhaps I can help clear this up. I used to work for HGST during the WD transition. HGST was short for Hitachi GST (Global Storage Technology) when it was owned by Hitachi. We had a strong footing in the enterprise level SAS and consumer SATA products (and still very much do), and when we merged with WD, there was concern about the name change as all our customers knew us as “HGST”. So, the decision was made to keep the HGST acronym name, even though it technically no longer stands for “Hitachi GST”. That’s why you will see it as “HGST, a Western Digital Company”. Customers still know they are dealing with HGST, the same people and products as before; they are just now owned by WD. I am not aware of any dealings with Toshiba, and we do still develop consumer SATA products, although the enterprise stuff usually gets priority. Before Hitachi, it was IBM’s HDD company. HGST Deskstar products used to be IBM Deskstar products and were some of the most reliable drives on the market.

      We operated very much like our own HDD company after the merge. We actually never changed from the original IBM culture. The only thing I felt like we had in common with the WD company was HR and the benefits, which sometimes didn’t align, haha. While the companies had merged, we kept our own identity, culture, and products. That’s why you will see both WD and HGST brands. Just think of it as a name tweak, and nothing more.

      • dosmastr

        Care to comment on the 75GXP drives?

  • Steven Bergstrom

    Seems the Backblaze info is unbiased. Nothing wrong with posting the facts. Use them as you will. If the facts scare you

  • molocho

    Thank you for keeping us updated! This is the only long-term test of its kind I could find on the internet regarding hard drives available on the current market. Let’s just all hope that the bad-performing manufacturers don’t start suing you for this. Strictly speaking, the presentation of the results is not to be classified as scientific, but the quality of information is still above average for public use, and in this case that is what matters. In my opinion the quality of hard drives could be so high that they almost never fail. (I have worked as a material scientist for a nanotech research lab.) Anyway, it is an open secret that almost every manufacturer, no matter what product, tries to implement planned obsolescence into their products. From what I see, I assume that the nefarious engineering work of getting drives to physically fail after the guarantee period is over may either not be excellent, or extremely hard to accomplish. After the Russian IT specialist Vitaliy Kiselev discovered embedded failure code triggered after a certain lifespan in printer devices over 10 years ago, manufacturers may have gotten cautious about doing it this way. Informing the public about the -actual- quality of products is the first step to force manufacturers into producing higher quality hardware, rather than cheap gear which will soon make its way to dumping grounds in countries like Ghana.

  • Calvin Dodge

    Thanks, Brian. Your initial post last year convinced me to mirror the 3 Seagates on my system, which meant my “home” files were safe when one of the Seagates died yesterday (it doesn’t think it has, but it’s VERY obvious that it has). The drive had about 20,700 hours on it, so IMHO it wasn’t old age that killed it.

  • Max

    And what about the WD40EZRX? Any data about the 4TB version of the WD Green?

  • C Bolton

    This is awesome information! Interesting that drive reliability doesn’t much factor into Backblaze’s purchase decision b/c of the way they spread out the data. I think the data needs a bit more explanation to be useful to end-users, though.

    I think the questions that end-users want to know is “How to get the best use out of a drive?” and also “When should I invest in a new drive?”

    The data as presented is a bit confusing and doesn’t answer these questions. For example, what exactly does the 9.6% failure rate of the Seagate Barracuda LP(ST31500541AS) that have been in service for 4.3 years mean? Does it mean that the drive has a 2.2% per year failure rate? Is that worse than the Western Digital Red (WDC WD30EFRX) which shows an 8.8% failure rate in 0.5 years or a whopping 17.6% failure rate per year?

    I’m guessing that there is an “infant fatality” phenomenon whereby a certain number of drives fail quickly. Then there’s the “wearout” stage, when drive failures start to accumulate as different pieces wear and then break. I guess these could be captured in some abbreviated ways.

    Someone else mentioned the idea that newly introduced drives may have a higher failure rate as manufacturers shake down the manufacturing process. This would be a challenging graph to create as you’d have to also record the manufacturing date, which is not necessarily related to the in-service date.

    A failure vs. time graph for each drive showing how many drives failed in say a 3 month period plus an aggregate failures per unit time summary might be a good way to get most people the data they need. Again, thanks a ton for putting this data on-line

    • mrez

      I think it shows the standard deviation. So for a drive with higher average age I think the failure rate would have higher confidence interval, but you also need to know the resampling rate. It means how many disks they had to test. So it works for me!

    • Peter Novák

      This is an excellent point. Any drive will die eventually. So drives that have been operational for many years may paradoxically show worse results than new drives that have not even had a chance to show their durability. It is more than visible even within the chart, where results from January and September differ significantly for that very reason.

      Please, Backblaze, can you do some charts on average time to failure too?

    • David Rawling

      It’s reasonably clear that it’s the annual failure rate – the percentage of disks that will fail in a one-year period (for appropriately bookended values of “will”). An AFR of 15% suggests that about 150 out of each 1000 will fail each year. This also means the percentages are directly comparable – 8.8% per annum is generally comparable to 9.6% per annum, regardless of the overall time period. I’m assuming relatively constant failure rates, as I seem to remember Backblaze saying they try to avoid infant failure and the resulting bathtub curve with testing.

      So the paradox is that the chance of a failure of a specific drive in 5 years is actually 56% and not 75% as you might think (you have to work out the likelihood of survival – 44% – first). Using this approach, the disks with an AFR of 3% have about an 86% chance of still working in 5 years, compared to the 44% of the disk above. That’s definitely worth considering.
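The compounding arithmetic in the comment above can be checked directly (assuming, as the comment does, a constant annual failure rate independent of drive age):

```python
def survival_probability(afr: float, years: int) -> float:
    """Chance a drive is still alive after `years`, at a constant annual failure rate `afr`."""
    return (1.0 - afr) ** years

# A 15%-AFR drive: ~44% chance of surviving 5 years, i.e. ~56% chance of failing.
print(round(survival_probability(0.15, 5), 2))  # 0.44
# A 3%-AFR drive: ~86% chance of surviving 5 years.
print(round(survival_probability(0.03, 5), 2))  # 0.86
```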

      • scheuref

        I also find those numbers hard to understand.
        First, the author uses a measure of ‘Annual Failure Rate’ without giving a precise definition of what he means by that.

        David suggests that this is the percentage of the total disks that will fail in a one-year period. If this is true then the numbers are misleading.

        For example, if you buy 1000 disks of model X in the same month, and let’s assume they have a mean life of 3 years (online 24/24), then some will die after only one year and some after more than 3 years, but a bigger share will die during the 3rd year.
        So in the 3rd year the statistics will give the wrong result that they are suddenly “unreliable”. That was perhaps the explanation for the 3TB disks from WD and Seagate that suddenly “lost reliability”…
        If you buy the same 1000 disks but, say, 334 in the 1st year, then 333 in the 2nd and finally 333 in the 3rd year, then the results will show much better reliability and would actually be closer to the truth.

        The published results here could be completely misleading, depending on the purchase rate and history…
        The author should, if possible, calculate the MTBF by keeping track of the lifetime of each specific disk when it dies.

        The published ‘Average Age in years’ is not the MTBF; for example, some models have 0.6 years…

  • nutjob2

    I can imagine it would be much harder to provide this, but age at the time of failure would be the most useful metric.

  • Mark C.

    It would be nice to see the raw data used for these statistics. I’m not sure I understand correctly… wouldn’t, for example, the Western Digital Red WD30EFRX with an Average Age in Years of 0.5 have an Annual Failure Rate of 50% instead of 8.8%?

    Also, out of the 776, how many of these drives were RMA replacement drives?

    It would also be nice to see when they started service and the purchase date. Maybe the manufacturer changed how the drive is being manufactured partway through the year, which could lead to higher failures.

  • GuitarJam

    WD used to be the best but not anymore. Just another once-trustworthy company that decided to cut corners to increase profits. They don’t realize this will hurt them in the long run. Since WD owns HGST you can bet their failure rates will go up. I’m glad I bought a Hitachi drive before they sold out. The best drive I ever owned. If somebody gave me a stack of free brand new Seagate drives I would either give them away or throw them in a dumpster. I wouldn’t even use them for backups. Running them would be a waste of electricity.

    • Mark

      I have entire machines full of 500gb and 320gb Seagate drives, closing in on 70k hours. No problems. The WD drives of that era were total garbage in comparison. Drive quality is cyclical. Some models are good some years, bad others. The key is to engineer solutions so you’re not dependent on the performance of a single drive. Services like Backblaze can be an important part of this for consumers and small business.

      • GuitarJam

        I bought 3 WD drives in 2003 and they never hiccuped. I had to retire them after 10 years. If they were 6 gb drives I would have kept them, but they were limited by the slow IDE interface. I only buy Hitachi drives now. I wouldn’t go near WD, Seagate, or Toshiba; they are junk.

  • Shaun Forsyth

    Love the information, guys, but this is a little tantamount to scaremongering; without other information we are unable to make really informed decisions. Even based on the information above, I would suspect the average life of a Seagate 3TB drive (one that gets past the initial 3-4 week failure zone) to be around 10 years in a home user’s desktop machine.

    I urge you to provide some more detail from the S.M.A.R.T data (as averages)

    – Start/Stop Count
    – Spin Retry Count
    – Power Cycle Count
    – Power-On Hours

    This will at least allow us to compare the data centre style use to home user use of the drives.

    It would also be good to see why and how you decide a drive has failed. Do you use S.M.A.R.T. to predict a failure and remove the drive before it’s really dead?

    Either way, I enjoy the posts and please keep them coming.

    • k_man

      Scaremongering? Don't you think that is a little harsh? These are comparative results across many drive models and data points, like any product comparison. There is a lot of good information you can use as you wish.
      But there is no need to be alarmed by these numbers. Well, unless you work for Seagate.
      In my case I haven't used Seagate for years (I use Hitachi instead) because I had been seeing the same issues with high failure rates.

      • Shaun Forsyth

        A little harsh maybe, but not everyone is as well informed or understands how these technologies work.

        I do have a, hmmm... dreaded, Seagate 2TB drive in my personal desktop computer, and while I grip my desk for dear life as I say this, it's working well with no problems. So for average users these drives should cause no issues. I would hate for people to avoid the brand based on the extreme use case that we are led to believe these drives are subjected to.

        On the other hand, I read these great posts because I too have servers in data centres with consumer drives, normally Western Digital RE drives (OK, not completely consumer). So it's a great source of knowledge for me, though I should be careful about using only myself as an example.

        • Matthew Austin

          I manage an IT help desk at a small college and let me tell you I’m scared to death of the Seagate 3TB ST3000DM001. In the past 4 weeks alone I’ve had two of them brought in to our help desk dead and just yesterday I decided to evacuate the data from my personal 3TB ST3000DM001 because the reallocated sector count was increasing steadily and throwing SMART errors in StableBit DriveScanner. MFG dates on the drives put them all about 1.5 years old at time of failure.

          So that anecdotal evidence, COMBINED with Backblaze noticing the failure rate shoot up to 15%…well that seals it for me, I would not touch a 3TB Seagate.

          • Stephen Sookdeo

            I have had this reallocated sector issue with most of our 1TB and 2TB Seagate DM001s in our servers and DVRs within under a year of use. I have come to expect that eventually all of them will do this; it's just a matter of time. Management didn't want to approve other drives, so I am stuck using them.

          • phonebanshee

            I ran into this article because I just had a ST3000DM001 die on me (good thing it was in a zfs pool). Purchased it in August 2013, RIP January 2014.

        • GuitarJam

          I would replace that Seagate immediately. I had a contract job where 7 out of 10 of the bad drives I pulled were Seagates. The recycle dumpsters were full of them. I looked at all the date codes, and they seemed to die in the 2 to 3 year span; a few made it to 5 years.

        • Tipografia Romania libera

          Make a full backup pronto.

        • Yah, well, my Seagate ST2000DM001 2TB drive just bricked after 1 year + 1 month. I’ve had good luck with Seagate drives for a long time, but I see the point. Think I’ll risk another one? When Seagate used to have 3 and 5 year warranties, that meant something. 1 year means something, too — Seagate doesn’t stand behind its product any more.

          • Sgt Pinback

            I have a few of the same model, ST2000DM001. My first one died on me today; I got just over 2 years out of it. I also had a couple of 5+ year old 750 GB drives die on me, both Seagate. Looking at my dead pile of 5 drives from the last year, all were Seagate except one WD RE3. Scratching Seagate off my purchase list permanently.

          • Well, I decided to give Seagate one more try. Replaced the dead M001 with another of the same. It's been about two months; so far, so good. If I lose one more, I'm done with them. While WD seems to fare better, I think drives are now so cheap and such commodities that quality has gone into the toilet everywhere. It's gotten so you have to back up your backup drives.

          • Pete Lorenzo

            I disagree. Some technologies are continuing to drop in price, but disk drives have not. I just ordered a replacement 4TB HGST drive for one I've had for 2 to 3 years. Sadly, the best deal I can find is actually $30 more than it was back then for the same specs. Some drives are slowly dropping in price, but this one and others have gone up.

        • Jim Anderson

          I'm with the rest of them when it comes to Seagate drives. In the last 5 years Seagate's quality has deteriorated a very significant amount. When they reduced their warranty from 5 years to 1 year, you knew it was bound to happen. I have had more Seagates fail than drives from any other manufacturer on the market right now.

        • Peter Novák

          I have been using Seagate Barracuda drives for 10 years, and up to about the 500 GB capacity point they were exceptionally reliable. So it was quite a surprise when my first 1.5 TB drive died almost instantly and without warning within 2 years, and the other had reached over a thousand reallocated sectors by then.

          The third drive is faring moderately, I might say, because it has worked for more than 4 years. It has reallocated more than 2000 sectors already, and although there is probably capacity for another 2000 available, there are already a thousand unrecoverable sectors. I conclude that sectors are no longer being silently reallocated by the drive itself (SMART); they are instead being exposed as BAD to the OS (read failed...). Even the SMART internal long test ends with a read error instead of reallocating the sectors. So I'm looking for an immediate replacement, and frankly, I don't know which drive it should be now.

          Another 3TB Seagate died unexpectedly after a mere year; another is still operational, though its reallocated sector count is rising quickly.

          I should note these drives have been almost always on, which is not a typical usage pattern per Seagate's specifications. However, the workload has not been too harsh: doing nothing for most of the day, with some 150 GB read and written on a daily average and negligible seek demands (backup drives).

          On the other hand, the older models (80, 120, 300GB) fared much better, even under much more demanding conditions.

          I see two typical patterns of failure.

          1. The drive begins dropping from the SATA bus. In this scenario, back up immediately! The dropouts will increase exponentially, making backup increasingly difficult, and in a few days the drive will definitely die. I assume this is an electronics-related problem.

          2. The reallocated sector count starts climbing rapidly. Once it reaches around 1000, the drive has probably entered the last third of its reliable life. Although the theoretical reallocation capacity might be 4000 or more, you can expect the reallocation rate to accelerate, and you may start to experience incorrectly read sectors. Either the OS will notice the read error, with all its consequences (you will be forced to do regular filesystem repairs and lose files), or sometimes sectors are read incorrectly in silence: the drive does not notify the OS of any read problem, yet the sector is read incorrectly and the file is corrupt. This is quite sneaky behaviour. If it's a movie, you might not even see a problem; if it's a JPEG or an OS file, it's a disaster.
          So you can still use such a drive for a weekly or monthly backup (if you just sync differences and power off the drive the rest of the time), but take precautions: don't expect any reliability, use the recovery-record capability of an archiver, or keep SHA checksums of the files so you can tell which ones are still intact and which are corrupt.
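          The checksum precaution described above can be sketched in a few lines of Python; the helper names here are my own illustration, not from any particular backup tool:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so huge files don't fill RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def checksum_tree(root: Path) -> dict[str, str]:
    """Map each file (by relative path) to its digest; diff two runs
    to find files a flaky drive has silently corrupted."""
    return {
        str(p.relative_to(root)): sha256_of(p)
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }
```

          Running this against the backup before and after a sync, and comparing the two dictionaries, flags exactly the files whose contents changed without the OS ever reporting a read error.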

          • Peter Novák

            Correction: the still-operational 1.5 TB drive has only 2.8 years of online time so far. The thousand unrecoverable sectors figure is from SMART attribute 187, Reported_Uncorrect.

          • Keirnoth

            I'll back this person's post about the 500 GB Seagate. Anecdotal evidence, but I have a Seagate Barracuda 7200.9 500 GB that I bought 9 years ago that is still going strong. I had to resort to using it as a boot drive because the low-quality first-gen OCZ Vertex 60 GB SSD I bought had its onboard controller die.

            I got the Seagate 3TB mentioned in this post from a 2013 Amazon Black Friday sale, and I'm currently running chkdsk /R on it because it's on the edge of dying on me. The OS just freezes, and files read VERY slowly from the drive. The drive lasted a total of 1 1/2 years, which seems to match up with everyone's anecdotal evidence of 3TB drives dying after a year or so. I got the infamous 1.5TB 7200.11 a while back, and I had issues with it on day 1 of purchase (the latter 750 GB of the drive wouldn't format properly).

          • Bernald Solano

            The 7200.9 series drives are good, really good actually. When I see people complaining, it's mostly about the later series.

        • Brandon Edwards

          Got 2x Seagate 2.5″ 500 GB external paperweights myself. They were less than a year old when they failed.

        • Lieane

          Seagate 2TB drive now starting to fail... just into its 2nd year after purchase as part of a custom home build in Nov 2014.

    • Tipografia Romania libera

      1. What formula did you use to get this 10-year average lifespan?
      2. The problem is: I don't use average hard disks, I use a particular one. And one external Seagate 1G FreeAgent GoFlex just died on me without any prior warning. So much for the average life. I constantly and consistently check SMART data just to make sure I will not have surprises like this. Of course, I have backups of all important stuff, in triplicate, but that is not the point.

      • Shaun Forsyth

        The average age of the Seagate 3TB (ST33000651AS) according to the graph above is 2.2 years. At Backblaze's 24/7 duty cycle, that is (24 * 365) * 2.2 = 19,272 power-on hours. For a home user's desktop, as mentioned in my post, I would expect an average on-time of around 5 hours per day (I know that as a technical person my machine is on all the time, but it's sleeping; I actively use it around 2.5 to 6 hours a day on average). So 19,272 / 5 = 3,854.4 days, and 3,854.4 / 365 = 10.56 years. Which is not bad, given that in my first post it was a guess based on my days collecting and servicing thousands of computers from businesses that went defunct.
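        As a sanity check, that arithmetic can be wrapped in a small Python helper (the function name and the 5 h/day figure are my own illustration, not anything from Backblaze):

```python
def datacenter_age_to_home_years(avg_age_years: float,
                                 home_hours_per_day: float) -> float:
    """Convert an average drive age accumulated at a 24/7 duty cycle
    into the equivalent calendar years at a lighter daily-use pattern."""
    power_on_hours = avg_age_years * 24 * 365    # 24/7 operation
    home_days = power_on_hours / home_hours_per_day
    return home_days / 365

# 2.2 years at 24/7 works out to roughly 10.6 calendar years at 5 h/day
```

        Of course, this assumes failures track power-on hours linearly, which (as replies below this comment point out) is itself debatable.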

        I take away from this that you believe the SMART data is not a good indicator of a drive's imminent failure. However, that is not what I was looking for; I was looking for averages from the SMART data as an indicator of failure.

        • Tipografia Romania libera

          No way, the failure rate of a mechanical drive is not linear after the burn-in period.

        • SonyAD

          It is my personal theory that drives fail due to spin-up and spin-down cycles, not because of hours in service.

          I only have anecdotal evidence of that, but it's pretty strong anecdotal evidence, with drives working reliably for 9 years or more (I usually retire them before they actually fail on me).

    • Haravikk

      I’m not sure about scare-mongering; you’re right that it can’t easily be used to make comparisons with different use-cases, as large scale storage use isn’t really the same as a home-user just trundling around on the internet now and then or playing a few games.

      That said, I’ve lost faith in Seagate entirely; I had a few Barracuda drives in one of my older machines that ran flawlessly for five years (and were still in perfect working order when I sold them, no sign of faults in S.M.A.R.T.), but newer drives have been abysmal (two lasted just outside the 1 year warranty then died without warning).

    • You should read the seminal Google paper on disk failure trends, "Failure Trends in a Large Disk Drive Population."

      It shows that SMART had relevant non-zero counters for only 56% of failed drives, and thus was predictively helpful only about half the time.

      However, for drives that had non-zero SMART parameters relevant to predicting failure (scan errors, realloc, offline realloc, and probational), the critical threshold for *all* of them was 1 (one). That's a huge deal. The research finds that if any of those parameters are non-zero, that drive has a 14 to 39 times greater chance of failure within the next 60 days.
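      The paper's rule of thumb reduces to a one-liner. The attribute keys below are illustrative labels I chose, not real smartctl output, so treat this as a sketch:

```python
# Sketch of the Google study's "any nonzero critical counter" rule.
CRITICAL_ATTRS = (
    "scan_errors",
    "reallocated_sectors",
    "offline_reallocations",
    "probational_sectors",
)

def at_elevated_risk(smart: dict[str, int]) -> bool:
    """Per the study, a single nonzero value in any of these counters
    correlates with a 14-39x higher failure chance within 60 days."""
    return any(smart.get(attr, 0) > 0 for attr in CRITICAL_ATTRS)
```

      The catch, as noted above, is the other direction: nearly half of failed drives never tripped any of these counters at all, so a clean result is no guarantee.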

      Other surprising results from the paper are that, after infancy, utilization rate is not strongly correlated with failure rate, and moderately high temperatures (35C – 40C!) actually decrease failure rate compared to cool or very high temps.

      Though drive models and characteristics have changed plenty since 2007, many of these findings are likely to be relevant still.

      • Edward Iskra

        The paper is from 2007, but its data was from 2005 and 2006, and reflects drives in service for up to five years. It includes data on 80 GB drives put in service in 2001! That's 14-year-old technology! The findings are more dated than you think.

        “The disks are a combination of serial and parallel ATA consumer-grade hard disk drives, ranging in speed from 5400 to 7200 rpm, and in size from 80 to 400 GB. All units in this study were put into production in or after 2001. The population contains several models from many of the largest disk drive manufacturers and from at least nine different models. The data used for this study were collected between December 2005 and August 2006.”

    • Marcus Franulovich

      It isn't scaremongering. Home user here with 8 Seagate 3TB drives in a home NAS environment. 4 have failed within 19 months. I've been doing this a long time and have never seen failures like this. The drives are rubbish. I wish this article had been out when I bought them; it could have saved me $1,500.

    • Peter Rajdl

      Last year I sold a FreeNas server with 5x Seagate 3tb drives. Within 3 months one drive failed. While it was being replaced a 2nd drive failed. Then 4 months later a SQL server with 4 of the same drives also suffered a failure.

      • David Wujcik

        That’s what you get for putting a SQL server on terrible spinning media…

    • Snuffo Laughagus

      I would have to disagree from personal experience from the past 5 years, Shaun.

      Among others that I don't remember, I have owned drives by IBM (1994), Quantum (1994), Western Digital (1995), Toshiba (1996), Quantum (1996), Seagate (1998), several IBM Microdrives (2002), Toshiba (2006), Seagate (2010), WD (2010), Iomega EGO (2011), Samsung (2012), 2 Seagates (2013), and finally WD (2015).

      Out of all the drives on this list, the only one that has failed in normal use and from old age was the 1994 Quantum, which had been used in a Windows 3.1 machine until its death in 2013 (it eventually stopped spinning). And I was able to retrieve all the data on it after freezing the drive, so it was just a matter of wear. Over 19 years, at an average of 4 hours a day, it gave great service. The 1996 Toshiba died in a similar manner after the laptop fell during an attempted robbery, but here as well I was able to recover all the data on it.

      The 1998 Seagate with its fluid bearing, a revolutionary concept at the time, super silent and fast (7200RPM) is still running.

      The 1996 Quantum “Fireball” died after a major power surge actually FRIED the motherboard of the computer in 1998 (the famous Ice Storm of January – a 25KV line fell and zapped everything in the area, computers, fax machines, TVs, microwave ovens…)

      The Samsung drives I use for backup show zero issues after 3 years as backup devices. They also have none of the head-parking issues the WD Greens have, and run cooler than the WD Elements I am testing now. However, one of the Samsungs failed after 2 weeks, developing bad sectors and getting noisy; the replacement is showing no such issues.

      The 2010 WD external fell on the floor a couple of times (OK, it’s carpeted and it fell only 2 feet or so), but shows NO issues whatsoever.

      The Iomega EGO fell several times on a carpeted floor from my desk as well – no issues, and it gets carried around every day in my briefcase. Yeah, never found out who made the drive, but it could well be Samsung, judging from the other 2 Samsungs that were sold as Iomega externals.

      TWO of my laptop Seagates went bad in a short amount of time. One I had to replace; it was in a 2.5-year-old laptop, still covered under warranty. The other, in my present laptop, is slowly wearing out, judging from its SMART data and temperature readings after about four years of use, and I expect to have to replace it shortly...

      This is real life data. It tells me two things:

      1) No brand is safe from failure. A good brand can eventually turn out bad products, such as Seagate. A brand that had problems with some products (the IBM 'DeathStar', the later Quantum/Maxtors) can prove to be very reliable with others (my early IBM laptop drive, my Microdrives).

      2) Statistically, my results correspond with the findings of this large pool of drives in the present study, whose data cannot be ignored.

      So I would agree with you that the poor results don't necessarily mean every Seagate drive is bad, nor that getting an HGST or WD drive guarantees freedom from problems. But statistics are statistics: they tell a story of risk under given circumstances for given drive models, and here they clearly show that larger drives, especially from Seagate, are simply less reliable than those from the other brands.

      Also, keep in mind that early versions of a given capacity drive are usually more problem prone than later revisions. NEVER buy the latest and greatest unless you enjoy being an unpaid beta tester!

      Now, if I had the luxury of choice and were not constrained by a budget, I would probably have gone with HGST drives myself and installed them in a climate-controlled environment on dampers. But this is the real world, and since I could get a WD drive for about half the price of an HGST, I figured that getting one WD, and another after thoroughly testing the first, was the more sensible decision.

      As for the head-parking issue with the WD Green drives, WD has published several utilities that significantly mitigate the problem: you can now adjust the drive's sleep timer and/or its head-parking behaviour. With the sleep timer set to None, my WD Elements 4TB now parks its heads perhaps once or twice an hour, which is quite manageable.

    • Dilbert

      The “Average Age” column in their table is the average age of these disks at the time of writing, not their life expectancy. What’s relevant is the annual failure rate, which is substantially higher for those drives.

      So, you have a *single* Seagate in your PC which is working fine, and you conclude that “for average users these drives should have no issues”? I had two, both died. Is that relevant for the average user?

      Backblaze has *hundreds* of these drives working perfectly fine. With a 15.7% annual failure rate, after a single year "only" 157 out of 1000 will fail. That's 843 drives working like a charm!

    • MaryReilly

      Not scaremongering. I have 20 Seagate 3TB drives, and in 2 years, 5 have failed completely while 8 others show varying degrees of significant degradation. Total crap. I am changing out my remaining drives for HGST drives. The added expense is worth the reduced headache and increased peace of mind.

    • dosmastr

      Are these drives ever doing more than an initial start?
      If it's a cloud cluster, the drives probably never stop, power cycle, or have to spin retry. Power-on hours, I think, they are effectively reporting in years, since the drives run 24/7.

  • Alex Chen

    Do you change default drive parameters in any way? For example, I know WD Green drives have a 5-second default sleep timeout. This would be very bad for data center use, I think.

    • Haravikk

      In a data centre use-case I don’t think a WD Green would ever get a chance to sleep, kind of defeating the main benefit of the drive (low energy consumption in infrequent usage scenarios). They’re really not suited to drive array usage.

      • herr_akkar

        That means that in a home use scenario where the drive is not in active use all the time, the life span could actually be dramatically lower?

        I have had multiple Greens die on me when left running for a few months.

        • Haravikk

          I was under the impression that Greens are intended for infrequent use, e.g. backup drives, so they get a chance to sleep for longer. But it sounds like the behaviour of WD Greens has changed a lot, so I'm not 100% sure anymore. All I know is that the drives I've used (actually, still use, since they miraculously still work despite some horrific S.M.A.R.T. values) aren't great for continual usage.

          WD Reds meanwhile (again, in my experience) are intended for constant use with low noise, heat and energy consumption, and they’ve done just that for me.

      • Matt Buford

        Well, I’m doing home NAS use and not DC, but…

        From what I understand, the WD head parking issue was limited to specific models of green drives. They certainly don’t all do that. In fact, some quick googling turned up people with red drives having issues with too much head parking too.

        I’ve had 9 WD10EADS 1TB green drives (the same ones in this Backblaze report) in my NAS (Linux MD raid6) running 24×7 for about 5.5 years now, I have not changed any settings, and I have not had any issues. It seems that at least this model of green drive does not sleep or park the head at any short enough interval for it to ever happen. I have 47,800 power on hours, 23 start-stop-count, 23 load cycle count, and 21 power cycle count. The start-stop-count seems to be the one that indicates head parking. Apparently somehow I parked the heads twice (outside of power cycles) in 5.5 years.

        Back when I bought them, 10 watts active was the norm for drives and these greens only used 5. This made greens seem an obvious great choice for home NAS since they spin slower (less wear and tear) and used less power (which also means less heat). I’ve heard the stories about head parking, but never experienced it myself. I’ve been quite happy with these drives. If I were ready to upgrade, I would probably go with greens again, if nothing else just because they’re typically cheap. Cheap, long lasting, and low power is a winning combination for my needs.

        Back in 2009, WD red didn’t exist. There was their normal line of drives at 10 watts, and the green line at 5 watts. So, I went green. Red drives were released in 2012. Red drives are very similar to green (“intellipower” RPM) and have almost identical power usage.

        If I were in the market to upgrade, I’d have to do some more research to be sure, but I think I’d likely end up just buying the cheapest option between red and green (which is usually green). I’d prefer red, but I suspect I’d never notice the difference, so price wins.

        • Haravikk

          Hmm, strange. I have two WD Green 1.5 TB drives and a single 1 TB drive, but if I don't use a script to keep them busy they'll park their heads around 10,000 times a day, indicating a delay of about 5-8 seconds. It's pretty ridiculous, in fact, but it doesn't seem like they came from one bad batch or model, as they were bought years apart. I've also seen WD Greens taken from media centre PCs with ridiculous start-stop counts, which shouldn't be possible for a system that's largely idle with bursts of streaming.

          Clearly not all WD Greens are created equal, so for that reason I don’t think I’d trust them regardless. But the WD Reds I’ve used have been nothing but smooth running with less power consumption than the greens. I guess I just assumed though that the greens were intended to park their heads quickly to spend their time idle, so keeping them running would be sub-optimal.

          That said, I’d still swear by the Reds for noise level; mine are almost completely silent except when spinning up initially, and they produce no audible vibration at all, even with a heap of them in a single system. I just wish I could afford to buy more of them to replace the older desktop drives I still have in my RAID.

          • Matt Buford

            Because it was never a problem for me I never bothered checking, but after your post I installed idle3-tools and checked out my own NAS drive’s idle timer:

            drivespace idle3-tools-0.9.1 # ./idle3ctl -g /dev/sdb
            Idle3 timer set to 80 (0x50)

            According to what I've read, that translates to 8 seconds. I wonder if the difference between our Green experiences is as simple as load patterns. On one hand, 8 seconds is a long time for no IO at all. In my case, I have the OS booting from the array, so things like syslog are running and reading/writing to the disks even when I'm not streaming a video. A quick check of my disks using "vmstat -d 8" showed no idle 8-second intervals while I watched. On the other hand, it is hard for me to imagine that, in all the years I've used these disks, I never had any 8-second idle times at all (including while booting from CD about to install the OS, right after the OS was installed before any server-type apps were set up, or during maintenance/upgrades, etc.).
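            For reference, the raw-to-seconds decoding (as idle3-tools implements it, to the best of my understanding) is simple enough to sketch:

```python
def idle3_raw_to_seconds(raw: int) -> float:
    """Decode a WD idle3 timer byte: values 1-128 count in 0.1 s steps,
    values 129-255 count in 30 s steps (the idle3-tools convention)."""
    if not 1 <= raw <= 255:
        raise ValueError("idle3 raw value must be between 1 and 255")
    return raw * 0.1 if raw <= 128 else (raw - 128) * 30.0

# raw 80 (0x50) decodes to the 8 seconds mentioned above
```

            So anything up to 128 stays under 13 seconds, and the long multi-minute timeouts people set to stop the parking all live in the 129-255 range.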

            Anyway, I guess for me, cost matters (especially when buying 5-10 drives) but I do need them to not have this issue. So, if I were replacing/upgrading today, I would probably still lean toward starting with green drives for no other reason than price, and then if they have this problem I’d simply return them and either try another model or give in and go with red. Of course, if reds happened to be the same price, or even close, then I’d simply start with them.

      • Alex Chen

        The web is full of stories of WD greens dying within weeks of server use, until sleep parameters are changed (making them effectively WD reds).

  • Noside Justin

    I wish I'd read this post weeks ago; my 3TB Western Digital Green failed within 1 week! It almost destroyed all my data.

    • grf

      HGST is now owned by Western Digital

      • Centaur

        HGST's mobile division is now owned by WD; their desktop drives are owned by Toshiba. So if you want desktop HGST drives, get Toshiba 3.5″ drives.

        • Sara

          I just bought an HGST Touro 4TB external hard drive thinking it contained a Hitachi drive. So what is in the enclosure: a Hitachi drive or a WD drive? I can't find any info about this.

        • Bill McKenzie

          I think you have this backwards. The 3.5″ drives are handled by WD now.