CSI: Backblaze – Dissecting 3TB Drive Failure

By Andy Klein | April 15th, 2015


Beginning in January 2012, Backblaze deployed 4,829 Seagate 3TB hard drives, model ST3000DM001, into Backblaze Storage Pods. In our experience, 80% of the hard drives we deploy will function for at least 4 years. As of March 31, 2015, just 10% of the Seagate 3TB drives deployed in 2012 are still in service. This is the story of the 4,345 Seagate 3TB drives that are no longer in service.

November 2011 – The Thailand Drive Crisis

In November 2011 Backblaze, like everyone else who used hard drives, was reeling from the effects of the Thailand Drive Crisis. Prices had jumped 200-300% for hard drives and supplies were tight. The 3TB drives we normally used from HGST (formerly Hitachi) were difficult to find, but we still needed to buy 500-600 drives a month to run our online backup business. The 3TB drives we were able to find in decent quantity were from Seagate and we bought as many as we could. We purchased internal drives and also external USB drives, from which we removed the enclosed hard drive. The model number of the drive, ST3000DM001, was the same for both the internal and external drives.

Here is a chart of our Seagate drive purchases from November 2011 through December 2012.
Seagate Purchases

Our New Reality in the Face of the Thailand Drive Crisis

Looking back on 2012, it is safe to say that if we did not purchase the Seagate 3TB drives, our business would have been dramatically affected. We estimated that our costs would have been at least $1.14 Million more, making our goal of keeping our price at $5.00/month for unlimited storage difficult at best. In other words, the ability to purchase, at a reasonable price, the nearly 5,000 Seagate 3TB drives that we needed during 2012 was instrumental in meeting our business objectives.

Beginning in January 2012, we deployed 4,829 Seagate 3.0 TB drives as shown below.

The Slide to Failure

We would expect the Seagate 3TB drives to follow the bathtub-shaped failure rate curve described in our study on hard drive life expectancy. Instead, the Seagate drives' failure pattern was quite different.

Failed Drives by Quarter
In annual terms, 2.7% of the drives failed in 2012, 5.4% failed in 2013 and 47.2% failed in 2014.

As of March 31, 2015, 1,423 of the 4,829 deployed Seagate 3TB drives had failed; that's 29.5% of the drives.
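As a quick sanity check on that figure, the cumulative failure percentage is simply failures divided by drives deployed (a back-of-the-envelope ratio only; the annualized rates in the chart above also account for drive-days in service, which this simple division does not):

```python
# Back-of-the-envelope check of the cumulative failure figure from the article.
deployed = 4829   # Seagate 3TB drives deployed beginning January 2012
failed = 1423     # drives failed as of March 31, 2015

cumulative_failure_pct = 100 * failed / deployed
print(f"{cumulative_failure_pct:.1f}%")  # prints 29.5%
```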

Drive Failure Replacement and Testing

Let’s take a minute to describe what happens when a drive in a Storage Pod fails. When a drive fails, no data is compromised since we distribute data redundantly across multiple drives. Simply, the bad drive is replaced and the system is tested and rebuilt. During the entire process, the data is safe and available for file recovery as needed.
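The "no data is compromised" claim rests on redundancy: data is striped across multiple drives together with parity, so the contents of any single failed drive can be reconstructed from the survivors. Here is a minimal illustration of the idea using simple XOR parity; Backblaze's actual scheme is RAID-style and more sophisticated than this toy sketch.

```python
from functools import reduce

# Toy single-parity scheme: data is striped across drives, and a parity
# drive holds the byte-wise XOR of all data drives. Losing any ONE drive
# is harmless, because XOR-ing the survivors reconstructs its contents.
data_drives = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data_drives))

# Simulate drive 1 failing; rebuild it from the remaining drives + parity.
survivors = [data_drives[0], data_drives[2], parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
assert rebuilt == data_drives[1]  # the failed drive's contents, recovered
```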

If, during the rebuilding process, a second drive fails, the data is migrated to another Storage Pod where it is safe and available, and the Storage Pod with the second failed drive is taken off-line. Once off-line, technicians go through a series of steps to assess the health of the system.

One of the health assessment steps can be to remove all the drives from the Storage Pod for testing. There are two different tests. The first test is similar to an "advanced" reformat and takes about 20 minutes. The second basically writes and reads all the sectors on the drive and takes several hours. Only if a drive passes both tests can it be reformatted and reused.
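In code, the triage described above might look something like the following sketch. The function and field names, and the decision flow, are illustrative assumptions, not Backblaze's actual tooling.

```python
from dataclasses import dataclass

@dataclass
class Drive:
    serial: str
    passes_quick_test: bool    # the ~20-minute "advanced reformat" style test
    passes_surface_test: bool  # the multi-hour write/read of every sector

def triage(drive: Drive) -> str:
    """Return 'reuse' only if the drive passes both bench tests, else 'suspect'."""
    if not drive.passes_quick_test:
        return "suspect"       # failed the quick test; no need to run the long one
    if not drive.passes_surface_test:
        return "suspect"       # passed the quick test but failed the surface scan
    return "reuse"             # passed both; safe to reformat and redeploy

assert triage(Drive("Z1F0XXXX", True, True)) == "reuse"
assert triage(Drive("Z1F0YYYY", True, False)) == "suspect"
```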

The Harbinger

The first sign of trouble was in May of 2013, when 27 drives failed. This was about 0.5% of the Seagate drives deployed at the time, a small number, but worth paying attention to. In June there were 25 failures and in July there were 29, but it was in July 2013 that the failing drives issue came to the forefront.

During July and August 2013, three Storage Pods, all with Seagate drives, had drive failures. In all three cases, each time a drive was replaced and the rebuilding process restarted, additional drive failures would occur. At this point all of the hard drives in each of the three Storage Pods were removed and scheduled for further testing. The Storage Pods themselves had new drives installed and went back into service.

The drives from the 3 Pods were removed and tested as noted above and about half of the drives from the three Storage Pods failed the first test. The remaining “good” drives were subjected to the second test and about 50% failed that test. The results were eye opening. It was decided that all of the drives from the 3 Storage Pods would be removed from service and not redeployed.

Over the next several months, Seagate hard drives failed in noticeable quantities: 31 in October 2013, 68 in November, 70 in December and the upward trend continued in 2014. The only saving grace was that in nearly all cases, once a failed drive was replaced, the system would rebuild without incident. Any time a drive gave the least sign of trouble, it was removed and tested. Failing either of the external tests meant the drive was removed from service and placed with the “suspect” drives. The “suspect” pile was getting larger by the day.

Hitting the Wall

The failure count continued to rise, and in the spring of 2014 we decided that if a Storage Pod with Seagate 3TB drives showed any type of drive failure we would 1) immediately migrate all the data and then 2) remove and test all the drives in the Storage Pod.

In July alone, 189 hard drives failed and another 273 were removed from service. The total, 462, was 11.4% of the Seagate 3TB drives operational on July 1st, 2014.

To be clear, a drive is marked “Failed” because it failed in operation or during a rebuilding process. Drives marked “Removed” are those that were removed from a Storage Pod that contained failed drives. When the “Removed” drives were tested nearly 75% of them failed one of the two tests done after removal. It could be argued that 25% of the “Removed” drives were still good, even though they were assigned to the removed category, but these drives were never reinstalled.

Digging In

The Seagate 3TB drives purchased from November 2011 through December 2012 were failing at very high rates throughout 2014. If we look at the Seagate 3TB drives deployed during 2012 here is their status as of March 31, 2015.

Seagate Drives Deployed in 2012
Only 251 of the 4,190 Seagate 3TB hard drives deployed in 2012 are still in service as of March 31, 2015. Breaking it down:

As a reminder, about 75% of the “Removed” drives failed one of the bench tests once they were removed from a Storage Pod.

Thoughts and Theories

    Theory – The Backblaze System

    The first thing to consider is whether this was a systemic issue on our part. Let’s start with comparing the Seagate drives to other 3TB drives deployed in 2012.


    Given that the drives were deployed into the same environment, the Seagate 3TB drives didn’t fare as well.

    Theory – Storage Pod 2.0

    A second thing to consider is the model of the Storage Pod. In 2012, the only Storage Pods that were deployed were Version 2.0, Version 3.0 was not used until February 2013. So all of the 3TB drives deployed in 2012 were installed in a Storage Pod 2.0 system. In the case of Seagate, the 3TB drives installed in 2012 performed reasonably well during the first and second years of operation; 2.7% of the drives in service failed in 2012 and a total of 7.7% of the drives deployed in 2012 had failed through the end of 2013. It was in 2014 that the drives seemed to “hit the wall.”

    Conversely, as noted above we also deployed 2,511 HGST drives, all into Version 2.0 Storage Pods. To date they have not shown any signs of "hitting the wall," with just 4.1% of the drives failing as of March 31, 2015.

    Theory – Shucking External Drives

    A third thing to consider was the use of “External” drives. Did the “shucking” of external drives inflate the number of drive failures? Consider the following chart.

    Seagate Internal vs External

    From January to June most of the drives deployed were internal, but the percentage of drives that failed is higher during that period versus the July through December period where a majority of the drives deployed were external. In practice, the percentage of drives that failed is too high during either period regardless of whether or not the drive was shucked.

    Adding to this is the fact that 300 of the Hitachi 3TB drives deployed in 2012 were external drives. These drives showed no evidence of failing at a higher rate than their internal counterparts.

    Theory – The Drive Itself

    This brings us to the final thing to consider, the drives themselves. The drives in question were produced beginning in Q3 of 2011. It was during this period that the Thailand Drive Crisis began. As a reminder up to 50% of the world’s hard drive production was affected by the flooding in Thailand beginning in August 2011. The upheaval that occurred to the hard drive industry was well documented. The drive manufacturers generally did not discuss how specific drive models were impacted by the Thailand flooding, but perhaps the Seagate 3TB drives were impacted more than other models or other vendors. One thing is known, nearly every manufacturer reduced the warranty on their drives during the crisis with consumer drives like the Seagate model ST3000DM001 being reduced from 3 years to 1 year.


While this particular 3TB model had a painfully high rate of failure, subsequent Seagate models such as their 4TB drive, model: ST4000DM000, are performing well with an annualized 2014 failure rate of just 2.6% as of December 31, 2014. These drives come with 3-year warranties and show no signs of hitting the wall.

Backblaze currently has over 12,000 of these Seagate 4TB drives deployed and we have just purchased 5,000 more for use in our Backblaze Vaults.

Andy Klein

Director of Product Marketing at Backblaze
Andy has 20+ years of experience in technology marketing. He has shared his expertise in computer security and data backup at the Federal Trade Commission, Rootstech, RSA and over 100 other events. His current passion is to get everyone to back up their data before it's too late.
Category:  Cloud Storage
  • kittentheboss

    My ST3000DM008 was showing warning signs. I was compiling shaders in cemu for botw.

  • Bryan

    I have one of these…and it has failed on me. However, I’m thinking/hoping it’s the chip board on top and not the drive itself. So I am seeking to replace that board if it means I can re-access my data and get it moved to another drive.

  • Dave DotNet

    yep, I've got 24 of these ST3000DM001 drives; all failed within 4 years, plus another 4 that got replaced on warranty. So no more Seagate for me for the next 10 years. That was not a cheap failure for my private SOHO use…:(((
    Thanks guys for such report!!!

  • Qualified Expert

    OK, what about the updated version of Seagate BarraCuda ST3000DM008 3TB ?
    It is highly rated in https://www.newegg.com/Product/Product.aspx?Item=N82E16822178994 and also https://www.amazon.com/Seagate-BarraCuda-3-5-Inch-Internal-ST3000DM008/dp/B01IEKG4NE ?

    Has anyone noticed any improvement?

    • Siu Loong Woo

      Saw the 008 version on sale for $103 on NCIX…. I’m curious as well. I bought 4 of these infamous 3TB drives 4 years ago for a RAID10 array. 5 died over the past 4 years.
      Most of the reviews for the 008 are relatively new on newegg/ncix. Memoryexpress has 1 review stating the drives died after awhile so it may be a sign to stay away from these 3TB drives entirely? But the $/TB is sooooo attractive on these drives haha.

      • Qualified Expert

        Cool, thanks for the sharing @siuloongwoo:disqus. I ended up buying HGST Deskstar 3 TB drive instead of Seagate.

        • Siu Loong Woo

          hahaha i bought 4x3TB baracudda 008 Seagate drives few days ago. I’ll try to remember posting here if any of my eight -008 Seagate 3TB drives die on me. :)

          • Topias Olavi Salakka

            Have any of them died on you? I've been looking to buy one but I'm not sure about reliability.

          • Siu Loong Woo

            Hi, knock on wood, all four 008 series drives have been doing fine so far. :)

  • Mike Gervasi

    Yup. Mine was a backup drive and it just bit the dust after only removal and reinstallation.

  • Libra rye Files

    Okay, now I've read many posts from people who bought the ST3000DM001 with a manufacture year between 2012-2013 and had a high chance of hard drive failure. But what about those drives manufactured in 2015 and 2016? Are they still reliable? I was leaning toward the ST3000DM001 for my desktop. Do you still recommend this model for 2016 or should I look for another one?

    • Derullandei

      You’ve got to ask yourself one question. “Do I feel lucky?”

  • Libra rye Files

    So this is 2016, and I need a 3TB drive for my home office. Did Seagate resolve these manufacturing issues/factory defects back in 2015 or not? Should I trust Seagate's ST3000DM001?

  • nothing like a failed Seagate. https://www.youtube.com/watch?v=iuue2RnkMeQ

  • Yuki_Sakuma

    I have this 3TB drive purchased May 2015 and as of now still no bad sectors BUT it has read errors! I just spent my money recently on a 4TB WD Black to add extra storage space. I'll be damned if this starts to show bad sectors sooner. I had a 1TB Seagate that failed after 3 years of use, purchased 2011. Now my friend's 2TB Seagate started showing bad sectors and according to HD Sentinel is at only 15% health! After only 2 years of use! I will go full WD starting from now.

    • Derullandei

      Interesting. None of my ST3000DM001s ever displayed any SMART errors before failing.

  • Wes

    YES, finally some validation of the experience I had. Ordered 8 of these drives, 4 were DOA. Had those 4 replaced. 2 failed within the next month; they were replaced. Over the next year, 6 failed. I now sit having just lost my last Seagate drive. If you're keeping track, that's 20 failures out of 8 original purchases, a 250% failure rate since 2012.

  • rotary_rasp

    Bummer! I just noticed my Barracuda 3TB has just failed! In the bin it goes. Time to make a backup on the last remaining Seagate 3TB HDD in my media server before it is too late! No more SEAGATE for me!

  • mhbgt

    My ST3000DM001 died yesterday! (Manufacture date 02/2012)
    Where do I sign up to join the club?

  • Austin Bailey

    Looks like things have gotten ugly with Seagate.

    Seagate slapped with a class action lawsuit over hard drive failure rates

  • Steve Rand

    Just came across this article by chance. Of the 8 disks I have at home doing various things in one server, the one disk that has died over 6 years is exactly this Seagate model. It would spin up, tick and R2D2 then spin down. I tried all the usual tricks but eventually gave up on it.
    I took mine apart and the fluid-filled bearings had leaked or exploded all over the platters and heads etc. The platters were completely covered in a translucent grey tacky substance which I believe is dried up fluid.
    Would be interesting to see what the other disks looked like.

    • Derullandei

      I had one fail earlier today. Will disassemble and have a look.

  • Faux Grey

    Wish I had seen this article sooner. Recently lost quite a bit of (albeit unimportant) data on my array built from Seagate ST3000DM001. Hoping for better luck after replacing it with Toshiba disks.
    Going to keep far away from Seagate for the time being.

    • I own 7x Toshiba 3TB 7200RPM disks, some of which are in a 24/7 RAID-5 array. Only one failed and that happened within a couple of days after unboxing; the replacement and the other 6 drives show no signs of failure or ever stopping. Good choice!

      • Milk Manson

        Are those internals or shucked?

        • They’re retail-bought internal drives. I don’t shuck. I want to know what drive I am getting to make sure I get 7200 RPM rotation. Many external drives are slower because of lower rotation speeds.

  • Malte

    Same happened to us. We operate two Synology 1812+ NAS, both as RAID 6 arrays. So far, 5 of 8 Seagate 3TB drives have failed (the other 8 are WD REDs).

    We were in contact with Seagate representatives who showed no sign of goodwill (see here: https://plus.google.com/u/0/b/104655503559564193969/+Methodenlehren/posts/f613NuXm1w2).

    It is fair to say that we’ve had it with Seagate. We informed our IT department about not buying Seagate drives in the future. As will I recommend “not Seagate” to anyone asking.

    • Derullandei

      Same here. Someone asks me what drive they should buy, I usually answer “anything that’s not Seagate will probably be OK”.

  • Sam Cochran

    I have two ST3000DM001 internal HDDs running in a RAID-1 configuration within my NAS. Having used Seagate drives since the days of MFM/RLL/SCSI in the 1980s, my experience with Seagate has always been very positive. Unfortunately, the first ST3000 failed with less than 10K hours and the second around 18K hours. Both drives were in the same lot number and serial batch, so I too suspect the issue is systemic within a particular production run or runs from the supplier. Seagate provided me with RMA information and claims the warranty is still valid, although these two drives are well over the one year “standard” warranty period.

    My concern is with the replacement drives Seagate will be sending me – has anyone had an issue with a refurbished drive returned as part of an RMA within, say the last 90 days or so?

    Thanks in advance!

  • cnlson

    @YevP:disqus is there any difference in the failure rate on the 3TB drives due to their age? For instance, I have a 3TB (failed twice) that was purchased in 2012 (replaced in 2014) and an Oct 2014 drive. Should I expect the same poor lifespan on the 2014 drive as the 2012? Or rather, would you expect the same poor lifespan?

    • Probably not no, our newer 3TB drives are performing pretty well! It was likely a large bad batch, so hopefully that’s behind them!

      • cnlson

        I noticed that the new drive shows made in China instead of Thailand or Malaysia. Maybe they spun up a new location 3yrs ago and something like that Michael Douglas /Demi Moore movie happened

  • aaronwt

    I just purchased six of these today while they were on sale for $75 at Newegg. Hopefully I don't have issues. I have around thirty of them in use in my unRAIDs that I purchased in 2012 and 2013. One had an issue before it was put into service so I exchanged it. And recently I had a couple that had some reallocated sectors (26 and 600), so I replaced them. But so far I've had nothing catastrophic happen with any of them. So hopefully the ones I ordered today are fine as well.

    • Stoatwblr

      Apart from the early failures, I’m finding that DM001 family drives have a tendency to fall off the bus when driven hard. The most recent one to exhibit the trait had less than 1000 hours on it (it was a warranty replacement for a warranty replacement.)

  • Dave Keller

    I get that these are consumer-grade drives and not really rated for enterprise storage. That said, my friends and I bought 4 of these in total to use as storage in our gaming towers. They were on sale last year (2014), so we bought them around the same time. As of now, 3 out of 4 have failed, and I suspect the 4th has survived only because I haven't used my desktop much over the last several months! The 3 drives all failed with different symptoms: one started to develop read errors, one just stopped being detected in the BIOS, and the third shows up but the platter doesn't spin up! I went as far as swapping the controller board from one to the other to try to access the data on one drive, but the fault just moves with the controller board!

    • Stoatwblr

      “the fault just moves with the controller board!”

      THAT is valuable information, probably more so than you realise.

      It points to ROHS issues or similar. Have you checked for tin whiskers or poor reflow solder jobs?

      • Dave Keller

        The boards look clean, but at that scale of circuitry I need some pretty powerful magnification to get a good look. There are some minor differences between the way a couple of the chips were heat-sinked to the drive chassis! As a last attempt I took the board off the one drive that was still in working order and swapped it with my friend's drive that failed detection; the drive now detects but I get an I/O error trying to access it, so I have deemed it a lost cause!

  • fhturner

    Anyone? Bueller? ….Do we assume the entire run is suspect, including those on sale today, or have QC issues been addressed in the intervening 2+ years?

    • No, it looks like the issues have been resolved and most still on the shelves are good to go!

      • Brett

        You think the current ST3000DM001’s are decent? Seeing them for 90 bucks is so enticing since I purchased four early on for about 210 each back then.

        Since January 2012, three of the original 4 are still in my machine but I had one fail, and then the RMA replacement fail as well. Then I had two 3TB usb 3 externals, both had ST3000DM001’s both failed, RMA’d, then failed again.

        As much as I would hate to go back to seagate, 3cents per GB is really hard to ignore.

        • Bill Rookard

          The thing is, you can look around and find the HGST 7k3000 enterprise grade drives for a bit more ($150ish) with none of the reliability demons.

          • aaronwt

            That is twice what I paid for the the ST3000DM001 drives I purchased today.

        • Derullandei

          Losing your data has never been cheaper!

      • Stoatwblr

        The ST2000DM001s I have which are failing on me would indicate otherwise (all shipped within the last 6 months)

  • hereiam2005

    A post mortem on the failed drives (and the removed ones) would be nice!

  • Faslane

    Not just Seagate. I lost TWO 3TB external My Books in 3 months also, and they were under normal use of simply dragging and dropping files as backups were needed… very strange indeed. My 4TB Canvio has been stellar since day one, now about a year old, and when I run tests on it (file structure etc.) it passes with zero issues.

  • Fresaa

    The 4th of my 6 is failing right now, and the fifth is showing read failures. It's like the IBM Deathstar all over again, and the consumers will take the hit. Good going, Seagate.

  • fhturner

    So, are these compromised drives still being produced/sold? The stats in this article show purchases from Nov 2011 to Dec 2012. Are ALL ST3000DM001 drives bad? Reason I ask is, Newegg has a sale on 3TB Seagate 7200.14 ST3000DM001 today, $75. Tempting, but not knowing if anything has been corrected between these models on sale today and those sold/built 2-3 years ago gives me serious pause.

    Do we assume the entire run is suspect, including those on sale today, or have QC issues been addressed in the intervening 2+ years?

  • Johnson Lee

    I have 2 3TB Seagate drives which failed one after another in the previous few weeks. The failure rate is unreasonably high, and that's how I found this website.
    When my 1st drive was about to fail (extremely slow, sometimes not recognized), I backed up my important data to another drive, which I did not realize was also a 3TB Seagate. It just went dead suddenly without any sign. Fortunately, I can still recover my data from the half-dead one.
    Seagate should have recalled the whole series!! If you still have one operational, STOP using it!!
    I will not buy Seagate products anymore!

  • Clayton Moore

    I have a LaCie RAID that had two such hard drives, and both have failed at this date. The one replacement which works today has the same serial number series, which scares me a bit. I'm about to send the entire box back and wonder if I can request another serial number series.

  • Necessitas

    I had just one ST3000DM001 in my personal system, 7 other unrelated mostly WD (internal and external) drives comprising 16TB all inclusive. The ST3000DM001 failed after 4 weeks, it was barely ever used; I only made disk images on it about once a week to back up other drives.

  • Stuart Brown

    I have a Synology box with four of the ST3000DM001 drives and have had three of them fail within two years.

    As I get each replaced, I go with a WD Red drive in the Synology and move the replacement to my desktop. Very frustrating – I used to be quite a fan of Seagate, not so sure now.

    • Derullandei

      You’re lucky they failed one at a time. If the second failure occurs while you’re still rebuilding the RAID after the first failure, the data is – poof – gone. Ask me how I know.

      • Stuart Brown

        I’m not using RAID, but was able to recover data from two of them. The third was a complete loss, but in the grand scheme of things I was able to recover and move on.

        • Derullandei

          Same here, the only real loss for me was the far too many hours of my life that I wasted troubleshooting, upgrading, recovering, backing up, rebuilding, pampering and reading about these drives.

  • Tim Henning

    I have 5 of the 3TB drives in my Synology box I use at home. I had to replace every drive, one of them twice. Luckily for me, it was all under warranty, except for the last one. When they dropped for me, I had multiple failures. As the hot spare kicked in, another would fail before the rebuild would complete. This took my NAS box down twice and I had to restore everything. As I replace these drives going forward, I will not be using Seagate. They have a huge reliability issue with these 3TB drives and should do something about it, even if it’s a trade in program. I would consider upgrading to a 4TB drive and pay the difference just to get off this platform. This would keep their customers on Seagate and hopefully give them a reliable drive that would change their Seagate opinion.

  • DS

    I'm not sure what all of this blogging does for your company, except that now people are going to research it closely before forking over $5/month for your service. I think you've done your company an injustice by opening up discussions on your website here.

  • johnkristian

    I built a file server myself with 15 of these drives (+ I bought a couple of cold spare drives). The drives failed all the time. Some were sent for replacement; some were tested OK, rebuilt onto, and failed again after a week or 7.
    Ended up replacing all of them with WD RE4 drives last summer. Much more expensive (especially since I had to pay for both the Seagate drives and the WD drives), but I haven't had a single failure since.
    17 drives is not much compared to what you are doing, but it's more than most consumers buy. Short story… the drives are rotten. :P

  • Just lost the last of 5x 3TB ST3000DM001 drives purchased for my NAS between 2012-2014. It's May 2015 and I've lost 5 drives in 3 years. I'm really, really REALLY disappointed by the lack of reliability. I'm NEVER buying Seagate again as a matter of principle. I have hard drives in old machines dating from 2003 that are still working; hell, I have an 80GB in my old Amiga that's still working.

  • Interesting article and a great update on the "best" hard drive one. As we do data recovery on hard drives, we see the Seagate 7200.14 drives a lot, and they are always in a very bad condition, often not recoverable. From your statistics we would expect more failed Seagate drives than other brands, but that's not the case.

  • Ashlord

    I have about 20+ 2TB and 3TB drives used for ZFS. After 2 years, most of the drives have been replaced at least THRICE. I have also replaced most of these junk drives. Perhaps the drives cannot stand the wear rate of weekly scrubbing. I cannot be sure. But this failure rate is indeed alarming.

    When they fail, it is pretty catastrophic. The circuit board just stops working and the drive cannot even be detected in POST. In some other cases, the drive appears to work normally, but suddenly makes chirping sounds and grinds to a halt, then drops off from the vdev. And prior to this, SMART is 100% healthy!

    • Derullandei

      Yep, same experience here, there’s usually no SMART warning before the ST3000DM001 suddenly dies. It’s not the scrubbing either, I’ve had several drives fail from sitting mostly empty and mostly idle in vibration damped bays in a well cooled, stationary computer. Others have died in a NAS, and I think one died in an external enclosure. No pampering can save these drives, they seem to be designed to fail.

    • Stephen Wiebelhaus

      Same here. Three times this has happened, whole box just locks up, and I have to pull the plug to reset. Upon POST, a failed drive would no longer show up. BTRFS would then throw errors. I’d add in a new drive and rebalance. The NAS started with two WD 1TB, expanded with 2 Seagate 3TB. Just out of warranty the first Seagate died. I replace with a new Seagate. 6 months down the road, the second Seagate died, replace with WD. Another 14 months, the third Seagate died, this was the newest one, again, just out of warranty. Thankfully, after the second Seagate failure I looked around and found Backblaze’s report on failure rates of drives, and I bought a replacement from HGST.

      Really frustrating when these die with no warning and completely stop the box when they do.

  • Vince

    Interesting, as usual. In our environment, our experience is that Seagate drives are generally unreliable, and we don't actively buy them – at least not since we used to buy 750GB drives many years ago – and normally buy WD Green drives, which overall have been excellent. The only exception has been the 2TB ones, manufactured around 2012 also: every single one of those has failed without exception, most at around 2 years tops. Yet the 1TB Greens are still running in some machines 6-7 years on, 24/7, without any sign of an issue, and no real sign of any of the 3TB/4TB drives failing yet in any significant way.

    We have found however that 6TB WD Green, Red and Purple variants are generally reliable so far (although clearly too young for long term data), but “doa” and faulty within first month in service is considerably more likely and common with them, particularly the Red variant.

  • danieljcox

    Is there a way to access a Drobo device to see the date my drives were manufactured?

  • 何 建霖


    I just want to say thank you. Thank you so much for letting me know that I’m not alone.

    • o0cacoto0o

      What baffles me is the fact that hard drive is being sold on amazon today as part of the amazon prime sale.


      • Nathan Fletcher

        Part of the thing though is that the drives used here were five platter drives, while the amazon prime sale is using 3platter drives, which are good. It seems as though any five or six platter drives die. My original 1tb died after minimal use, but my 3tb was fine. The difference was only that the 1tb had five platters.

  • Lawrence Reed

    We have had similar failure rates with the Seagate 3TB drives from 2012. We didn't have the quantities Backblaze did, but when the price skyrocketed we too turned to "shucking" drives. Learned our lesson the hard way.

    • Milk Manson

      What lesson? Learning a lesson implies that you can avoid making the same mistake in the future… what exactly are you doing differently?

    • Vince

      Shucking doesn’t seem to be a factor here – the drives were the same model and showed broadly similar trends. The only difference is in a *normal* scenario, you won’t get warranty cover as you’ve technically “messed” with what was shipped for those shucked ones.

  • Reaper

    I also had 6 of those Seagate 3TB drives in RAID5, in a 24/7 low-load environment (backup server). All have failed by now. The warranty was 1 year and the drives started failing when 1.5 years old. Good to know that Backblaze confirms I was not alone with this problem.

    In my case the drives kept getting more and more reallocated sectors and got thrown out of the RAID array all the time.

    At the same time I have several 9-year-old 320GB Seagate drives still working without any problem!

    • Sentinel Jones

      Yup. I’ve had 3 of these 3TB drives fail in my Mac Pro (the older silver tower model). The failure rate seemed high, but I also thought it could have been electrical/lightning/brownout type stuff on the first one. With the others I got more suspicious. One drive failed a year and a couple of months after purchase – Seagate said it was out of warranty.

      After reading this, I’m heading over to Amazon to get different replacements for the replacement 3TB drives I bought. Frustrating. I’m not typically a fan of lawsuits and I never pay attention to the stupid and unsolicited “settlement” things I get in the mail (they too often seem like a scam) but there might be a class action here.

      • Mathew

        I’ve had both Seagate 3TB drives fail within six months of each other. Fortunately they were mirrored, and I replaced both failures with Western Digital.

        • Faslane

          Be careful if you bought the 3TB WD MyBook. I’ve lost two within 3 months of each other, so they were tossed regardless of warranty. I took the hit financially and just went with a Toshiba Canvio 4TB and it’s been perfect, so it’s not just Seagate!

          • Dave Keller

            We had a 2TB model fail on my son’s machine. He used it to back up his Steam game library, then unplugged it and it sat. When he reconnected it to restore after he reformatted, it wouldn’t power on. I removed the hard drive from the housing and connected it to SATA, and discovered that although the drive worked fine, the WD interface board encrypts the data, so it was inaccessible!

          • Faslane

            I was able to use the EaseUS Data Recovery program (there’s a free trial to test whether it’ll see recoverable files before you buy, but it does require purchase to recover; I think it’s about 29.99 or 39.99, but worth every cent). It’s saved me a couple of times and got my files back, but it took a couple of days to do – lots of scanning and waiting, of course. It seems the issues are more in the hardware controlling the drives than in the drives themselves, but in my case it was the drives. I’ve heard others have the same issue as you, where the box fried but the drive mounted fine in another.

            A good trick to try when all else fails is to stick the bare drive in the freezer for a half hour and plug it into an external SATA connection, or one of those external slot SATA boxes where you just drop the bare drive in; usually it’ll mount long enough to grab everything you can. It’s worked for me a few times, but not always. I’ve never seen encrypted data, though – maybe corrupted – but that’s what the EaseUS data recovery app does: it “decrypts”, for lack of a better term, and recovers the files. It recovered nearly everything I lost previously. I had about 1000 or so movies and tons of pics and docs, and got 90% of it back intact and watchable/editable/viewable, etc. BTW, I don’t work for them or get kickbacks; it’s just a great tool.

          • GlueFactoryBJJ

            I’d also recommend Spinrite. It is a miracle worker for those times you just need to get the data off the drive. If it can be recovered, Spinrite is likely to be the product to get it done. I’ve been using it both professionally and personally for over 20 years. You can check it out at http://www.grc.com

    • Matthew F

      You should not be running 3TB drives in RAID 5; you are asking for data loss from the rebuild stress alone when a single drive dies.

      • Adam Klein

        Rubbish. That’s like saying you should never run a single 3TB drive by itself or defrag it.

        • Matthew F

          It’s not rubbish. Research RAID 5, flipped bits / soft UREs, and failures with large drives in RAID 5 arrays. This is why you will no longer find RAID 5 in larger storage arrays with spinning rust, or recommended by anyone in the storage world. Also, a defrag is nowhere near as intense as a rebuild of a RAID with parity.

          Also, running a single drive is not wise if you care about your data, so again, unless you have backups, don’t be surprised if that drive dies one day.

          I had some old 120GB WD IDE drives that still spun up a few years ago before I tossed them; I have had other drives die in weeks, months, years.

          RAID 5 on SSDs, go nuts; RAID 5 on 1TB+ drives, I hope you have a good backup strategy. There is no scenario where RAID 5 fits with larger drives, except when you cannot afford a fourth drive for RAID 10. If you want parity, go RAID 6. If you’re a business, you’re costing yourself money in the long run: not only is the rebuild time ridiculous, you may as well trash the RAID 5, do a RAID 10, and restore your backups. And unless you have a very high-end RAID card, performance is going to be poor anyway.

          Let me help you out to get you going on why RAID 5 is essentially dead for spinning rust. (Scott Allan Miller is very knowledgeable in the world of storage.)
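[Editor's note: the "RAID 5 is dead for large drives" argument above can be made concrete with a quick back-of-the-envelope calculation. The sketch below assumes the common consumer-drive spec-sheet figure of one unrecoverable read error (URE) per 10^14 bits read and a hypothetical four-drive array of 3TB disks; actual error rates vary by model and are usually far better than spec.]

```python
# Rough odds of hitting an unrecoverable read error (URE) while
# rebuilding a degraded 4-drive RAID 5 of 3TB disks. The 1-per-1e14-bits
# URE rate is a typical consumer spec-sheet figure, not a measurement.
URE_PER_BIT = 1e-14
DRIVE_BYTES = 3e12            # 3 TB per drive
SURVIVORS = 3                 # drives that must be read in full to rebuild

bits_read = SURVIVORS * DRIVE_BYTES * 8          # ~7.2e13 bits
p_clean = (1 - URE_PER_BIT) ** bits_read         # chance every bit reads cleanly
p_ure = 1 - p_clean                              # chance of at least one URE

print(f"bits read: {bits_read:.1e}, P(at least one URE) ~ {p_ure:.0%}")
```

At the spec-sheet rate this comes out to roughly a coin flip per rebuild, which is the core of the argument; RAID 6's second parity stripe exists to absorb exactly this failure mode.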

        • Jorge C

          It helps to know what RAID 5 is before criticizing, Adam.

  • Mark Nelson

    I’m curious to know what firmware was running on the drives. I know the 7200.14 drives had a firmware bug that caused issues with them in RAID arrays.


    • Stoatwblr

      That applies to seriously old 7200.14s – model 9YNxx (and mine all failed even with this firmware in place)

      model 1CHxx uses CC29 and 1E6xx uses SC48

    • Derullandei

      After I first started experiencing problems with the ST3000DM001, I read about the firmware problems, and upgraded the firmware on all my drives to the recommended version. As far as I can tell, it didn’t help one bit. They’ve been failing in RAID arrays, and they’ve been failing in standalone setups. In short, they’re failing all over the place, and there’s nothing you can do about it.

  • Peter

    I put three of these in my NAS in Q4 2012. One of these failed recently and I had to swap it out. Oddly enough the two that are (currently) working fine I purchased from a different vendor and were probably a different production run.

  • Michael Gardner

    Odd, not a bit of discussion (did I miss it?) as to HOW the drives failed. Excessive bad areas on the platter? Dead interface? Random data errors?

    • @disqus_XDDpPg8FEd:disqus we didn’t chat about drive failure in this post because we talk about it ad nauseam in other posts. Take a look at: https://www.backblaze.com/blog/best-hard-drive/ under the section: “What is a Drive Failure for Backblaze”. That’ll give you some more info!

      • Kyle

        That post still didn’t explain HOW the drives failed. That section of the article you linked just shows what criteria you use to determine a failed drive. You seem to be dodging the question. We want to know about how these drives in particular failed.

        • ThePinkGuy

          I guess we won’t be getting an answer.

          • Sen Choi

            at least you will be getting smiles! :o :) :) :) :) :) :) :) :) :) :) :) :) :) :) :) :)

    • Steve Rand

      Good question. I just posted how mine failed if you are interested.

    • They just stopped being accessible – my NAS would attempt to access the disks, I could hear them spin up, then clickety-click, click, click, spin down. Then the NAS would simply light up a red LED by the drive saying it had failed. I’ve tried connecting each failed disk to my Mac using an external dock. Some spin up, make noises, and are never accessed by the drive controller at all. Some are simply “unreadable” and can’t be accessed by the OS. Some are readable but refuse to be formatted (initialisation failed), and some can be accessed, but a bad sector scan using Tech Tool Pro or Drive Genius finds an excessive number of bad blocks, too many to reallocate. All in all, I’m thinking either the failures happen for varied reasons, or the failure causes varying types of damage to the disk. Either way, screw it. They suck. Don’t buy.

    • Wes

      I lost 20 of these drives. Eventually almost all turned into clicks of death. Prior to that, they were booted from my RAID setups due to bad sectors – counts in the hundreds and thousands by the time I tested them. Some reported SMART errors before failing, but most did not.

  • The CSI header is appropriate.
    Your post shares as much factual data as the average episode of CSI: none.

    Without tracking the drives – where each drive was sourced, where it was used, what controllers were in use, where it was purchased, and when it failed –
    without ALL of this information, your data is useless and heavily misleading.

    • Ryan B

      Seriously. I read through this waiting for the useful information to emerge. It didn’t.

      I’m surprised Seagate is even OK with this getting published, as it provides no insight into the problem other than “3TB Seagate drives really sucked for a while.”

      • In previous posts, I’ve commented about that as well. I’d sue, or get them to remove the posts, if I were Seagate.

        • Kyle Benzo

          Get them to remove the posts how? Sue them for what? Truth is a solid defense to libel/slander in the USA. Potentially not so much in some European countries. Just because they didn’t display all of the data you wanted to see doesn’t make it illegal…

          • We actually have a good relationship with Seagate! We continue to buy their drives in droves. All we’re doing is showing how they and other drives fare in our environment. The reason for this specific post was that people noticed a large drop-off of this drive in our previous posts and were curious as to what happened.

          • Kyle Benzo

            Yes, I assumed you did, but even if you didn’t, they would have no legal standing to get this article removed. On a side note, release a Linux client so I can leave Crashplan! Thanks!

          • Heh, I’ll talk to the engineers for you Kyle :-p

          • Jesper Monsted

            If you’re putting work into a Linux client, please make it portable to the rest of the Unix platforms too. It shouldn’t take much to include the rest of us (and I do believe there are a lot of FreeBSD/FreeNAS users around, especially with ZFS on the scene). Heck, you’d probably have to go out of your way to make it Linux-only.

        • Sentinel Jones

          I realize the futility of trying to educate a troll, but for anyone else who somehow thinks he has a point:

          You really don’t understand basic 1st amendment protections, do you? Just because a news/blog/story about your product is negative or you disagree with it doesn’t mean you can sue someone to retract or remove it. I can write “seagate drives suck” a bajillion times and there is nothing Seagate can do about it. Nothing. BB wrote a well-reasoned post, replete with data and process explanations. If Seagate doesn’t like this post, they can post a rebuttal if they choose and explain why BB’s conclusions are wrong. That’s it. No threats, no lawsuits, no taking anything down.

          To illustrate the contrast of an actionable claim: If I write that the Seagate CEO is involved in human trafficking while knowing such a claim is false – that’s libel, which is very different. I can be held accountable in a civil court. If I go on TV and say that 50 Seagate drives have had catastrophic failures sending shrapnel into school children and killing them – while knowing this is false, that’s slander. In both cases, Seagate would have to also show how my actions resulted in monetary damage to them.

          “30 drives failed in June, 40 failed in July … we’re seeing a pattern” is not something you can sue over.

        • silvestris

          You can’t sue over the truth or over opinions.

        • Hennie Mulder

          Comment ignored; ignorance present.

      • Kyle Benzo

        Seagate’s opinion on the matter is irrelevant. He doesn’t need their approval nor should he ask for it.

      • Christopher McCord

        Especially since they are using desktop-class HDDs in a Storage Pod / RAID application; the drives were not built for this kind of use. That was never mentioned. Had they used an enterprise-class drive, or even a NAS-type drive, the results might have been a lot better. They are trying to keep their costs down with these cheaper price-point drives, but sometimes that doesn’t work out.

        • Milk Manson

          I missed where they say the Seagate’s were used differently than the other brands. Could you kindly point that out for me?

          • T War

            No, all drives were used out of spec.

          • Milk Manson

            So it was fair, in other words.

        • Hennie Mulder

          I am a consumer, therefore I demand value and quality. Mine failed after a year. My Toshiba is eight years old and has not even a single bad sector on it. Why would other drives last longer while these drives fail? There is no justification in your comment.

    • Milk Manson

      I have a dozen brand new in the box Seagate ST3000DM001’s. Would you like to buy them?


      • Depends on how much you’re selling them for.
        And if they are desktop drives, or shelled externals.

        • Hennie Mulder

          Being shelled or not doesn’t matter; the drives inside the enclosures are the exact same model… they will fail.

      • Vince

        Yes, I’ll buy them, providing they’re less than £20 ($15) a unit. That way the expected lifetime vs cost works for us.

        • Milk Manson

          But I have Hitachi’s for the same price…

      • Hennie Mulder

        No thanks. I see Amazon has the Seagate 3TB BarraCuda SATA 6Gb/s 64MB Cache 3.5-Inch Internal Hard Drive (ST3000DM008) for less than $89.99. Stay clear of ST3000DM001 drives.

  • kevin

    Looking back on some of your other data, it appears you use a mixture of desktop and NAS drives. Do your storage pods support TLER? It seems like you should be using one or the other, but not a mixture.

  • kevin

    Are you not able to track serial numbers to get a better handle on which failures were external vs. internal? Drives also have factory codes on them; the same part number could be built in several different factories or manufacturing lines. Are you tracking down to that level?

  • Paul van den Bergen

    How do the Seagate drives stack up when you take that set of 3TB drives out of the equation? Or are there other Seagate 3TB drives, purchased outside the crisis, that you can compare with?

  • Gabe Anguiano

    I have a number of these drives. Can you clarify if your Pods spin down the drives? I hear this can increase failure rate.

    • Frank Van Der Mast

      Spinning down drives is a bad thing and should always be disabled for any drives you have in a system. I am no expert on the matter, but I do know that spinning up/down is a major contributor to drive wear. I wish the people who thought of it (to reduce power consumption) realised that people prefer the longevity of their disks over a few cents a year in electricity bill savings.

      • Aitor Bleda

        I agree with you; spinning down should be done only after plenty of idle time (say, 30 minutes of no activity). Otherwise you go into WD Green mode: drive destroyed in one year.

  • Could you share what type of utilities or apps are used for hard drive testing? Are they publicly available, or something internal or proprietary?

    • Hey! We can’t talk about some of the methods used, but secure erase is part of the process. Sorry we can’t divulge some of that info!

    • Mathew Binkley

      There may be some super-secret proprietary utility, but you can probably get 99% of the utility by looking at the SMART attributes for the drive. If the drive is failing or trending towards failure, it’s usually pretty easy to catch.

      I wrote a BASH script to scan drives for failing/marginal attributes. You can find a list of SMART utilities for your OS here:
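[Editor's note: the commenter's script isn't shown, but a scan of this kind can be sketched as below. This is a hypothetical illustration, not the commenter's actual script; the watch list of attributes and the rule of flagging any non-zero raw value are illustrative assumptions. It parses the tabular output of `smartctl -A /dev/sdX`.]

```python
# Sketch of a SMART attribute scan: flag watched attributes whose raw
# value is non-zero in the tabular output of `smartctl -A /dev/sdX`.
# The watch list is an illustrative choice, not an exhaustive one.
WATCH = {
    "Reallocated_Sector_Ct",
    "Current_Pending_Sector",
    "Offline_Uncorrectable",
    "Reported_Uncorrect",
}

def failing_attributes(smartctl_text):
    """Return {attribute_name: raw_value} for watched attributes with raw > 0."""
    bad = {}
    for line in smartctl_text.splitlines():
        fields = line.split()
        # Attribute rows have 10 columns: ID, NAME, FLAG, VALUE, WORST,
        # THRESH, TYPE, UPDATED, WHEN_FAILED, RAW_VALUE.
        if len(fields) >= 10 and fields[1] in WATCH:
            try:
                raw = int(fields[9])
            except ValueError:
                continue  # some drives append extra text to the raw value
            if raw > 0:
                bad[fields[1]] = raw
    return bad
```

In practice you would feed it the stdout of `smartctl -A` for each drive (e.g. via `subprocess.run`) and alert whenever the result is non-empty.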


      • alex kent

        if you want to do a (much) more thorough test of a drive you can use Spinrite. It is a super powerful utility for write, read, repeat testing of the entire surface of a disk. https://www.grc.com/sr/spinrite.htm

        • Ian Worthington

          Sorry, but Spinrite is just a promotional vehicle for Gibson. If you read how it purports to work, it makes no sense at all. Don’t touch it with a bargepole.

    • Use a Linux live CD and a few simple commands. Any good disk should be able to successfully read all user-accessible sectors from start to finish, so typing something like ‘cat /dev/sda > /dev/null’ (with /dev/sda being replaced by the device name for the disk you’re interested in) should take a few hours (one hour per terabyte, perhaps a bit more) to finish and complete silently; any reported errors would mean the end-to-end sequential read of the entire disk has failed and the disk is almost certainly bad. Motherboard and cable failures can also trigger a read failure, but these are very rare compared to disk failures.

      Using ‘smartctl -a /dev/sda’ to dump the SMART data from the drive can be interesting as well; look at raw numbers for Reallocated_Sector_Ct and Reported_Uncorrect and if they’re above 0 but below an absurdly high number like 65536 then you have sectors on the disk that have failed (RSCs indicate the drive relocated failing but still readable sectors to reserved good sectors, while uncorrectable errors mean there was previous data loss that the on-platter ECC wasn’t able to correct.) Likewise, look at Load_Cycle_Count and if it’s over 600,000 then your drive is absolutely guaranteed to be living on borrowed time (the highest LCC I’ve seen in active use was a laptop drive at about 740,000 cycles; I recommended an immediate replacement.) Extremely high LCCs over the course of one year due to aggressive head unloading were the prime cause of WD Green (“EADS” variety) hard drive failures, particularly in Linux systems operating 24/7.

      While these are not necessarily the methods used by Backblaze, they are extremely effective ways to detect drive failures and make educated guesses about remaining drive longevity.
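[Editor's note: the rules of thumb in the comment above (a raw count above zero but below an absurdly high encoding means real failed sectors; a load cycle count past ~600,000 means borrowed time) can be collected into one small check. The thresholds below are the commenter's heuristics, not a vendor specification.]

```python
# Apply the comment's heuristics to three raw SMART values.
# The 65536 ceiling filters out the "absurdly high" bogus raw encodings
# some drives report; 600,000 is the commenter's load-cycle red line.
def health_verdict(reallocated, uncorrectable, load_cycles):
    issues = []
    if 0 < reallocated < 65536:
        issues.append(f"{reallocated} sectors reallocated")
    if 0 < uncorrectable < 65536:
        issues.append(f"{uncorrectable} uncorrectable reads (past data loss)")
    if load_cycles > 600_000:
        issues.append(f"load cycle count {load_cycles} is past the red line")
    return issues

# e.g. the ~740k-cycle laptop drive mentioned above would trip the LCC rule
print(health_verdict(212, 0, 740_000))
```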

  • Ian Worthington

    Interesting article. I’m sure you’ve been in contact with Seagate: are you able to share their response?

    • We do speak to Seagate on occasion, but unfortunately can’t disclose what it is we chat about. We can say that it’s amicable though! :)

      • kevin

        That seems a bit odd; with the high rate of failures you are seeing, I would expect a drive manufacturer to be very involved in getting to a root cause. Are you procuring your Seagate drives from an authorized channel/distributor?

  • Rockdrigo Satch

    Ammm, OK, so I understand I have a 1 in 10 chance that my HD fails in 4 years or less LOL

    • Only if you have that specific model…and maybe not! Each use-case is different. Our drives probably go through different usage than yours ;-)

      • Aitor Bleda

        My company didn’t have many of these… but poor me, I have a 100% failure rate after a couple of years.

        My experience tells me that they do not really fail: I have been able to reproduce the errors and recover disks from them. If you have a 100% disk load consisting of a mix of random (small files) and purely sequential (huge files) writes and hammer them for hours, they just fail.

        But then you can recover the disks (many hours) and they work as if they were new.
        Maybe something overheats, and/or there are head-positioning problems under heavy random load?