Seagate Introduces a 60TB SSD – Is a 3.6PB Storage Pod Next?

By Andy Klein | August 11th, 2016


Seagate just introduced a 60TB SSD. Wow. As Backblaze scurries about upgrading from 2TB to 8TB hard drives in our Storage Pods, we just have to stop for a moment and consider what a 3.6PB Storage Pod would look like and how much it would cost. Let’s dive into the details…

What we know about the Seagate 60TB SSD

A number of sources (Engadget, Computerworld, Mashable, and Tom’s Hardware, to name a few) covered the news. From the Backblaze Storage Pod point of view, here are some important things to know. The Seagate 60TB SSD comes in a 3.5-inch form factor. It uses a 12 Gbps SAS interface and consumes 15 watts of power (average while active). There are a few other fun facts in the articles above, but these will do for now as we design our hypothetical 3.6PB Storage Pod.

What we don’t know today

We don’t know the price. Seagate is calling this enterprise storage, which in Backblaze vernacular translates to spending more money for the same thing. Let’s see if we can at least estimate a list price so we can do our math later on. We’ll start with the recently introduced Samsung 16TB SSD. As those drives make their way to market, their price is roughly $7,000 each. Scaling that price linearly by capacity puts the Seagate 60TB drive at $26,250. That seems high, even for enterprise storage, so let’s apply a 25% discount for scalability, bringing us to $19,687.50 each. Applying marketing math, I’ll round that up to $19,995.00 per Seagate 60TB SSD. That’s as good a WAG as any, so let’s use that for the price.
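
For anyone who wants to check the napkin, here’s that estimate as a quick Python sketch. The $7,000 Samsung price, the linear scaling, and the 25% discount are the assumptions above, not real pricing:

```python
# Back-of-the-napkin estimate for the Seagate 60TB SSD list price.
# Assumptions (from the text above, not real pricing): the Samsung
# 16TB SSD runs ~$7,000, price scales linearly with capacity, and a
# 25% "scalability" discount applies before marketing rounds it up.

samsung_price = 7000                             # dollars, 16TB SSD
linear_price = samsung_price * (60 / 16)         # scale to 60TB
print(f"Linear scaling:     ${linear_price:,.2f}")    # $26,250.00

discounted = linear_price * (1 - 0.25)           # 25% discount
print(f"After 25% discount: ${discounted:,.2f}")      # $19,687.50

wag_price = 19995            # "marketing math" rounds up to $xx,995
print(f"WAG list price:     ${wag_price:,}")          # $19,995
```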

Our 3.6PB Storage Pod design

The most economical way for us to proceed would be to use our current 60-drive chassis (Storage Pod 6.0) with as few modifications as possible. On the plus side, the Seagate 60TB drive has a 3.5” form factor, meaning it will fit very nicely into our 60-drive Storage Pod chassis. On the minus side, we currently use SATA backplanes, cables, and boards throughout, so there’s a bit of work in switching over to SAS, with the hard part being the 5-port SAS backplanes. In a very quick search, we could only locate one 5-port SAS backplane, and we weren’t sure it was still being made. We’d also need to update the motherboard, CPU, and memory, and probably convert to 100Gb network cards, but since these are all readily available parts, that’s fairly straightforward (says the guy who is not the designer).

We do have the time to redesign the entire Storage Pod to work with SAS drives, given that Seagate doesn’t expect to deliver the 60TB drives until 2017. But before we do anything radical, let’s figure out if it’s worth it.

Drive math

The Seagate 8TB drive we currently use (model: ST8000DM002) lists for $295.95 on Amazon. That’s about $0.037/GB, or $37/TB. Using our $19,995 price for the Seagate 60TB SSD, we get about $0.333/GB, or $333/TB. That’s 9 times the cost, meaning a 60-drive Storage Pod filled with Seagate 60TB drives would cost about $1.2M.
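
Here’s that drive math as a quick sketch, using the list prices and the WAG from above:

```python
# Price per TB for the Seagate 8TB HDD vs. our estimated 60TB SSD,
# plus the cost of a 60-drive Storage Pod filled with each. The SSD
# price is the $19,995 guess from above, not a real list price.

hdd_price, hdd_tb = 295.95, 8        # Seagate ST8000DM002
ssd_price, ssd_tb = 19995, 60        # our WAG
drives_per_pod = 60

hdd_per_tb = hdd_price / hdd_tb      # ~$37/TB  (~$0.037/GB)
ssd_per_tb = ssd_price / ssd_tb      # ~$333/TB (~$0.333/GB)
print(f"HDD: ${hdd_per_tb:.2f}/TB, SSD: ${ssd_per_tb:.2f}/TB")
print(f"Cost ratio: {ssd_per_tb / hdd_per_tb:.1f}x")         # ~9x
print(f"60-drive SSD pod: ${ssd_price * drives_per_pod:,}")  # $1,199,700
```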

Let’s look at it another way. The Seagate 60TB drives give us 3.6PB in one Storage Pod. What would it cost to get 3.6PB of storage using just 8TB hard drives? To start, it would take roughly 7.5 Storage Pods full of 8TB drives to give us 3.6PB. Each 8TB Storage Pod costs us about $20,000, or $150,000 for the 7.5 Storage Pods needed to reach 3.6PB. Since we can’t have half of a Storage Pod, let’s go with 8 Storage Pods at a total cost of $160,000, which is still a bit lower than the $1.2M price tag for the 60TB units.
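
And the pod-count math, assuming the roughly $20,000 per-pod cost noted above:

```python
import math

# How many 8TB-drive Storage Pods match one 3.6PB SSD pod, and what
# would they cost? The ~$20,000 per-pod figure is from the text above.

target_tb = 3.6 * 1000               # 3.6PB expressed in TB
pod_tb = 60 * 8                      # 480TB per 8TB-drive pod
pods_needed = target_tb / pod_tb
print(f"Pods needed: {pods_needed}")                           # 7.5

whole_pods = math.ceil(pods_needed)  # no half pods allowed
print(f"Cost for {whole_pods} pods: ${whole_pods * 20000:,}")  # $160,000
```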

What about the increase in storage density? In simple terms, 1 rack of Storage Pods filled with 60TB SSDs would replace 8 racks of storage using the 8TB Storage Pods. I won’t go into the details of the math here, but getting 8 times the storage for 9 times the cost doesn’t work out well. Other factors, including the roughly 70% increase in electrical draw per rack, make the increase in storage density moot for now.
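
For the curious, here’s a rough sketch of the power math. The 15 watts per SSD is Seagate’s average-active spec; the ~9 watts per 8TB hard drive is an assumption (it backs out of the ~540 watts per pod figure a commenter cites below), so treat this as ballpark only:

```python
# Rough per-pod power comparison: 60 SSDs at 15W each vs. 60 HDDs at
# an assumed ~9W each (the HDD number is an estimate, not a spec).

drives = 60
ssd_pod_watts = 15 * drives          # 900W per SSD pod
hdd_pod_watts = 9 * drives           # 540W per HDD pod
print(f"SSD pod: {ssd_pod_watts}W, HDD pod: {hdd_pod_watts}W")
print(f"Increase: {ssd_pod_watts / hdd_pod_watts - 1:.0%}")  # ~67%, call it 70%
```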

If you build it…

There is one more thing to consider in building a Storage Pod full of Seagate 60TB drives: Backblaze Vaults. As a reminder, a Backblaze Vault consists of 20 Storage Pods that act as a single storage unit. Data is spread across the 20 Storage Pods to improve durability and overall performance. Populating a Backblaze Vault requires 1,200 drives. Using our $19,995 price per drive, that’s roughly $24M to populate one Backblaze Vault. Of course, that would give us 72PB of storage in one Vault. Given that we’re adding about 25PB of storage a quarter right now, that works out to about three quarters of storage runway. Then we’d get to do it again. On the bright side, our Ops folks would have to deploy only one Backblaze Vault every 8 or 9 months.
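
Here’s the Vault math in one quick sketch, using the prices and growth rate above:

```python
# Backblaze Vault math with 60TB SSDs: 20 Storage Pods per Vault,
# 60 drives per pod, at our $19,995-per-drive WAG.

pods_per_vault, drives_per_pod = 20, 60
drive_price, drive_tb = 19995, 60

drives = pods_per_vault * drives_per_pod               # 1,200 drives
print(f"Drives per Vault: {drives:,}")
print(f"Cost to populate: ${drives * drive_price:,}")  # $23,994,000 (~$24M)

vault_pb = drives * drive_tb / 1000                    # 72PB per Vault
quarters = vault_pb / 25                               # ~25PB added per quarter
print(f"Capacity: {vault_pb:.0f}PB, runway: {quarters:.1f} quarters")  # ~2.9
```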

Breaking News

Toshiba just announced a 100TB SSD, due out in 2017. No pricing is available yet, and I’m tired of doing math for the moment, but a 6PB Backblaze Storage Pod sounds amazing.

In the meantime, if either Seagate or Toshiba needs someone to test, say, 1,200 of their new SSD drives for free, we might be interested. Just saying.

Andy Klein

Director of Product Marketing at Backblaze
Andy has 20+ years of experience in technology marketing. He has shared his expertise in computer security and data backup at the Federal Trade Commission, Rootstech, RSA, and over 100 other events. His current passion is to get everyone to back up their data before it's too late.
Category: Cloud Storage
  • John

    I love your posts! Keep ’em coming!

  • Luong Anh
    • mm ds

      ok, let me see…

  • Scott Tactical

    I feel like the density is great, but the IO on those SSDs would probably be completely underutilized. It is what I wish for in a SAN in terms of disks, but it’s like putting a race car motor in a dump truck.

  • Robert Boyle

    Great article. I wonder what storage will be like in 10 years? Unimaginable. When I think that back in 1986, 1 MB more of memory cost $5,000.00, I say, well…

    • Okurka

      2 Megs of memory cost $599 at the end of 1985.

      http://www.jcmit.com/memoryprice.htm

      • Scott Tactical

        maybe he meant 1GB of memory.

        • Okurka

          1GB of memory in 1986?
          Even hard disks were only 20 MB back then.

          • Hugues

            1GB of disk was the order of magnitude of storage space in mainframes back then, to be divided between 1,000 students, say. In 1991, the available persistent storage space was 2.88 MB per student (equivalent to 2 floppy disks, if you remember those…). The whole backend was probably on the order of 100GB, an enormous amount then. That was at a top 5 US university.

          • Okurka

            A 20 MB hard disk only came out in 1988.
            Besides, mainframes used tape back then.

          • Hugues

            You are thinking of HDDs for PCs. You would be right, but I’m talking about HDDs for mainframes:

            From wikipedia:

            The 9345 HDD first shipped in Nov 1990 as an RPQ on IBM’s SCSE (SuperComputing Systems Extensions). Developed at IBM’s San Jose, California laboratory under the code name Sawmill, it was an up to 1.5 GB full-height 5¼-inch HDD using up to eight 130 mm disks. It was the first HDD to use MR (magnetoresistive) heads.

          • Okurka

            Not sure why you quote an article about an HDD released in Nov 1990 when we are talking about 1986.

          • Hugues

            I was talking about 1991, but no matter, here’s a disk from 1980. I hope this is early enough.

            IBM 3380 disk drive module
            The IBM 3380 Direct Access Storage Device was introduced in June 1980. It uses film head technology and has a unit capacity of 2.52 gigabytes (two hard disk assemblies each with two independent actuators each accessing 630 MB within one chassis) with a data transfer rate of 3 megabytes per second.

            Bottom line: large drives have existed for a long time. In 1986, capacities of a few GB were common in datacenters. Not for PCs, obviously.

  • Just $1.2M each? Well, I will wait until the price falls to a few hundred bucks :D
    How much will the Toshiba 100TB drive cost then? :O
    SSDs are good for home usage, not data centers :)

  • Wow, that’s amazing. It would be great if SSD prices fell much closer to those of their mechanical siblings, and if large capacities were available at the same prices. Still, 60TB, even for a tech demonstration, is insane.

  • Stephen Shankland

    I’m guessing there are performance advantages to SSDs over HDs that compensate for the higher price per GB for many customers, but probably not Backblaze. Backup and recovery are not latency-sensitive tasks, especially when already throttled by people’s relatively slow broadband connections used for upload and download. That’s my suspicion, anyhow.

  • Stefan Seidel

    What about the ~900 watts of power usage for the SSDs *only*? That’s quite a bit more than the 540 watts of the 8TB drives you use now. Not insurmountable, and certainly more efficient per TB, but you still have to consider all the cooling and power delivery.

    • Okurka

      “Other factors, including the 70% increase in electrical draw per rack, make the increase in storage density currently moot.”

  • kingmouf

    At this point I think the pricing is pretty much theoretical, and that makes the whole analysis equally theoretical. But I have a feeling that this is not a true evaluation, but rather just “another blog post”. Please don’t get me wrong, I don’t want to sound offensive. I can see some valid arguments popping up that are not taken into consideration here:

    1. You are going the pod route out of a need for more storage per rack space. Now, if you are suddenly considering a breakthrough technology (devices with almost 10 times the capacity), this changes everything. If you keep the storage per server (or pod) the same, then you just need 8 bays and SAS/SATA ports. So you can use a pretty standard case, low cost, no customizations. You need a much, much simpler backplane (you could even get a mobo that has all these ports, so you wouldn’t even need a separate card!). For such a simple scenario, trained personnel can have a pod ready in minutes rather than hours. Considering that these are SSD devices, you can have the whole setup, testing, RAID build-up, etc. done in what, a fraction of the time? This is true cost. Furthermore, maintainability goes through the roof. Also, considering power, 8 power-hungry SSDs (and a fraction of the other cards, backplanes, etc.) probably consume less than 60 HDDs spinning all the time with all their backplanes, multipliers, RAID cards, etc.

    2. If you instead keep the same number of drives per server (pod), then there are other factors to consider. For a given storage target, it simply means a tenth of the ports in the networking gear, a tenth of the number of servers to support, a tenth of the rack space in the data center, fewer KVM ports, fewer management personnel-hours, and so it goes.

    3. A critical thing to also consider: performance is not the same. Maybe your application is not performance-critical, but I am guessing that better IOPS and better throughput can have a lot of positive effects. I believe that when you are replacing drives, RAID rebuilds, for example, are going to be a lot shorter in duration. Testing before bringing up new equipment is also going to be less time-consuming. OS installation and the like. And all this is before considering other benefits concerning your primary (storage) application.

    So yes, at 20k apiece all of this is theoretical, but if you bring it down to 15? 12? I don’t know the threshold, but if you consider everything else, it may quickly become a far more viable solution.

    • Andy Klein

      Lots of good ideas and thoughts worth considering. I think the storage vendors are trying lots of different techniques to store data at higher densities in response to the insane amount of data being created daily. Whether we continue with a 4U Storage Pod design or move on to something else is certainly part of the conversation here at Backblaze. As you noted, there are many parts to the equation, with some moving rapidly and some not. The next few years will be interesting in the storage space.

      • Gerald Cooper

        Dear Andy, I think you are right; people need more and more storage. PB-scale is coming very soon, at a price, but there is a cheaper alternative. Philips made the old ‘laser disc’ on a 14″ platter. Just think of how much storage could be put on 14″ discs. Using modern techniques, double-sided and Blu-ray, we are talking hundreds of PBs. I am surprised the big companies do not go down this route.

  • SteveBlowJobsSucksDickInHell

    Have you guys thought about, or are you planning on, using Seagate’s 10TB or 12TB helium drives? I think 14TB drives are around the corner as well.

  • “SDD” appears a couple times; is it supposed to be SSD each time?

    • Andy Klein

      Fixed that, good eye.

  • Gijs Noorlander

    Another nice question could be: at what price per GB will these huge (in capacity) SSDs be economical to deploy?

    The HDDs now cost 3.7 ct/GB (apparently not the Seagate Archive series), but you need a lot more hardware for pods and power (nice alliteration :) )
    HDDs need power when idle, but SSDs consume hardly anything when idle, so 15 watts looks like the upper limit per SSD. Even if that were constant, the power per TB would still be only 25% of the (less than) 1 watt/TB for HGST’s helium drives.
    And fewer pods also means less network infrastructure.

    So at what price per GB will the SSDs be feasible to deploy at the scale you’re working at?

    • You’ll need more PCIe lanes for SAS, so this change might not be as straightforward as it may seem. Am I correct?

      • Gijs Noorlander

        You also need to get all that extra bandwidth out of the pod, so there would be more to redesign than just swapping in different drives.
        The Ethernet devices must also be a lot faster to accommodate the extra bandwidth SAS may offer.

        • Chris Moore

          They have stipulated that speed is of little importance in the design of the individual pod.

  • Robert Klein

    “round that up to $19,995.00” – LOL. What’s wrong with 20,000? ;-)

    • > Applying marketing math

      $19,995 looks a lot better to consumers than $20,000 ;-)

  • Discpad

    A couple questions:

    1) Why not use a small SSD (or even a CF card & adapter) for the pod boot drive instead of an HDD? The most you need is maybe 32-64 GB, which includes a generous *nix swapfile on the SSD. [Windows 10 screams when PAGEFILE.sys is on an SSD boot drive!]

    2) Have you considered using 1-4 SSDs in a pod with the rest spinning HDDs? These SSDs would act independently as “data spoolers” (or “buffer/spoolers”) to accept data from your LAN, then “spool” the data out to the slower HDDs. This would allow you to buy slower, less expensive HDDs, and also allow for slower RAID algorithms, possibly gaining space by using fewer redundant drives.

    [This concept is not new: 20 years ago, when laser & inkjet printers were both Very Slow and Very Expensive, we used Windows NT 3.51 print spoolers on the network to spool jobs to pools of printers.]

    Tying the two items together (SSD boot drive plus 1-4 SSDs as buffer/spoolers), you can partition off a few dozen GB and use it for the boot drive & *nix swapfile space.

    Dan Schwartz
    [email protected]
    Editor, The Hearing Blog
    http://www.TheHearingBlog.com

    • Brian

      Usually the OS drive in an application like this sees very little use, short of local service logs and initial boot/startup operations. Also, there are devices known as “disk on module”, or DOMs, that operate as an extremely low-profile boot disk (i.e., old ATA DOMs were no wider than the 40-pin IDC connector and maybe an inch tall; newer SATA and SAS DOMs are no bigger than two postage stamps side by side).

      …or you could do the crazy and just boot PXE/iSCSI and have your OS drive reside elsewhere or run out of RAM and ship all logs off-system as they play…

      • Discpad

        In years past, for high-performance workstations, I would spread the swap file onto a RAID 0 (stripe) array (along with the Photoshop &/or Premiere scratch disk). Today we have SSDs, which do a yeoman’s job for these duties.

    • Elliott Sims

      For #1, we’ve considered booting from SSDs. We don’t really require the performance on the OS drive, and the extra space on spinning drives can be nice for logs and crash dumps, but we’ll probably revisit this as the price of a “big enough” SSD drops below the cheapest spinning drives.

      We’ve looked slightly at some sort of SSD writeback cache setup, but for the most part we get adequate performance out of spreading uploads across lots of spinning drives. We do use SSDs in a few places for caches and internal indexes/metadata.

      • Discpad

        Elliott Sims:
        1) Putting the unix swap file (and you can’t disable VM in unix) onto an SSD will speed up the system and contribute to stability;

        2) Reliability improves, as you have no moving parts; and unlike the storage drives, which are configured in a RAID, if the boot drive fails, the pod goes down. Also, reliability slightly improves as the physical volume of a

        3) Is 64GB enough? Let’s say your motherboard holds 16GB of RAM. Also, let’s say your OS takes up 4GB on the boot drive, and we set the unix swap file at 32GB. This means that with a 64GB SSD boot drive, you still have 28GB for crash dumps & log files… and there’s nothing stopping you from moving older log files onto the pod’s RAID. A good Kingston 60GB mSATA “SSD on a board” is $34.99, and a 120GB drive is $49.99.

        http://www.newegg.com/Product/Product.aspx?Item=N82E16820239731

        • Elliott Sims

          Oh, no question that there are a lot of likely benefits :) The price crossover was pretty recent, and taking the time and effort to switch over hasn’t made it to the top of the priority list quite yet, but it definitely will eventually, and probably fairly soon.

      • mm ds

        ok,

        For #2, uhh…

        just look at this: https://www.youtube.com/watch?v=0JOwkySp-0w

        enjoy

  • Matt Viverette

    You didn’t factor in the cost savings from laying off several Ops staff when you “would have to deploy only one Backblaze Vault every 8 or 9 months.” Probably not a pleasant topic to put on the blog, but a true cost comparison would require considering HR cost as well.

    • Well, we thought of that, and the truth is it may not be much of a cost savings. Drives die ALL the time, so even taking into account that SSDs theoretically fail much less often than HDDs, we’d still need a lot of staff to maintain the pods and vaults. And b/c of the expense of a single pod, we might even want to keep a CLOSER eye on it, so that we don’t lose an entire pod at a time (considering how expensive they’d be).

      Plus, we like our employees, so there’s that :P

      • John

        There are also minimum staffing levels for vacation coverage and other things you can’t go below (or potentially more valuable work you can shift the staff to). Reducing repetitive task xx doesn’t always amount to yy in cost reduction. My concern is that RAID 6 would break down at this scale. (Dig out Adam Leventhal’s “Triple-Parity RAID and Beyond” ACM paper.) I’m thinking 26+4 erasure codes or something, or more seriously looking at network-based erasure codes (which carry more networking overhead). Also, with IO access getting that dense, would you need to increase your networking (40/100Gbps?) at some point for node re-heals or evacuations?

    • Scott Tactical

      Every time people talk about cutting jobs because of technology, I only see more people working to make up for the unexpected problems the new tech presents. Usually the tech is irrelevant to long-term employees.

    • mm ds

      yo, i like that

  • Matt Viverette

    I think you have a typo:

    “…with the hard part being 5-port SAS backplanes. In a very quick search, we could only locate one 5-port SATA backplane and we weren’t sure it was being made anymore.”

    Do you mean 5-port SAS backplane in the second sentence?

    • Good eye! Fixed :)