Backblaze B2: The World’s Lowest Cost Cloud Storage

September 22nd, 2015

Ever since we open-sourced the original Backblaze Storage Pod and shared our storage costs, people have been saying, “I love your backup service, but I need a place to just store data. Can you give me direct access to your cloud storage?” After a year of hard work and tremendous excitement from our team, I’m delighted that today the answer is, “Yes!” We are announcing a brand new service, Backblaze B2 Cloud Storage.

What is Backblaze B2 Cloud Storage?

B2 Cloud Storage is a service that enables developers, IT people, and everyone else to store data in the cloud. Often referred to as Infrastructure-as-a-Service (IaaS) or object storage, it lets you store, retrieve, and share data, scaling up and down while paying only for what you use. B2 offers cloud storage similar to Amazon S3, Microsoft Azure Storage, and Google Cloud Storage – but at a much lower cost.

This. Is. HUGE.

This is huge. So huge that we had to say it twice. Not “here’s a cool feature” huge, but “jump up and down” huge. Before getting into all the details, I have to thank the entire team for all of their hard work, late nights and unabashed enthusiasm. There’s nothing like a team yearning to offer an alternative to services from Amazon, Microsoft and Google and making it a reality. It has been absolutely thrilling to see Backblaze go from a company in a one-bedroom apartment in 2007 to a player on the global IaaS stage.

I also think this is huge for a lot of other people. In 2013 The Verge wrote an article about a company called Everpix titled “Why the world’s best photo startup is going out of business”. The reason the company vaporized? “A forthcoming bill from Amazon Web Services” was too high. Lower-cost cloud storage might have let the company stay in business, employees keep their jobs, and customers continue to use a photo service they loved.

Everyone needs storage. Everything from Artificial Intelligence to virtual reality, drones, 3D video, Internet of Things, live video, connected medicine…all of it needs storage. And lower-cost storage means more companies can launch world-changing products and offer better experiences at lower costs and higher margins. It’s a virtuous cycle. And we think that Backblaze B2 can help.

Is B2 Like Dropbox, iCloud, or Google Drive?

No, B2 is the underlying storage that services such as Dropbox would be built upon. As an example, Dropbox was originally built on Amazon S3.

OK, So How Can I Use It?

  • Developer? Use B2 for the storage your applications need, through the RESTful API.
  • IT? Use B2 to store your corporate data, back up servers, etc., through the CLI.
  • Regular Joe or Jane with superhero storage needs? Use B2 to upload files you want to store and sometimes share online, through the web interface.
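For developers, the RESTful API means any HTTP client works. As a minimal sketch (endpoint path and auth scheme per B2's docs at launch, but treat the details as assumptions and check the current API reference), the first call authorizes the account with HTTP Basic auth:

```python
import base64

# Assumed endpoint shape for B2's account-authorization call.
AUTH_URL = "https://api.backblazeb2.com/b2api/v1/b2_authorize_account"

def basic_auth_header(account_id: str, app_key: str) -> dict:
    """Build the HTTP Basic Authorization header from B2 credentials."""
    token = base64.b64encode(f"{account_id}:{app_key}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Example with fake credentials; the real call is an HTTP GET against
# AUTH_URL with these headers, returning an API token and URLs.
headers = basic_auth_header("ACCOUNT_ID", "APP_KEY")
```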

Some other possibilities:

  • Already use S3 or one of the other providers? Keep a copy in B2. It’s less expensive than keeping a copy in another S3 region.
  • Running out of storage in your corporate datacenter? Expand into B2 to minimize your costs and reduce your storage headaches.
  • Need to share a large file online? Upload and share it.

We know many of you have worked with storage systems for years. We’d love to hear from you on how you think you could use the B2 service – so please share via the comments!

How Much Does B2 Cost?

So here’s the best part: B2 costs just $0.005/GB/month to store data. Yes, just ½ a penny. Uploads are free. Downloads are just $0.05/GB. No complicated tiers to track or penalties for accessing your data. See B2 pricing.

B2 charges 3x – 30x less than others for cloud storage. We realize that sounds unbelievable – and that’s why we’re so excited. For example, the lowest-cost tier of S3 costs $0.022/GB/month. At $0.005/GB/month, B2 costs less than one-quarter as much. See the detailed comparison of cloud storage prices.
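The pricing model is simple enough to check on the back of an envelope. A quick sketch (rates as quoted above; this is an illustration, not an official calculator):

```python
B2_STORAGE = 0.005   # $/GB/month
B2_DOWNLOAD = 0.05   # $/GB; uploads are free
S3_STORAGE = 0.022   # lowest-cost S3 tier at the time, $/GB/month

def monthly_cost(stored_gb: float, downloaded_gb: float) -> float:
    """Estimated B2 bill for one month at the rates above."""
    return stored_gb * B2_STORAGE + downloaded_gb * B2_DOWNLOAD

# 1 TB stored plus 100 GB downloaded:
#   B2: 1000*0.005 + 100*0.05 = $10.00
#   S3 storage alone: 1000*0.022 = $22.00
```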

Aren’t There a Thousand Cloud Storage Services?

Yes and no. There are a thousand services offering consumers the ability to store and share files. However, there are only a handful that provide IaaS, the raw storage for developers and IT to leverage through an API or CLI. The list of enterprise cloud storage services includes Amazon S3, Microsoft Azure, Google Cloud Storage, Verizon, Rackspace, and CenturyLink.

How Does This Affect Users of the Backblaze Cloud Backup Service?

This adds a third offering from Backblaze alongside our Personal Backup and Business Backup services. Those services continue unchanged, offering completely automatic and unlimited cloud backup for just $5/computer/month. Backblaze customers can use their existing Backblaze account to also enable the B2 service if they wish to store data more permanently in the cloud.

Why Did Backblaze Create B2?

I think it’s critical that as we launch our next big thing, we look back at the very beginning. As some of you know, five of us quit our jobs in 2007 and started Backblaze to try and solve the problem of people not backing up their data. We wanted to make it astonishingly easy and low-cost so that nobody ever lost their precious files again.

While we had raised venture funding for previous companies, we wanted to build Backblaze as a sustainable company with a culture that expected money to come from customers, not VCs. That meant we had to build a profitable business from the beginning. Initially, we planned to use Amazon S3 or equipment from EMC, NetApp, Dell or another vendor for our backend storage. We quickly realized that all of these options were too expensive and we would lose money on every customer.

To solve this problem, we developed and open-sourced our own hardware, the Backblaze Storage Pod, which has been used by organizations across the world for low-cost storage. We architected our own cloud storage file system, Backblaze Vault, a highly scalable and efficient data store. And we have built up processes and gained invaluable experience (such as analyzing about 50,000 hard drives for reliability).

For years we’ve published data about the processes and costs of running our cloud storage. And for years people and companies have asked us to let them use it. About a year ago, our storage systems were large enough, the tech advanced enough and the team large enough to pull this off. We’ve spent the last year building APIs, CLI access, the usage and billing systems and more – and now we’re ready to let some of you start to play with it.

Get Free Storage.

Hopefully you’re as excited as we are. Head over to www.Backblaze.com/B2 and sign up for an account (it’s free, easy and quick). You’ll be placed on our Beta waiting list and not only be among the first to get access to the B2 Beta, but once you’re accepted, you’ll get 10GB of free storage as well!

What do You Think?

Obviously, we’re incredibly excited about this next chapter at Backblaze and we can’t wait for you to try the service and give us your feedback! For now, we would love to hear in the comments below:
* What do you use storage for now?
* What will you do with B2?
* Who should we partner with?
* Who should use B2 or integrate B2 into their systems?

Gleb Budman
Co-founder and CEO of Backblaze. Founded three prior companies. He has been a speaker at GigaOm Structure, Ignite: Lean Startup, FailCon, CloudCon; profiled by Inc. and Forbes; a mentor for Teens in Tech; and holds 5 patents on security.


  • Stefan Seidel

    So I’ve just switched my server backups from Google Nearline ($0.01/GB) to B2 ($0.005/GB). Nice to pay only ½ the price, but uploads are really slow. I “only” have a 100Mbit connection on that VPS, which Google Cloud could easily saturate. To Backblaze … 15Mbit/s at max, average is more like 5 Mbit/s. It takes ages to upload. How am I ever going to upload TBs of data? Located in a datacenter in Germany, by the way.

  • Have a synology client and you have me

  • A desktop client a la Cyberduck would be awesome.

  • Roger Deloy Pack

    Hate to be negative, and I think it was released after this blog, but as a note: Oracle cloud archive bills itself as $0.001 per GB https://cloud.oracle.com/en_US/storage?tabID=1406491833493 (but it isn’t web accessible like this stuff is; B2 is like Amazon’s S3 but waaaay cheaper). Nice contribution to the industry, disruptive! [Slightly cheaper transfer rates would make it look better in the comparison graphs.] :)

  • jonnyleaf

    Is there a cost for deletion of data?

  • Gavin Stirling

    The ability to move data from an existing Backblaze backup into B2 storage would be great!

  • randian

    How is data integrity maintained? One of the things S3 does is compute an MD5 sum (though I would prefer something like SHA512 for this purpose) on incoming data and rejects the request if the computed checksum differs from what the sender says it should be. Adding this as your own private metadata won’t work because the sender doesn’t know what the receiver actually wrote to disk.

    • Roger Deloy Pack

      They also compute it and “recover” data from redundant sources if corruption is detected. I think backblaze does something like this (redundancy) but I’m not sure about the constant checking aspect…

  • disqus_qyi0VcH7LD

    I need a sync function as I would love to use it as an archive for my photos which I often wish to share and/or retrieve from offsite.

  • W Vito Montone

    Is the “move service” from A3?

  • W Vito Montone

    Does it use a CDN for media delivery?

    • Roger Deloy Pack

      I doubt it, somehow. Basically it’s “one datacenter only” AFAICT.

  • Michael Quinlan

    Are there availability guarantees? What about durability? Without knowing these it isn’t possible to compare the prices to the other services.

    • Roger Deloy Pack

      Yes, my question as well “durability” and “SLA” compared to S3…

  • gavingreenwalt

    Will we be able to request a drive download if we have say 16TB of files on the cloud that all need to be pulled down quickly (within 2-3 days) and fedex’ed ala Backblaze?

    Would that still be at $0.05 per GB? We would like off-site backups of our archives but a 2-month download over a 100mbps pipe would make us have to think twice.

  • Reefiasty

    Could you please put your ‘b2’ python client on Github so that we can… improve it?

    While we are at it, could you also change the license of it, as the current “All Rights Reserved” kinda officially forbids us from using/copying/modifying/etc it (unless we get a written permission or something). Please consider WTFPL, Affero license, MIT license and BSD license.

    • > Could you please put your ‘b2’ python client on Github so that we can… improve it?

      Should be appearing shortly. We couldn’t put it in common repositories until we announced B2, but now that the cat is out of the bag…..

      > could you also change the license of it, as the current “All Rights Reserved”

      We will change that, plus add a link to several licenses that are acceptable. In the past we have released code (like our Reed-Solomon encoding software) under an MIT license: https://en.wikipedia.org/wiki/MIT_License and we released Reed-Solomon to GitHub https://github.com/Backblaze/JavaReedSolomon

  • jhetland

    How does this compare to Amazon Glacier?

  • adsfsm6

    Not sure if it has been mentioned yet, I propose to liaise with another 3rd party app developer, CloudberryLab, to have B2 included as storage provider. They offer a nice Desktop Backup app.

    • We’d definitely be open to that once we open this up to everyone!

  • hjuk

    From a notatechie – can i use it as a home for 360 image tours like those built using autopano?

    • It might be possible, you’d just be serving the video from our servers, though someone may need to build a “player” that could properly show those videos.

  • KC Lam

    I think this is a good effort but maybe a bit “behind the times”. I don’t have my data sit on one device anymore, and I don’t believe many people still do. My data is everywhere these days – it’s on my desktop, on my laptop, on google drive, onedrive, on my 3 phones, my 2 tablets, countless external drives/flashdrives.
    I think if you truly want to be innovative (other than price) in 2015, you’d have to offer a solution which can consolidate all my data from all my devices/services. But good effort though.

    • Thanks for the feedback KC! With Backblaze B2 you can write your own applications, or have someone do that for you, that can tie all those things in together and into the Backblaze B2 infrastructure! So that’s exactly what you can do with B2!

    • B Brad

      There are quite a few backup programs that will encrypt, dedupe, and upload to the cloud. Any of these should let you share data across all your devices without having to pay per device.

  • Kristian

    An S3-compatible gateway would be useful!

  • Gustavo

    Great news! I have been wondering for a while how and when Backblaze would solve the 30-days max storage limit, and this is a really great solution. Well done!

    In your post you wrote: “Backblaze customers can use their existing Backblaze account to also enable the B2 service if they wish to store data more permanently in the cloud.”

    Does this mean that the B2 service will be integrated in the actual Backblaze client?

    If not, this would be my suggestion. To be able to assign and upload files, disks and/or folders to my B2 bucket using the same Backblaze app that I use to manage my Personal Backups.

    Congrats for the great service!

    • Gustavo –

      We’re hoping that will be the case eventually, but we’ll definitely mark it down as a recommendation!

      • Gustavo

        Great! So at the moment the only way to upload files is by using the web interface, right? Would you be open to supporting apps like Arq? They would make the upload task more streamlined and automated, but I understand they are somewhat a competitor of your Personal Backup service.

  • Andrew

    Sorry if it’s already been asked – will there be a way to copy files directly from an existing Backblaze account (i.e. via the web interface) directly into a B2 bucket? One example: say you have a 1TB external drive backed up with Backblaze, and you want to ensure the content is permanently safe (at present you are required to plug in the external drive every 30 days or it will be deleted). This could save potentially weeks or even months of upload time trying to upload hundreds of GBs over a slow remote connection (I’m based in Asia at the moment) vs. it simply being copied across your local network at Gbps+ speeds!
    P.s Long time BB user and look forward to trying out B2.

    • Andrew –

      We’re hoping that’ll eventually be the case, but no time frame for that cross-functionality yet.

    • btn

      I too would love an option in the Backblaze client (and web admin) to permanently store a copy of my backup on B2 so I don’t have to worry about it being deleted if an external drive goes “missing” for more than 30 days. Thanks!

      • Roger Deloy Pack

        Email their tech support and say you wish you had an option “not delete remotely” when local files are removed…maybe backblaze should create a getsatisfaction or uservoice so they could see how many people want this

  • Tomáš Kolinger

    Duplicity integration is really important from my point of view. The ability to back up directly, like with S3, could be a killer feature.

  • Dan

    Accessing your storage via SFTP or SSH so we can mount the drive as a Windows Network Drive!

  • a7medo778

    guys this is awesome, would be awesome if you could partner up with a cloud hosting provider

    i am the owner of a subsonic based sound streaming service, with more than a tb of data, would sure utilize this for backups, but would love to use it for direct streaming as well

    • Roger Deloy Pack

      Or open source the software so others can use it as well, locally? You’ve open sourced the hardware, take it to the next step :)

  • Would LOVE to see tunneled rsync with key exchange on a per-bucket basis. Or is it no longer a thing? Understand it does not cleanly fit in current pricing, but I can dream….

  • Pol Llovet

    I work in research computing in an R1 research institution. We have persistent and growing data needs and *will* be moving off-premise, but the transfer math is troubling. We need a partner, not just a vendor. Who should I talk to in order to start a conversation?

  • scott

    filepicker.io integration, one click backup/import of dropbox, AWS s3, integration with cloudflare/akamai/EdgeCast (make it simple to use B2 as my origin), sponsorship of some tech accelerators to gain more attention, especially from developers (tech wildcatters, galvanize, Denver Startup Week), ability to pay more for an additional region/SLA, ability to serve everything over SSL/TLS, ability to change headers, add custom meta data, ability to mount a bucket as a folder on my desktop, integration with Softlayer. What a fun offering – thanks for all of your hard work!

  • Congratulations, it looks like you’ve built an awesome product with B2, I know I’ve loved your regular services and your scrappy approach, but I’ve needed object storage and not backups. I’m super excited to try it. Unfortunately, I’m worried about bandwidth and latency. For instance, from our Miami data center, the Backblaze api is over 80ms away (from both our end and looking at the Looking Glass at Unwired from your end), from our Amsterdam data center, over 180ms away.

    While such latency provides minimal problems for backups, it would add a significant overhead both in bandwidth costs and latency to push data back and forth from a remote data center. Have you considered allowing other companies to run some of your instances in their own data centers? I know we’d be more than willing to discuss putting some instances in our data centers, which are attached to two large peering exchanges — NotA and AMS-IX. That way, we could have fast B2 access to ourselves, and a great “colocated” storage product to point our customers to.

  • Narrbarrey

    That’s nice and all but what I am really interested in is using the custom software you use to manage all the files Backblaze stores on my own home file servers.

  • selim

    Please give us the ability to use rsync or something similar.

    The ability to calculate checksums (md5, SHA, etc) without completely downloading a file would also be nice.

    • Agreed. This is the feature that will make me switch.

    • gavingreenwalt

      Just a thought but since there is metadata, if you pre-calculate this on upload you could add this layer on your own.

    • slovokia

      I would encourage Backblaze to provide a checksum-on-demand service for B2. I would only trust that you had stored my data properly if either A: I was able to retrieve an exact copy (thereby incurring egress costs for the download) or B: Backblaze would compute an on-demand checksum of my choice (sha256sum) of the data items I identify, so I can compare the results to what I have locally.

      The economics of your service suffer if I must spend $0.05 in egress costs just to verify what I am storing for $0.005 per month. Nothing personal, but after experiencing much flakiness with your backup product I don’t trust the quality of your software without hard proof that it works properly.
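Pending any server-side checksum-on-demand feature, one client-side workaround is to record checksums in a local manifest before upload and compare them against a later download. A standard-library sketch:

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Record file_sha256(path) in a local manifest at upload time; re-run it
# on any downloaded copy to verify integrity end to end.
```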

    • Cedrick Moraes Maia Mendes

      An MD5 checksum is not optional. It’s totally mandatory in this kind of service.
      Think about it: B2 is cheaper than other cloud storage services, but you don’t have any kind of guarantee of file integrity…

      B2 team, please deploy MD5 file checksums as soon as possible.

      I’m waiting on this to use B2.

  • Igor Khomyakov

    It seems like a game changer for me. I hope ARQ will support Backblaze Cloud very soon.

  • Adam Baxter

    Just quickly looking at the API, there’s no way to see current storage / download usage for an account, is there?

    • Reefiasty

      Good find. There is no API call for retrieval of such information, however in the “buckets” tab the number of files in a bucket, their total size and “GB average” (?) are shown, so it would seem like they have that information and they use it, they just don’t expose it via API (yet).

    • That is correct (for now). For time to market reasons, we purposefully kept the API very simple and trimmed anything we could plausibly trim just to get it out into the market. You can sign into the web GUI account and set spending limits and see the current values.

      I fully expect one extra call being added very soon which gets your bucket/account summary.

  • Adam Baxter

    You definitely need a UserVoice or similar for tracking your ideas. Maybe Trello?

  • Adam Baxter

    Can you ask DigitalOcean whether they’d partner with you to offer Object Storage for Droplets? It’s one of their most requested features

  • This is great – but no Australian datacentre (yet?)

    • Not…yet? We’re expanding as fast as we’re able. While we don’t have plans for an Aussie datacenter at the moment, it’s certainly not out of the realm of possibilities!

      • Australia /really/ needs more competition in these kinds of areas, at the moment we’re at the mercy of Amazon and Rackspace – both of which are generally cost ineffective unless you’re either really small and can’t self-host or really large and can negotiate special pricing and ‘not-shit’ support.

        • We’re not venture backed so we can’t grow randomly, we have to do it very methodically. That said, if we think it’ll make sense to fire up a datacenter in Australia, we’ll absolutely do it! Right now we’re constrained by manpower and capital, but all those are looking up :)

    • Adam Baxter

      Maybe if you’d all voted for the NBN we’d have the required infrastructure for something like that. Datacentres in Australia are stupidly expensive. What? No, I’m not bitter at all…

  • James

    Where is your datacenter located? Do you have plans to open/run datacenters in other locations? APAC?

    • Right now our datacenter is located in Sacramento, California. We are currently looking to expand to other locations, probably starting with another datacenter in the US. Not sure we’re ready to go to APAC (https://en.wikipedia.org/wiki/Apac ??) quite yet :)

      • I think he means Asia / Pacific (aka Australia/NZ/Japan)

        • I know, that was just a little joke I threw in for myself…it’s been a long day :)

      • Adam Baxter

        Are you able to disclose where / who you peer with (network wise)? In Australia, the routes to certain parts of the US can be really strange and cause ~50ms of unnecessary latency.

  • Josh

    The ability to request “chunks” of a file.

    For example: For video, this enables starting playback in the middle.

    • Adam Baxter

      Should be doable with HTTP Range requests

      • Reefiasty

        In “snapshots” tab there is something about a downloader which you can use to have restartable downloads, so probably it is possible (but the documentation doesn’t specify whether they allow Range header or not)
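If the download URLs do honor standard HTTP semantics (not confirmed in the docs at the time of this thread), a partial fetch is just a Range header. A small sketch, with a hypothetical file URL:

```python
# A Range header asks the server for a byte window instead of the whole
# object; servers that support it reply with 206 Partial Content.
def byte_range(start: int, end: int) -> dict:
    """Headers for fetching bytes [start, end] inclusive."""
    return {"Range": f"bytes={start}-{end}"}

# e.g. with the `requests` library (URL is made up):
#   requests.get("https://f001.backblaze.com/bucket/video.mp4",
#                headers=byte_range(1_000_000, 1_999_999))
headers = byte_range(0, 1023)   # first KiB of the file
```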

  • Billy

    I want to be able to mount a drive in Windows so that I have a Z:/ that is B2. It would act just like a local file structure (high latency/low speed though). No need to back up since it’s already cloud backed.

  • Rockdrigo Satch

    Hi, I have a question. Amazon AWS has 11 9’s of reliability: https://blog.cloudsecurityalliance.org/2010/05/24/amazon-aws-11-9s-of-reliability/ – and you? How does your infrastructure protect our data? And can I use this as a Dropbox replacement?

    • iansltx

      Sunday morning’s outage alone woulda blown that 11 9’s assuming you were in US-East-1.

      • Molomby

        First line of the article linked to.. “data stored in the ‘durable storage’ class is 99.999999999% ‘durable’ [..] not to be confused with availability”

  • Robert Trevellyan

    Would be great to have a tool similar to Google’s gsutil, especially for its rsync command.

  • Me

    Is there any de-duplication going on here ? Specifically thinking about the versioning feature.

    I would love to be able to add a nfs type connection to B2 to my vmware stack and use B2 as a nightly backup archive, however de-dup is a must in that scenario.

    • Scott

      I suspect there isn’t customer-facing deduplication going on, so you might have to use backup software that does it client-side. (Which is probably better in the long run since you pay for downloads anyways! ;) )

  • This is friggin’ awesome. I’ve been wanting to do more types of projects to help people live a more decentralized digital lifestyle and you guys landed this awesome bomb on me. Hells yeah.

  • icodeforlove

    Will you guys be supporting the Amazon S3 api? Dreamhost has a great example of doing so https://www.dreamhost.com/cloud/storage/

    • We are able to be a little cheaper by NOT supporting the Amazon S3 API. We’re not far away and we hope that isn’t too much of a burden for programmers.

      Alternatively, we have thought about offering a slightly more expensive to use API (that pays for the extra equipment) that is S3 compatible.

      • Adam Baxter

        The delineation via API capability makes a lot of sense, although what would you do if someone wrote an S3 protocol -> B2 proxy?

  • Tarak

    Awesome, just signed for the Beta.. and eager to try the API..

  • Tibor Szentmarjay

    Would be good to have B2 as backup solution on Synology NAS systems additionally to S3.

    • apmeyer

      I second and third this.

    • 4th!

      • CCWTech

        Please please make this work on Synology NAS!

    • Riccardo Pieri

      Synology!!!!

    • Dianne A

      Yes, to Synology NAS solution………..

    • futureboy

      Yes, Synology would be fantastic! Please please build some Backblaze app for Synology.

    • RussTaylor240

      And I would like it as well please!

    • Krishna

      Synology NAS integration with B2 would let me forget about Amazon S3/Glacier completely!

  • Brandon Kruse

    Amazon guarantees the 11 9s for its normal S3. It’s been long thought that S3 represents 3 copies of the data. Is the Backblaze price per gig for a single copy in a single datacenter?

    • We’re trying to be about as reliable as Amazon S3. But it’s a little more complicated than “single copy”. Backblaze uses Reed-Solomon (17+3) to spread data across 20 storage pods, where any three pods can be destroyed and you haven’t lost any data. The pods are stored in different locations, but yes, inside one datacenter. We’re completely transparent – you can read about our solution here: https://www.backblaze.com/blog/vault-cloud-storage-architecture/ Also, we publish our drive failure rates in our datacenter every quarter (Amazon hides their drive failure rates from you) so you can do your own calculations and make an informed decision. Here is an example of those statistics: https://www.backblaze.com/blog/hard-drive-reliability-stats-for-q2-2015/
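To put the 17+3 arithmetic in perspective: a file survives as long as no more than 3 of its 20 shards are lost, so under an assumed independent per-shard loss probability the chance of data loss is a binomial tail. A sketch (illustrative math only, not Backblaze's published durability figure):

```python
from math import comb

def loss_probability(p: float, shards: int = 20, parity: int = 3) -> float:
    """P(more than `parity` of `shards` fail), assuming independent failures."""
    return sum(
        comb(shards, k) * p**k * (1 - p) ** (shards - k)
        for k in range(parity + 1, shards + 1)
    )

# With a (hypothetical) 5% chance of losing any given shard in some
# window, the chance of losing 4 or more of the 20 is only about 1.6%.
```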

  • johanejohansson

    I would love to be able to use BB to backup to B2 in order to get more than 30 days of backups. Coming soon?

    • I am pitching for this feature to be added: check a single checkbox and your backup would be billed as the storage used instead of $5/month unlimited and THEN you could dial your retention policy to any length of time (because you would then be paying for it).

      No promises, we have a TON of work to do and it might get prioritized down for a while.

      • I too am pitching this feature. +1?

  • Mark Zeman

    Awesome news! How compatible are you with existing libraries that use S3? It would be awesome if your API mirrored or aliased S3 calls so it’s just a matter of changing the endpoint and credentials in an existing S3 app and they save to B2 instead. That would lower any switchover pain. Lower storage costs are great, but if I have to rewrite my app to talk to B2 then that’s a big barrier.

    • _byron_

      Much, MUCH agreed. Google Cloud Storage took a similar strategy, matching the S3 API at the wire level to make migration painless.

      This would be key for us as a business considering B2; We have way, wayy too many applications that natively speak S3 to consider recoding to B2 (and 3rd party tools that we can’t recode even if we wanted to).

  • Samuel Reed

    A lot of the comments here are talking about integrations with FUSE, Cyberduck, Duplicity, and so on.

    It seems to me that the best option is to consider offering an API translation layer that is 100% compatible with S3’s API.

    We could then begin using this service absolutely immediately with our existing tools, instead of waiting months for all our favorite storage software to catch up.

  • ujay68

    Great! There are two questions I have: (1) Is the stored data protected against “bitrot” like with ZFS or btrfs? (2) Is there a standardized API that would allow me to use standard clients like sftp, rsync or something like ExpanDrive?

  • Discpad

    I see a business model issue for customers who want to use B2 for dead file storage of things like Exchange or other e-mail server archives, where access is required only for things such as litigation (Hello, Lois Lerner? Hillary?). Your price of $60/terabyte/year barely covers the cost of the storage hardware alone.
    One of the biggest mistakes entrepreneurs — especially teenagers! — make is to underprice their services, pricing their service on a “cost plus” basis instead of on the economic value the service provides to the customer. Think about that for a minute.
    Dan Schwartz
    Cherry Hill, NJ

    • Yev from Backblaze here -> Yea, we’ve definitely given the pricing a lot of thought. Truth is, we’re profitable now offering an unlimited service at $5/month, and a lot of folks have over a terabyte of data at that rate. A 1TB hard drive costs about $50 now, and with the Backblaze Vaults it all makes sense!

  • Scott

    If someone writes a Duplicity backend for this, I will start backing up with you guys *immediately*.

    • Hey Scott, it was impossible to enter into these discussions before our announcement, so we’re all really excited to start up conversations with open source projects now. Thanks for the reference to Duplicity, I’ll add it to our list to reach out to (or simply contribute to ourselves).

      • Greg W

        Is Veeam Backup & Replication on your list?

  • Josh

    Arbitrary metadata!

    Allow us to assign metadata on upload, modify it in-system, and query for metadata using the API.
    On a personal level: photo/video tags
    On a professional level: tags for each project, gps coordinates, and so on.

    • Arbitrary metadata is allowed in the current API, but it is limited to 10 fields right now and I think maybe 1K of data (?I need to double check that?). Check out: https://www.backblaze.com/b2/docs/b2_upload_file.html and look for “X-Bz-Info-*”. We want to know if this satisfies the general need or if we need to go further, so any feedback is REALLY welcome right now.
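Building on the X-Bz-Info-* convention from the linked b2_upload_file docs, a small helper can shape a metadata dict into headers and enforce the 10-field cap mentioned above (a sketch; the field names in the example are made up):

```python
def info_headers(metadata: dict) -> dict:
    """Turn a metadata dict into X-Bz-Info-* headers (max 10 fields)."""
    if len(metadata) > 10:
        raise ValueError("B2 currently allows at most 10 X-Bz-Info-* fields")
    return {f"X-Bz-Info-{key}": str(value) for key, value in metadata.items()}

# These would be merged with the required upload headers
# (Authorization, X-Bz-File-Name, Content-Type, X-Bz-Content-Sha1).
headers = info_headers({"project": "launch-demo", "gps-lat": 37.7749})
```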

      • Josh

        10 definitely doesn’t satisfy most of what could be done with metadata.
        If at all possible, leave it open-ended in both number of fields and size of each.

      • Adam Baxter

        Maybe JSON blobs associated with files/directories?

        Edit: yes, JSON blobs – go go go! This means I could store UNIX ACLs along with the files, as well as migrate photos from Flickr while maintaining all the metadata.

        Hell, I could even store my bookmarks in there.

        Count the JSON blobs as part of the storage limit to stop people getting extra creative with cramming all their files into metadata.

        Although, that does bring up the question of encryption of metadata.

        • Josh

          A thousand times yes!

          Encryption can be handled client-side for those who want it; there’s no need for B2 to implement it specially.

  • Can it be used to host a static website, such as one built with Jekyll?

    • Yes, static sites should work right away, and I plan to move my personal website over very soon. But to complete “static website support” we need to add the ability to map a top-level domain name to a sub-bucket – right now it will all look like it’s hosted on https://f001.backblaze.com/your-website-name, which needs to become https://your-website-name/…..

      • Adam Baxter

        Got a UserVoice or similar?

  • Pablo

    Is it suitable for streaming static mp4 files, 100MB each, to HTML5 video player in the browser?
    What about HLS?

    • Very suitable. Files are accessible by a public URL, and one of our testers was surprised when he could stream a movie file from the very first day the system became available internally. RIGHT NOW there is a limit of 5 GBytes per file, but we’ll be adding larger file support soon. Hopefully 5 GBytes is enough for a full-length, slightly compressed HD movie, but it clearly won’t work for 4K movies yet.
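Browser video players seek within a file by issuing HTTP Range requests, so streaming from a public URL works to the extent the download endpoint honors the Range header (as most HTTP file servers do). A small Python sketch of building such a request — the bucket and file names are hypothetical:

```python
import urllib.request

def range_request(url: str, start: int, end: int):
    """Build a GET request for one byte slice of a public file URL.
    A seeking HTML5 player issues requests like this under the hood."""
    req = urllib.request.Request(url)
    req.add_header("Range", f"bytes={start}-{end}")
    return req

# Fetch the first 1 MiB of a (hypothetical) public movie file.
req = range_request(
    "https://f001.backblaze.com/file/my-bucket/movie.mp4", 0, 1048575)
```

A server that supports this replies with status 206 and a Content-Range header rather than the whole file.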

      • Pablo

        Is there guaranteed bandwidth?
        Is publishing files to a large number of clients a feature of the service?
        The reason I’m asking is that some object storage services are considered slow and meant only for backup. To stream files you need to cache them on a server or use a CDN.

      • Reefiasty

        This is interesting. Why is there a file size limit?

        (by the way, in documentation it says the limit is 5 billion bytes)

        • The current largest file size is 5 GBytes, but we want to support much larger files (imagine a 1 TByte encrypted disk image). That will work by appending chunks to a file, followed by a “commit” declaring the file complete.

          The only reason we didn’t do this immediately was to get to market faster and get feedback.
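The append-then-commit flow described above isn’t in the API yet, but the client side of it might look like this hypothetical sketch: split the file into parts, record a SHA-1 per part (the eventual “commit” would send the ordered list as a manifest), and reassemble on download. All names and the chunk size are invented for illustration.

```python
import hashlib

CHUNK_SIZE = 100 * 1024 * 1024  # e.g. 100 MB parts (arbitrary choice)

def make_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split data into (index, sha1, bytes) parts. Each part could be
    uploaded independently and retried on failure; the final commit
    would declare the ordered list of part SHA-1s."""
    chunks = []
    for offset in range(0, len(data), chunk_size):
        part = data[offset:offset + chunk_size]
        chunks.append((offset // chunk_size,
                       hashlib.sha1(part).hexdigest(), part))
    return chunks

def reassemble(chunks) -> bytes:
    """Rebuild the original bytes from parts, in index order."""
    return b"".join(part for _, _, part in sorted(chunks, key=lambda c: c[0]))
```

Because each part carries its own index and checksum, an interrupted transfer can resume from the last confirmed part instead of restarting the whole file.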

  • Josh

    Filesystem-level access in OSX and Linux with sane approaches to connection drops or reboots.

    This is the most far-reaching tool that I can think of, and one that I’ve dreamed of BackBlaze supporting.
    Being able to spin up a virtual machine that already has access to B2? I can’t think of anything we couldn’t do with that.

    • Scott

      Seconded, filesystem access (or SFTP) would be an amazing boon.

    • We’re hoping to partner with other groups – Cyberduck comes to mind – to provide SFTP and other types of access. It was impossible to enter into these discussions before our announcement, so we’re all really excited to start up conversations with other companies and open source groups now.

      • Transmit is an SFTP app that also supports S3. Arq supports a bunch of backup targets. It would be great if they supported your service as well.

      • Josh

        I’m reading between the lines and hearing that you’re looking to partner with SFTP client software so that the client software will be able to access B2 (similarly to how Cyberduck accesses S3).

        Along with this, how about something like sshfs?
        B2 could be mounted in Linux (for example as “/media/B2/bucketname/”) so that command-line applications that do not natively support B2 could read and write files.
        By sane approaches to connection drops or reboots, I mean that if a file is 20GB and 10GB is uploaded before an arbitrary crash, then the:
        – transfer can be resumed
        – B2 filehandle is still valid
        – B2 file can be erased if needed
        – local filesystem is not corrupted (and if it is, it’s easy to detect/fix)

        • Adam Baxter

          When I get access I’ll throw together a POC with the Python FUSE bindings

      • Adam Baxter

        Arq, definitely. I’m also going to be playing around with a C# client for this tonight (read: can you move me up the list so I can code tonight?)

      • ujay68

        Please consider supporting some kind of standardized API. I’m using ExpanDrive, e.g., to mount cloud storage as an OS X filesystem. Something like that would be great. Persuading countless third-party tools to integrate with yet another API might be hard.

  • RichG

    I am excited about B2. I am curious about the encryption of the data and how the keys are stored. Storage is cheap, security is NOT!!!

    This is from SpiderOak, a service I happily subscribe to: Our systems have Zero Knowledge of your data. In non-technical terms it means that your data is 100% private and only readable by you. No plaintext data, no keys, or file meta data is ever stored on our servers. All this ensures absolute confidentiality of your data.

    • You can encrypt files before you send them to B2. But B2 does not yet have ANY encryption built into it at the B2 level. Our expectation is that a backup service can be implemented on top of B2 today where B2 has “zero knowledge” of the encryption. So the backup service would encrypt the data, store it on B2 where nobody at Backblaze could possibly decrypt it, and the backup program is responsible for storing the keys “elsewhere”.
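The zero-knowledge pattern described above — encrypt locally, upload only ciphertext, keep the key “elsewhere” — can be sketched with nothing but the standard library. This illustration uses a one-time pad (XOR with a random key as long as the data) purely to keep the sketch dependency-free; a real backup client would use an authenticated cipher such as AES-GCM from a proper crypto library.

```python
import secrets

def encrypt_for_upload(plaintext: bytes):
    """Client-side encryption sketch (one-time pad). The ciphertext is
    what goes to B2; the key never leaves the client, so Backblaze has
    zero knowledge of the contents."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """Reverse the XOR with the locally stored key."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

Losing the key means losing the data, which is exactly the trade-off SpiderOak-style zero-knowledge services make: the storage provider cannot help you recover it.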

      • Adam Baxter

        I’ve written about JSON metadata in another comment. I think the encryption of metadata should also be considered by anyone implementing a client

  • Chris M.

    You should partner with Plex to give PlexPass subscribers the ability to operate private instances completely out of the cloud.

    • Josh

      Could you elaborate on what you mean by “operate private instances”?

      Background: I’m long on Plex

    • I’m a big fan of Plex – I started using it back when it was called OSXbmc – but B2 isn’t the right fit for running a Plex instance (assuming you meant PMS – Plex Media Server), as B2 is storage only, not computing.

      Perhaps B2 could be used as a Plex Cloud Sync Provider at some point, but we’ll have to see. https://support.plex.tv/hc/en-us/articles/201889756-Cloud-Sync-Overview

      • Dan

        I think he means so Plex uses your cloud to search for video files.

        • Could you explain what you mean by that? While you could store video files on B2, accessing them via a Plex Client would be impossible without a Plex Media Server, and if you ran that Plex Media Server locally, it defeats Chris M.’s original statement of “completely out of the cloud.”

          • Josh

            With filesystem-level access to B2, it would be easy to store all the video in B2, then run any supported linux PMS on any cloud instance.

            That would be PMS run completely from the cloud.

            Full disclosure: I’ve done something very similar already, but it’s kindergarten level compared to what would be possible with B2.

  • Chris M.

    With this kind of pricing, I’m thinking of using it to sync my local time machine backups to the cloud, so that I have the option to do a bare metal restore from off-site data.

    • Yep, I like that idea, but I remember there were issues with moving Time Machine backups to another disk? I always wondered why Backblaze couldn’t back up the Time Machine file on the disk that is being backed up. IDK, so confusing for my head sometimes.

  • This is extremely great news! I’m currently storing many terabytes on AWS S3, and I’m also an extremely happy Backblaze customer. I’ve got a few questions I don’t see answered here.

    One feature that brought me to Amazon was the integrity guarantees—S3 (and Glacier) promise 99.999999999% durability with protection against corruption (i.e., bitrot). Combined with replication and versioning, you’ve got an ideal “archive” solution. How does B2 compare? What are its (or, at least, expected) service guarantees? What about versioning?

    • Backblaze has versioning, check out: https://www.backblaze.com/b2/docs/file_versions.html

      Backblaze is attempting to be about the same durability as Amazon S3, but you can do some of the math yourself because we’re very open with our designs and how we store your data. We use 17+3 Reed-Solomon across 20 different computers in 20 different locations in our datacenter, you can read about these “vaults” here: https://www.backblaze.com/blog/vault-cloud-storage-architecture/

      Backblaze also releases quarterly reports on our drive failure statistics to help feed your calculations, here is a recent report on drive failure numbers in our datacenter: https://www.backblaze.com/blog/hard-drive-reliability-stats-for-q2-2015/
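The “do some of the math yourself” invitation above can be made concrete. With 17+3 Reed-Solomon, data is lost only if more than 3 of the 20 shards fail before repair, so a binomial tail gives a rough durability figure. The failure probability `p` below is a hypothetical per-rebuild-window value (not a Backblaze figure); plug in your own estimate from their published drive stats.

```python
from math import comb

def loss_probability(n: int = 20, k: int = 17, p: float = 0.005) -> float:
    """Probability that more than n-k of n shards fail, i.e. data loss
    for a k-of-n erasure code, assuming each shard's drive fails
    independently with probability p before the vault can rebuild.
    p = 0.005 is an illustrative guess, not a measured rate."""
    m = n - k  # parity shards: up to m simultaneous failures survive
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(m + 1, n + 1))
```

With the illustrative numbers, the dominant term is the chance of exactly 4 concurrent failures, which is already down in the one-in-hundreds-of-thousands range per rebuild window; shorter rebuild times drive `p`, and therefore the loss probability, down further.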

      • So Backblaze is protecting B2 data using 17+3 erasure coding. Reading erasure coded data usually has a higher latency than reading replicated data, and is typically used for older and “colder” data where increased latency in retrieving data is not a problem. Also, erasure coded data typically is not stored across multiple locations (regions or data centers) for latency reasons. That said, the use of hierarchical erasure codes may alleviate the latency issue. Backblaze only operates a single data center in Sacramento, so this may not be on the radar yet, but it is something to think about.

  • Chris M.

    I think you’re going to need more security – multi-factor authentication for clients, multi-user account support, etc.

    • We have 2FV and it’ll be required for B2 accounts, so we’re working on all aspects! Keep the suggestions coming!

      • Chris M.

        That’s excellent, and I would like to see some of those security measures flow back downstream to consumer clients on the Backblaze side who could benefit from them. I love your service, been with it for years, but I hate the fact that I have to rely on my email provider as security of last resort to prevent account compromise.

        • Chris, you can enable 2FV on your personal account -> https://www.backblaze.com/blog/two-factor-verification-for-backblaze/

          • Are you planning to support FIDO U2F? Yubico makes inexpensive U2F USB keys you can get on Amazon for $18 each. Google supports U2F in Google Apps – it’s easy to register your key, and you can “authorize” your PC for 30 days at a time so you don’t have to keep putting your U2F key in the USB port. If you haven’t spoken to Yubico, I suggest that you do.

  • Patrick Grote

    Any chance of offering an import service like Amazon? Send a drive, you import it locally?

    • We call this “Drive Seeding” and it is on our roadmap, although probably not for the first 3 – 6 months as we scale up to meet demand.

  • Me

    Comments:

    Geographical diversity ?
    Upload/download bandwidth ?

    Third party apps that will utilize B2?
    Integration into popular Small office NAS devices (synology, qnap, etc)?

    • > Geographical diversity

      Not yet, Backblaze only has one datacenter, you can read about how we selected it here: https://www.backblaze.com/blog/our-secret-data-center/

      > Upload/download bandwidth ?

      I claim infinite from a practical viewpoint – but if YOUR connection is faster than 100 Gbit/sec or if you have more than 5 petabytes to upload you should contact us and let us know you are about to hit us with a blast of data. We’ll work with you to make sure everything goes smoothly.

      > Third party apps that will utilize B2?

      Stay tuned. Our problem up until this morning was we couldn’t actually talk to any 3rd party app developers about B2. So now that the cat is out of the bag we can begin those talks. We’re really hoping to add support to third party apps like Cyberduck very soon.

      > Integration into popular Small office NAS devices (synology, qnap, etc)?

      None yet. This last one will take a few months, but hopefully the developers of these other products will help out. The API doesn’t require linking with any toolkits; it’s just a nice, simple RESTful API. We’ve provided examples in six languages already, and will add three or four more over the next few weeks. See here (scroll to the very bottom for the code examples): https://www.backblaze.com/b2/docs/b2_authorize_account.html
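As a taste of how simple the REST API is, here is a Python sketch of the first call every client makes, b2_authorize_account: a GET with HTTP Basic auth over the account ID and application key (placeholders below). The response, not fetched here, carries the authorization token and API/download URLs used by every subsequent call — check the linked docs for the exact fields.

```python
import base64
import urllib.request

API_URL = "https://api.backblaze.com/b2api/v1/b2_authorize_account"

def authorize_request(account_id: str, application_key: str):
    """Build the b2_authorize_account request: HTTP Basic auth is just
    base64("accountId:applicationKey") in the Authorization header."""
    credentials = base64.b64encode(
        f"{account_id}:{application_key}".encode()).decode()
    return urllib.request.Request(
        API_URL, headers={"Authorization": "Basic " + credentials})

# Placeholder credentials; a real client would read these from config.
req = authorize_request("ACCOUNT_ID", "APPLICATION_KEY")
```

Since it is plain HTTPS plus JSON, the same call is one line of curl or a few lines in any language — no SDK required.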

  • sstave

    You asked for feature requests via comments:

    From a personal consumer perspective, and also being an IT person… I’d like to be able to easily associate a bucket in B2 as the file repo for each of my machines. I have too many machines in my life:

    Kids machine
    Wife’s laptop
    Office Desktop
    Mac mini
    Work laptop 1, 2
    Work desktop 1, 2
    Linux laptop 1, 2
    Windows laptop
    Gaming PC

    mobile client with upload to the pics bucket:

    kid’s phones 1, 2, 3
    Dad’s phones 1, 2
    Mom’s phone

    All y’all motherf*ckers need a bucket — and a mobile upload client for my pics from mobiles

    Would be great to have the client on each of these machines set to their own backup bucket — BUT setup to have some shared media buckets:

    Always save all my pics from all machines to my pics bucket.
    Always save all my movies from all machines to the movies bucket
    Here’s how to setup VLC to stream from your movies bucket
    Here’s how you use a central encrypted bucket for all passwords, medical files, and other personal documents – accessible from each home machine with a central password

    etc..

    It would be great to make it so the B2 service will help regular consumers (who have even far fewer machines than people like me) to understand how to properly store and _structure_ their data in a service like this that reduces duplication, centralizes the types of things that will come from all machines and devices.

    Lots of “family management” apps could be built on top of this…

    • Great suggestion! Definitely will send it to management/dev teams!

      • karl

        Will third-party software developers be able to make apps such as FastGlacier to upload to and use B2?

        • iansltx

          Yes. There’s an API already documented. I’ll build a PHP binding to a popular file system abstraction library (flysystem) if someone doesn’t beat me to it…

          • karl

            I wish I’d kept up with programming. I would have made a command-line app for Linux for my server.

  • OllieJones

    Wow. From driving around looking for Best Buys with disk drives for sale to global infrastructure! Way to get things done! Congratulations.