With new reports of ransomware attacks surfacing every day, it’s no surprise that thousands of customers have adopted Object Lock functionality and Veeam® immutability support via the Backblaze S3 Compatible API since we launched the feature.
We were proud to be the first public cloud storage alternative to Amazon S3 to earn the Veeam Ready-Object with Immutability qualification, but the work started well before that. In this post, I’ll walk you through how we approached development, sifted through discrepancies between AWS documentation and S3 API behavior, solved for problematic retention scenarios, and tested the solutions.
Object Lock and File Lock are the same feature under two names: Object Lock is the term used in the S3 Compatible API documentation, and File Lock is the term used in the B2 Native API documentation. Both allow you to store objects using a Write Once, Read Many (WORM) model, meaning that once written, data cannot be modified or deleted, either for a defined period of time or indefinitely.
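The WORM semantics can be sketched in a few lines of Python. This is an illustrative simulation of the model described above, not Backblaze’s implementation; the class and names are hypothetical:

```python
from datetime import datetime, timedelta, timezone

class WormObject:
    """Illustrative WORM semantics: once written, data cannot be
    modified or deleted until the retention date has passed."""

    def __init__(self, data: bytes, retain_until: datetime):
        self._data = data
        self.retain_until = retain_until

    def _locked(self) -> bool:
        return datetime.now(timezone.utc) < self.retain_until

    def read(self) -> bytes:
        return self._data  # reads are always allowed

    def write(self, data: bytes) -> None:
        if self._locked():
            raise PermissionError("object is under retention")
        self._data = data

    def delete(self) -> None:
        if self._locked():
            raise PermissionError("object is under retention")
        self._data = None

# Lock for one day: reads succeed, writes and deletes are rejected.
obj = WormObject(b"backup", datetime.now(timezone.utc) + timedelta(days=1))
assert obj.read() == b"backup"
try:
    obj.delete()
except PermissionError:
    pass  # rejected, as WORM requires
```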
How We Developed Object Lock and File Lock
Big picture, we wanted to offer our customers the ability to lock their data, but achieving that functionality for all customers involved a few different development objectives:
- First, we wanted to answer the call for immutability support from Veeam + Backblaze B2 customers via the Backblaze S3 Compatible API, but we knew that Veeam was only part of the answer.
- We also wanted to offer the ability to lock objects via the S3 Compatible API for non-Veeam customers.
- And we wanted to offer the ability to lock files via the B2 Native API.
To avoid overlapping work and achieve priority objectives first, we took a phased approach. Within each phase, we identified tasks that had dependencies and tasks that could be completed in parallel. First, we focused on S3 Compatible API support and the subset of APIs that Veeam used to achieve the Veeam Ready-Object with Immutability qualification. Phase two brought the remainder of the S3 Compatible API as well as File Lock capabilities for the B2 Native API. Phasing development let us be efficient and, in keeping with the general software principle of code reuse, minimize rework on the B2 Native API once the S3 Compatible API was complete. For organizations that don’t use Veeam, our S3 Compatible API and B2 Native API solutions have been exactly what they needed to lock their files in a cost-effective, easy-to-use way.
AWS Documentation Challenges: Solving for Unexpected Behavior
At the start of the project, we spent a lot of time testing various documented and undocumented scenarios in AWS. For example, the AWS documentation at that point did not specify what happens if you attempt to switch from governance mode to compliance mode and vice versa, so we issued API calls to find out. Moreover, if we saw inconsistencies between the final outputs of the AWS Command Line Interface and the Java SDK library, we would take the raw XML response from the AWS S3 server as the basis for our implementation.
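A probe of that kind can be sketched as a small helper that attempts the retention-mode switch in both directions and records which ones the server accepts. The bucket and key names are hypothetical, and the helper takes any client object exposing `put_object_retention` (such as a boto3 S3 client), so it can be exercised without a live endpoint:

```python
from datetime import datetime

def probe_mode_switch(s3, bucket: str, key: str, retain_until: datetime):
    """Attempt to set an object's lock mode to COMPLIANCE, then
    GOVERNANCE, recording which direction the server accepts.
    `s3` is any client exposing put_object_retention."""
    results = {}
    for target in ("COMPLIANCE", "GOVERNANCE"):
        try:
            s3.put_object_retention(
                Bucket=bucket,
                Key=key,
                Retention={"Mode": target, "RetainUntilDate": retain_until},
            )
            results[target] = "accepted"
        except Exception as exc:  # a real client raises ClientError
            results[target] = f"rejected: {exc}"
    return results
```

Running a probe like this against the live service surfaces the server’s actual responses for cases the documentation leaves unspecified.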
In compliance mode, users can extend the retention period, but they cannot shorten it under any circumstances. In governance mode, users can alter the retention period to be shorter or longer, remove it altogether, or even remove the file itself if they have an enhanced application key capability along with the standard read and write capabilities. Without the enhanced application key capability, governance mode behaves similarly to compliance mode.
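The rules in the paragraph above can be expressed as a single predicate. This is a hedged sketch of the described semantics, with hypothetical names; `new_until=None` represents removing retention entirely:

```python
from datetime import datetime
from typing import Optional

def may_change_retention(mode: str,
                         current_until: datetime,
                         new_until: Optional[datetime],
                         has_bypass_capability: bool) -> bool:
    """Sketch of the retention-change rules: compliance locks can only
    be extended; governance locks can be shortened or removed only with
    the enhanced (bypass) application key capability."""
    extending = new_until is not None and new_until >= current_until
    if mode == "COMPLIANCE":
        return extending
    if mode == "GOVERNANCE":
        return extending or has_bypass_capability
    raise ValueError(f"unknown mode: {mode}")

# Example: compliance retention can never be shortened.
assert may_change_retention("COMPLIANCE",
                            datetime(2030, 1, 1), datetime(2031, 1, 1),
                            False)
assert not may_change_retention("COMPLIANCE",
                                datetime(2030, 1, 1), datetime(2029, 1, 1),
                                True)
```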
Not only did the AWS documentation fail to account for some scenarios, there were also instances where it didn’t match actual system behavior. We used an existing AWS S3 service to test API responses with Postman, an API development platform, and compared them to the documentation. To maximize compatibility, we decided to mimic the observed behavior rather than the documented behavior. We resolved the inconsistencies by making the same API invocation against both the AWS S3 service and our server, then verifying that our server returned similar XML to the AWS S3 service.
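Comparing XML responses from two servers means comparing content rather than formatting, since whitespace and indentation can legitimately differ. One way to sketch such a comparison, using only the standard library (the sample documents here are illustrative, not actual server output):

```python
import xml.etree.ElementTree as ET

def canonical(xml_text: str):
    """Reduce an XML document to a nested (tag, text, children)
    structure with whitespace stripped, so two servers' responses
    can be compared on content rather than formatting."""
    def walk(el):
        text = (el.text or "").strip()
        return (el.tag, text, [walk(child) for child in el])
    return walk(ET.fromstring(xml_text))

aws_xml = ("<Retention><Mode>COMPLIANCE</Mode>"
           "<RetainUntilDate>2030-01-01T00:00:00Z</RetainUntilDate>"
           "</Retention>")
b2_xml = """<Retention>
  <Mode>COMPLIANCE</Mode>
  <RetainUntilDate>2030-01-01T00:00:00Z</RetainUntilDate>
</Retention>"""

# Differently formatted but semantically identical responses match.
assert canonical(aws_xml) == canonical(b2_xml)
```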
Retention Challenges: What If a Customer Wants to Close Their Account?
Our team raised an intriguing question in the development process: What if a customer accidentally sets the retention term far in the future, and then they want to close their account?
Originally, we required customers to delete the buckets and files they created or uploaded before closing their account. If, for example, they enabled Object Lock on any files in compliance mode, and had not yet reached the retention expiration date when they wanted to close their account, they couldn’t delete those files. A good thing for data protection. A bad thing for customers who want to leave (even though we hate to see them go, we still want to make it as easy as possible).
The question spawned a separate project that allowed customers to close their account without first deleting their files and buckets. After the account was closed, the system would asynchronously delete the files, even those still under retention, and then the associated buckets. However, this led to another problem: if we allow files under retention to be deleted asynchronously in this scenario, how do we ensure that no other files under retention are mistakenly deleted?
The tedious but truthful answer is that we added extensive checks and tests to ensure that the system would only delete files under retention in two scenarios (assuming the retention date had not already expired):
- If a customer closed an account.
- If the file was retained under governance mode, and the customer had the appropriate application key capability when submitting the delete request.
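The two scenarios above amount to a default-deny check: deletion of a file still under retention is rejected unless it matches one of the allowed cases. A minimal sketch, with hypothetical names:

```python
from datetime import datetime, timezone
from typing import Optional

def may_delete_retained_file(mode: str,
                             retain_until: datetime,
                             account_closed: bool,
                             has_bypass_capability: bool,
                             now: Optional[datetime] = None) -> bool:
    """Default-deny sketch of the two scenarios above: a file still
    under retention may be deleted only on account closure, or under
    governance mode by a key with the bypass capability."""
    now = now or datetime.now(timezone.utc)
    if now >= retain_until:
        return True  # retention expired; normal deletion rules apply
    if account_closed:
        return True  # asynchronous cleanup after account closure
    if mode == "GOVERNANCE" and has_bypass_capability:
        return True  # governance override with the enhanced capability
    return False     # everything else is rejected
```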
Testing, Testing: Out-thinking Threats
Features like Object Lock or File Lock have to be bulletproof. As such, testing different scenarios, like the retention example above and many others, posed the most interesting challenges. One critical example: We had to ensure that we protected locked files such that there was no back door or sequence of API calls that would allow someone to delete a file with Object Lock or File Lock enabled. Not only that, we also had to prevent the metadata of the lock properties from being changed.
We approached this problem like a bank teller approaches counterfeit bill identification. They don’t study the counterfeits; they study the real thing so they know the difference. What does that mean for us? There are an infinite number of ways a nefarious actor could try to game the system, just like there are an infinite number of counterfeits out there. Instead of trying to think of every possible attack, we identified the handful of ways a user could legitimately delete a file, then solved for how to reject anything outside of those strict parameters.
Developing and testing Object Lock and File Lock was truly a team effort, and making sure we had everything accounted for and covered was an exercise that we all welcomed. We expected challenges along the way, and thanks to our great team members, both on the Engineering team and in Compliance, TechOps, and QA, we were able to meet them. When all was said and done, it felt great to be able to work on a much sought-after feature and deliver even more data protection to our customers.
“The immutability support from Backblaze made the decision to tier our Veeam backups to Backblaze B2 easy. Immutability has given us one more level of protection against the hackers. That’s why that was so important to us and most importantly, to our customers.”
—Gregory Tellone, CEO, Continuity Centers