Backblaze B2 Developer Quick-Start Guide

Use this guide to build applications that use Backblaze B2 Cloud Storage with our S3-compatible API in thirty minutes or less, even if you have no experience working with cloud object storage.

Overview

Backblaze B2 is public cloud object storage. B2 has an S3-compatible API, so you can configure almost any application, tool, or SDK that is designed for Amazon S3.

As well as its S3-compatible API, Backblaze B2 provides its own B2-native API. Most of the native API operations are equivalent to S3 actions. There are a few native API calls for B2-specific functionality, but the S3-compatible API has everything that most applications need. In fact, the majority of existing applications and tools that were originally written against Amazon S3 work with Backblaze B2's S3-compatible API.

Enable B2 Cloud Storage

Before you begin, you must have a Backblaze account. You can sign up here, and B2 Cloud Storage is automatically enabled. If you already have a Backblaze account and your left navigation menu contains a B2 Cloud Storage section, your account is already enabled for B2 Cloud Storage!

  1. Sign in to your Backblaze account.
  2. In the left navigation menu under Account, click My Settings.
  3. Under Enabled Products, select the checkbox to enable B2 Cloud Storage.
  4. Click OK.

Create a Bucket

A bucket is a container that holds files that are uploaded into B2 Cloud Storage.

  1. Sign in to your Backblaze account.
  2. In the left navigation menu under B2 Cloud Storage, click Buckets.
  3. Click Create a Bucket.
  4. Enter a name for your bucket, and select Public.
    Bucket names must be at least six characters and globally unique. A message is displayed if your bucket name is already in use.
  5. Ensure that Default Encryption and Object Lock are disabled.
  6. Click Create a Bucket.
  7. Save the value that is in the Endpoint field; you will need this in another step.
  8. Click Lifecycle Settings to control how long to keep the files in your new bucket.
    The file lifecycle configuration defaults to keep all versions of the files. This is in contrast to Amazon S3, which disables versioning on newly created buckets.

Upload a File

You can upload an image to this public B2 bucket from your local drive.

  1. Sign in to your Backblaze account.
  2. In the left navigation menu under B2 Cloud Storage, click Buckets.
  3. In your bucket details, click Upload/Download and click Upload.
  4. Drop your image from your local drive into the dialog box, or manually select your file.
  5. In the left navigation menu, click Browse Files to see your uploaded file and click the image filename to see more details.

File Details

After your file is uploaded, you can view the following details.

Name

File name

Bucket Name

Unique bucket name

Bucket Type

File visibility in B2 is set on the bucket rather than on individual files. Possible values are private and public.
If you set this value to public, the file is accessible through any of the three URLs without any credentials. Public buckets are never publicly writable, only publicly readable; you still need appropriate credentials to upload files to the bucket.
If you set this value to private, only requests with appropriate credentials can access the files in the bucket.

Friendly URL
S3 URL
Native URL

If your bucket is public, you can click any of these three URLs to view the image. In general, it is easiest to use the S3 URL in the format https://<bucket name>.<endpoint>/<file name>.
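Assembling the S3 URL from its parts is a one-liner. A minimal sketch (the helper name is illustrative; the bucket, endpoint, and file names are this guide's placeholder values):

```python
# Build the S3 URL for a public file from its bucket name, endpoint, and file name.
def s3_url(bucket_name: str, endpoint: str, file_name: str) -> str:
    return f"https://{bucket_name}.{endpoint}/{file_name}"

url = s3_url("my-unique-bucket-name", "s3.us-west-004.backblazeb2.com", "my-sample-image.png")
print(url)  # https://my-unique-bucket-name.s3.us-west-004.backblazeb2.com/my-sample-image.png
```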

Kind

Type of file, for example, image/png

Size

File size, for example, 79.6 KB

Uploaded

Date and time the file was uploaded

Fguid

File version's unique identifier within B2
This is the same value that is in the fileId query parameter in the Native URL.

Sha1

SHA-1 digest of the file content. The digest is useful to verify that the file was not corrupted in transmission.
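If you want to verify the digest yourself, Python's standard hashlib computes the same SHA-1 over the downloaded bytes. A minimal sketch with an arbitrary sample input (the helper name is illustrative):

```python
import hashlib

# Compute the hex SHA-1 of file content, matching the Sha1 value in the file details.
def sha1_hex(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

print(sha1_hex(b"hello"))  # aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
```

Compare the result against the Sha1 value shown in the file details to confirm the download was not corrupted.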

File Info

Set of up to 10 key-value pairs that hold user-defined metadata for the file.
In the following example, the web console set src_last_modified_millis to the last modified time of the uploaded file on your local drive, a few minutes before it was uploaded.

src_last_modified_millis: 123456789123 (12/15/2022 14:04)
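Since src_last_modified_millis is a Unix timestamp in milliseconds, converting it to a readable UTC time takes one standard-library call. A sketch, using the millis value that appears in the curl output later in this guide:

```python
from datetime import datetime, timezone

# src_last_modified_millis is milliseconds since the Unix epoch.
millis = 1658351049017
dt = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S UTC"))  # 2022-07-20 21:04:09 UTC
```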

File Info corresponds to Amazon S3's user-defined object metadata. It is returned with x-amz-meta- prefixed HTTP headers when getting the file using its S3 URL:

% curl -I
https://my-unique-bucket-name.s3.us-west-004.backblazeb2.com/my-sample-image.png
HTTP/1.1 200
Accept-Ranges: bytes
Last-Modified: Wed, 20 Jul 2022 21:18:44 GMT
ETag: "94ef3ecd9772a6dd5d4bdaebddce6a16"
x-amz-meta-src_last_modified_millis: 1658351049017
x-amz-request-id: a2286d690971ff97
x-amz-id-2: aM7Vk6DhsMVhmSTaDN1g4DmI7OB80nzMB
x-amz-version-id: 4_ze30d68217f1617d88b280413_f10477368fe8fd6f1_d20220720_m211844_c004_v0402004_t0001_u01658351924129
Content-Type: image/png
Content-Length: 79571
Date: Wed, 20 Jul 2022 22:07:18 GMT
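Recovering File Info from a response is just a matter of filtering header names for the x-amz-meta- prefix. A small sketch using a header dict trimmed to the relevant entries from the curl output above:

```python
# Extract user-defined metadata (File Info) from S3-style response headers.
headers = {
    "ETag": '"94ef3ecd9772a6dd5d4bdaebddce6a16"',
    "x-amz-meta-src_last_modified_millis": "1658351049017",
    "Content-Type": "image/png",
}

PREFIX = "x-amz-meta-"
file_info = {k[len(PREFIX):]: v for k, v in headers.items()
             if k.lower().startswith(PREFIX)}
print(file_info)  # {'src_last_modified_millis': '1658351049017'}
```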

Encryption

Server-side encryption status of the file. Because you disabled Default Encryption when you created the bucket, no server-side encryption is applied.

Create an Application Key

Because your application accesses B2 using an S3-compatible API, you need an application key. Application keys control access to your Backblaze B2 account and your buckets that are contained in your account. Application keys in Backblaze B2 broadly correspond to access keys in Amazon S3. For more information, see Application Keys.

  1. Sign in to your Backblaze account.
  2. In the left navigation menu under Account, click App Keys.
  3. Click Add a New Application Key, and enter a key name.
    Application key names are not globally unique.
  4. In the Allow Access to Bucket(s) field, ensure that All is selected.
    You can restrict an application key to a single bucket, but to create a new bucket, allow this key full access.
  5. Ensure that the type of access is set to Read and Write.
    File name prefix and duration are optional values.
  6. Click Create New Key, and note the resulting keyID and applicationKey values.

Important note: You can always find the keyID on this page, but for security, the applicationKey appears only once. Make sure you copy and securely save this value elsewhere.

A *keyID* is similar to Amazon S3's access key ID. An *applicationKey* is similar to a secret access key; treat it like a password.

Create Applications to Access Backblaze B2

You will use Amazon's tools to access B2 programmatically. This guide provides procedures to install, configure, and create applications for command-line interface (CLI), Java, and Python.

Use CLI to Create an Application

The following procedures demonstrate how to create an application using the AWS CLI.

Install the AWS CLI

Click AWS CLI installation instructions and follow the instructions for your operating system.

Configure AWS (CLI)

Follow this procedure to create a named profile to access Backblaze B2; this allows you to easily access B2 and alternative S3-compatible cloud object stores. You can also configure your default profile, set environment variables, or use any other configuration mechanism that the AWS CLI supports.

  1. In Terminal, enter the following command to use the AWS configure command to create a named profile for the AWS CLI to access your Backblaze B2 account.
    1. Use the key ID and the application key that you created for the AWS access key ID and secret access key.
    2. Leave the default region name blank, and set the default output format to json.
    $ aws configure --profile b2tutorial
    AWS Access Key ID [None]: <the key id you just created>
    AWS Secret Access Key [None]: <the application key you just created>
    Default region name [None]:
    Default output format [None]: json
    
  2. Enter the following command to set the AWS Signature Version. Backblaze B2 supports AWS Signature Version 4.
    aws configure --profile b2tutorial set s3.signature_version s3v4
    

Since the AWS CLI does not allow you to save the endpoint configuration in a profile, you must specify the endpoint with the --endpoint-url option every time you use the CLI to access Backblaze B2.

If you do not specify the endpoint in an AWS command, the following error is returned:

An error occurred (InvalidAccessKeyId) when calling the ListObjectsV2 operation: The AWS Access Key Id you provided does not exist in our records.

If you specify a valid B2 endpoint that does not match your application key in an AWS command, the following error is returned:

An error occurred (InvalidAccessKeyId) when calling the ListObjectsV2 operation: The key '<your key id>' is not valid

Similarly, you must specify the named profile with --profile b2tutorial each time you use the AWS CLI, for example:

aws --profile b2tutorial --endpoint-url https://s3.us-west-004.backblazeb2.com s3api list-buckets

List Existing Buckets (CLI)

The simplest S3 action is 'List Buckets'. It requires no parameters and returns a list of all of the buckets within the account.
The AWS CLI contains two commands for working with Amazon S3 and compatible object stores such as Backblaze B2: the high-level s3 command and the lower-level s3api command.

This procedure uses the lower-level s3api command since it allows greater control over the requests you send to Backblaze B2 and greater flexibility in formatting responses.

  1. In a Terminal window, run the following command to list existing buckets in your B2 account:
    aws --profile b2tutorial --endpoint-url https://<your endpoint> s3api list-buckets
    
    Example
    % aws --profile b2tutorial --endpoint-url https://s3.us-west-004.backblazeb2.com s3api list-buckets              
    {
        "Buckets": [
            {
                "Name": "my-unique-bucket-name",
                "CreationDate": "2022-07-20T21:09:06.528000+00:00"
            }
        ],
        "Owner": {
            "DisplayName": "",
            "ID": "3d81f678b843"
        }
    }
    
  2. Enter the following command to show only the bucket names:
    % aws --profile b2tutorial --endpoint-url https://s3.us-west-004.backblazeb2.com s3api list-buckets --query 'Buckets[].[Name]' --output text
    my-unique-bucket-name
    
    This technique lets you build powerful shell scripts to manipulate data in Backblaze B2.

Create a Private Bucket (CLI)

You already created a public bucket in the web console. Follow this procedure to create a private bucket programmatically with the S3 'Create Bucket' action.

  1. In a terminal window, run the following command:
    aws --profile b2tutorial --endpoint-url https://<your endpoint> s3api create-bucket --bucket <another-unique-bucket-name> --acl private
    
    Example
    % aws --profile b2tutorial --endpoint-url https://s3.us-west-004.backblazeb2.com s3api create-bucket --bucket another-unique-bucket-name --acl private
    
    An output similar to the following example is returned.
    {
        "Location": "/another-unique-bucket-name"
    }
    
    If the bucket already exists in another account, the following message is returned:
    An error occurred (BucketAlreadyExists) when calling the CreateBucket operation: Bucket name is already in use!
    
  2. If no errors are returned, run the following command again.
    aws --profile b2tutorial --endpoint-url https://<your endpoint> s3api create-bucket --bucket <another-unique-bucket-name> --acl private
    
    The following message is returned:
    An error occurred (BucketAlreadyOwnedByYou) when calling the CreateBucket operation: Your previous request to create the named bucket succeeded and you already own it.
    

When you use the AWS command in a script, you can capture the error output and test for the presence of BucketAlreadyExists or BucketAlreadyOwnedByYou so your script can handle the error accordingly.

Use the following example of bash for macOS and Linux:

# Replace the bucket name with your own.
bucket_name=another-unique-bucket-name

# Create a new private bucket, capturing error output from the aws command
if error=$(aws --profile b2tutorial \
    --endpoint-url https://s3.us-west-004.backblazeb2.com \
    s3api create-bucket \
    --bucket ${bucket_name} 2>&1 1>/dev/null); then
  echo "Success! Created ${bucket_name}"
else
  if [[ "$error" == *"BucketAlreadyOwnedByYou"* ]]; then
    printf "You already created %s.\nCarrying on...\n" "${bucket_name}"
  elif [[ "$error" == *"BucketAlreadyExists"* ]]; then
    printf "%s already exists in another account.\nExiting.\n" "${bucket_name}"
  fi
fi

Upload a File to a Bucket (CLI)

In this final section of the tutorial, you will upload a file to the private bucket using the S3 'Put Object' action.

  1. To upload a single file to your private bucket in B2, run the following command.
    aws --profile b2tutorial --endpoint-url https://<your endpoint> s3api put-object --bucket <another-unique-bucket-name> --key <file-name-in-b2> --body /path/to/local-file
    
    Example
    % aws --profile b2tutorial --endpoint-url https://s3.us-west-004.backblazeb2.com s3api put-object --bucket another-unique-bucket-name --key myimage.png --body ~/Pictures/myimage.png
    
    An output similar to the following example is returned.
    {
            "ETag": "\"78024721ce114961c53ddb2114e8759d\"",
       "VersionId": "4_z838d18a1bf8627788b280413_f10015013bc918a5f_d20220722_m234152_c004_v0402002_t0040_u01658533312064"
    }
    

Etag and VersionId Output (CLI)

The ETag value identifies a specific version of the file's content. ETag is a standard HTTP header that is included when clients download files from B2; it enables caches to save bandwidth because a web server does not need to resend a full response if the content has not changed. VersionId identifies a specific version of the file within B2. If a file is uploaded to an existing key in a bucket, a new version of the file is stored even if the file content is the same.

To see the difference between ETag and VersionId, run the 'upload file' commands a second time and upload the same file content to the same bucket and key. The ETag is the same since the content hasn't changed, but a new VersionId is returned.
An output similar to the following example is returned.

{
    "ETag": "\"78024721ce114961c53ddb2114e8759d\"",
    "VersionId": "4_z838d18a1bf8627788b280413_f1108c86b42292bdf_d20220722_m234154_c004_v0402002_t0058_u01658533314875"
}
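One way to see why the ETag repeats: for a single-part upload like this one, the ETag is typically the hex MD5 digest of the file content (multipart uploads use a different scheme), so identical content yields an identical ETag. A sketch with an arbitrary sample input (the helper name is illustrative):

```python
import hashlib

# For a single-part put-object, the ETag is normally the MD5 of the uploaded
# bytes, so re-uploading the same content produces the same ETag.
def etag_for(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

print(etag_for(b"hello"))  # 5d41402abc4b2a76b9719d911017c592
```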

Use the put-object command to upload a single file. You can use the higher-level s3 command to upload multiple files in a single command. For example, to copy all PNG files from ~/Pictures to a bucket, run the following command:

aws --profile b2tutorial --endpoint-url https://s3.us-west-004.backblazeb2.com s3 cp ~/Pictures s3://another-unique-bucket-name --recursive --exclude "*" --include "*.png"

Note: The --include and --exclude parameters can be tricky. Use the --dryrun option to verify that the command will run as you expect.

Browse Files (CLI)

In the web console, navigate to your private bucket on the Browse Files page. Your file is displayed with a (2) next to the filename.

If you click the (2), and click one of the file versions, you will see that the Fguid matches the VersionId that was returned when the file was created.

There is also no File Info for this file. The web console set the src_last_modified_millis attribute for the file that you uploaded earlier, but you did not specify one when you uploaded the file.

Click one of the URLs to open it in the browser. You cannot access the file because it is in a private bucket. The S3-compatible API returns the following XML-formatted error for the S3 URL.

<Error>
    <Code>UnauthorizedAccess</Code>
    <Message>bucket is not authorized: another-unique-bucket-name</Message>
</Error>

The B2 Native API returns a similar, JSON-formatted error for the Native and Friendly URLs:

{
  "code": "unauthorized",
  "message": "",
  "status": 401
}

Use Java to Create an Application

The following procedures demonstrate how to create an application using Java.

Install the AWS SDK

You must have Java Development Kit 8 or later and Apache Maven. You do not need to install the AWS SDK for Java 2.x because Maven will take care of this later.

Configure AWS (Java)

Follow this procedure to create a named profile to access Backblaze B2; this allows you to easily access B2 and alternative S3-compatible cloud object stores. You can also configure your default profile, set environment variables, or use any other configuration mechanism that is supported by Java.
If you don't have the CLI, you can create a new AWS profile by creating or editing the AWS configuration files.

You can find the AWS credentials file at the following locations:

  Linux and macOS: ~/.aws/credentials
  Windows: %USERPROFILE%\.aws\credentials

You can find the AWS configuration file at the following locations:

  Linux and macOS: ~/.aws/config
  Windows: %USERPROFILE%\.aws\config

  1. Create the .aws directory and credentials file, if they do not already exist, and add the following section to the file, substituting your credentials.
    [b2tutorial]
    aws_access_key_id = <your_key_id>
    aws_secret_access_key = <your_application_key>
    
  2. Create the configuration file if it does not already exist and add the following section.
    [b2tutorial]
    output = json
    s3 =
        signature_version = s3v4
    

List Existing Buckets (Java)

The simplest S3 action is 'List Buckets'. It requires no parameters and returns a list of all of the buckets within the account.

  1. Use the following command to create a new Maven project.
    mvn -B archetype:generate \                                       
    -DarchetypeGroupId=org.apache.maven.archetypes \
    -DarchetypeArtifactId=maven-archetype-quickstart \
    -DgroupId=com.example.b2client \
    -DartifactId=b2client
    
  2. To configure the project with a dependency for the AWS SDK and to specify Java 8 as the compiler version, in the b2client directory that you created in the previous step, replace the contents of the generated pom.xml file with the following code.
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
     <modelVersion>4.0.0</modelVersion>
      <groupId>com.example.b2client</groupId>
      <artifactId>b2client</artifactId>
      <packaging>jar</packaging>
      <version>1.0-SNAPSHOT</version>
      <name>b2client</name>
      <url>http://maven.apache.org</url>
      <dependencyManagement>
        <dependencies>
          <dependency>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>bom</artifactId>
            <version>2.16.60</version>
            <type>pom</type>
            <scope>import</scope>
          </dependency>
        </dependencies>
      </dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
          <version>3.8.1</version>
          <scope>test</scope>
        </dependency>
        <dependency>
          <groupId>software.amazon.awssdk</groupId>
          <artifactId>s3</artifactId>
        </dependency>
        <dependency>
          <groupId>org.slf4j</groupId>
          <artifactId>slf4j-nop</artifactId>
          <version>1.7.36</version>
        </dependency>
      </dependencies>
      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.8.1</version>
            <configuration>
              <source>8</source>
              <target>8</target>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </project>
    
  3. Replace the contents of the generated App.java file with the following code.
    package com.example.b2client;
    import java.net.URI;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    
    import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.s3.S3Client;
    import software.amazon.awssdk.services.s3.model.*;
    
    public class App
    {
        // Change this to the endpoint from your bucket details, prefixed with "https://"
        private static String ENDPOINT_URL = "https://<your endpoint>";
    
       public static void main( String[] args )
               throws Exception
        {
            // Extract the region from the endpoint URL
            Matcher matcher = Pattern.compile("https://s3\\.([a-z0-9-]+)\\.backblazeb2\\.com").matcher(ENDPOINT_URL);
            if (!matcher.find()) {
                System.err.println("Can't find a region in the endpoint URL: " + ENDPOINT_URL);
                System.exit(1);
            }
            String region = matcher.group(1);
    
            // Create a client. The try-with-resources pattern ensures the client is cleaned up when we're done with it
           try (S3Client b2 = S3Client.builder()
                   .region(Region.of(region))
                    .credentialsProvider(ProfileCredentialsProvider.create("b2tutorial"))
                    .endpointOverride(new URI(ENDPOINT_URL)).build()) {
    
                // Get the list of buckets
               List<Bucket> buckets = b2.listBuckets().buckets();
    
               // Iterate through list, printing each bucket's name
                System.out.println("Buckets in account:");
                for (Bucket bucket : buckets) {
                    System.out.println(bucket.name());
                }
            }
        }
    }
    
    A warning may appear that indicates some of the imports are unused. They are all included so that you do not have to add more later.
  4. Edit the value of the ENDPOINT_URL string constant to match your endpoint, for example:
    private static String ENDPOINT_URL = "https://s3.us-west-004.backblazeb2.com";
    
    The AWS SDK for Java 2.x requires a region when creating an S3 client, so the application extracts the region substring from the endpoint URL with a regular expression. This means you do not have to define a separate string constant for the region that matches the endpoint.

    The app builds an S3Client instance with the region, profile, and endpoint, and invokes the client's listBuckets() method, calling buckets() on the response to obtain a List of Bucket objects. Finally, the app iterates through the list, printing each bucket's name.

  5. Compile and run the following sample.
    mvn package exec:java -Dexec.mainClass="com.example.b2client.App" -DskipTests --quiet
    
The first time you build the project, Maven downloads the project dependencies, so it might take a minute or two to complete. An output similar to the following example is returned.
Buckets in account:
my-unique-bucket-name

Create a Private Bucket (Java)

You already created a public bucket in the web console. Follow this procedure to create a private bucket programmatically with the S3 'Create Bucket' action.

  1. Add the following code at the bottom of the main() method in App.java, and replace the bucket name with a unique name.
    String bucketName = "another-unique-bucket-name";
    try {
        System.out.println("\nTrying to create bucket: " + bucketName);
        CreateBucketResponse createBucketResponse = b2.createBucket(CreateBucketRequest
                .builder()
                .bucket(bucketName)
                .acl(BucketCannedACL.PRIVATE)
                .build());
        System.out.println("Success! Response is: " + createBucketResponse.toString());
    } catch (BucketAlreadyOwnedByYouException e) {
        System.out.println("You already created " + bucketName + ". \nCarrying on...");
    } catch (BucketAlreadyExistsException e) {
        System.out.println(bucketName + " already exists in another account. \nExiting.");
        System.exit(1);
    }
    
    The app builds a CreateBucketRequest instance with the bucket name and the private canned ACL, passing it to the client's createBucket() method and displaying the response from B2.

    The bucket may already exist, in which case createBucket() throws an exception. The exception's class indicates whether the bucket is owned by your account. If so, the app carries on to the next step; otherwise, the app exits with an error.

  2. Compile and run the following code again.
    mvn package exec:java -Dexec.mainClass="com.example.b2client.App" -DskipTests --quiet
    
    An output similar to the following example is returned.
    Buckets in account:
    my-unique-bucket-name
    
    Trying to create bucket: another-unique-bucket-name
    Success! Response is: CreateBucketResponse(Location=/another-unique-bucket-name)
    
    If the bucket already exists in another account, the following message is returned:
    Buckets in account:
    my-unique-bucket-name
    
    Trying to create bucket: tester
    tester already exists in another account. 
    Exiting.
    
  3. After the bucket is created, run the following code again.
    mvn package exec:java -Dexec.mainClass="com.example.b2client.App" -DskipTests --quiet
    
    The following output indicates that the exception was handled.
    Buckets in account:
    another-unique-bucket-name
    my-unique-bucket-name
    
    Trying to create bucket: another-unique-bucket-name
    You already created another-unique-bucket-name. 
    Carrying on...
    

Upload a File to a Bucket (Java)

In this final section of the tutorial, you will upload a file to the private bucket using the S3 'Put Object' action.

  1. To upload a single file to your private bucket in B2, add the following code after the last section that you pasted into App.java, and replace the path with the path of your file to upload.
    // The key in B2 is set to the file name.
    Path pathToFile = Paths.get("./myimage.png");
    PutObjectResponse putObjectResponse = b2.putObject(PutObjectRequest.builder()
            .bucket(bucketName)
            .key(pathToFile.getFileName().toString())
            .build(),
            pathToFile);
    System.out.println("Success! Response is: " + putObjectResponse.toString());
    
    This section of code builds a PutObjectRequest that specifies the same bucket name as before and sets the key to the file name from the path you specify. The request is passed to the putObject method with the path to your file.
  2. Use the following code to build and run the app again.
    mvn package exec:java -Dexec.mainClass="com.example.b2client.App" -DskipTests --quiet
    
    An output similar to the following example is returned.
    Buckets in account:
    another-unique-bucket-name
    my-unique-bucket-name
    
    Trying to create bucket: another-unique-bucket-name
    You already created another-unique-bucket-name. 
    Carrying on...
    Success! Response is: PutObjectResponse(ETag="af8b4f7279198443eea8d67b85bb794c", VersionId=4_z838d18a1bf8627788b280413_f115e5462aef0ba24_d20220725_m200812_c004_v0402006_t0051_u01658779692638)
    

Etag and VersionId Output (Java)

The ETag value (returned by eTag() on the PutObjectResponse) identifies a specific version of the file's content. ETag is a standard HTTP header that is included when clients download files from B2; it enables caches to save bandwidth because a web server does not need to resend a full response if the content has not changed. VersionId (versionId()) identifies a specific version of the file within B2. If a file is uploaded to an existing key in a bucket, a new version of the file is stored even if the file content is the same.

To see the difference between ETag and VersionId, run the 'upload file' commands a second time and upload the same file content to the same bucket and key. The ETag is the same since the content hasn't changed, but a new VersionId is returned.
An output similar to the following example is returned.

Buckets in account:
another-unique-bucket-name
my-unique-bucket-name

Trying to create bucket: another-unique-bucket-name
You already created another-unique-bucket-name. 
Carrying on...

Uploading: ./myimage.png
Success! Response is: PutObjectResponse(ETag="af8b4f7279198443eea8d67b85bb794c", VersionId=4_z838d18a1bf8627788b280413_f112be370c4d86f6a_d20220725_m205707_c004_v0402009_t0021_u01658782627043)

Use the putObject method to upload a single file. To upload multiple files, your application must build a list of files to upload and iterate through it. Use the asynchronous programming features of the AWS SDK for Java 2.x to upload multiple files concurrently.

Browse Files (Java)

In the web console, navigate to your private bucket on the Browse Files page. Your file is displayed with a (2) next to the filename.

If you click the (2), and click one of the file versions, you will see that the Fguid matches the VersionId that was returned when the file was created.

There is also no File Info for this file. The web console set the src_last_modified_millis attribute for the file that you uploaded earlier, but you did not specify one when you uploaded the file.

Click one of the URLs to open it in the browser. You cannot access the file because it is in a private bucket. The S3-compatible API returns the following XML-formatted error for the S3 URL.

<Error>
    <Code>UnauthorizedAccess</Code>
    <Message>bucket is not authorized: another-unique-bucket-name</Message>
</Error>

The B2 Native API returns a similar, JSON-formatted error for the Native and Friendly URLs:

{
  "code": "unauthorized",
  "message": "",
  "status": 401
}

Use Python to Create an Application

The following procedures demonstrate how to create an application using Python.

Install the AWS SDK

You must have Python 3.7 or later. Use the following command to install the current version of the AWS SDK for Python (Boto3) using pip: pip install boto3.

Configure AWS (Python)

Follow this procedure to create a named profile to access Backblaze B2; this allows you to easily access B2 and alternative S3-compatible cloud object stores. You can also configure your default profile, set environment variables, or use any other configuration mechanism that is supported by Python.
If you don't have the CLI, you can create a new AWS profile by creating or editing the AWS configuration files.

You can find the AWS credentials file at the following locations:

  Linux and macOS: ~/.aws/credentials
  Windows: %USERPROFILE%\.aws\credentials

You can find the AWS configuration file at the following locations:

  Linux and macOS: ~/.aws/config
  Windows: %USERPROFILE%\.aws\config

  1. Create the .aws directory and credentials file, if they do not already exist, and add the following section to the file, substituting your credentials.
    [b2tutorial]
    aws_access_key_id = <your_key_id>
    aws_secret_access_key = <your_application_key>
    
  2. Create the configuration file if it does not already exist and add the following section.
    [b2tutorial]
    output = json
    s3 =
        signature_version = s3v4
    

List Existing Buckets (Python)

The simplest S3 action is 'List Buckets'. It requires no parameters and returns a list of all of the buckets within the account.

  1. Create a file, app.py, with the following content.
    import boto3.session
    import os
    
    # Change this to the endpoint from your bucket details, prefixed with "https://"
    ENDPOINT_URL = 'https://<your endpoint>'
    
    # Create a Boto3 Session with the tutorial profile
    b2session = boto3.session.Session(profile_name='b2tutorial')
    
    # Create a Boto3 Resource from the session, specifying S3 as the service, and our B2 endpoint
    b2 = b2session.resource(service_name='s3',
                            endpoint_url=ENDPOINT_URL)
    
    # Get the list of buckets
    buckets = b2.buckets.all()
    
    # Iterate through the list, printing each bucket's name
    print('Buckets in account:')
    for bucket in buckets:
        print(bucket.name)
    
  2. Edit the value of the ENDPOINT_URL constant to match your endpoint using the following example.
    ENDPOINT_URL = 'https://s3.us-west-004.backblazeb2.com'
    
    The app creates a Boto3 session with the profile and an S3 resource client from the session that specifies the endpoint. The app then calls the all() method on the resource's buckets collection to retrieve a list of Bucket objects. Finally, the app iterates through the list, printing each bucket's name.
  3. Run the application using the following command.
    python app.py
    
    Output similar to the following example is returned.
    Buckets in account:
    my-unique-bucket-name
    

Create a Private Bucket (Python)

You already created a public bucket in the web console. Follow this procedure to use the S3 'Create Bucket' action to create a private bucket programmatically.

  1. Add the following code at the bottom of app.py, and replace the bucket name with a unique name.
    # Create a new private bucket. Replace the bucket name with your own.
    bucket_name = 'another-unique-bucket-name'
    try:
        print(f'\nTrying to create bucket: {bucket_name}')
        bucket = b2.create_bucket(Bucket=bucket_name,
                                  ACL='private')
        print(f'Success! Response is: {bucket}')
    except b2.meta.client.exceptions.BucketAlreadyOwnedByYou:
        print(f'You already created {bucket_name}. \nCarrying on...')
        bucket = b2.Bucket(bucket_name)
    except b2.meta.client.exceptions.BucketAlreadyExists:
        print(f'{bucket_name} already exists in another account.\nExiting.')
        exit(1)
    
    The app calls the B2 resource's create_bucket method with the bucket name and the canned ACL value private, and displays the resulting bucket object.

    The bucket may already exist, in which case create_bucket throws an exception. The exception's class indicates whether the bucket is owned by your account. If so, then the app creates a bucket object from the bucket name and continues to the next step; otherwise, the app exits with an error.

  2. Run the app again using the following command.
    python app.py
    
    Output similar to the following example is returned.
    Buckets in account:
    my-unique-bucket-name
    
    Trying to create bucket: another-unique-bucket-name
    Success! Response is: s3.Bucket(name='another-unique-bucket-name')
    
    If the bucket already exists in another account, the following message is returned:
    Buckets in account:
    my-unique-bucket-name
    
    Trying to create bucket: tester
    tester already exists in another account. 
    Exiting.
    
  3. After the bucket is created, run the app again using the following command.
    python app.py
    
    Output similar to the following is returned, indicating that the exception was handled.
    Buckets in account:
    another-unique-bucket-name
    my-unique-bucket-name
    
    Trying to create bucket: another-unique-bucket-name
    You already created another-unique-bucket-name. 
    Carrying on...
    
  4. Return to the bucket listing in the web console and refresh the page.
    The new private bucket is listed.
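
The create-or-reuse logic above is a common idempotency pattern: the exception class determines whether the app can recover or must exit. A stripped-down stdlib sketch of the same control flow (the exception classes and bucket sets here are stand-ins for the Boto3 ones, invented for illustration):

```python
class BucketAlreadyOwnedByYou(Exception):
    pass

class BucketAlreadyExists(Exception):
    pass

my_buckets = {'another-unique-bucket-name'}  # buckets this account owns
other_buckets = {'tester'}                   # buckets owned by other accounts

def create_bucket(name):
    """Simulate the S3 'Create Bucket' action's three possible outcomes."""
    if name in my_buckets:
        raise BucketAlreadyOwnedByYou(name)
    if name in other_buckets:
        raise BucketAlreadyExists(name)
    my_buckets.add(name)
    return name

def create_or_reuse(name):
    try:
        return create_bucket(name)       # succeeds on the first run
    except BucketAlreadyOwnedByYou:
        return name                      # recoverable: reuse the existing bucket
    except BucketAlreadyExists:
        raise SystemExit(f'{name} already exists in another account.')

print(create_or_reuse('another-unique-bucket-name'))  # reuse path
print(create_or_reuse('brand-new-bucket'))            # create path
```
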

Upload a File to a Bucket (Python)

In this final section of the tutorial, you will upload a file to the private bucket using the S3 'Put Object' action.

  1. To upload a single file to your private bucket in B2, add the following code to the bottom of app.py, and replace the path with the path of your file to upload.
    # The key in B2 is set to the file name.
    path_to_file = './myimage.png'
    print(f'Uploading: {path_to_file}')
    with open(path_to_file, mode='rb') as data:
        obj = bucket.put_object(Body=data,
                                Key=os.path.basename(path_to_file))
    # Create a response dict with the values returned from B2
    response = {attr: getattr(obj, attr) for attr in ['e_tag', 'version_id']}
    print(f'Success! Response is: {response}')
    
    This section of code calls the put_object method on the bucket that the app just created, with the file content and a key set to the file name from the specified path. Since the put_object method returns a Boto3 Object representing the file in B2, rather than the B2 response itself, the code extracts the ETag and VersionId values returned by B2 and displays them.
  2. Run the app again using the following command.
    python app.py
    
    Output similar to the following example is returned.
    Buckets in account:
    another-unique-bucket-name
    my-unique-bucket-name
    
    Trying to create bucket: another-unique-bucket-name
    You already created another-unique-bucket-name. 
    Carrying on...
    Uploading: ./myimage.png
    Success! Response is: {'e_tag': '"3de71fbae1459a1e084b091fedff7b52"', 'version_id': '4_zc34d68b13f96d7c87bf80413_f112be370c4da1c29_d20220725_m214852_c004_v0402009_t0031_u01658785732321'}
    

ETag and VersionId Output (Python)

The ETag value (represented in Boto3 as e_tag) identifies a specific version of the file's content. ETag is a standard HTTP header that is included when clients download files from B2; it enables caches to work more efficiently and save bandwidth, because a web server does not need to resend a full response if the content has not changed. VersionId (version_id) identifies a specific version of the file within B2. If a file is uploaded to an existing key in a bucket, a new version of the file is stored, even if the file content is the same.

To see the difference between ETag and VersionId, run the app a second time to upload the same file content to the same bucket and key. The ETag is the same, since the content has not changed, but a new VersionId is returned.

Output similar to the following example is returned.

Buckets in account:
another-unique-bucket-name
my-unique-bucket-name

Trying to create bucket: another-unique-bucket-name
You already created another-unique-bucket-name. 
Carrying on...
Uploading: ./myimage.png
Success! Response is: {'e_tag': '"3de71fbae1459a1e084b091fedff7b52"', 'version_id': '4_zc34d68b13f96d7c87bf80413_f102dc31873d979a2_d20220725_m214855_c004_v0402009_t0015_u01658785735087'}
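
For single-part uploads like this one, the ETag is typically the hex MD5 digest of the object's content wrapped in double quotes, which is why identical bytes produce an identical ETag. A minimal sketch, using made-up stand-in content rather than the real image bytes:

```python
import hashlib

# Stand-in for the file content; substitute the actual bytes of myimage.png
content = b'example file content'

# For single-part uploads, the ETag is normally the quoted hex MD5 of the body
etag = '"%s"' % hashlib.md5(content).hexdigest()
print(etag)

# Identical bytes always hash to the same digest, so the ETag is unchanged
print(etag == '"%s"' % hashlib.md5(b'example file content').hexdigest())  # True
```
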

Use the put_object method to upload a single file. To upload multiple files, your application must build a list of files to upload and iterate through that list.
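
A sketch of that approach using only the standard library to build the list; the directory and files here are temporary stand-ins, and the commented-out put_object call shows where the upload from app.py would go.

```python
import pathlib
import tempfile

def files_to_upload(directory):
    """Return the regular files in a directory, sorted by name."""
    return sorted(p for p in pathlib.Path(directory).iterdir() if p.is_file())

# Demonstration with a temporary directory standing in for your local folder
with tempfile.TemporaryDirectory() as folder:
    for name in ('first.png', 'second.png'):
        (pathlib.Path(folder) / name).write_bytes(b'placeholder')

    for path in files_to_upload(folder):
        # In app.py, each iteration would upload one file:
        # with path.open('rb') as data:
        #     bucket.put_object(Body=data, Key=path.name)
        print('Would upload:', path.name)
```
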

Browse Files (Python)

In the web console, navigate to your private bucket on the Browse Files page. Your file is displayed with a (2) next to the filename.

If you click the (2), and then click one of the file versions, you can see that the Fguid matches the VersionId that was returned when the file was created.

There is also no File Info for this file. The web console set the src_last_modified_millis attribute for the file that you uploaded earlier through the browser, but the app did not specify any file info when it uploaded this file.
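
If you want your uploads to carry the same attribute, you can compute the value yourself; the commented-out Metadata line below is an assumed addition to app.py, shown as a hedged sketch rather than confirmed tutorial code, and the temporary file stands in for myimage.png.

```python
import os
import tempfile

def last_modified_millis(path):
    """File modification time in milliseconds, the format B2's web console records."""
    return str(int(os.stat(path).st_mtime * 1000))

# Demonstration with a temporary file standing in for myimage.png
with tempfile.NamedTemporaryFile(suffix='.png', delete=False) as f:
    f.write(b'placeholder')
millis = last_modified_millis(f.name)
print(millis)

# Assumed addition to app.py: S3 object metadata is stored by B2 as file info,
# so passing the value here should populate File Info on the upload.
# bucket.put_object(Body=open(path_to_file, 'rb'),
#                   Key=os.path.basename(path_to_file),
#                   Metadata={'src_last_modified_millis': millis})
os.remove(f.name)
```
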

Click one of the URLs to open it in the browser. You cannot access the file because it is in a private bucket. The S3-compatible API returns the following XML-formatted error for the S3 URL.

<Error>
    <Code>UnauthorizedAccess</Code>
    <Message>bucket is not authorized: another-unique-bucket-name</Message>
</Error>

The B2 Native API returns a similar, JSON-formatted error for the Native and Friendly URLs:

{
  "code": "unauthorized",
  "message": "",
  "status": 401
}

Additional Resources

You can do more with Backblaze B2: