Create Large Files with the Native API

    For all of the Backblaze API operations and their corresponding documentation, see API Documentation.

    There are four Backblaze B2 Cloud Storage Native API calls that you need to make to create a large file.

    1. b2_start_large_file
    2. b2_get_upload_part_url (for each thread that is uploading)
    3. b2_upload_part or b2_copy_part (for each part of the file)
    4. b2_finish_large_file

    Call b2_start_large_file and provide the file name, content type, and custom file information. The call returns a file ID for the large file, which you need when you upload the parts of the file.
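
    As a minimal sketch of this first step using Python's requests library (the API URL, authorization token, bucket ID, file name, and file info below are all placeholder values; the real API URL and token come from a prior b2_authorize_account call):

        import requests

        API_URL = "https://api000.backblazeb2.com"   # apiUrl from b2_authorize_account (placeholder)
        ACCOUNT_AUTH_TOKEN = "4_0022..."             # account-level auth token (placeholder)

        response = requests.post(
            f"{API_URL}/b2api/v2/b2_start_large_file",
            headers={"Authorization": ACCOUNT_AUTH_TOKEN},
            json={
                "bucketId": "e73ede9c9c8412db49f60715",            # placeholder bucket ID
                "fileName": "backups/2024-06-01.tar",               # placeholder file name
                "contentType": "b2/x-auto",                         # let B2 determine the type
                "fileInfo": {"src_last_modified_millis": "1717200000000"},
            },
        )
        response.raise_for_status()
        file_id = response.json()["fileId"]   # needed for every part upload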

    You can either upload the parts of your file, or you can copy them from an existing file in any bucket that belongs to the same account as the large file. Note that you can assemble a large file from a mix of uploaded and copied parts.

    Determine the size of each part that you upload or copy. For example, if you want to upload a 100 GB file, you can make each part 1 GB and perform 100 uploads of 1 GB each. The file parts do not need to be the same size. The maximum part size is 5 GB, and the minimum part size is 5 MB, except for the last part in a file, which has a minimum size of 1 byte. Backblaze recommends a part size of 100 MB, which strikes a good balance between upload throughput and the ability to upload parts in parallel.

    Rather than hard coding the part size, you can use the recommendedPartSize that is returned by the b2_authorize_account operation.

    The parts are numbered starting at one, up to the number of parts that are needed, with a maximum of 10,000 parts for one large file.
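
    For example, the following sketch (placeholder file name and values) uses the recommendedPartSize from b2_authorize_account to split a local file into numbered parts, growing the part size if needed so the count stays within the 10,000-part limit:

        import math
        import os

        RECOMMENDED_PART_SIZE = 100 * 1000 * 1000  # recommendedPartSize from b2_authorize_account (placeholder)
        MAX_PART_COUNT = 10_000

        file_size = os.path.getsize("backups/2024-06-01.tar")   # placeholder local file

        # Grow the part size if the recommended size would need more than 10,000 parts.
        part_size = max(RECOMMENDED_PART_SIZE, math.ceil(file_size / MAX_PART_COUNT))
        part_count = math.ceil(file_size / part_size)

        # Part numbers start at 1; the last part may be smaller than the others.
        parts = [
            (number,
             (number - 1) * part_size,
             min(part_size, file_size - (number - 1) * part_size))
            for number in range(1, part_count + 1)
        ]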

    Use b2_get_upload_part_url to get the target for uploading parts. Each thread that uploads should get its own URL.
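
    Sketched with the same placeholder values as above, each uploading thread would make its own call like this (file_id is the large file ID returned by b2_start_large_file):

        import requests

        API_URL = "https://api000.backblazeb2.com"   # apiUrl from b2_authorize_account (placeholder)
        ACCOUNT_AUTH_TOKEN = "4_0022..."             # account-level auth token (placeholder)
        file_id = "4_zBUCKET_fLARGE..."              # from b2_start_large_file (placeholder)

        response = requests.post(
            f"{API_URL}/b2api/v2/b2_get_upload_part_url",
            headers={"Authorization": ACCOUNT_AUTH_TOKEN},
            json={"fileId": file_id},
        )
        response.raise_for_status()
        upload = response.json()
        upload_url = upload["uploadUrl"]                   # where this thread sends its parts
        upload_auth_token = upload["authorizationToken"]   # token valid for that URL only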

    Upload a part using b2_upload_part, and provide the file ID of the large file, the part number, and the data in the part.
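
    A sketch of a single part upload, assuming the upload URL and token returned by b2_get_upload_part_url and placeholder part data:

        import hashlib
        import requests

        upload_url = "https://pod-000-example.backblaze.com/b2api/v2/b2_upload_part/..."  # placeholder
        upload_auth_token = "4_0022..."                                                   # placeholder

        part_number = 1     # 1-based, matching the part's position in the file
        part_data = b"..."  # the bytes of this part, read from the local file (placeholder)

        response = requests.post(
            upload_url,
            headers={
                "Authorization": upload_auth_token,
                "X-Bz-Part-Number": str(part_number),
                "Content-Length": str(len(part_data)),
                "X-Bz-Content-Sha1": hashlib.sha1(part_data).hexdigest(),
            },
            data=part_data,
        )
        response.raise_for_status()
        part_sha1 = response.json()["contentSha1"]   # collect these for b2_finish_large_file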

    Copy a part using b2_copy_part, and provide the source file ID, the large file ID, the part number, and optionally the range of bytes to copy over from the source file.
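
    A sketch of copying one part from an existing file, again with placeholder IDs and an optional byte range:

        import requests

        API_URL = "https://api000.backblazeb2.com"   # apiUrl from b2_authorize_account (placeholder)
        ACCOUNT_AUTH_TOKEN = "4_0022..."             # account-level auth token (placeholder)
        LARGE_FILE_ID = "4_zBUCKET_fLARGE..."        # from b2_start_large_file (placeholder)
        SOURCE_FILE_ID = "4_zBUCKET_fSOURCE..."      # existing file to copy from (placeholder)

        response = requests.post(
            f"{API_URL}/b2api/v2/b2_copy_part",
            headers={"Authorization": ACCOUNT_AUTH_TOKEN},
            json={
                "sourceFileId": SOURCE_FILE_ID,
                "largeFileId": LARGE_FILE_ID,
                "partNumber": 2,
                # Optional: copy only this byte range of the source file.
                "range": "bytes=0-99999999",
            },
        )
        response.raise_for_status()
        part_sha1 = response.json()["contentSha1"]   # collect these for b2_finish_large_file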

    Finally, after all of the parts are uploaded, you can call b2_finish_large_file to transform the parts into a single Backblaze B2 file. After this is done, it looks just like any other file. You can download it, and it will show up when you list the files in a bucket.
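
    A sketch of the finishing call, assuming you collected the SHA-1 checksum of every part as it was uploaded or copied (placeholder values shown):

        import requests

        API_URL = "https://api000.backblazeb2.com"   # apiUrl from b2_authorize_account (placeholder)
        ACCOUNT_AUTH_TOKEN = "4_0022..."             # account-level auth token (placeholder)
        file_id = "4_zBUCKET_fLARGE..."              # from b2_start_large_file (placeholder)

        # SHA-1 checksums of every part, in part-number order.
        part_sha1_array = ["a1b2c3...", "d4e5f6..."]   # placeholders

        response = requests.post(
            f"{API_URL}/b2api/v2/b2_finish_large_file",
            headers={"Authorization": ACCOUNT_AUTH_TOKEN},
            json={"fileId": file_id, "partSha1Array": part_sha1_array},
        )
        response.raise_for_status()
        print(response.json()["fileName"], "is now a single B2 file")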

    Manage Large Files in Progress

    Any number of large files can be in progress at the same time. You can use b2_list_unfinished_large_files to get a list of them.

    For any one unfinished large file, you can use b2_list_parts to get a list of the parts that were uploaded so far.

    If you started a large file but do not want to finish it, you can use b2_cancel_large_file to cancel the upload and delete all of the parts that were uploaded so far.
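
    A sketch that ties these three operations together (placeholder URL, token, and bucket ID):

        import requests

        API_URL = "https://api000.backblazeb2.com"   # apiUrl from b2_authorize_account (placeholder)
        ACCOUNT_AUTH_TOKEN = "4_0022..."             # account-level auth token (placeholder)
        BUCKET_ID = "e73ede9c9c8412db49f60715"       # placeholder bucket ID

        headers = {"Authorization": ACCOUNT_AUTH_TOKEN}

        # List large files that were started in this bucket but not yet finished.
        unfinished = requests.post(
            f"{API_URL}/b2api/v2/b2_list_unfinished_large_files",
            headers=headers,
            json={"bucketId": BUCKET_ID},
        ).json()

        for f in unfinished["files"]:
            # See which parts have already been uploaded for this file.
            parts = requests.post(
                f"{API_URL}/b2api/v2/b2_list_parts",
                headers=headers,
                json={"fileId": f["fileId"]},
            ).json()["parts"]
            print(f["fileName"], "has", len(parts), "parts so far")

            # To abandon an upload instead, cancel it and delete its parts:
            # requests.post(f"{API_URL}/b2api/v2/b2_cancel_large_file",
            #               headers=headers, json={"fileId": f["fileId"]})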

    Access Large Files

    After a large file is created, you can do anything you can do with a normal file:

    • b2_delete_file_version This operation deletes one version of one file.
    • b2_download_file_by_id This operation downloads a specific version of a file.
    • b2_download_file_by_name This operation downloads the most recent version of a file.
    • b2_get_file_info This operation returns information about a file.
    • b2_hide_file This operation hides a file without deleting its data.
    • b2_list_file_names This operation lists the file names that are in a bucket.
    • b2_list_file_versions This operation lists all of the file versions that are in a bucket.

    When you download large files, the Range header can be especially useful. It lets you download just part of the file. For details, see b2_download_file_by_name and b2_download_file_by_id.
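
    For example, this sketch fetches only the first 1 MB of a file by name (the download URL comes from b2_authorize_account; the bucket and file names are placeholders, and the authorization header is needed only for private buckets):

        import requests

        DOWNLOAD_URL = "https://f000.backblazeb2.com"   # downloadUrl from b2_authorize_account (placeholder)
        ACCOUNT_AUTH_TOKEN = "4_0022..."                # account-level auth token (placeholder)

        response = requests.get(
            f"{DOWNLOAD_URL}/file/my-bucket/backups/2024-06-01.tar",
            headers={
                "Authorization": ACCOUNT_AUTH_TOKEN,
                "Range": "bytes=0-1048575",   # first 1 MB only
            },
        )
        first_megabyte = response.content   # server responds with 206 Partial Content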

