<h1 class="wp-block-heading">How to Run AI/ML Workloads on CoreWeave + Backblaze</h1>

<p><em>By Pat Patterson | December 13, 2023</em></p>

<figure class="wp-block-image size-large"><img width="1024" height="512" src="/wp-content/uploads/2023/12/bb-bh-Coreweave-How-To-1024x512.png" alt="A decorative image showing the Backblaze and CoreWeave logos superimposed on clouds." class="wp-image-110527" /></figure>

<p class="has-drop-cap">Backblaze compute partner <a href="https://www.coreweave.com/" target="_blank" rel="noreferrer noopener">CoreWeave</a> is a specialized GPU cloud provider designed to power use cases such as AI/ML, graphics, and rendering up to <a href="https://www.coreweave.com/gpu-cloud-pricing" target="_blank" rel="noreferrer noopener">35x faster and for 80% less</a> than generalized public clouds. 
Brandon Jacobs, an infrastructure architect at CoreWeave, joined us earlier this year for <a href="https://www.brighttalk.com/webcast/14807/594247" target="_blank" rel="noreferrer noopener">Backblaze Tech Day '23</a>. Brandon and I co-presented a session explaining both how to back up CoreWeave Cloud storage volumes to <a href="https://www.backblaze.com/cloud-storage" target="_blank" rel="noreferrer noopener">Backblaze B2 Cloud Storage</a> and how to load a model from Backblaze B2 into the CoreWeave Cloud inference stack.</p>

<p>Since we recently published <a href="https://www.backblaze.com/docs-back-up-storage-volumes-from-coreweave-to-backblaze-b2" target="_blank" rel="noreferrer noopener">an article covering the backup process</a>, in this blog post I'll focus on loading a large language model (LLM) directly from Backblaze B2 into CoreWeave Cloud.</p>

<p>Below is the session recording from Tech Day; feel free to watch it instead of, or in addition to, reading this article.</p>

<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" title="Specialized Clouds, Superior Results: GPU-Driven AI/ML Applications with CoreWeave" width="750" height="422" src="https://www.youtube.com/embed/IVgciGybobg" frameborder="0" allowfullscreen></iframe>
</div></figure>

<h2 class="wp-block-heading">More About CoreWeave</h2>

<p>In the Tech Day session, Brandon covered the two sides of CoreWeave Cloud:</p>

<ol class="wp-block-list">
<li>Model 
training and fine-tuning.</li>

<li>The inference service.</li>
</ol>

<p>To maximize performance, CoreWeave provides a fully managed Kubernetes environment running on bare metal, with no hypervisors between your containers and the hardware.</p>

<p>CoreWeave provides a range of storage options: storage volumes that can be mounted directly into Kubernetes pods as block storage or a shared file system, running on solid state drives (SSDs) or hard disk drives (HDDs), as well as its own native S3-compatible object storage. Knowing that, you're probably wondering, "Why bother with Backblaze B2, when CoreWeave has its own object storage?"</p>

<p>The answer echoes the first few words of this blog post: CoreWeave's object storage is a specialized implementation, co-located with its GPU compute infrastructure, with high-bandwidth networking and caching. Backblaze B2, in contrast, is general purpose cloud object storage, and includes features, such as Object Lock and lifecycle rules, that are less relevant to CoreWeave's specialized use case. There is also a price differential: currently, at $6/TB/month, Backblaze B2 is one-fifth the cost of CoreWeave's object storage.</p>

<p>So, as Brandon and I explained in the session, CoreWeave's native storage is a great choice for both the training and inference use cases, where you need the fastest possible access to data, while Backblaze B2 shines as longer-term storage for training, model, and inference data, as well as the destination for data output from the inference process. 
In addition, since Backblaze and CoreWeave are bandwidth partners, you can transfer data between our two clouds with no egress fees, freeing you from unpredictable data transfer costs.</p>

<h2 class="wp-block-heading">Loading an LLM From Backblaze B2</h2>

<p>To demonstrate how to load an archived model from Backblaze B2, I used <a href="https://docs.coreweave.com/coreweave-machine-learning-and-ai/how-to-guides-and-tutorials/examples/tensorflow-guides/gpt-2" target="_blank" rel="noreferrer noopener">CoreWeave's GPT-2 sample</a>. <a href="https://en.wikipedia.org/wiki/GPT-2" target="_blank" rel="noreferrer noopener">GPT-2</a> is a predecessor of the GPT-3.5 and GPT-4 LLMs used in ChatGPT. As such, it's an accessible way to get started with LLMs, but, as you'll see, it certainly doesn't pass the <a href="https://en.wikipedia.org/wiki/Turing_test" target="_blank" rel="noreferrer noopener">Turing test</a>!</p>

<p>This sample comprises two applications: a transformer and a predictor. The transformer implements a REST API, handling incoming prompt requests from client apps and encoding each prompt into a <a href="https://en.wikipedia.org/wiki/Tensor_(machine_learning)" target="_blank" rel="noreferrer noopener">tensor</a>, which it passes to the predictor. The predictor applies the GPT-2 model to the input tensor, returning an output tensor that the transformer decodes into text and returns to the client app. 
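</p>

<p>The round trip can be sketched in a few lines of Python. This is a toy stand-in: the function names and the "model" that simply echoes its input are hypothetical, not CoreWeave's actual implementation.</p>

```python
# Toy sketch of the transformer/predictor split described above. The
# predictor here just echoes its input tensor; a real predictor would
# run GPT-2 on a GPU.

def encode(prompt: str) -> list[int]:
    # Transformer side: turn the prompt into a tensor of token IDs
    # (a trivial byte-level encoding stands in for GPT-2's tokenizer).
    return [ord(c) for c in prompt]

def predict(input_tensor: list[int]) -> list[int]:
    # Predictor side: apply the model to the input tensor (stubbed as an echo).
    return input_tensor

def decode(output_tensor: list[int]) -> str:
    # Transformer side: decode the output tensor back into text for the client.
    return "".join(chr(t) for t in output_tensor)

def handle_request(prompt: str) -> str:
    # client -> transformer -> predictor -> transformer -> client
    return decode(predict(encode(prompt)))
```

<p>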
The two applications have different hardware requirements: the predictor needs a GPU, while the transformer is satisfied with just a CPU, so they are configured as separate Kubernetes pods that can be scaled up and down independently.</p>

<p>Since the GPT-2 sample includes instructions for loading data from Amazon S3, and Backblaze B2 features an <a href="https://www.backblaze.com/docs-s3-compatible-api" target="_blank" rel="noreferrer noopener">S3-compatible API</a>, it was a snap to modify the sample to load data from a Backblaze B2 bucket. In fact, there was just a single line to change in the <code>s3-secret.yaml</code> configuration file. The file is only 10 lines long, so here it is in its entirety:</p>

<pre class="wp-block-preformatted">apiVersion: v1
kind: Secret
metadata:
  name: s3-secret
  annotations:
    serving.kubeflow.org/s3-endpoint: s3.us-west-004.backblazeb2.com
type: Opaque
data:
  AWS_ACCESS_KEY_ID: &lt;my-backblaze-b2-application-key-id>
  AWS_SECRET_ACCESS_KEY: &lt;my-backblaze-b2-application-key></pre>

<p>As you can see, all I had to do was set the <code>serving.kubeflow.org/s3-endpoint</code> metadata annotation to my Backblaze B2 bucket's endpoint and paste in an application key and its ID.</p>

<p>While that was the only Backblaze B2-specific edit, I did have to configure the bucket and path where my model was stored. 
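</p>

<p>One detail to keep in mind when you fill in <code>s3-secret.yaml</code>: the values under a Kubernetes Secret's <code>data</code> field must be base64-encoded rather than pasted in as plaintext. Assuming a made-up application key and key ID (not real credentials), the encoding looks like this:</p>

```shell
# Kubernetes Secret "data" values must be base64-encoded.
# Both values below are fabricated placeholders, not real credentials.
printf '%s' '004abcdef123456789' | base64    # value for AWS_ACCESS_KEY_ID
printf '%s' 'K004exampleSecretKey' | base64  # value for AWS_SECRET_ACCESS_KEY
```

<p>Alternatively, a Secret's <code>stringData</code> field accepts plaintext values, and Kubernetes encodes them for you when the Secret is created.</p>

<p>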
Here's an excerpt from <code>gpt-s3-inferenceservice.yaml</code>, which configures the inference service itself:</p>

<pre class="wp-block-preformatted">apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: gpt-s3
  annotations:
    # Target concurrency of 4 active requests to each container
    autoscaling.knative.dev/target: "4"
    serving.kubeflow.org/gke-accelerator: Tesla_V100
spec:
  default:
    predictor:
      minReplicas: 0 # Allow scale to zero
      maxReplicas: 2
      serviceAccountName: s3-sa # The B2 credentials are retrieved from the service account
      tensorflow:
        # B2 bucket and path where the model is stored
        storageUri: s3://&lt;my-bucket>/model-storage/124M/
        runtimeVersion: "1.14.0-gpu"
        ...</pre>

<p>Aside from the <code>storageUri</code> configuration, you can see how the predictor application's pod is configured to scale between zero and two instances ("replicas" in Kubernetes terminology). The remainder of the file contains the transformer pod configuration, allowing it to scale from zero to a single instance.</p>

<h2 class="wp-block-heading">Running an LLM on CoreWeave Cloud</h2>

<p>Spinning up the inference service involved a <code>kubectl apply</code> command for each configuration file and a short wait for the CoreWeave GPU cloud to bring up the compute and networking infrastructure. Once the predictor and transformer services were ready, I used curl to submit my first prompt to the transformer endpoint:</p>

<pre class="wp-block-preformatted">% curl -d '{"instances": ["That was easy"]}' http://gpt-s3-transformer-default.tenant-dead0a.knative.chi.coreweave.com/v1/models/gpt-s3:predict
{"predictions": ["That was easy for some people, it's just impossible for me,\" Davis said. 
\"I'm still trying to" ]}</pre>

<p>In the video, I repeated the exercise, feeding GPT-2's response back into it as a prompt several times to generate a few paragraphs of text. Here's what it came up with:</p>

<p class="has-background" style="background-color:#f5f4ff">"That was easy: If I had a friend who could take care of my dad for the rest of his life, I would've known. If I had a friend who could take care of my kid. He would've been better for him than if I had to rely on him for everything.<br /><br />The problem is, no one is perfect. There are always more people to be around than we think. No one cares what anyone in those parts of Britain believes,<br /><br />The other problem is that every decision the people we're trying to help aren't really theirs. If you have to choose what to do"</p>

<p>If you've used ChatGPT, you'll recognize how far LLMs have come since GPT-2's release in 2019!</p>

<h2 class="wp-block-heading">Run Your Own Large Language Model</h2>

<p>While CoreWeave's GPT-2 sample is an excellent introduction to the world of LLMs, it's a bit limited. 
If you're looking to get deeper into generative AI, another sample, <a href="https://docs.coreweave.com/coreweave-machine-learning-and-ai/how-to-guides-and-tutorials/model-training-guides/fine-tuning/finetuning-machine-learning-models" target="_blank" rel="noreferrer noopener">Fine-tune Large Language Models with CoreWeave Cloud</a>, shows how to fine-tune a model from the more recent <a href="https://github.com/EleutherAI/pythia" target="_blank" rel="noreferrer noopener">EleutherAI Pythia</a> suite.</p>

<p>CoreWeave's specialized GPU cloud is designed to deliver best-in-class performance, up to 35x faster and 80% less expensive than generalized public clouds, making it a great choice for workloads such as AI, ML, rendering, and more. And, as you've seen in this blog post, it's easy to integrate with <a href="https://www.backblaze.com/cloud-storage" target="_blank" rel="noreferrer noopener">Backblaze B2 Cloud Storage</a>, with no data transfer costs. For more information, <a href="https://www.coreweave.com/contact-us" target="_blank" rel="noreferrer noopener">contact the CoreWeave team</a>.</p>