
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built and the technologies they use, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/rss/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 23:30:02 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Making Super Slurper 5x faster with Workers, Durable Objects, and Queues]]></title>
            <link>https://blog.cloudflare.com/making-super-slurper-five-times-faster/</link>
            <pubDate>Thu, 10 Apr 2025 14:05:00 GMT</pubDate>
            <description><![CDATA[ We re-architected Super Slurper from the ground up using our Developer Platform — leveraging Cloudflare Workers, Durable Objects, and Queues — and improved transfer speeds by up to 5x. ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://developers.cloudflare.com/r2/data-migration/super-slurper/"><u>Super Slurper</u></a> is Cloudflare’s data migration tool that is designed to make large-scale data transfers between cloud object storage providers and <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>Cloudflare R2</u></a> easy. Since its launch, thousands of developers have used Super Slurper to move petabytes of data from AWS S3, Google Cloud Storage, and other <a href="https://www.cloudflare.com/developer-platform/solutions/s3-compatible-object-storage/">S3-compatible services</a> to R2.</p><p>But we saw an opportunity to make it even faster. We rearchitected Super Slurper from the ground up using our Developer Platform — building on <a href="https://developers.cloudflare.com/workers/"><u>Cloudflare Workers</u></a>, <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a>, and <a href="https://developers.cloudflare.com/queues/"><u>Queues</u></a> — and improved transfer speeds by up to 5x. In this post, we’ll dive into the original architecture, the performance bottlenecks we identified, how we solved them, and the real-world impact of these improvements.</p>
    <div>
      <h2>Initial architecture and performance bottlenecks</h2>
      <a href="#initial-architecture-and-performance-bottlenecks">
        
      </a>
    </div>
    <p>Super Slurper originally shared its architecture with <a href="https://developers.cloudflare.com/images/upload-images/sourcing-kit/"><u>SourcingKit</u></a>, a tool built to bulk import images from AWS S3 into <a href="https://developers.cloudflare.com/images/"><u>Cloudflare Images</u></a>. SourcingKit was deployed on Kubernetes and ran alongside the <a href="https://developers.cloudflare.com/images/"><u>Images</u></a> service. When we started building Super Slurper, we split it into its own Kubernetes namespace and introduced a few new APIs to make it easier to use for the object storage use case. This setup worked well and helped thousands of developers move data to R2.</p><p>However, it wasn’t without its challenges. SourcingKit wasn’t designed to handle large, petabyte-scale transfers. SourcingKit, and by extension Super Slurper, operated on Kubernetes clusters located in one of our core data centers, meaning it had to share compute resources and bandwidth with Cloudflare’s control plane, analytics, and other services. As the number of migrations grew, these resource constraints became a clear bottleneck.</p><p>For a service transferring data between object storage providers, the job is simple: list objects from the source, copy them to the destination, and repeat. This is exactly how the original Super Slurper worked. We listed objects from the source bucket, pushed that list to a Postgres-based queue (<code>pg_queue</code>), and then pulled from this queue at a steady pace to copy objects over. Given the scale of object storage migrations, bandwidth usage was inevitably going to be high. This made it challenging to scale.</p><p>To address the bandwidth constraints of operating solely in our core data centers, we introduced <a href="https://developers.cloudflare.com/workers/"><u>Cloudflare Workers</u></a> into the mix.
Instead of handling the copying of data in our core data center, we started calling out to a Worker to do the actual copying:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1EgtILMnu88y3VzUvYLlPl/479e2f99a62155f7bd8047f98a2a9cd2/1_.png" />
          </figure><p>As Super Slurper’s usage grew, so did our Kubernetes resource consumption. A significant amount of time during data transfers was spent waiting on network I/O or storage, not doing compute-intensive work. So we didn’t need more memory or more CPU; we needed more concurrency.</p><p>To keep up with demand, we kept increasing the replica count. But eventually, we hit a wall. We ran into scalability challenges at tens of pods, when we wanted multiple orders of magnitude more.</p><p>We decided to rethink the entire approach from first principles, instead of leaning on the architecture we had inherited. In about a week, we built a rough proof of concept using <a href="https://developers.cloudflare.com/workers/"><u>Cloudflare Workers</u></a>, <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a>, and <a href="https://developers.cloudflare.com/queues/"><u>Queues</u></a>. We listed objects from the source bucket, pushed them into a queue, and then consumed messages from the queue to initiate transfers.
Although this sounds very similar to what we did in the original implementation, building on our Developer Platform allowed us to automatically scale an order of magnitude higher than before.</p><ul><li><p><b>Cloudflare Queues</b>: Enables asynchronous object transfers and auto-scales to meet the number of objects being migrated.</p></li><li><p><b>Cloudflare Workers</b>: Runs lightweight compute tasks without the overhead of Kubernetes and optimizes where in the world each part of the process runs for lower latency and better performance.</p></li><li><p><b>SQLite-backed Durable Objects (DOs)</b>: Acts as a fully distributed database, eliminating the limitations of a single PostgreSQL instance.</p></li><li><p><b>Hyperdrive</b>: Provides fast access to historical job data from the original PostgreSQL database, keeping it as an archive store.</p></li></ul><p>We ran a few tests and found that our proof of concept was slower than the original implementation for small transfers (a few hundred objects), but it matched and eventually exceeded the performance of the original as transfers scaled into the millions of objects. That was the signal we needed to invest the time to take our proof of concept to production.</p><p>We removed our proof-of-concept hacks, worked on stability, and found new ways to make transfers scale to even higher concurrency. After a few iterations, we landed on something we were happy with.</p>
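<p>As a rough illustration of how these components fit together, here is how a transfer consumer might be shaped as a Workers queue handler. This is a hypothetical sketch, not Super Slurper’s actual code: the types are minimal stand-ins for the Workers runtime, and the batch is faked so the logic can run anywhere.</p>

```typescript
// Hypothetical shape of a transfer consumer as a Workers queue handler.
// Types below are minimal stand-ins for the Workers runtime; bucket and
// message names are illustrative, not taken from the post.
type Message = { body: { key: string }; ack(): void; retry(): void };
type MessageBatch = { messages: Message[] };

const copied: string[] = [];

const worker = {
  // In a real Worker this is `async queue(batch, env)`; `env` would carry
  // the R2 binding and the source bucket's credentials.
  queue(batch: MessageBatch): void {
    for (const msg of batch.messages) {
      try {
        copied.push(msg.body.key); // stand-in for the actual object copy
        msg.ack();                 // done: don't redeliver this message
      } catch {
        msg.retry();               // transient failure: redeliver later
      }
    }
  },
};

// Exercise the handler with a fake batch.
const acked: string[] = [];
worker.queue({
  messages: [
    { body: { key: "a.txt" }, ack: () => acked.push("a.txt"), retry: () => {} },
    { body: { key: "b.txt" }, ack: () => acked.push("b.txt"), retry: () => {} },
  ],
});
console.log(copied.length, acked.length); // prints 2 2
```

<p>In a real Worker, <code>queue()</code> receives each batch from Cloudflare Queues, and <code>ack()</code>/<code>retry()</code> are the runtime’s per-message delivery controls.</p>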
    <div>
      <h2>New architecture: Workers, Queues, and Durable Objects</h2>
      <a href="#new-architecture-workers-queues-and-durable-objects">
        
      </a>
    </div>
    
    <div>
      <h4>Processing layer: managing the flow of migration</h4>
      <a href="#processing-layer-managing-the-flow-of-migration">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ieLgJoWErEYEEa90QaXLC/81470021a99486a974753301d2d2f809/2.png" />
          </figure><p>At the heart of our processing layer are <b>queues, consumers, and workers</b>. Here’s what the process looks like:</p>
    <div>
      <h4>Kicking off a migration</h4>
      <a href="#kicking-off-a-migration">
        
      </a>
    </div>
    <p>When a client triggers a migration, it starts with a request sent to our <b>API Worker</b>. This worker takes the details of the migration, stores them in the database, and adds a message to the <b>List Queue</b> to start the process.</p>
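<p>Sketched in code, that kickoff step might look like this. The job store and queue below are in-memory stand-ins (the real database and Queues bindings are asynchronous), and all names are illustrative rather than taken from the actual service:</p>

```typescript
// Hypothetical sketch of the API Worker step: persist the migration job,
// then seed the List Queue with a single starting message.
import { randomUUID } from "node:crypto";

type Job = { sourceBucket: string; destBucket: string; state: string };

const jobs = new Map<string, Job>();                         // stand-in for the database
const listQueue: Array<{ jobId: string; cursor: string | null }> = []; // stand-in for the List Queue

function startMigration(details: { sourceBucket: string; destBucket: string }): string {
  const jobId = randomUUID();
  jobs.set(jobId, { ...details, state: "listing" });
  // One message is enough: the List Queue consumer fans out from here.
  listQueue.push({ jobId, cursor: null });
  return jobId;
}

const id = startMigration({ sourceBucket: "my-s3-bucket", destBucket: "my-r2-bucket" });
console.log(jobs.get(id)?.state, listQueue.length); // prints listing 1
```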
    <div>
      <h4>Listing source bucket objects</h4>
      <a href="#listing-source-bucket-objects">
        
      </a>
    </div>
    <p>The <b>List Queue Consumer</b> is where things start to pick up. It pulls messages from the queue, retrieves object listings from the source bucket, applies any necessary filters, and stores important metadata in the database. Then, it creates new tasks by enqueuing object transfer messages into the <b>Transfer Queue</b>.</p><p>We immediately queue new batches of work, maximizing concurrency. A built-in throttling mechanism prevents us from adding more messages to our queues when unexpected failures occur, such as dependent systems going down. This helps maintain stability and prevents overload during disruptions.</p>
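<p>A simplified, synchronous sketch of that listing loop follows. The page size and failure threshold are illustrative assumptions; the real consumer pages with a continuation token and enqueues batches asynchronously:</p>

```typescript
// Hypothetical sketch of the List Queue consumer: page through the source
// bucket with a cursor, enqueue one transfer message per object, and stop
// enqueuing if downstream failures accumulate (the throttle).
const sourceKeys = ["a", "b", "c", "d", "e"];
const transferQueue: string[] = []; // stand-in for the Transfer Queue
const PAGE_SIZE = 2;                // illustrative; real listings return larger pages
const MAX_RECENT_FAILURES = 5;      // illustrative throttle threshold

function listPage(cursor: number): { keys: string[]; next: number | null } {
  const keys = sourceKeys.slice(cursor, cursor + PAGE_SIZE);
  const next = cursor + PAGE_SIZE < sourceKeys.length ? cursor + PAGE_SIZE : null;
  return { keys, next };
}

let cursor: number | null = 0;
let recentFailures = 0; // would be incremented by failed queue/database calls
while (cursor !== null && recentFailures < MAX_RECENT_FAILURES) {
  const page = listPage(cursor);
  transferQueue.push(...page.keys); // fan out object transfer messages
  cursor = page.next;               // real listings use a continuation token
}
console.log(transferQueue.length); // prints 5
```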
    <div>
      <h4>Efficient object transfers</h4>
      <a href="#efficient-object-transfers">
        
      </a>
    </div>
    <p>The <b>Transfer Queue Consumer</b> Workers pull object transfer messages from the queue, ensuring that each object is processed only once by locking the object key in the database. When the transfer finishes, the object is unlocked. For larger objects, we break them into manageable chunks and transfer them as multipart uploads.</p>
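<p>The multipart decision can be sketched as a small planning function. The threshold and part size here are illustrative assumptions, not the service’s actual values:</p>

```typescript
// Hypothetical chunking plan for multipart uploads: objects over a
// threshold are split into fixed-size parts and transferred in chunks.
const PART_SIZE = 100 * 1024 * 1024;            // 100 MiB per part (assumed)
const MULTIPART_THRESHOLD = 200 * 1024 * 1024;  // 200 MiB cutoff (assumed)

function planTransfer(objectSize: number): { multipart: boolean; parts: number } {
  if (objectSize < MULTIPART_THRESHOLD) {
    return { multipart: false, parts: 1 };      // small object: single copy
  }
  return { multipart: true, parts: Math.ceil(objectSize / PART_SIZE) };
}

console.log(planTransfer(250 * 1024 * 1024)); // a 250 MiB object becomes 3 parts
```

<p>Chunking also limits the blast radius of a failure: only the failed part needs to be retried, not the whole object.</p>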
    <div>
      <h4>Handling failures gracefully</h4>
      <a href="#handling-failures-gracefully">
        
      </a>
    </div>
    <p>Failures are inevitable in any distributed system, and we had to make sure we accounted for that. We implemented automatic retries for transient failures, so issues don’t interrupt the flow of the migration. But if something can’t be resolved with retries, the message goes into the <b>Dead Letter Queue (DLQ)</b>, where it is logged for later review and resolution.</p>
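<p>That retry/DLQ policy boils down to a small decision function per message. This is a hedged sketch with an assumed retry limit, not the actual implementation:</p>

```typescript
// Hypothetical per-message failure handling for a queue consumer:
// successes are acked, transient errors are retried, and anything past
// the retry budget (or permanently failing) goes to the dead letter queue.
type Outcome = "ack" | "retry" | "dead-letter";

function classify(succeeded: boolean, transient: boolean, attempts: number): Outcome {
  const MAX_RETRIES = 5; // illustrative retry budget
  if (succeeded) return "ack";
  if (transient && attempts < MAX_RETRIES) return "retry";
  return "dead-letter";  // logged for later review and resolution
}

console.log(classify(false, true, 1)); // prints retry
```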
    <div>
      <h4>Job completion &amp; lifecycle management</h4>
      <a href="#job-completion-lifecycle-management">
        
      </a>
    </div>
    <p>Once all the objects are listed and the transfers are in progress, the <b>Lifecycle Queue Consumer</b> keeps an eye on everything. It monitors the ongoing transfers, ensuring that no object is left behind. When all the transfers are complete, the job is marked as finished and the migration process wraps up.</p>
    <div>
      <h3>Database layer: durable storage &amp; legacy data retrieval</h3>
      <a href="#database-layer-durable-storage-legacy-data-retrieval">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4OhENQndBrRkVLNmWQ4mWP/815173a64ec1943b7b626b02247d4887/3.png" />
          </figure><p>When building our new architecture, we knew we needed a robust solution to handle massive datasets while ensuring retrieval of historical job data. That's where our combination of <b>Durable Objects (DOs)</b> and <b>Hyperdrive</b> came in.</p>
    <div>
      <h4>Durable Objects</h4>
      <a href="#durable-objects">
        
      </a>
    </div>
    <p>We gave each account a dedicated Durable Object to track migration jobs. Each <b>job’s DO</b> stores vital details, such as bucket names, user options, and job state. This ensured everything stayed organized and easy to manage. To support large migrations, we also added a <b>Batch DO</b> that manages all the objects queued for transfer, storing their transfer state, object keys, and any extra metadata.</p><p>As migrations scaled up to <b>billions of objects</b>, we had to get creative with storage. We implemented a sharding strategy to distribute request loads, preventing bottlenecks and working around <b>SQLite DO’s 10 GB</b> storage limit. As objects are transferred, we clean up their details, optimizing storage space along the way. It’s surprising how much storage a billion object keys can require!</p>
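<p>One way to sketch such a sharding scheme: hash each object key to pick one of a fixed number of shard DOs, keeping any single SQLite-backed DO well under the storage limit. The hash function (FNV-1a) and naming are illustrative assumptions, not necessarily what Super Slurper uses:</p>

```typescript
// Hypothetical shard selection for spreading batch state across many
// SQLite-backed Durable Objects: hash the object key into [0, shardCount).
function shardFor(objectKey: string, shardCount: number): number {
  let h = 2166136261; // FNV-1a 32-bit offset basis
  for (let i = 0; i < objectKey.length; i++) {
    h ^= objectKey.charCodeAt(i);
    h = Math.imul(h, 16777619); // FNV-1a 32-bit prime
  }
  // A DO could then be addressed by a name like `batch-${jobId}-${shard}`.
  return (h >>> 0) % shardCount;
}

console.log(shardFor("photos/2024/holiday.jpg", 16));
```

<p>Because the shard is derived from the key, every consumer routes a given object to the same DO without any coordination.</p>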
    <div>
      <h4>Hyperdrive</h4>
      <a href="#hyperdrive">
        
      </a>
    </div>
    <p>Since we were rebuilding a system with years of migration history, we needed a way to preserve and access every past migration detail. Hyperdrive serves as a bridge to our legacy systems, enabling seamless retrieval of historical job data from our core <b>PostgreSQL</b> database. It's not just a data retrieval mechanism, but an archive for complex migration scenarios.</p>
    <div>
      <h2>Results: Super Slurper now transfers data to R2 up to 5x faster</h2>
      <a href="#results-super-slurper-now-transfers-data-to-r2-up-to-5x-faster">
        
      </a>
    </div>
    <p>So, after all of that, did we actually achieve our goal of making transfers faster?</p><p>We ran a test migration of 75,000 objects from AWS S3 to R2. With the original implementation, the transfer took 15 minutes and 30 seconds. After our performance improvements, the same migration completed in just 3 minutes and 25 seconds.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/57Pmt9tVNGYWvmRQQyvYE9/43443656bc81743485c3bb0f7d65b134/4.png" />
          </figure><p>When production migrations started using the new service in February, we saw even greater improvements in some cases, depending on the distribution of object sizes. Super Slurper has been around <a href="https://blog.cloudflare.com/r2-super-slurper-ga/"><u>for about two years</u></a>, but the improved performance means it can now move far more data: 35% of all objects it has ever copied were transferred in just the last two months.</p>
    <div>
      <h2>Challenges</h2>
      <a href="#challenges">
        
      </a>
    </div>
    <p>One of the biggest challenges we faced with the new architecture was handling duplicate messages. There were a couple of ways duplicates could occur:</p><ul><li><p>Queues provides at-least-once delivery, which means consumers may receive the same message more than once to guarantee delivery.</p></li><li><p>Failures and retries could also create apparent duplicates. For example, if a request to a Durable Object fails after the object has already been transferred, the retry could reprocess the same object.</p></li></ul><p>If not handled correctly, this could result in the same object being transferred multiple times. To solve this, we implemented several strategies to ensure each object was accurately accounted for and only transferred once:</p><ol><li><p>Since listing is sequential (e.g., to get object 2, you need the continuation token from listing object 1), we assign a sequence ID to each listing operation. This allows us to detect duplicate listings and prevent multiple processes from starting simultaneously. This is particularly useful because we don’t wait for database and queue operations to complete before listing the next batch. If listing 2 fails, we can retry it, and if listing 3 has already started, we can short-circuit unnecessary retries.</p></li><li><p>Each object is locked when its transfer begins, preventing parallel transfers of the same object. Once successfully transferred, the object is unlocked by deleting its key from the database. If a message for that object reappears later, we can safely assume it has already been transferred if the key no longer exists.</p></li><li><p>We rely on database transactions to keep our counts accurate. If an object fails to unlock, its count remains unchanged. 
Similarly, if an object key fails to be added to the database, the count isn’t updated, and the operation will be retried later.</p></li><li><p>As a last failsafe, we check whether the object already exists in the target bucket and was published after the start of our migration. If so, we assume it was transferred by our process (or another) and safely skip it.</p></li></ol>
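<p>The locking strategy (point 2 above) can be simulated in a few lines: a key’s presence in the database means “not yet transferred”, and deleting it doubles as the unlock, so a redelivered message for a finished object becomes a no-op. This is an illustrative sketch, with a Set standing in for the database:</p>

```typescript
// Hypothetical sketch of dedup via key deletion: keys recorded during
// listing are deleted once transferred, so at-least-once delivery of the
// same message never copies an object twice.
const pending = new Set(["a", "b", "c"]); // keys recorded during listing
let transfers = 0;

function handleMessage(key: string): void {
  if (!pending.has(key)) return; // key gone -> already transferred, skip
  transfers++;                   // the actual object copy happens here
  pending.delete(key);           // unlock by deleting the key
}

// At-least-once delivery: "a" and "b" arrive twice, but copy only once.
["a", "b", "a", "c", "b"].forEach(handleMessage);
console.log(transfers); // prints 3
```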
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/17zkULEDjrPDlG6mNIpomw/5c95bde32595daf0684a558729ee055a/5.png" />
          </figure>
    <div>
      <h2>What’s next for Super Slurper?</h2>
      <a href="#whats-next-for-super-slurper">
        
      </a>
    </div>
    <p>We’re always exploring ways to make Super Slurper faster, more scalable, and even easier to use — this is just the beginning.</p><ul><li><p>We recently launched the ability to migrate from any <a href="https://developers.cloudflare.com/changelog/2025-02-24-r2-super-slurper-s3-compatible-support/"><u>S3-compatible storage provider</u></a>!</p></li><li><p>Data migrations are currently limited to 3 concurrent migrations per account, but we want to raise that limit. This will allow object prefixes to be split up into separate migrations and run in parallel, drastically increasing the speed at which a bucket can be migrated. For more information on Super Slurper and how to migrate data from existing object storage to R2, refer to our <a href="https://developers.cloudflare.com/r2/data-migration/super-slurper/"><u>documentation</u></a>.</p></li></ul><p>P.S. As part of this update, we made the API much simpler to interact with, so migrations can now be <a href="https://developers.cloudflare.com/api/resources/r2/subresources/super_slurper/"><u>managed programmatically</u></a>!</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[R2 Super Slurper]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[Cloudflare Queues]]></category>
            <category><![CDATA[Queues]]></category>
            <category><![CDATA[R2]]></category>
            <guid isPermaLink="false">12YmRoxQrsnW1ZVtEKBdht</guid>
            <dc:creator>Connor Maddox</dc:creator>
            <dc:creator>Siddhant Sinha</dc:creator>
            <dc:creator>Prasanna Sai Puvvada</dc:creator>
        </item>
        <item>
            <title><![CDATA[The S3 to R2 Super Slurper is now Generally Available]]></title>
            <link>https://blog.cloudflare.com/r2-super-slurper-ga/</link>
            <pubDate>Tue, 16 May 2023 13:00:50 GMT</pubDate>
            <description><![CDATA[ Use Super Slurper to quickly, securely, and easily migrate data from S3 to R2. ]]></description>
            <content:encoded><![CDATA[ <p>R2 is Cloudflare’s zero <a href="https://www.cloudflare.com/learning/cloud/what-are-data-egress-fees/">egress fee</a> <a href="https://www.cloudflare.com/developer-platform/products/r2/">object storage platform</a>. One of the things that developers love about R2 is how easy it is to get started. With R2’s <a href="https://www.cloudflare.com/developer-platform/solutions/s3-compatible-object-storage/">S3-compatible API</a>, integrating R2 into existing applications only requires changing a couple of lines of code.</p><p>However, migrating data from other <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a> providers into R2 can still be a challenge. To address this issue, we introduced the beta of <a href="/cloudflare-r2-super-slurper/">R2 Super Slurper</a> late last year. During the beta period, we’ve been able to partner with early adopters on hundreds of successful migrations from S3 to <a href="https://www.cloudflare.com/developer-platform/r2/">Cloudflare R2</a>. We’ve made many improvements during the beta, including speed (up to a 5x increase in the number of objects copied per second!), reliability, and the ability to copy data between R2 buckets. Today, we’re proud to announce the general availability of Super Slurper for one-time migration, making <a href="https://www.cloudflare.com/learning/cloud/what-is-data-migration/">data migration</a> a breeze!</p>
    <div>
      <h2>Data migration that’s fast, reliable, and easy to use</h2>
      <a href="#data-migration-thats-fast-reliable-and-easy-to-use">
        
      </a>
    </div>
    <p>R2 Super Slurper one-time migration allows you to quickly and easily copy objects from S3 to an R2 bucket of your choice.</p>
    <div>
      <h3>Fast</h3>
      <a href="#fast">
        
      </a>
    </div>
    <p>Super Slurper copies objects from your S3 buckets in parallel and uses Cloudflare’s global network to tap into vast amounts of bandwidth to ensure migrations finish fast.</p><blockquote><p>This migration tool is impressively fast! We expected our migration to take a day to complete, but we were able to move all of our data in less than half an hour. - <b>Nick Inhofe</b>, Engineering Manager at <a href="https://www.pdq.com/">PDQ</a></p></blockquote>
    <div>
      <h3>Reliable</h3>
      <a href="#reliable">
        
      </a>
    </div>
    <p>Sending objects through the Internet can sometimes fail. R2 Super Slurper accounts for that, and is capable of driving multi-terabyte migrations to completion with robust retries of failed transfers. Additionally, larger objects are transferred in chunks, so if something goes wrong, it only retries the portion of the object that’s needed. This means faster migrations and <a href="https://r2-calculator.cloudflare.com/">lower cost</a>. And if for some reason an object just won’t transfer, it gets logged, so you can keep track and sort it out later.</p>
    <div>
      <h3>Easy to use</h3>
      <a href="#easy-to-use">
        
      </a>
    </div>
    <p>R2 Super Slurper simplifies the process of copying objects and their associated metadata from S3 to your R2 buckets. Point Super Slurper to your S3 buckets and an asynchronous task will handle the rest. While the migration is taking place, you can follow along from the dashboard.</p><blockquote><p>R2 has saved us both time and money. We migrated millions of images in a short period of time. It wouldn't have been possible for us to build a tool to migrate our data in this amount of time in a cost-effective way. - <b>Damien Capocchi</b>, Backend Engineering Manager at <a href="https://reelgood.com/">Reelgood</a></p></blockquote>
    <div>
      <h2>Migrate your S3 data into R2</h2>
      <a href="#migrate-your-s3-data-into-r2">
        
      </a>
    </div>
    <ol><li><p>From the Cloudflare dashboard, expand <b>R2</b> and select <b>Data Migration</b>.</p></li><li><p>Select <b>Migrate files</b>.</p></li><li><p>Enter your Amazon S3 bucket name, optional bucket prefix, and associated credentials and select <b>Next</b>.</p></li><li><p>Enter your R2 bucket name and associated credentials and select <b>Next</b>.</p></li><li><p>After you finish reviewing the details of your migration, select <b>Migrate files</b>.</p></li></ol>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1qo4qeK1OqM0xTKm6RbBWy/0ac3f85a6833194a585a66a1d3880524/image1-32.png" />
            
            </figure><p>You can view the status of your migration job at any time on the dashboard. If you want to copy data from one R2 bucket to another R2 bucket, you can select Cloudflare R2 as the source bucket provider and follow the same process. For more information on how to use Super Slurper, please see the documentation <a href="https://developers.cloudflare.com/r2/r2-migrator/">here</a>.</p>
    <div>
      <h3>Next up: Incremental migration</h3>
      <a href="#next-up-incremental-migration">
        
      </a>
    </div>
    <p>For the majority of cases, a one-time migration of data from your previous object storage bucket to R2 is sufficient; complete the switch from S3 to R2 and immediately watch egress fees go to zero.</p><p>However, in some cases you may want to migrate data to R2 incrementally over time (sip by sip, if you will). Enter incremental migration, allowing you to do just that.</p><p>The goal of incremental migration is to copy files from your origin bucket to R2 as they are requested. When a requested object is not already in the R2 bucket, it is downloaded, one last time, from your origin bucket and then copied to R2. From then on, every request for this object will be served by R2, which means fewer egress fees!</p><p>Since data is migrated within the flow of normal data access and application logic, there is no overhead of unnecessary egress fees. Previously complicated migrations become as easy as replacing the S3 endpoint in your application.</p>
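<p>A minimal sketch of that pull-through flow, with Maps standing in for the R2 bucket and the origin bucket (the real bucket calls are asynchronous, and the names here are illustrative):</p>

```typescript
// Hypothetical sketch of incremental ("pull-through") migration: serve an
// object from R2 if it's already there; otherwise download it from the
// origin bucket one last time, copy it into R2, and serve it.
function serve(
  key: string,
  r2: Map<string, string>,
  origin: Map<string, string>,
): string | undefined {
  const cached = r2.get(key);
  if (cached !== undefined) return cached; // already migrated: no egress fee
  const obj = origin.get(key);             // the final egress for this object
  if (obj !== undefined) r2.set(key, obj); // copy into R2 as we serve it
  return obj;                              // undefined -> not found anywhere
}
```

<p>On the first request an object incurs one last egress charge from the origin; every later request is served directly from R2.</p>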
    <div>
      <h3>Join the private beta waitlist for incremental migration</h3>
      <a href="#join-the-private-beta-waitlist-for-incremental-migration">
        
      </a>
    </div>
    <p>We’re excited about our progress making data migration easier, but we’re just getting started. If you’re interested in participating in the private beta for Super Slurper incremental migration, let us know by joining the waitlist <a href="https://forms.gle/9xvDLR8LL1Pt8rF58">here</a>.</p><p>We encourage you to join our <a href="https://discord.cloudflare.com/">Discord community</a> to share your R2 experiences, questions, and feedback!</p>
 ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[R2 Super Slurper]]></category>
            <guid isPermaLink="false">5rDAs1l1glaZBHaVpPlbSu</guid>
            <dc:creator>Phillip Jones</dc:creator>
            <dc:creator>Jérôme Schneider</dc:creator>
        </item>
        <item>
            <title><![CDATA[ICYMI: Developer Week 2022 announcements]]></title>
            <link>https://blog.cloudflare.com/icymi-developer-week-2022-announcements/</link>
            <pubDate>Fri, 18 Nov 2022 21:13:51 GMT</pubDate>
            <description><![CDATA[ This week we made over 30 announcements. In case you missed any, here’s a quick round-up. ]]></description>
            <content:encoded><![CDATA[
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5hJumg76O4azWrTEzO7r35/c611fec9c69576a134d3fa3ee61f714a/2022-Developer-Week-Hero-Dark_b-1.png" />
            
            </figure><p>Developer Week 2022 has come to a close. Over the last week we’ve shared with you 31 posts on what you can build on Cloudflare and our vision and roadmap on where we’re headed. We shared product announcements, customer and partner stories, and provided technical deep dives. In case you missed any of the posts here’s a handy recap.</p>
    <div>
      <h2>Product and feature announcements</h2>
      <a href="#product-and-feature-announcements">
        
      </a>
    </div>
    <table>
<thead>
  <tr>
    <th>Announcement</th>
    <th>Summary</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><a href="https://blog.cloudflare.com/welcome-to-the-supercloud-and-developer-week-2022/">Welcome to the Supercloud (and Developer Week 2022)</a></td>
    <td>Our vision of the cloud: a model of cloud computing that promises to make developers highly productive at scaling from one to Internet-scale in the most flexible, efficient, and economical way.</td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/cloudflare-queues-open-beta">Build applications of any size on Cloudflare with the Queues open beta</a></td>
    <td>Build performant and resilient distributed applications with Queues. Available to all developers with a paid Workers plan. 
</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/cloudflare-r2-super-slurper/">Migrate from S3 easily with the R2 Super Slurper</a></td>
    <td>A tool to easily and efficiently move objects from your existing storage provider to R2. 
</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/cloudflare-workers-templates/">Get started with Cloudflare Workers with ready-made templates</a></td>
    <td>See what’s possible with Workers and get building faster with these starter templates. 
</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/cache-reserve-open-beta/">Reduce origin load, save on cloud egress fees, and maximize cache hits with Cache Reserve</a></td>
    <td>Cache Reserve is graduating to open beta – users can now test and integrate it into their content delivery strategy without any additional waiting. 
</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/announcing-logs-engine/">Store and process your Cloudflare Logs... with Cloudflare</a></td>
    <td>Query Cloudflare logs stored on R2. 
</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/d1-open-alpha/">UPDATE Supercloud SET status = 'open alpha' WHERE product = 'D1'</a></td>
    <td>D1, our first global relational database, is in open alpha. Start building and share your feedback with us. 
</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/introducing-workers-browser-rendering-api/">Automate an isolated browser instance with just a few lines of code</a></td>
    <td>The Browser Rendering API is an out-of-the-box solution to run browser automation tasks with Puppeteer in Workers.
</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/mutual-tls-for-workers/">Bringing authentication and identification to Workers through Mutual TLS</a></td>
    <td>Send outbound requests with Workers through a mutually authenticated channel. 
</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/pages-function-goes-GA/">Spice up your sites on Cloudflare Pages with Pages Functions General Availability</a></td>
    <td>Easily add dynamic content to your Pages projects with Functions.  </td>    
  </tr>
     <tr>
    <td><a href="https://blog.cloudflare.com/launchpad-fall-22/">Announcing the first Workers Launchpad cohort and growth of the program to $2 billion</a></td>
    <td>We were blown away by the interest in the Workers Launchpad Funding Program and are proud to introduce the first cohort. 
</td>    
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/snippets-announcement">The most programmable Supercloud with Cloudflare Snippets</a></td>
    <td>Modify traffic routed through the Cloudflare CDN without having to write a Worker. 
</td>    
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/deployments-for-workers">Keep track of Workers’ code and configuration changes with Deployments</a></td>
    <td>Track your changes to a Worker configuration, binding, and code. 
</td>    
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/workers-logpush-GA">Send Cloudflare Workers logs to a destination of your choice with Workers Trace Events Logpush</a></td>
    <td>Gain visibility into your Workers when logs are sent to your analytics platform or <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a>. Available to all users on a Workers paid plan. </td>    
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/improving-workers-types">Improved Workers TypeScript support</a></td>
    <td>Based on feedback from users we’ve improved our types and are open-sourcing the automatic generation scripts. </td>    
  </tr>
    

</tbody>
</table>
    <div>
      <h3>Technical deep dives</h3>
      <a href="#technical-deep-dives">
        
      </a>
    </div>
    <table>
<thead>
  <tr>
    <th>Announcement</th>
    <th>Summary</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><a href="https://blog.cloudflare.com/standards-compliant-workers-api/">The road to a more standards-compliant Workers API</a></td>
      <td>An update on the work the <a href="https://github.com/wintercg">WinterCG</a> is doing on the creation of common API standards in JavaScript runtimes and how Workers is implementing them.   
</td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/r2-rayid-retrieval">Indexing millions of HTTP requests using Durable Objects</a></td>
    <td>Indexing and querying millions of logs stored in R2 using Workers, Durable Objects, and the Streams API.  </td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/building-a-better-developer-experience-through-api-documentation">Iteration isn't just for code: here are our latest API docs</a></td>
    <td>We’ve revamped our API reference documentation to standardize our API content and improve the overall developer experience when using the Cloudflare APIs. </td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/making-static-sites-dynamic-with-cloudflare-d1">Making static sites dynamic with D1</a></td>
    <td>A template to build a D1-based comments API. </td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/open-api-transition">The Cloudflare API now uses OpenAPI schemas</a></td>
    <td>OpenAPI schemas are now available for the Cloudflare API. </td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/pages-full-stack-frameworks">Server-side render full stack applications with Pages Functions</a></td>
    <td>Run server-side rendering in a Function using a variety of frameworks including Qwik, Astro, and SolidStart.</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/fragment-piercing">Incremental adoption of micro-frontends with Cloudflare Workers</a></td>
    <td>How to replace selected elements of a legacy client-side rendered application with server-side rendered fragments using Workers. </td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/technology-behind-radar2/">How we built it: the technology behind Cloudflare Radar 2.0</a></td>
    <td>Details on how we rebuilt Radar using Pages, Remix, Workers, and R2. </td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/terraforming-cloudflare-at-cloudflare">How Cloudflare uses Terraform to manage Cloudflare</a></td>
    <td>How we made it easier for our developers to make changes with the Cloudflare Terraform provider. </td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/network-performance-update-developer-week/">Network performance update: Developer Week 2022</a></td>
    <td>See how fast Cloudflare Workers are compared to other solutions.</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/using-analytics-engine-to-improve-analytics-engine">How Cloudflare instruments services using Workers Analytics Engine</a></td>
    <td>Instrumentation with Analytics Engine provides data to find bugs and helps us prioritize new features. </td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/miniflare-and-workerd">Doubling down on local development with Workers: Miniflare meets workerd</a></td>
    <td>Improving local development using Miniflare 3, now powered by workerd.</td>
  </tr>
 
</tbody>
</table>
    <div>
      <h3>Customer and partner stories</h3>
      <a href="#customer-and-partner-stories">
        
      </a>
    </div>
    <table>
<thead>
  <tr>
    <th>Announcement</th>
    <th>Summary</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><a href="https://blog.cloudflare.com/devcycle-customer-story">Cloudflare Workers scale too well and broke our infrastructure, so we are rebuilding it on Workers</a></td>
    <td>How DevCycle re-architected their feature management tool using Workers. </td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/neon-postgres-database-from-workers">Easy Postgres integration with Workers and Neon.tech</a></td>
    <td>Neon.tech solves the challenges of connecting to Postgres from Workers.</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/xata-customer-story">Xata Workers: client-side database access without client-side secrets</a></td>
    <td>Xata uses Workers for Platforms to reduce the security risks of running untrusted code.</td>
  </tr>
    <tr>
    <td><a href="https://blog.cloudflare.com/twilio-segment-sdk-powered-by-cloudflare-workers">Twilio Segment Edge SDK powered by Cloudflare Workers</a></td>
    <td>The Segment Edge SDK, built on Workers, helps applications collect and track events from the client, and get access to realtime user state to personalize experiences.</td>
  </tr>
</tbody>
</table>
    <div>
      <h3>Next</h3>
      <a href="#next">
        
      </a>
    </div>
    <p>And that’s it for Developer Week 2022. But you can keep the conversation going by joining our <a href="https://discord.gg/cloudflaredev">Discord Community</a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Supercloud]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[R2 Super Slurper]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">5T3oI2mNgSH8cND974FYm5</guid>
            <dc:creator>Dawn Parzych</dc:creator>
        </item>
        <item>
            <title><![CDATA[Migrate from S3 easily with the R2 Super Slurper]]></title>
            <link>https://blog.cloudflare.com/cloudflare-r2-super-slurper/</link>
            <pubDate>Tue, 15 Nov 2022 14:01:00 GMT</pubDate>
            <description><![CDATA[ Today we're announcing the R2 Super Slurper, the tool that will enable you to migrate all your data to R2 in a simple and efficient way ]]></description>
            <content:encoded><![CDATA[ <p>R2 is an <a href="https://www.cloudflare.com/developer-platform/solutions/s3-compatible-object-storage/">S3-compatible</a>, globally <a href="https://www.cloudflare.com/developer-platform/products/r2/">distributed object storage</a> service that lets developers store large amounts of unstructured data without the costly <a href="https://www.cloudflare.com/learning/cloud/what-are-data-egress-fees/">egress bandwidth fees</a> common with other providers.</p><p>To enjoy this egress freedom, you’ll need a plan for moving all the data you currently keep elsewhere into R2. You might want to do it all at once, moving as much data as quickly as possible while ensuring consistency. Or you might prefer to move the data to R2 gradually, shifting your reads from your old provider to R2 over time, and only then decide whether to cut off your old storage or keep it as a backup for new objects in R2.</p><p>There are multiple options for architecting and implementing such a move, but transferring terabytes of data from one cloud storage provider to another is always tricky, always involves planning, and often requires dedicated staffing.</p><p>That was hard. But not anymore.</p><p>Today we're announcing the R2 Super Slurper, the feature that will enable you to move all your data to R2 in one giant slurp or sip by sip — all in a friendly, intuitive UI and API.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ndFgYCd0mbAXlnMc5LRQ2/a52499ae2b42f8f7d0b99c08ae1333b5/image2-29.png" />
            
            </figure>
    <div>
      <h2>The first step: R2 Super Slurper Private Beta</h2>
      <a href="#the-first-step-r2-super-slurper-private-beta">
        
      </a>
    </div>
    
    <div>
      <h3>One giant slurp</h3>
      <a href="#one-giant-slurp">
        
      </a>
    </div>
    <p>The very first iteration of the R2 Super Slurper allows you to target an S3 bucket and import the objects you have stored there into your R2 bucket. It's a simple, one-time import that covers the most common scenarios. Point to your existing S3 source, grant the R2 Super Slurper permissions to read the objects you want to migrate, and an asynchronous job will take care of the rest.</p>
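<p>The one-time import described above boils down to a list-and-copy loop. Here is a minimal TypeScript sketch of that idea; the <code>ListableStore</code> interface, <code>MemoryBucket</code> class, and <code>oneTimeImport</code> function are hypothetical names for illustration only, not Super Slurper's actual implementation (the real service runs as an asynchronous job at much larger scale).</p>

```typescript
// Illustrative sketch only: these interfaces and names are hypothetical,
// not Super Slurper's real implementation.
interface ListableStore {
  list(prefix: string): Promise<string[]>;
  get(key: string): Promise<Uint8Array | null>;
  put(key: string, value: Uint8Array): Promise<void>;
}

// Copy every object under `prefix` from the source bucket into the
// destination bucket, returning how many objects were transferred.
async function oneTimeImport(
  source: ListableStore,
  dest: ListableStore,
  prefix = ""
): Promise<number> {
  let copied = 0;
  for (const key of await source.list(prefix)) {
    const obj = await source.get(key);
    if (obj !== null) {
      await dest.put(key, obj);
      copied++;
    }
  }
  return copied;
}

// In-memory stand-in for a bucket, for demonstration only.
class MemoryBucket implements ListableStore {
  private data = new Map<string, Uint8Array>();
  async list(prefix: string) {
    return [...this.data.keys()].filter((k) => k.startsWith(prefix));
  }
  async get(key: string) {
    return this.data.get(key) ?? null;
  }
  async put(key: string, value: Uint8Array) {
    this.data.set(key, value);
  }
}
```

<p>Because the loop takes a prefix, the same saved source definition can drive separate import operations for different folders of the same bucket, which mirrors the saved-credentials workflow described below.</p>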
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3WkVFWS2GIAuc9TnpHuAsT/898b781ae72a521fddafb2ddb722763b/image1-34.png" />
            
            </figure><p>You’ll also be able to save the definitions and credentials used to access your source bucket, so you can migrate different folders within it in new operations without defining URLs and credentials all over again. This alone will save you from scripting your way through buckets with many paths you’d like to validate for consistency. During the beta stages — with your feedback — we will evolve the R2 Super Slurper to the point where anyone can achieve an entirely consistent super slurp, all with the click of just a few buttons.</p>
    <div>
      <h3>Automatic sip by sip migration</h3>
      <a href="#automatic-sip-by-sip-migration">
        
      </a>
    </div>
    <p>Future development also includes automatic sip by sip migration, which incrementally copies objects to R2 as end users request them. You can start serving objects from R2 as they migrate, saving you money immediately.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/36YMN6Y7USY0TMwG8VQuob/a3e1f03714895b9581abe870a322566b/image4-16.png" />
            
            </figure><p>The flow of the requests and object migration will look like this:</p><ul><li><p><b>Check for Object</b> — A request arrives at Cloudflare <b>(1)</b>, and we check the R2 bucket for the requested object <b>(2)</b>. If the object exists, R2 serves it <b>(3)</b>.</p></li><li><p><b>Copy the Object</b> — If the object does <i>not</i> exist in R2, a request for the object flows to the origin bucket <b>(2a)</b>. Once there's an answer with an object, we serve it and copy it into R2 <b>(2b)</b>.</p></li><li><p><b>Serve the Object</b> — R2 serves all future requests for the object <b>(3)</b>.</p></li></ul><p>With this capability you can copy your objects, previously scattered through one or even multiple buckets from other vendors, while ensuring that everything requested from the end-user side gets served from R2. And because you will only need to use the R2 Super Slurper to sip the object from elsewhere on the first request, you will <a href="https://r2-calculator.cloudflare.com/">start saving on those egress fees</a> for any subsequent ones.</p><p>We are targeting <a href="https://www.cloudflare.com/developer-platform/solutions/s3-compatible-object-storage/">S3-compatible</a> buckets for now, but you can expect other sources to become available during 2023.</p>
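<p>The three numbered steps above can be sketched as a single read-through function. This is an illustrative sketch only, assuming abstract storage interfaces; the <code>ObjectStore</code> interface and <code>migrateOnRead</code> function are hypothetical names, not Cloudflare's implementation.</p>

```typescript
// Sketch of the sip-by-sip flow: serve from R2 when possible, otherwise
// fetch from the origin bucket, copy into R2, and serve. Names and
// interfaces here are illustrative, not Cloudflare's implementation.
interface ObjectStore {
  get(key: string): Promise<Uint8Array | null>;
  put(key: string, value: Uint8Array): Promise<void>;
}

async function migrateOnRead(
  r2: ObjectStore,
  origin: ObjectStore,
  key: string
): Promise<Uint8Array | null> {
  const existing = await r2.get(key);     // (2) check the R2 bucket
  if (existing !== null) return existing; // (3) serve from R2

  const obj = await origin.get(key);      // (2a) miss: ask the origin bucket
  if (obj === null) return null;          // the object exists nowhere

  await r2.put(key, obj);                 // (2b) copy it into R2...
  return obj;                             // ...and serve it
}

// In-memory stand-in for a bucket, for demonstration only.
class MemoryStore implements ObjectStore {
  private data = new Map<string, Uint8Array>();
  async get(key: string) {
    return this.data.get(key) ?? null;
  }
  async put(key: string, value: Uint8Array) {
    this.data.set(key, value);
  }
}
```

<p>After the first request for a key, every subsequent read is served from R2 and never touches the origin — which is exactly where the egress savings come from.</p>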
    <div>
      <h2>Join the waitlist for the R2 Super Slurper private beta</h2>
      <a href="#join-the-waitlist-for-the-r2-super-slurper-private-beta">
        
      </a>
    </div>
    <p>To access the R2 Super Slurper, <a href="https://dash.cloudflare.com/?to=/:account/r2/plans">you must be an R2 user first</a> and sign up for the R2 Super Slurper waitlist <a href="https://dash.cloudflare.com/?to=/:account/r2/slurper">here</a>.</p><p>We will collaborate closely with many early users in the private beta stage to refine and test the service. Soon, we'll announce an open beta where anyone can sign up.</p><p>Make sure to join our <a href="https://discord.gg/cloudflaredev">Discord server</a> and get in touch with a fantastic community of users and Cloudflare staff for all R2-related topics!</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[R2 Super Slurper]]></category>
            <category><![CDATA[Data Transfer Bucket]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[R2]]></category>
            <guid isPermaLink="false">5pytIiHYPQbpQj1sc4tDlE</guid>
            <dc:creator>Aly Cabral</dc:creator>
        </item>
    </channel>
</rss>