
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sun, 05 Apr 2026 05:46:36 GMT</lastBuildDate>
        <item>
            <title><![CDATA[The Migration of Legacy Applications to Workers]]></title>
            <link>https://blog.cloudflare.com/the-migration-of-legacy-applications-to-workers/</link>
            <pubDate>Tue, 28 Jul 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ As Cloudflare Workers, and other Serverless platforms, continue to drive down costs while making it easier for developers to stand up globally scaled applications, the migration of legacy applications is becoming increasingly common. ]]></description>
            <content:encoded><![CDATA[ <p>As Cloudflare Workers, and other serverless platforms, continue to drive down costs while making it easier for developers to stand up globally scaled applications, the migration of legacy applications is becoming increasingly common. In this post, I want to show how easy it is to migrate such an application onto Workers. To demonstrate, I’m going to use a common migration scenario: moving a legacy application (on an old compute platform behind a VPN or in a private cloud) to a serverless compute platform behind zero-trust security.</p>
    <div>
      <h3>Wait but why?</h3>
      <a href="#wait-but-why">
        
      </a>
    </div>
    <p>Before we dive further into the technical work, however, let me just address up front: why would someone want to do this? What benefits would they get from such a migration? In my experience, there are two sets of reasons: (1) factors that are “pushing” off legacy platforms, or the constraints and problems of the legacy approach; and (2) factors that are “pulling” onto serverless platforms like Workers, which speak to the many benefits of this new approach. In terms of the push factors, we often see three core ones:</p><ul><li><p>Legacy compute resources are not flexible and must be constantly maintained, often leading to capacity constraints or excess cost;</p></li><li><p>Maintaining VPN credentials is cumbersome, and can introduce security risks if not done properly;</p></li><li><p>VPN client software can be challenging for non-technical users to operate.</p></li></ul><p>Similarly, there are some key benefits “pulling” folks onto serverless applications and zero-trust security:</p><ul><li><p>Instant scaling, up or down, depending on usage. No capacity constraints, and no excess cost;</p></li><li><p>No separate credentials to maintain; users can use Single Sign On (SSO) across many applications;</p></li><li><p>VPN hardware, private cloud, and existing compute can be retired to simplify operations and reduce cost.</p></li></ul><p>While the benefits of this more modern end-state are clear, there’s one other thing that causes organizations to pause: the costs in time and migration effort seem daunting. Often, what organizations find is that migration is not as difficult as they fear. In the rest of this post, I will show you how Cloudflare Workers, and the rest of the Cloudflare platform, can greatly simplify migrations and help you modernize all of your applications.</p>
    <div>
      <h3>Getting Started</h3>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>To take you through this, we will use a contrived application I’ve written in Node.js to illustrate the steps we would take with a real, and far more complex, example. The goal is to show the different tools and features you can use at each step, and how our platform design supports development and cutover of an application. We’ll use four key Cloudflare technologies as we move this application off of my laptop and into the cloud:</p><ol><li><p><b>Serverless Compute through Workers</b></p></li><li><p><b>Robust Developer-focused Tooling for Workers via Wrangler</b></p></li><li><p><b>Zero-Trust security through Access</b></p></li><li><p><b>Instant, Secure Origin Tunnels through Argo Tunnels</b></p></li></ol><p>Our example application for today is called Post Process, and it performs business logic on input provided in an HTTP POST body. It takes the input data from authenticated clients, performs a processing task, and responds with the result in the body of an HTTP response. The server runs in Node.js on my laptop.</p><p>Since the example application is written in Node.js, we will be able to directly copy some of the JavaScript assets for our new application. You could follow this “direct port” method not only for JavaScript applications, <a href="https://github.com/cloudflare/worker-emscripten-template">but even for applications in our other WASM-supported languages.</a> For other languages, you first need to rewrite or transpile into one with WASM support.</p><p><b>Into our Application</b></p><p>Our basic example will perform only simple text processing, so that we can focus on the broad features of the migration. 
I’ve set up an unauthenticated copy (using Workers, to give us a <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">scalable and reliable place to host</a> it) at <a href="https://postprocess-workers.kirk.workers.dev/postprocess">https://postprocess-workers.kirk.workers.dev/postprocess</a> where you can see how it operates. Here is an example cURL:</p>
            <pre><code>curl -X POST https://postprocess-workers.kirk.workers.dev/postprocess --data '{"operation":"2","data":"Data-Gram!"}'</code></pre>
            <p>The relevant takeaways from the code itself are pretty simple:</p><ul><li><p>There are two code modules, which conveniently split the application logic completely from the preprocessing / HTTP interface.</p></li><li><p>The application logic module exposes one function, <i>postProcess(object)</i>, where <i>object</i> is the parsed JSON of the POST body. It returns a JavaScript object, ready to be encoded into a string in the JSON HTTP response. <b>This module can run on Workers JavaScript with no changes. It only needs a new preprocessing / HTTP interface</b>.</p></li><li><p>The preprocessing / HTTP interface runs on raw Node.js and exposes a local HTTPS server. The server does not directly take inbound traffic from the Internet, but sits behind a gateway which controls access to the service.</p></li></ul>
    <div>
      <h4>Code snippet from Node.js HTTP module</h4>
      <a href="#code-snippet-from-node-js-http-module">
        
      </a>
    </div>
    
            <pre><code>const server = http.createServer((req, res) =&gt; {
    if (req.url == '/postprocess') {
        if (req.method == 'POST') {
            gatherPost(req, data =&gt; {
                let jsonData
                try {
                    jsonData = JSON.parse(data)
                } catch (e) {
                    res.end('Invalid JSON payload! \n')
                    return
                }
                const result = postProcess(jsonData)
                res.write(JSON.stringify(result) + '\n');
                res.end();
            })
        } else {
            res.end('Invalid Method, only POST is supported! \nPlease send a POST with data in format {"operation":"1","data":"Data-Gram!"}\n')
        }
    } else {
        res.end('Invalid request. Did you mean to POST to /postprocess? \n');
    }
});</code></pre>
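The server above relies on a <code>gatherPost</code> helper that isn't shown in the post. A minimal sketch of what such a helper likely does (an assumption on my part: buffer the streamed POST body and hand the complete string to a callback) could look like this:

```javascript
// Hypothetical sketch of the gatherPost helper used by the server above
// (not shown in the original post): accumulate the request body chunks
// as they stream in, then invoke the callback with the full string.
function gatherPost(req, callback) {
    let body = ''
    req.on('data', chunk => { body += chunk })
    req.on('end', () => callback(body))
}
```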
            
    <div>
      <h4>Code snippet from Node.js logic module</h4>
      <a href="#code-snippet-from-node-js-logic-module">
        
      </a>
    </div>
    
            <pre><code>function postProcess (postJson) {
        const ServerVersion = "2.5.17"
        if (postJson != null &amp;&amp; 'operation' in postJson &amp;&amp; 'data' in postJson) {
                var output
                var operation = postJson['operation']
                var data = postJson['data']
                switch (operation) {
                        case "1":
                              output = String(data).toLowerCase()
                              break
                        case "2":
                              var d = data + "\n"
                              output = d + d + d
                              break
                        case "3":
                              output = ServerVersion
                              break
                        default:
                              output = "Invalid Operation"
                }
                return {'Output': output}
        }
        else {
                return {'Error':'Invalid request, invalid JSON format'}
        }
}</code></pre>
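Before porting, it's worth smoke-testing the logic module in plain Node. The snippet below copies the postProcess function so it runs standalone; the expected results follow directly from the switch statement above:

```javascript
// postProcess copied from the logic module above, so this snippet is
// self-contained and runnable with plain `node`.
function postProcess(postJson) {
    const ServerVersion = "2.5.17"
    if (postJson != null && 'operation' in postJson && 'data' in postJson) {
        var output
        var operation = postJson['operation']
        var data = postJson['data']
        switch (operation) {
            case "1":
                output = String(data).toLowerCase()
                break
            case "2":
                var d = data + "\n"
                output = d + d + d
                break
            case "3":
                output = ServerVersion
                break
            default:
                output = "Invalid Operation"
        }
        return {'Output': output}
    } else {
        return {'Error': 'Invalid request, invalid JSON format'}
    }
}

// Exercise each operation:
console.log(postProcess({ operation: "1", data: "Data-Gram!" })) // { Output: 'data-gram!' }
console.log(postProcess({ operation: "3", data: "" }))           // { Output: '2.5.17' }
console.log(postProcess({ operation: "9", data: "x" }))          // { Output: 'Invalid Operation' }
console.log(postProcess({}))                                     // { Error: 'Invalid request, invalid JSON format' }
```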
            
    <div>
      <h4>Current State Application Architecture</h4>
      <a href="#current-state-application-architecture">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1d6MEhUqu77DDapE41clZn/dea420cdf5233fa5afd3f9b7aaa22280/image4-9.png" />
            
            </figure>
    <div>
      <h3>Design Decisions</h3>
      <a href="#design-decisions">
        
      </a>
    </div>
    <p>With all this information in hand, we can arrive at the details of our new Cloudflare-based design:</p><ol><li><p>Keep the business logic completely intact, and specifically use the same .js asset</p></li><li><p>Build a new preprocessing layer in Workers to replace the Node.js module</p></li><li><p>Use Cloudflare Access to authenticate users to our application</p></li></ol>
    <div>
      <h4>Target State Application Architecture</h4>
      <a href="#target-state-application-architecture">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3XTQxjhc7V7BCNLD6OYtNu/eec3260400209afbf1d7e56f0d753bc4/image3-15.png" />
            
            </figure>
    <div>
      <h3>Finding the first win</h3>
      <a href="#finding-the-first-win">
        
      </a>
    </div>
    <p>One good way to make a migration successful is to find a quick win early on: a useful task which can be executed while other work is still ongoing. It is even better if the quick win also benefits the eventual cutover. We can find a quick win here if we solve the zero-trust security problem ahead of the compute problem, by putting Cloudflare’s security in front of the existing application.</p><p>We will do this by using Cloudflare’s <a href="https://developers.cloudflare.com/argo-tunnel/">Argo Tunnel</a> feature to securely connect to the existing application, and <a href="https://developers.cloudflare.com/access/">Access</a> for zero-trust authentication. Below, you can see how easy this process is for any command-line user with our <code>cloudflared</code> tool.</p><p>I open up a terminal and use <code>cloudflared tunnel login</code>, which presents me with an authentication flow. I then use the <code>cloudflared tunnel --hostname postprocess.kschwenkler.com --url localhost:8080</code> command to connect an Argo Tunnel between the “url” (my local server) and the “hostname” (the new, public address we will use on my Cloudflare zone).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6HmHbX1sc9KoQHRihx36SP/e1aba58bbdba9d7afe35a330c219cb5c/2.gif" />
            
            </figure><p>Next I flip over to my Cloudflare dashboard and attach an Access Policy to the “hostname” I specified before. We will be using the Service Token mode of Access, which generates a client-specific security token that the client can attach to each HTTP POST. Other modes are better suited to interactive browser use cases.</p>
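Access checks service tokens in the CF-Access-Client-Id and CF-Access-Client-Secret request headers. As a sketch of the client side (the helper name and all values below are placeholders of mine, not from the post), a caller could build its authenticated POST like this:

```javascript
// Sketch of a client request carrying a Cloudflare Access service token.
// CF-Access-Client-Id / CF-Access-Client-Secret are the headers Access
// inspects for service tokens; the values passed in are placeholders.
function buildAccessRequest(clientId, clientSecret, payload) {
    return {
        method: 'POST',
        headers: {
            'CF-Access-Client-Id': clientId,
            'CF-Access-Client-Secret': clientSecret,
            'Content-Type': 'application/json',
        },
        body: JSON.stringify(payload),
    }
}

// Usage with fetch (Node 18+ or a browser):
// fetch('https://postprocess.kschwenkler.com/postprocess',
//     buildAccessRequest('<client-id>', '<client-secret>',
//         { operation: '2', data: 'Data-Gram!' }))
```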
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/343nkuLHV6Hro1OpA9ae6g/8ff8045b063ecf3d5cc479782afbd819/3.gif" />
            
            </figure><p>Now, without using the VPN, I can send a POST to the service, still running on Node.js on my laptop, from any Internet-connected device which has the correct token! It has taken only a few minutes to add zero-trust security to this application and safely expose it to the Internet, while it still runs on a legacy compute platform (my laptop!).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7BYW2YOuW6EuHNgzM92W7i/2de262828b7b10718f813274a15e46fc/4.gif" />
            
            </figure>
    <div>
      <h3>“Quick Win” Architecture</h3>
      <a href="#quick-win-architecture">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/qYhPJkmdsepqfjMCytdnS/6181df1c9920da0109d37cf3cc62049b/image13.png" />
            
            </figure><p>Beyond the direct benefit of a huge security upgrade, we’ve also made our eventual application migration much easier by putting the traffic through the target-state API gateway already. We will see later how, in this state, we can surgically move traffic to the new application for testing.</p>
    <div>
      <h3>Lift to the Cloud</h3>
      <a href="#lift-to-the-cloud">
        
      </a>
    </div>
    <p>With our zero-trust security benefits in hand, and our traffic running through Cloudflare, we can now proceed with the migration of the application itself to Workers. We’ll be using the <a href="https://developers.cloudflare.com/workers/tooling/wrangler">Wrangler</a> tooling to make this process very easy.</p><p>As noted when we first looked at the code, this contrived application exposes a very clean interface between the Node.js-specific HTTP module, which we need to replace, and the business logic <i>postProcess</i> module, which we can use as is with Workers. We’ll first need to rewrite the HTTP module, and then bundle it with the existing business logic into a new Workers application.</p><p>Here is a handwritten example of the basic pattern we’ll use for the HTTP module. We can see how the Service Workers API makes it very easy to grab the POST body with <i>await</i>, and how the JSON interface lets us easily pass the data to the <i>postProcess</i> module we took directly from the initial Node.js app.</p>
            <pre><code>addEventListener('fetch', event =&gt; {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  let requestData
  try {
    requestData = await request.json()
  } catch (e) {
    return new Response('Invalid JSON', { status: 400 })
  }
  return new Response(JSON.stringify(postProcess(requestData)))
}</code></pre>
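A nice property of this pattern is that it can be exercised outside the Workers runtime: Node 18+ ships the same WHATWG Request and Response classes globally. Below is an assumed sketch (not from the post) that drives a handler of this shape with a stub postProcess, as a rough unit test:

```javascript
// Stub standing in for the real logic module, just for this sketch.
function postProcess(json) {
    return { Output: String(json.data).toLowerCase() }
}

// Same shape as the Workers handler shown above.
async function handleRequest(request) {
    let requestData
    try {
        requestData = await request.json()
    } catch (e) {
        return new Response('Invalid JSON', { status: 400 })
    }
    return new Response(JSON.stringify(postProcess(requestData)))
}

// Drive it with a WHATWG Request (global in Node 18+):
const req = new Request('https://example.com/postprocess', {
    method: 'POST',
    body: JSON.stringify({ operation: '1', data: 'Data-Gram!' }),
})
handleRequest(req)
    .then(resp => resp.text())
    .then(text => console.log(text)) // {"Output":"data-gram!"}
```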
            <p>For our work on the mock application, we will go a slightly different route, more in line with a real application which would be more complex. Instead of writing this by hand, we will use <a href="https://developers.cloudflare.com/workers/quickstart">Wrangler</a> and our <a href="https://developers.cloudflare.com/workers/templates/pages/router/">Router template</a> to build the new front end from a robust framework.</p><p>We’ll run <code>wrangler generate post-process-workers https://github.com/cloudflare/worker-template-router</code> to initialize a new Wrangler project with the Router template. Most of the configurations for this template will work as is; we just have to update <code>account_id</code> in our <code>wrangler.toml</code> and make a few small edits to the code in <code>index.js</code>.</p><p>Below is our <code>index.js</code> after my edits. Note the line <code>const postProcess = require('./postProcess.js')</code> at the start of the new HTTP module: this tells Wrangler to include the original business logic from the legacy app’s <code>postProcess.js</code> module, which I will copy to our working directory.</p>
            <pre><code>const Router = require('./router')
const postProcess = require('./postProcess.js')

addEventListener('fetch', event =&gt; {
    event.respondWith(handleRequest(event.request))
})

async function handler(request) {
    const init = {
        headers: { 'content-type': 'application/json' },
    }
    const body = JSON.stringify(postProcess(await request.json()))
    return new Response(body, init)
}

async function handleRequest(request) {
    const r = new Router()
    r.post('.*/postprocess*', request =&gt; handler(request))
    r.get('/', () =&gt; new Response('Hello worker!')) // return a default message for the root route

    const resp = await r.route(request)
    return resp
}</code></pre>
            <p>Now we can simply run <code>wrangler publish</code> to put our application on <a href="https://workers.dev/">workers.dev</a> for testing! The Router template’s defaults and the small edits made above are all we need. Since Wrangler automatically exposes the test application to the Internet (note that we can <i>also</i> put the test application behind Access, with a slightly modified method), we can easily send test traffic from any device.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ikUB5oJkvyjkScPyyJ8PT/3d4bad9f0b3fc2b9cb1d8bf115f900aa/5.gif" />
            
            </figure>
    <div>
      <h4>Shift, Safely!</h4>
      <a href="#shift-safely">
        
      </a>
    </div>
    <p>With our application up for testing on workers.dev, we finally come to the last and most daunting migration step: cutting over traffic from the legacy application to the new one without any service interruption.</p><p>Luckily, we had our quick win earlier and are already routing our production traffic through the Cloudflare network (to the legacy application via Argo Tunnels). This provides huge benefits now that we are at the cutover step. Without changing our IP address, SSL configuration, or any other client-facing properties, we can route traffic to the new application with just one wrangler command.</p>
    <div>
      <h4>Seamless cutover from Transition to Target state</h4>
      <a href="#seamless-cutover-from-transition-to-target-state">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7rYqzkjG8iwCxKPNYSbvTH/fd7135b9281f04ae799205a7becb86e6/image14.png" />
            
            </figure><p>We simply modify <code>wrangler.toml</code> to indicate the production domain / route we’d like the application to operate on, and run <code>wrangler publish</code> again. As soon as Cloudflare receives this update, it will send production traffic to our new application instead of the Argo Tunnel. We have configured the application to send a ‘version’ header, which lets us verify the cutover easily using cURL.</p>
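For reference, the cutover edit is small. Here is a hedged sketch of the relevant wrangler.toml fields in the wrangler v1-era format; the IDs are placeholders, and the route simply mirrors the hostname used earlier in this post:

```toml
name = "post-process-workers"
type = "webpack"                    # assumed: the Router template's build type
account_id = "<your-account-id>"    # placeholder
workers_dev = false                 # stop publishing to workers.dev
route = "postprocess.kschwenkler.com/postprocess*"
zone_id = "<your-zone-id>"          # placeholder
```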
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1NNWzJ0Lf1qPBPQVGv6SpK/8365b2d8c6ff73ed042416398445fb7d/6.gif" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1S3FRDRzxAL0M2JciODIkA/76b7c2c9935629bee8c177d268d6237a/7.gif" />
            
            </figure><p>Rollback, if it is needed, is also very easy. We can either set the <code>wrangler.toml</code> back to the workers.dev-only mode and <code>wrangler publish</code> again, or delete our route manually. Either will send traffic back to the Argo Tunnel.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7zkTHE7QsTycEDpCzOjCWL/3522bdaf3498d5ab7fadc493fa7b7075/8.gif" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4qYUySUW2sUik0WhhsOFiP/bf1e89e2a3c46fae2d9578ad1489882a/9.gif" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1wlX569YmzRMhxjjgy7ZLm/a199cd40004b09aa3734229bffd1012c/10.png" />
            
            </figure>
    <div>
      <h3>In Conclusion</h3>
      <a href="#in-conclusion">
        
      </a>
    </div>
    <p>Clearly, a real application will be more complex than our example above. It may have multiple components, with complex interactions, which must each be handled in turn. Argo Tunnel might remain in use, to connect to a data store or other application outside of our network. We might use WASM to support modules written in other languages. In any of these scenarios, Cloudflare’s Wrangler tooling and serverless capabilities will help us work through the complexities and achieve success.</p><p>I hope that this simple example has helped you to see how Wrangler, cloudflared, Workers, and our entire global network can work together to make migrations as quick and hassle-free as possible. Whether for this case of an old application behind a VPN, or for another application that has outgrown its current home, our Workers platform, Wrangler tooling, and underlying network will scale to meet your business needs.</p> ]]></content:encoded>
            <category><![CDATA[Serverless Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">3vNFOI5bc36SSeB0SaAr1s</guid>
            <dc:creator>Kirk Schwenkler</dc:creator>
        </item>
        <item>
            <title><![CDATA[Bandwidth Alliance Partners - Exciting Choices]]></title>
            <link>https://blog.cloudflare.com/bandwidth-alliance-partners-exciting-choices/</link>
            <pubDate>Thu, 15 Nov 2018 18:04:41 GMT</pubDate>
            <description><![CDATA[ We are tremendously excited about the value our Bandwidth Alliance partner ecosystem adds to our customers. We’re on a mission to help make the internet a better place; and ensuring everyone can access cloud resources at zero-egress rates supports that mission in many ways.  ]]></description>
            <content:encoded><![CDATA[ <p>We are tremendously excited about the value our Bandwidth Alliance partner ecosystem adds to our customers. We’re on a mission to help make the Internet a better place, and ensuring everyone can access cloud resources at zero-egress rates supports that mission in many ways. It’s an easy way for our clients to build modern, cloud-centric applications without the design constraint and financial burden of <a href="https://www.cloudflare.com/learning/cloud/what-are-data-egress-fees/">egress fees</a>.</p><p>The Cloudflare Bandwidth Alliance partner landscape continues to grow, incorporating a diverse group of partners with today’s second-wave announcement. With over a dozen different partners, the range of choices can quickly become overwhelming. And while these are all high-quality platforms which we are happy to recommend to our clients, their important differences will help determine the best fit for you, the customer.</p><p>In this post, I’ll lay out some of Cloudflare’s approach to this solution design question through the lens of a large client we recently worked with. We apply this approach across our full range of products and services, including many use cases far different from the storage need we’ll dig into in this post. I hope that this can help all of our clients, or anyone else interested, mirror a similar approach.</p>
    <div>
      <h3>Storage: Looking at the client’s needs</h3>
      <a href="#storage-looking-at-the-clients-needs">
        
      </a>
    </div>
    <p>The first step to solution design, whether on a technical issue or a business process, is a clear understanding of the needs. In this case, we identified a few key needs from our new client:</p><ul><li><p>Zero-egress storage option: required to manage costs</p></li><li><p>Costs: low-cost storage, given the likelihood of high volume growth</p></li><li><p>Read requests: ability to support thousands of read requests per second</p></li><li><p>Write requests: less concern about the rate of write requests</p></li><li><p>Volume: fairly high volume of storage, 500 TB+</p></li><li><p>Size: tens of millions of small objects in storage</p></li><li><p>API: <a href="https://www.cloudflare.com/developer-platform/solutions/s3-compatible-object-storage/">compatible with the familiar S3 API</a></p></li><li><p>Security: authentication between the storage and Cloudflare</p></li></ul><p>These needs were specific to this large client, but the factors of consideration are likely similar for any customer looking to store data on a host and deliver it through Cloudflare. The relative weight of each of these factors will depend on your particular application.</p>
    <div>
      <h3>Looking at the provider landscape</h3>
      <a href="#looking-at-the-provider-landscape">
        
      </a>
    </div>
    <p>With the client’s needs in mind, we were able to start filtering out some providers which did not align well to those needs. I generally find it useful to sort the options into three buckets:</p><ol><li><p>Checks all the boxes</p></li><li><p>Soft no (fails to check a few boxes, but we may be able to find a middle ground)</p></li><li><p>Hard no (fails crucial boxes)</p></li></ol><p>First, several providers use a custom API instead of S3. This can have many advantages, including cost and performance in some cases, but was not aligned with this client’s request given their development plans. We put all of those into the ‘Soft no’ bucket right away.</p><p>Then, we dug into each provider’s performance and economic model around read vs. write requests, storage volume, and read object size. A few had economics or rate limits which were very challenging for the client’s use case, which put them into the ‘Hard no’ bucket. For example, some providers charge a fee based on the number of reads from storage. This client wanted to perform tens of millions of reads per day on average, across their many stored objects, so any pricing based on this would quickly break their economics. For other use cases, where a low number of large objects are stored, this would not be as much of a factor.</p><p>At this point, we had identified a partner which was a very good fit for our client. We introduced the teams, and began implementation. This customer is currently ramping up storage and delivery of their content based on this joint solution, and we expect to be serving 100 TB+ of their stored data over the next year or so.</p>
    <div>
      <h3>Final thoughts</h3>
      <a href="#final-thoughts">
        
      </a>
    </div>
    <p>In any technology implementation, and especially a complex engagement in Cloudflare’s ever-expanding ecosystem, it is important for us to keep the customer’s goals and use case first in mind. By building close partnerships with our clients, we are able to arrive at a clear understanding of these needs and design the best solution.</p><p>We’re excited to work with clients of any scale on storage, edge compute, security, and many other technologies, and to leverage our ever-growing network to help them succeed.</p> ]]></content:encoded>
            <category><![CDATA[Bandwidth Alliance]]></category>
            <category><![CDATA[Bandwidth Costs]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">77GzamSS1ssIasI6rqaulU</guid>
            <dc:creator>Kirk Schwenkler</dc:creator>
        </item>
    </channel>
</rss>