
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how Cloudflare products are built and the technologies we use, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 14:42:59 GMT</lastBuildDate>
        <item>
            <title><![CDATA[15 years of helping build a better Internet: a look back at Birthday Week 2025]]></title>
            <link>https://blog.cloudflare.com/birthday-week-2025-wrap-up/</link>
            <pubDate>Mon, 29 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Rust-powered core systems, post-quantum upgrades, developer access for students, PlanetScale integration, open-source partnerships, and our biggest internship program ever — 1,111 interns in 2026. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare launched fifteen years ago with a mission to help build a better Internet. Over that time the Internet has changed, and so has what it needs from teams like ours. In this year’s <a href="https://blog.cloudflare.com/cloudflare-2025-annual-founders-letter/"><u>Founders’ Letter</u></a>, Matthew and Michelle discussed the role we have played in the evolution of the Internet, from helping encryption grow from 10% to 95% of Internet traffic to more recent challenges like how people consume content.</p><p>We spend Birthday Week every year releasing the products and capabilities we believe the Internet needs at this moment and around the corner. Previous <a href="https://blog.cloudflare.com/tag/birthday-week/"><u>Birthday Weeks</u></a> saw the launch of our <a href="https://blog.cloudflare.com/introducing-cloudflares-automatic-ipv6-gatewa/"><u>IPv6 gateway</u></a> in 2011, <a href="https://blog.cloudflare.com/introducing-universal-ssl/"><u>Universal SSL</u></a> in 2014, <a href="https://blog.cloudflare.com/introducing-cloudflare-workers/"><u>Cloudflare Workers</u></a> and <a href="https://blog.cloudflare.com/unmetered-mitigation/"><u>unmetered DDoS protection</u></a> in 2017, <a href="https://blog.cloudflare.com/introducing-cloudflare-radar/"><u>Cloudflare Radar</u></a> in 2020, <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2 Object Storage</u></a> with zero egress fees in 2021, <a href="https://blog.cloudflare.com/post-quantum-tunnel/"><u>post-quantum upgrades for Cloudflare Tunnel</u></a> in 2022, and <a href="https://blog.cloudflare.com/best-place-region-earth-inference/"><u>Workers AI</u></a> and <a href="https://blog.cloudflare.com/announcing-encrypted-client-hello/"><u>Encrypted Client Hello</u></a> in 2023. And those are just a sample of the launches.</p><p>This year’s themes focused on preparing the Internet for a new model of monetization that encourages great content to be published, fostering more opportunities to build community both inside and outside of Cloudflare, and evergreen missions like making more features available to everyone and constantly improving the speed and security of what we offer.</p><p>We shipped a lot of new things this year. In case you missed the dozens of blog posts, here is a breakdown of everything we announced during Birthday Week 2025.</p><p><b>Monday, September 22</b></p>
<div><table><thead>
  <tr>
    <th><span>What</span></th>
    <th><span>In a sentence …</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><a href="https://blog.cloudflare.com/cloudflare-1111-intern-program/"><span>Help build the future: announcing Cloudflare’s goal to hire 1,111 interns in 2026</span></a></td>
    <td><span>To invest in the next generation of builders, we announced our most ambitious intern program yet with a goal to hire 1,111 interns in 2026.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/supporting-the-future-of-the-open-web/"><span>Supporting the future of the open web: Cloudflare is sponsoring Ladybird and Omarchy</span></a></td>
    <td><span>To support a diverse and open Internet, we are now sponsoring Ladybird (an independent browser) and Omarchy (an open-source Linux distribution and developer environment).</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/new-hubs-for-startups/"><span>Come build with us: Cloudflare’s new hubs for startups</span></a></td>
    <td><span>We are opening our office doors in four major cities (San Francisco, Austin, London, and Lisbon) as free hubs for startups to collaborate and connect with the builder community.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/ai-crawl-control-for-project-galileo/"><span>Free access to Cloudflare developer services for non-profit and civil society organizations</span></a></td>
    <td><span>We extended our Cloudflare for Startups program to non-profits and public-interest organizations, offering free credits for our developer tools.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/workers-for-students/"><span>Introducing free access to Cloudflare developer features for students</span></a></td>
    <td><span>We are removing cost as a barrier for the next generation by giving students with .edu emails 12 months of free access to our paid developer platform features.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/capnweb-javascript-rpc-library/"><span>Cap’n Web: a new RPC system for browsers and web servers</span></a></td>
    <td><span>We open-sourced Cap'n Web, a new JavaScript-native RPC protocol that simplifies powerful, schema-free communication for web applications.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/workers-launchpad-006/"><span>A look back at Workers Launchpad and a warm welcome to Cohort #6</span></a></td>
    <td><span>We announced Cohort #6 of the Workers Launchpad, our accelerator program for startups building on Cloudflare.</span></td>
  </tr>
</tbody></table></div><p><b>Tuesday, September 23</b></p>
<div><table><thead>
  <tr>
    <th><span>What</span></th>
    <th><span>In a sentence …</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><a href="https://blog.cloudflare.com/per-customer-bot-defenses/"><span>Building unique, per-customer defenses against advanced bot threats in the AI era</span></a></td>
    <td><span>A new anomaly detection system that uses machine learning trained on each zone to build defenses against AI-driven bot attacks.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/cloudflare-astro-tanstack/"><span>Why Cloudflare, Netlify, and Webflow are collaborating to support Open Source tools</span></a></td>
    <td><span>To support the open web, we joined forces with Webflow to sponsor Astro, and with Netlify to sponsor TanStack.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/x402/"><span>Launching the x402 Foundation with Coinbase, and support for x402 transactions</span></a></td>
    <td><span>We are partnering with Coinbase to create the x402 Foundation, encouraging the adoption of the </span><a href="https://github.com/coinbase/x402"><span>x402 protocol</span></a><span> to allow clients and services to exchange value on the web using a common language.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/ai-crawl-control-for-project-galileo/"><span>Helping protect journalists and local news from AI crawlers with Project Galileo</span></a></td>
    <td><span>We are extending our free Bot Management and AI Crawl Control services to journalists and news organizations through Project Galileo.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/confidence-score-rubric/"><span>Cloudflare Confidence Scorecards - making AI safer for the Internet</span></a></td>
    <td><span>Automated evaluation of AI and SaaS tools, helping organizations to embrace AI without compromising security.</span></td>
  </tr>
</tbody></table></div><p><b>Wednesday, September 24</b></p>
<div><table><thead>
  <tr>
    <th><span>What</span></th>
    <th><span>In a sentence …</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><a href="https://blog.cloudflare.com/automatically-secure/"><span>Automatically Secure: how we upgraded 6,000,000 domains by default</span></a></td>
    <td><span>Our Automatic SSL/TLS system has upgraded over 6 million domains to more secure encryption modes by default and will soon automatically enable post-quantum connections.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/content-signals-policy/"><span>Giving users choice with Cloudflare’s new Content Signals Policy</span></a></td>
    <td><span>The Content Signals Policy is a new standard for robots.txt that lets creators express clear preferences for how AI can use their content.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/building-a-better-internet-with-responsible-ai-bot-principles/"><span>To build a better Internet in the age of AI, we need responsible AI bot principles</span></a></td>
    <td><span>A proposed set of responsible AI bot principles to start a conversation around transparency and respect for content creators' preferences.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/saas-to-saas-security/"><span>Securing data in SaaS to SaaS applications</span></a></td>
    <td><span>New security tools to give companies visibility and control over data flowing between SaaS applications.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/post-quantum-warp/"><span>Securing today for the quantum future: WARP client now supports post-quantum cryptography (PQC)</span></a></td>
    <td><span>Cloudflare’s WARP client now supports post-quantum cryptography, providing quantum-resistant encryption for traffic. </span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/a-simpler-path-to-a-safer-internet-an-update-to-our-csam-scanning-tool/"><span>A simpler path to a safer Internet: an update to our CSAM scanning tool</span></a></td>
    <td><span>We made our CSAM Scanning Tool easier to adopt by removing the need to create and provide unique credentials, helping more site owners protect their platforms.</span></td>
  </tr>
</tbody></table></div><p>
<b>Thursday, September 25</b></p>
<div><table><thead>
  <tr>
    <th><span>What</span></th>
    <th><span>In a sentence …</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><a href="https://blog.cloudflare.com/enterprise-grade-features-for-all/"><span>Every Cloudflare feature, available to everyone</span></a></td>
    <td><span>We are making every Cloudflare feature, starting with Single Sign On (SSO), available for anyone to purchase on any plan. </span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/cloudflare-developer-platform-keeps-getting-better-faster-and-more-powerful/"><span>Cloudflare's developer platform keeps getting better, faster, and more powerful</span></a></td>
    <td><span>Updates across Workers and beyond for a more powerful developer platform – such as support for larger and more concurrent Container images, support for external models from OpenAI and Anthropic in AI Search (previously AutoRAG), and more. </span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/planetscale-postgres-workers/"><span>Partnering to make full-stack fast: deploy PlanetScale databases directly from Workers</span></a></td>
    <td><span>You can now connect Cloudflare Workers to PlanetScale databases directly, with connections automatically optimized by Hyperdrive.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/cloudflare-data-platform/"><span>Announcing the Cloudflare Data Platform</span></a></td>
    <td><span>A complete solution for ingesting, storing, and querying analytical data tables using open standards like Apache Iceberg. </span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/r2-sql-deep-dive/"><span>R2 SQL: a deep dive into our new distributed query engine</span></a></td>
    <td><span>A technical deep dive on R2 SQL, a serverless query engine for petabyte-scale datasets in R2.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/safe-in-the-sandbox-security-hardening-for-cloudflare-workers/"><span>Safe in the sandbox: security hardening for Cloudflare Workers</span></a></td>
    <td><span>A deep-dive into how we’ve hardened the Workers runtime with new defense-in-depth security measures, including V8 sandboxes and hardware-assisted memory protection keys.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/sovereign-ai-and-choice/"><span>Choice: the path to AI sovereignty</span></a></td>
    <td><span>To champion AI sovereignty, we've added locally developed open-source models from India, Japan, and Southeast Asia to our Workers AI platform.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/email-service/"><span>Announcing Cloudflare Email Service’s private beta</span></a></td>
    <td><span>We announced the Cloudflare Email Service private beta, allowing developers to reliably send and receive transactional emails directly from Cloudflare Workers.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/nodejs-workers-2025/"><span>A year of improving Node.js compatibility in Cloudflare Workers</span></a></td>
    <td><span>There are hundreds of new Node.js APIs now available that make it easier to run existing Node.js code on our platform. </span></td>
  </tr>
</tbody></table></div><p><b>Friday, September 26</b></p>
<table><thead>
  <tr>
    <th><span>What</span></th>
    <th><span>In a sentence …</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><a href="https://blog.cloudflare.com/20-percent-internet-upgrade"><span>Cloudflare just got faster and more secure, powered by Rust</span></a></td>
    <td><span>We have re-engineered our core proxy with a new modular, Rust-based architecture, cutting median response time by 10ms for millions of customers.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/introducing-observatory-and-smart-shield/"><span>Introducing Observatory and Smart Shield</span></a></td>
    <td><span>New monitoring tools in the Cloudflare dashboard that provide actionable recommendations and one-click fixes for performance issues.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/monitoring-as-sets-and-why-they-matter/"><span>Monitoring AS-SETs and why they matter</span></a></td>
    <td><span>Cloudflare Radar now includes Internet Routing Registry (IRR) data, allowing network operators to monitor AS-SETs to help prevent route leaks.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/an-ai-index-for-all-our-customers"><span>An AI Index for all our customers</span></a></td>
    <td><span>We announced the private beta of AI Index, a new service that creates an AI-optimized search index for your domain that you control and can monetize.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/new-regional-internet-traffic-and-certificate-transparency-insights-on-radar/"><span>Introducing new regional Internet traffic and Certificate Transparency insights on Cloudflare Radar</span></a></td>
    <td><span>Sub-national traffic insights and Certificate Transparency dashboards for TLS monitoring.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/eliminating-cold-starts-2-shard-and-conquer/"><span>Eliminating Cold Starts 2: shard and conquer</span></a></td>
    <td><span>We have reduced Workers cold starts by 10x by implementing a new "worker sharding" system that routes requests to already-loaded Workers.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/network-performance-update-birthday-week-2025/"><span>Network performance update: Birthday Week 2025</span></a></td>
    <td><span>The TCP connection time (trimean) measurements show that we deliver the fastest TCP connection time in 40% of measured ISPs, and the fastest across the top networks.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/how-cloudflare-uses-the-worlds-greatest-collection-of-performance-data/"><span>How Cloudflare uses performance data to make the world’s fastest global network even faster</span></a></td>
    <td><span>We are using our network's vast performance data to tune congestion control algorithms, improving speeds by an average of 10% for QUIC traffic.</span></td>
  </tr>
  <tr>
    <td><a href="https://blog.cloudflare.com/code-mode/"><span>Code Mode: the better way to use MCP</span></a></td>
    <td><span>It turns out we've all been using MCP wrong. Most agents today use MCP by exposing the "tools" directly to the LLM. We tried something different: Convert the MCP tools into a TypeScript API, and then ask an LLM to write code that calls that API. The results are striking.</span></td>
  </tr>
</tbody></table>
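<p>As an illustrative sketch of the Code Mode idea above (the <i>ToolSchema</i> shape and <i>renderToolApi</i> helper are our own invented names for this example, not Cloudflare's actual implementation), one minimal way to turn MCP-style tool definitions into a TypeScript API that an LLM can write code against:</p>

```typescript
// Sketch of the "Code Mode" approach: rather than exposing MCP tool
// definitions to an LLM directly, render each tool schema as a typed
// TypeScript declaration, then prompt the model to write code that
// calls that API. The shapes below are illustrative, not Cloudflare's
// actual implementation.

interface ToolParam {
  name: string;
  type: "string" | "number" | "boolean";
}

interface ToolSchema {
  name: string;        // function name exposed to the model
  description: string; // becomes a doc comment the model can read
  params: ToolParam[];
  returns: string;     // TypeScript type of the resolved result
}

// Render a list of tool schemas as TypeScript declarations suitable
// for inclusion in the model's prompt.
function renderToolApi(tools: ToolSchema[]): string {
  return tools
    .map((t) => {
      const args = t.params.map((p) => `${p.name}: ${p.type}`).join(", ");
      return `/** ${t.description} */\n` +
             `declare function ${t.name}(${args}): Promise<${t.returns}>;`;
    })
    .join("\n\n");
}

// Example: one hypothetical tool becomes one typed declaration.
const api = renderToolApi([
  {
    name: "searchDocs",
    description: "Full-text search over a documentation corpus.",
    params: [{ name: "query", type: "string" }],
    returns: "string[]",
  },
]);
console.log(api);
```

<p>In the approach the post describes, code the model writes against these declarations would then run in a sandbox where each declared function is wired to the corresponding real MCP tool call.</p>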
    <div>
      <h3>Come build with us!</h3>
      <a href="#come-build-with-us">
        
      </a>
    </div>
    <p>Helping build a better Internet has always been about more than just technology. As announcements like our intern program and new startup hubs show, the community of people behind a better Internet matters to its future. This week, we rolled out our most ambitious set of initiatives ever to support the builders, founders, and students who are creating the future.</p><p>For founders and startups, we are thrilled to welcome <b>Cohort #6</b> to the <b>Workers Launchpad</b>, our accelerator program that gives early-stage companies the resources they need to scale. But we’re not stopping there. We’re opening our doors, literally, by launching <b>new physical hubs for startups</b> in our San Francisco, Austin, London, and Lisbon offices. These spaces will provide access to mentorship, resources, and a community of fellow builders.</p><p>We’re also investing in the next generation of talent. We announced <b>free access to the Cloudflare developer platform for all students</b>, giving them the tools to learn and experiment without limits. To provide a path from the classroom to the industry, we also announced our goal to hire <b>1,111 interns in 2026</b> — our biggest commitment yet to fostering future tech leaders.</p><p>And because a better Internet is for everyone, we’re extending our support to <b>non-profits and public-interest organizations</b>, offering them free access to our production-grade developer tools, so they can focus on their missions.</p><p>Whether you're a founder with a big idea, a student just getting started, or a team working for a cause you believe in, we want to help you succeed.</p>
    <div>
      <h3>Until next year</h3>
      <a href="#until-next-year">
        
      </a>
    </div>
    <p>Thank you to our customers, our community, and the millions of developers who trust us to help them build, secure, and accelerate the Internet. Your curiosity and feedback drive our innovation.</p><p>It’s been an incredible 15 years. And as always, we’re just getting started!</p><p><i>(Watch the full conversation on our show </i><a href="https://ThisWeekinNET.com"><i>ThisWeekinNET.com</i></a><i> about what we launched during Birthday Week 2025 </i><a href="https://youtu.be/Z2uHFc9ua9s?feature=shared"><i><b><u>here</u></b></i></a><i>.)</i></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Partners]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Workers Launchpad]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Cache]]></category>
            <category><![CDATA[Speed]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[1.1.1.1]]></category>
            <category><![CDATA[Application Security]]></category>
            <category><![CDATA[Application Services]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[CDN]]></category>
            <category><![CDATA[Cloudflare for Startups]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">4k1NhJtljIsH7GOkpHg1Ei</guid>
            <dc:creator>Nikita Cano</dc:creator>
            <dc:creator>Korinne Alpers</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare just got faster and more secure, powered by Rust]]></title>
            <link>https://blog.cloudflare.com/20-percent-internet-upgrade/</link>
            <pubDate>Fri, 26 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We’ve replaced Cloudflare’s original core system with a new modular Rust-based proxy, retiring NGINX. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare is relentless about building and running the world’s fastest network. We have been <a href="https://blog.cloudflare.com/tag/network-performance-update/"><u>tracking and reporting on our network performance since 2021</u></a>: you can see the latest update <a href="https://blog.cloudflare.com/tag/network-performance-update/"><u>here</u></a>.</p><p>Building the fastest network requires work in many areas. We invest a lot of time in our hardware, to have efficient and fast machines. We invest in peering arrangements, to make sure we can talk to every part of the Internet with minimal delay. On top of this, we also have to invest in the software we run our network on, especially as each new product can otherwise add more processing delay.</p><p>No matter how fast messages arrive, we introduce a bottleneck if that software takes too long to think about how to process and respond to requests. Today we are excited to share a significant upgrade to our software that cuts the median time we take to respond by 10ms and delivers a 25% performance boost, as measured by third-party CDN performance tests.</p><p>We've spent the last year rebuilding major components of our system, and we've just slashed the latency of traffic passing through our network for millions of our customers. At the same time, we've made our system more secure, and we've reduced the time it takes for us to build and release new products. </p>
    <div>
      <h2>Where did we start?</h2>
      <a href="#where-did-we-start">
        
      </a>
    </div>
    <p>Every request that hits Cloudflare starts a journey through our network. It might come from a browser loading a webpage, a mobile app calling an API, or automated traffic from another service. These requests first terminate at our HTTP and TLS layer, then pass into a system we call FL, and finally through <a href="https://blog.cloudflare.com/pingora-open-source/"><u>Pingora</u></a>, which performs cache lookups or fetches data from the origin if needed.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/CKW3jrB9vw4UhXx6OYJpt/928b9e9c1c794f7622fc4ad469d66b60/image3.png" />
          </figure><p>FL is the brain of Cloudflare. Once a request reaches FL, we then run the various security and performance features in our network. It applies each customer’s unique configuration and settings, from enforcing <a href="https://developers.cloudflare.com/waf/managed-rules/"><u>WAF rules </u></a>and <a href="https://www.cloudflare.com/ddos/"><u>DDoS protection</u></a> to routing traffic to the <a href="https://developers.cloudflare.com/learning-paths/workers/devplat/intro-to-devplat/"><u>Developer Platform </u></a>and <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a>. </p><p>Built more than 15 years ago, FL has been at the core of Cloudflare’s network. It enables us to deliver a broad range of features, but over time that flexibility became a challenge. As we added more products, FL grew harder to maintain, slower to process requests, and more difficult to extend. Each new feature required careful checks across existing logic, and every addition introduced a little more latency, making it increasingly difficult to sustain the performance we wanted.</p><p>You can see how FL is key to our system — we’ve often called it the “brain” of Cloudflare. It’s also one of the oldest parts of our system: the first commit to the codebase was made by one of our founders, Lee Holloway, well before our initial launch. We’re celebrating our 15th Birthday this week - this system started 9 months before that!</p>
            <pre><code>commit 39c72e5edc1f05ae4c04929eda4e4d125f86c5ce
Author: Lee Holloway &lt;q@t60.(none)&gt;
Date:   Wed Jan 6 09:57:55 2010 -0800

    nginx-fl initial configuration</code></pre>
            <p>As the commit implies, the first version of FL was built on the NGINX web server, with product logic implemented in PHP. After three years, the system had become too complex to manage effectively and too slow to respond, so we performed an almost complete rewrite of the running system. This led to another significant commit, this time made by Dane Knecht, who is now our CTO.</p>
            <pre><code>commit bedf6e7080391683e46ab698aacdfa9b3126a75f
Author: Dane Knecht
Date:   Thu Sep 19 19:31:15 2013 -0700

    remove PHP.</code></pre>
            <p>From this point on, FL was implemented using NGINX, the <a href="https://openresty.org/en/"><u>OpenResty</u></a> framework, and <a href="https://luajit.org/"><u>LuaJIT</u></a>.  While this was great for a long time, over the last few years it started to show its age. We had to spend increasing amounts of time fixing or working around obscure bugs in LuaJIT. The highly dynamic and unstructured nature of our Lua code, which was a blessing when first trying to implement logic quickly, became a source of errors and delay when trying to integrate large amounts of complex product logic. Each time a new product was introduced, we had to go through all the other existing products to check if they might be affected by the new logic.</p><p>It was clear that we needed a rethink. So, in July 2024, we cut an initial commit for a brand new, and radically different, implementation. To save time agreeing on a new name for this, we just called it “FL2”, and started, of course, referring to the original FL as “FL1”.</p>
            <pre><code>commit a72698fc7404a353a09a3b20ab92797ab4744ea8
Author: Maciej Lechowski
Date:   Wed Jul 10 15:19:28 2024 +0100

    Create fl2 project</code></pre>
            
    <div>
      <h2>Rust and rigid modularization</h2>
      <a href="#rust-and-rigid-modularization">
        
      </a>
    </div>
    <p>We weren’t starting from scratch. We’ve <a href="https://blog.cloudflare.com/how-we-built-pingora-the-proxy-that-connects-cloudflare-to-the-internet/"><u>previously blogged</u></a> about how we replaced another one of our legacy systems with Pingora, which is built in the <a href="https://www.rust-lang.org/"><u>Rust</u></a> programming language, using the <a href="https://tokio.rs/"><u>Tokio</u></a> runtime. We’ve also <a href="https://blog.cloudflare.com/introducing-oxy/"><u>blogged about Oxy</u></a>, our internal framework for building proxies in Rust. We write a lot of Rust, and we’ve gotten pretty good at it.</p><p>We built FL2 in Rust, on Oxy, and built a strict module framework to structure all the logic in FL2.</p>
    <div>
      <h2>Why Oxy?</h2>
      <a href="#why-oxy">
        
      </a>
    </div>
    <p>When we set out to build FL2, we knew we weren’t just replacing an old system; we were rebuilding the foundations of Cloudflare. That meant we needed more than just a proxy; we needed a framework that could evolve with us, handle the immense scale of our network, and let teams move quickly without sacrificing safety or performance. </p><p>Oxy gives us a powerful combination of performance, safety, and flexibility. Built in Rust, it eliminates entire classes of bugs that plagued our Nginx/LuaJIT-based FL1, like memory safety issues and data races, while delivering C-level performance. At Cloudflare’s scale, those guarantees aren’t nice-to-haves, they’re essential. Every microsecond saved per request translates into tangible improvements in user experience, and every crash or edge case avoided keeps the Internet running smoothly. Rust’s strict compile-time guarantees also pair perfectly with FL2’s modular architecture, where we enforce clear contracts between product modules and their inputs and outputs.</p><p>But the choice wasn’t just about language. Oxy is the culmination of years of experience building high-performance proxies. It already powers several major Cloudflare services, from our <a href="https://developers.cloudflare.com/cloudflare-one/"><u>Zero Trust</u></a> Gateway to <a href="https://blog.cloudflare.com/icloud-private-relay/"><u>Apple’s iCloud Private Relay</u></a>, so we knew it could handle the diverse traffic patterns and protocol combinations that FL2 would see. Its extensibility model lets us intercept, analyze, and manipulate traffic from<a href="https://www.cloudflare.com/en-gb/learning/ddos/glossary/open-systems-interconnection-model-osi/"><u> layer 3 up to layer 7</u></a>, and even decapsulate and reprocess traffic at different layers. 
That flexibility is key to FL2’s design because it means we can treat everything from HTTP to raw IP traffic consistently and evolve the platform to support new protocols and features without rewriting fundamental pieces.</p><p>Oxy also comes with a rich set of built-in capabilities that previously required large amounts of bespoke code. Things like monitoring, soft reloads, dynamic configuration loading and swapping are all part of the framework. That lets product teams focus on the unique business logic of their module rather than reinventing the plumbing every time. This solid foundation means we can make changes with confidence, ship them quickly, and trust they’ll behave as expected once deployed.</p>
    <div>
      <h2>Smooth restarts - keeping the Internet flowing</h2>
      <a href="#smooth-restarts-keeping-the-internet-flowing">
        
      </a>
    </div>
    <p>One of the most impactful improvements Oxy brings is handling of restarts. Any software under continuous development and improvement will eventually need to be updated. In desktop software, this is easy: you close the program, install the update, and reopen it. On the web, things are much harder. Our software is in constant use and cannot simply stop. A dropped HTTP request can cause a page to fail to load, and a broken connection can kick you out of a video call. Reliability is not optional.</p><p>In FL1, upgrades meant restarts of the proxy process. Restarting a proxy meant terminating the process entirely, which immediately broke any active connections. That was particularly painful for long-lived connections such as WebSockets, streaming sessions, and real-time APIs. Even planned upgrades could cause user-visible interruptions, and unplanned restarts during incidents could be even worse.</p><p>Oxy changes that. It includes a built-in mechanism for <a href="https://blog.cloudflare.com/oxy-the-journey-of-graceful-restarts/"><u>graceful restarts</u></a> that lets us roll out new versions without dropping connections whenever possible. When a new instance of an Oxy-based service starts up, the old one stops accepting new connections but continues to serve existing ones, allowing those sessions to continue uninterrupted until they end naturally.</p><p>This means that if you have an ongoing WebSocket session when we deploy a new version, that session can continue uninterrupted until it ends naturally, rather than being torn down by the restart. Across Cloudflare’s fleet, deployments are orchestrated over several hours, so the aggregate rollout is smooth and nearly invisible to end users.</p><p>We take this a step further by using systemd socket activation. Instead of letting each proxy manage its own sockets, we let systemd create and own them. This decouples the lifetime of sockets from the lifetime of the Oxy application itself. 
If an Oxy process restarts or crashes, the sockets remain open and ready to accept new connections, which will be served as soon as the new process is running. That eliminates the “connection refused” errors that could happen during restarts in FL1 and improves overall availability during upgrades.</p><p>We also replaced Go libraries like <a href="https://github.com/cloudflare/tableflip"><u>tableflip</u></a> with <a href="https://github.com/cloudflare/shellflip"><u>shellflip</u></a>, our own coordination mechanism written in Rust. It uses a restart coordination socket that validates configuration, spawns new instances, and ensures the new version is healthy before the old one shuts down. This improves feedback loops and lets our automation tools detect and react to failures immediately, rather than relying on blind signal-based restarts.</p>
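<p>As a rough illustration, the socket-activation approach described above can be expressed with a pair of systemd units like the ones below. The unit names and paths here are hypothetical, not our actual deployment; the point is that the <code>.socket</code> unit, rather than the proxy process, owns the listening socket, so the kernel keeps queuing incoming connections across a service restart or crash.</p>

```ini
# fl2-proxy.socket: systemd owns the listening socket
[Unit]
Description=FL2 proxy listening socket

[Socket]
ListenStream=443

[Install]
WantedBy=sockets.target

# fl2-proxy.service: the proxy inherits the already-open socket
[Unit]
Description=FL2 proxy
Requires=fl2-proxy.socket

[Service]
ExecStart=/usr/local/bin/fl2-proxy
# systemd hands the open file descriptor(s) to the process via the
# sd_listen_fds() protocol, so a restart never refuses connections
Restart=always
```

<p>Because the socket outlives any single process, connections arriving mid-restart simply wait in the accept queue until the replacement process picks them up.</p>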
    <div>
      <h2>Composing FL2 from Modules</h2>
      <a href="#composing-fl2-from-modules">
        
      </a>
    </div>
    <p>To avoid the problems we had in FL1, we wanted a design where all interactions between product logic were explicit and easy to understand. </p><p>So, on top of the foundations provided by Oxy, we built a platform which separates all the logic built for our products into well-defined modules. After some experimentation and research, we designed a module system which enforces some strict rules:</p><ul><li><p>No IO (input or output) can be performed by the module.</p></li><li><p>The module provides a list of <b>phases</b>.</p></li><li><p>Phases are evaluated in a strictly defined order, which is the same for every request.</p></li><li><p>Each phase defines a set of inputs which the platform provides to it, and a set of outputs which it may emit.</p></li></ul><p>Here’s an example of what a module phase definition looks like:</p>
            <pre><code>Phase {
    name: phases::SERVE_ERROR_PAGE,
    request_types_enabled: PHASE_ENABLED_FOR_REQUEST_TYPE,
    inputs: vec![
        InputKind::IPInfo,
        InputKind::ModuleValue(
            MODULE_VALUE_CUSTOM_ERRORS_FETCH_WORKER_RESPONSE.as_str(),
        ),
        InputKind::ModuleValue(MODULE_VALUE_ORIGINAL_SERVE_RESPONSE.as_str()),
        InputKind::ModuleValue(MODULE_VALUE_RULESETS_CUSTOM_ERRORS_OUTPUT.as_str()),
        InputKind::ModuleValue(MODULE_VALUE_RULESETS_UPSTREAM_ERROR_DETAILS.as_str()),
        InputKind::RayId,
        InputKind::StatusCode,
        InputKind::Visitor,
    ],
    outputs: vec![OutputValue::ServeResponse],
    filters: vec![],
    func: phase_serve_error_page::callback,
}</code></pre>
            <p>This phase is for our custom error page product.  It takes a few things as input — information about the IP of the visitor, some header and other HTTP information, and some “module values.” Module values allow one module to pass information to another, and they’re key to making the strict properties of the module system workable. For example, this module needs some information that is produced by the output of our rulesets-based custom errors product (the “<code>MODULE_VALUE_RULESETS_CUSTOM_ERRORS_OUTPUT</code>” input). These input and output definitions are enforced at compile time.</p><p>While these rules are strict, we’ve found that we can implement all our product logic within this framework. The benefit of doing so is that we can immediately tell which other products might affect each other.</p>
    <div>
      <h2>How to replace a running system</h2>
      <a href="#how-to-replace-a-running-system">
        
      </a>
    </div>
    <p>Building a framework is one thing. Building all the product logic and getting it right, so that customers don’t notice anything other than a performance improvement, is another.</p><p>The FL code base supports 15 years of Cloudflare products, and it’s changing all the time. We couldn’t stop development. So, one of our first tasks was to find ways to make the migration easier and safer.</p>
    <div>
      <h3>Step 1 - Rust modules in OpenResty</h3>
      <a href="#step-1-rust-modules-in-openresty">
        
      </a>
    </div>
    <p>Rebuilding product logic in Rust is a big enough distraction from shipping products to customers. Asking all our teams to also maintain two versions of their product logic, reimplementing every change a second time until we finished our migration, was too much.</p><p>So, we implemented a layer in our old NGINX- and OpenResty-based FL which allowed the new modules to be run. Instead of maintaining a parallel implementation, teams could implement their logic in Rust, and replace their old Lua logic with that, without waiting for the full replacement of the old system.</p><p>For example, here’s part of the implementation for the custom error page module phase defined earlier (we’ve cut out some of the more boring details, so this doesn’t quite compile as-written):</p>
            <pre><code>pub(crate) fn callback(_services: &amp;mut Services, input: &amp;Input&lt;'_&gt;) -&gt; Output {
    // Rulesets produced a response to serve - this can either come from a special
    // Cloudflare worker for serving custom errors, or be directly embedded in the rule.
    if let Some(rulesets_params) = input
        .get_module_value(MODULE_VALUE_RULESETS_CUSTOM_ERRORS_OUTPUT)
        .cloned()
    {
        // Select either the result from the special worker, or the parameters embedded
        // in the rule.
        let body = input
            .get_module_value(MODULE_VALUE_CUSTOM_ERRORS_FETCH_WORKER_RESPONSE)
            .and_then(|response| {
                handle_custom_errors_fetch_response("rulesets", response.to_owned())
            })
            .or(rulesets_params.body);

        // If we were able to load a body, serve it, otherwise let the next bit of logic
        // handle the response
        if let Some(body) = body {
            let final_body = replace_custom_error_tokens(input, &amp;body);

            // Increment a metric recording number of custom error pages served
            custom_pages::pages_served("rulesets").inc();

            // Return a phase output with one final action, causing an HTTP response to be served.
            return Output::from(TerminalAction::ServeResponse(ResponseAction::OriginError {
                status: rulesets_params.status,
                source: "rulesets http_custom_errors",
                headers: rulesets_params.headers,
                body: Some(Bytes::from(final_body)),
            }));
        }
    }
}</code></pre>
            <p>The internal logic in each module is quite cleanly separated from the handling of data, with very clear and explicit error handling encouraged by the design of the Rust language.</p><p>Many of our most actively developed modules were handled this way, allowing the teams to maintain their change velocity during our migration.</p>
    <div>
      <h3>Step 2 - Testing and automated rollouts</h3>
      <a href="#step-2-testing-and-automated-rollouts">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6OElHIMjnkIQmsbx92ZOYc/545a26ba623a253ea1a5c6b13279301d/image4.png" />
          </figure><p>It’s essential to have a seriously powerful test framework to cover such a migration. We built a system, internally named Flamingo, which allows us to run thousands of full end-to-end test requests concurrently against our production and pre-production systems. The same tests run against FL1 and FL2, giving us confidence that we’re not changing behaviours.</p><p>Whenever we deploy a change, that change is rolled out gradually across many stages, with increasing amounts of traffic. Each stage is automatically evaluated, and only passes once the full set of tests has run successfully against it and overall performance and resource usage metrics are within acceptable bounds. This system is fully automated, and pauses or rolls back changes if the tests fail.
</p><p>The benefit is that we’re able to build and ship new product features in FL2 within 48 hours - where it would have taken weeks in FL1. In fact, at least one of the announcements this week involved such a change!</p>
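<p>As a very rough sketch of the kind of gate described above (the types, field names, and thresholds are illustrative, not our real tooling), each stage evaluation boils down to a decision like this:</p>

```rust
// Hypothetical sketch of an automated rollout gate. A stage advances only
// when all tests pass AND performance/resource metrics stay within bounds.
struct StageReport {
    tests_passed: bool,
    p50_latency_ms: f64,
    cpu_utilization: f64,
}

#[derive(Debug, PartialEq)]
enum Action {
    Advance,  // promote the release to the next, larger stage
    RollBack, // tests failed or metrics regressed: undo the change
}

fn evaluate(report: &StageReport, baseline_p50_ms: f64) -> Action {
    // Illustrative thresholds: latency within 5% of baseline, CPU under 90%
    let metrics_ok = report.p50_latency_ms <= baseline_p50_ms * 1.05
        && report.cpu_utilization < 0.9;
    if report.tests_passed && metrics_ok {
        Action::Advance
    } else {
        Action::RollBack
    }
}
```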
    <div>
      <h3>Step 3 - Fallbacks</h3>
      <a href="#step-3-fallbacks">
        
      </a>
    </div>
    <p>Over 100 engineers have worked on FL2, and we have over 130 modules. And we’re not quite done yet: we're still putting the final touches on the system, to make sure it replicates all the behaviours of FL1.</p><p>So how do we send traffic to FL2 before it can handle everything? If FL2 receives a request, or a piece of configuration for a request, that it doesn’t know how to handle, it gives up and performs what we call a <b>fallback</b>: it hands the whole request over to FL1 at the network level, simply passing the raw bytes on.</p><p>As well as making it possible for us to send traffic to FL2 before it is fully complete, this has another massive benefit. When we have implemented a piece of new functionality in FL2, but want to double-check that it works the same as in FL1, we can evaluate the functionality in FL2 and then trigger a fallback anyway. Comparing the behaviour of the two systems gives us high confidence that our implementation is correct.</p>
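<p>Conceptually, the fallback decision is simple. Here is a minimal sketch with hypothetical names (FL2’s real routing logic is, of course, far richer):</p>

```rust
// Hypothetical sketch of the fallback decision: `Decision` and the
// feature-name strings are illustrative, not Cloudflare's actual types.
#[derive(Debug, PartialEq)]
enum Decision {
    Handle,        // FL2 serves the request itself
    FallbackToFl1, // pass the raw bytes through to FL1
}

// FL2 handles a request only if every feature its configuration requires
// has been implemented; anything else falls back to FL1 at the byte level.
fn route(required_features: &[&str], implemented: &[&str]) -> Decision {
    if required_features.iter().all(|f| implemented.contains(f)) {
        Decision::Handle
    } else {
        Decision::FallbackToFl1
    }
}
```

<p>Driving the second graph down, in this sketch, amounts to growing the implemented set until no request ever takes the fallback branch.</p>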
    <div>
      <h3>Step 4 - Rollout</h3>
      <a href="#step-4-rollout">
        
      </a>
    </div>
    <p>We started running customer traffic through FL2 early in 2025, and have been progressively increasing the amount of traffic served throughout the year. Essentially, we’ve been watching two graphs: one with the proportion of traffic routed to FL2 going up, and another with the proportion of traffic failing to be served by FL2 and falling back to FL1 going down.</p><p>We started this process by passing traffic for our free customers through the system. We were able to prove that the system worked correctly, and drive the fallback rates down for our major modules. Our <a href="https://community.cloudflare.com/"><u>Cloudflare Community</u></a> MVPs acted as an early warning system, smoke testing and flagging when they suspected the new platform might be the cause of a new reported problem. Crucially their support allowed our team to investigate quickly, apply targeted fixes, or confirm the move to FL2 was not to blame.</p><p>
We then advanced to our paying customers, gradually increasing the number of customers using the system. We also worked closely with some of our largest customers, who wanted the performance benefits of FL2, and onboarded them early in exchange for lots of feedback on the system.</p><p>Right now, most of our customers are using FL2. We still have a few features to complete, and are not quite ready to onboard everyone, but our target is to turn off FL1 within a few more months.</p>
    <div>
      <h2>Impact of FL2</h2>
      <a href="#impact-of-fl2">
        
      </a>
    </div>
    <p>As we described at the start of this post, FL2 is substantially faster than FL1. The biggest reason for this is simply that FL2 performs less work. You might have noticed this line in the module definition example:</p>
            <pre><code>    filters: vec![],</code></pre>
            <p>Every module can provide a set of filters, which control whether it runs or not. This means that we don’t run logic for every product for every request — we can very easily select just the required set of modules. The incremental cost for each new product we develop has gone away.</p><p>Another huge reason for better performance is that FL2 is a single codebase, implemented in a performance-focussed language. In comparison, FL1 was based on NGINX (which is written in C), combined with LuaJIT (Lua, and C interface layers), and also contained plenty of Rust modules. In FL1, we spent a lot of time and memory converting data from the representation needed by one language to the representation needed by another.</p><p>As a result, our internal measures show that FL2 uses less than half the CPU of FL1, and much less than half the memory. That’s a huge bonus — we can spend the CPU on delivering more and more features for our customers!</p>
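<p>To illustrate the idea (with hypothetical types, not FL2’s real API), a filter list can be thought of as a set of predicates that must all match before a module’s phases run; an empty list, like the <code>filters: vec![]</code> above, means the module always runs:</p>

```rust
// Hypothetical sketch of per-request module filtering; `Filter`, `Request`,
// and the setting names are illustrative, not the real FL2 platform.
struct Request {
    is_websocket: bool,
    zone_has_custom_errors: bool,
}

enum Filter {
    RequiresCustomErrors, // only run if the zone configured custom errors
    HttpOnly,             // skip for upgraded (e.g. WebSocket) traffic
}

impl Filter {
    fn matches(&self, req: &Request) -> bool {
        match self {
            Filter::RequiresCustomErrors => req.zone_has_custom_errors,
            Filter::HttpOnly => !req.is_websocket,
        }
    }
}

// A module runs only when every one of its filters matches the request;
// an empty filter list means "always run".
fn module_enabled(filters: &[Filter], req: &Request) -> bool {
    filters.iter().all(|f| f.matches(req))
}
```

<p>Because the platform evaluates these predicates up front, a request for a zone with no custom error pages configured never pays for that module’s logic at all.</p>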
    <div>
      <h2>How do we measure if we are getting better?</h2>
      <a href="#how-do-we-measure-if-we-are-getting-better">
        
      </a>
    </div>
    <p>Using our own tools and independent benchmarks like <a href="https://www.cdnperf.com/"><u>CDNPerf</u></a>, we measured the impact of FL2 as we rolled it out across the network. The results are clear: websites are responding 10 ms faster at the median, a 25% performance boost.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/eDqQvbAfrQoSXPZed0fd7/d7fe0d28e33a3f2e8dfcd27cf79c900f/image1.png" />
          </figure>
    <div>
      <h2>Security</h2>
      <a href="#security">
        
      </a>
    </div>
    <p>FL2 is also more secure by design than FL1. No software system is perfect, but the Rust language brings us huge benefits over LuaJIT. Rust has strong compile-time memory checks and a type system that avoids large classes of errors. Combine that with our rigid module system, and we can make most changes with high confidence.</p><p>Of course, no system is secure if used badly: even in Rust, it is still possible to write code that causes memory corruption. To reduce risk, we maintain strong compile-time linting and checking, together with strict coding standards, testing, and review processes.</p><p>We have long followed a policy that any unexplained crash of our systems needs to be <a href="https://blog.cloudflare.com/however-improbable-the-story-of-a-processor-bug/"><u>investigated as a high priority</u></a>. We won’t be relaxing that policy, though the main cause of novel crashes in FL2 so far has been due to hardware failure. The massively reduced rates of such crashes will give us time to do a good job of such investigations.</p>
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We’re spending the rest of 2025 completing the migration from FL1 to FL2, and will turn off FL1 in early 2026. We’re already seeing the benefits in terms of customer performance and speed of development, and we’re looking forward to giving these to all our customers.</p><p>We have one last service to completely migrate. The “HTTP &amp; TLS Termination” box from the diagram way back at the top is also an NGINX service, and we’re midway through a rewrite in Rust. We’re making good progress on this migration, and expect to complete it early next year.</p><p>After that, when everything is modular, written in Rust, tested, and scaled, we can really start to optimize! We’ll reorganize and simplify how the modules connect to each other, expand support for non-HTTP traffic like RPC and streams, and much more. </p><p>If you’re interested in being part of this journey, check out our <a href="https://www.cloudflare.com/careers/"><u>careers page</u></a> for open roles - we’re always looking for new talent to help us build a better Internet.</p><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[NGINX]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <category><![CDATA[Engineering]]></category>
            <guid isPermaLink="false">7d41Nh5ZiG8DbEjjwhJF3H</guid>
            <dc:creator>Richard Boulton</dc:creator>
            <dc:creator>Steve Goldsmith</dc:creator>
            <dc:creator>Maurizio Abba</dc:creator>
            <dc:creator>Matthew Bullock</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Observatory and Smart Shield — see how the world sees your website, and make it faster in one click]]></title>
            <link>https://blog.cloudflare.com/introducing-observatory-and-smart-shield/</link>
            <pubDate>Fri, 26 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We're announcing two enhancements to our Application Performance suite that'll show how the world sees your website, and make it faster with one click - available in the Cloudflare Dashboard! ]]></description>
            <content:encoded><![CDATA[ <p>Modern users expect instant, reliable web experiences. When your application is slow, they don’t just complain — they leave. Even delays as small as 100 ms have been <a href="https://wpostats.com/"><u>shown to have a measurable impact on revenue, conversions, bounce rate, engagement and more</u></a>. </p><p>If you’re responsible for delivering on these expectations to the users of your product, you know there are many monitoring tools that show you how visitors experience your website, and can let you know when things are slow or causing issues. This is essential, but we believe understanding the problem is only half the story. The real value comes from integrating monitoring and remedies in the same view, giving customers the ability to quickly identify and resolve issues.</p><p>That's why today, we're excited to launch the new and improved <b>Observatory</b>, now in open beta. This monitoring and <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> tool goes beyond charts and graphs, by also telling you exactly how to improve your application's performance and resilience, and immediately showing you the impact of those changes. And we’re releasing it to all subscription tiers (including Free!), available today. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/T6HhZL51aLEhzD3lQPxCq/f9b03f05cf4db0b2e61c8e861df4ecdf/1.png" />
          </figure><p>But wait, there’s more! To make your users’ experience in Cloudflare even faster, we’re launching Smart Shield, available today for all subscription tiers. Using Observatory, you can pinpoint performance bottlenecks, and for many of the most common issues, you can now apply the fix in just a few clicks with our <b>Smart Shield</b> product. Double the fun!</p>
    <div>
      <h2>Our unique perspective: leveraging data from 20% of the web</h2>
      <a href="#our-unique-perspective-leveraging-data-from-20-of-the-web">
        
      </a>
    </div>
    <p>Every day, Cloudflare handles traffic for over 20% of the web, giving us a unique vantage point into what makes websites faster and more resilient. We built Observatory to take advantage of this position, uniting data that is normally scattered across different tools — including real-user data, synthetic testing, error rates, and backend telemetry — into a single platform. This gives you a complete, cohesive picture of your application's health end-to-end, in one spot, and enables you to easily identify and resolve performance issues.</p><p>For this launch, we're bringing together:</p><ul><li><p><b>Real-user data:</b> See how your application performs for real people, in the real world.</p></li><li><p><b>Back-end telemetry:</b> Break down the lifecycle of a request to pinpoint areas for improvement.</p></li><li><p><b>Error rates:</b> Understand the stability of your application at both the edge and origin.</p></li><li><p><b>Cache hit ratios:</b> Ensure you're maximizing the performance of your configuration.</p></li><li><p><b>Synthetic testing:</b> Proactively test and monitor key endpoints with powerful, accurate simulations.</p></li></ul><p>Let's take a quick look at each data set to see how we use them in Observatory.</p>
    <div>
      <h2>Real-user data</h2>
      <a href="#real-user-data">
        
      </a>
    </div>
    <p>There are two primary forms of data collection: real-user data and synthetic data. Real-user data consists of performance metrics collected from real traffic, from real visitors to your application. It’s how users are <i>actually</i> seeing your application perform in the real world. It’s unpredictable, and covers every scenario.</p><p>Synthetic data is data collected using some sort of simulated test (loading a site in a headless browser, making network requests from a testing system to an endpoint, etc.). Tests are run under a predefined set of characteristics — location, network speed, etc. — to provide a consistent baseline.</p><p>Both forms of data have their uses, and companies with a strongly established culture of operational excellence tend to use both.</p><p>The first data you’ll see when you visit Observatory is real-user data collected with <a href="https://www.cloudflare.com/web-analytics/"><u>Real User Monitoring (RUM)</u></a>, with a particular focus on the <a href="https://www.cloudflare.com/learning/performance/what-are-core-web-vitals/"><u>Core Web Vital</u></a> metrics.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/400NHp7OBcSXNmLi5AxXb8/641f6436574e040bfbc14b56c7bfcd70/1.5.png" />
          </figure><p>This is very intentional.</p><p>Real-user data should be the source of truth when it comes to measuring performance and resiliency of your application. Even the best of synthetic data sources are always going to be an approximation. They cannot cover every possible scenario, and because they are being run from a lab environment, they will not always reveal issues that may be more sporadic and unpredictable.</p><p>Real-user metrics are also the best representation of what your users are experiencing when they access your site and, at the end of the day, that’s why we focus on improving performance, resiliency, and security for our users.</p><p>We believe so strongly in the importance of every company having access to accurate, detailed RUM data that we are providing it for free, to all accounts. In fact, we’re about to make our <a href="https://www.cloudflare.com/web-analytics/#:~:text=Privacy%20First"><u>privacy-first analytics</u></a> — which doesn’t track individual users for analytics — <a href="https://blog.cloudflare.com/the-rum-diaries-enabling-web-analytics-by-default/"><u>available by default for all free zones</u></a> (<b>excluding data from EU or UK visitors</b>), no setup necessary. We believe the right thing is arming everyone with detailed, actionable, real-user data, and we want to make it easy.</p>
    <div>
      <h2>Backend telemetry</h2>
      <a href="#backend-telemetry">
        
      </a>
    </div>
    <p>Front-end performance metrics are our best proxy for understanding the actual user experience of an application, and as a result they work great as key performance indicators (KPIs).</p><p>But they’re not enough. Every primary metric should have some level of supporting diagnostic metrics that help us understand <i>why</i> our user metrics are performing the way they are — so that we can quickly identify issues, bottlenecks, and areas of improvement.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Un8yQdUf9DZw05gfS5WVs/187901b7e636cec35655ff954b1f38c4/2.png" />
          </figure><p>While the industry has largely, and rightfully, moved on from Time to First Byte (TTFB) as a primary metric of focus, it still has value as a diagnostic metric. In fact, we analyzed our RUM data and found a very strong connection between <a href="https://developers.cloudflare.com/speed/observatory/test-results/#synthetic-tests-and-real-user-monitoring-metrics"><u>Time to First Byte and Largest Contentful Paint</u></a>.</p><p>Google’s recommended thresholds for Time to First Byte are:</p><ul><li><p>Good: &lt;= 800ms</p></li><li><p>Needs Improvement: &gt; 800ms and &lt;= 1800ms</p></li><li><p>Poor: &gt; 1800ms</p></li></ul><p>Similarly, their official thresholds for Largest Contentful Paint are:</p><ul><li><p>Good: &lt;= 2500ms</p></li><li><p>Needs Improvement: &gt; 2500ms and &lt;= 4000ms</p></li><li><p>Poor: &gt; 4000ms</p></li></ul><p>Looking across over 9 billion events, we found that when compared to the average site, sites with a “poor” (&gt;1800ms) TTFB are:</p><ul><li><p>70.1 percentage points less likely to have a “good” LCP</p></li><li><p>21.9 percentage points more likely to have a “needs improvement” LCP</p></li><li><p>48.2 percentage points more likely to have a “poor” LCP</p></li></ul><p>TTFB is an ill-defined black box, so we’re making a point to break it down into its various subparts so you can quickly pinpoint whether the issue is connection establishment, server response time, the network itself, and more. We’ll be working to break this down even further in the coming months as we expose the complete lifecycle of a request so you’re able to pinpoint <i>exactly</i> where the bottlenecks lie.</p>
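<p>The thresholds above are simple to encode. As a minimal sketch (function and constant names are illustrative, not part of Observatory), classifying a sample against Google’s published boundaries looks like this:</p>

```python
# Google's published rating thresholds, in milliseconds (from the lists above).
TTFB_THRESHOLDS = (800, 1800)   # good <= 800; needs improvement <= 1800; else poor
LCP_THRESHOLDS = (2500, 4000)   # good <= 2500; needs improvement <= 4000; else poor

def rate(value_ms: float, thresholds: tuple[float, float]) -> str:
    """Classify a metric sample as good / needs improvement / poor."""
    good, needs_improvement = thresholds
    if value_ms <= good:
        return "good"
    if value_ms <= needs_improvement:
        return "needs improvement"
    return "poor"

print(rate(650, TTFB_THRESHOLDS))   # good
print(rate(2100, TTFB_THRESHOLDS))  # poor
print(rate(3000, LCP_THRESHOLDS))   # needs improvement
```

<p>The same rule applies per sample; aggregate shares of “good” samples are what the percentage-point comparisons above are built from.</p>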
    <div>
      <h2>Errors &amp; cache ratios</h2>
      <a href="#errors-cache-ratios">
        
      </a>
    </div>
    <p>Degradation in stability and performance are frequently directly connected to configuration changes or an increase in errors. Clear visibility into these characteristics can often cut right to the heart of the issue at hand, as well as point to opportunities for improvement of the overall efficiency and effectiveness of your application.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6j89m6ONeXh9v6XL35YJjn/1d65ac83476971fc42fccc2980bc79ff/3.png" />
          </figure><p>Observatory prominently surfaces cache hit ratio and error rates for <i>both</i> the edge and origin. This complements the backend telemetry nicely, and helps to further break down the backend metrics you are seeing and pinpoint areas of improvement.</p><p>Take cache hit ratio for example. Intuitively, we know that when content is served from cache on an edge server, it should be faster than when the request has to go all the way back to the origin server. Based on our data, again, that’s exactly what we see.</p><p>If we consider our Time To First Byte thresholds again (good is &lt;= 800ms; needs improvement is &gt; 800ms and &lt;= 1800ms; poor is anything over 1800ms), when looking across 9 billion data points as collected by our RUM solution, we see that a whopping <b>91.7% of all pages served from Cloudflare’s cache have a “good” TTFB compared to 79.7% when the request has to be served from the origin server</b>.</p><p>In other words, optimizing origin performance (more on that in a bit) and moving more content to the edge are sure-fire ways to give you a much stronger performance baseline.</p>
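<p>Both statistics boil down to simple ratios over populations of requests. A quick sketch of the arithmetic (all sample numbers below are illustrative, not Cloudflare data):</p>

```python
def hit_ratio(cache_hits: int, total_requests: int) -> float:
    """Fraction of requests answered from the edge cache."""
    return cache_hits / total_requests if total_requests else 0.0

def good_ttfb_share(samples_ms: list[float], threshold_ms: float = 800) -> float:
    """Share of TTFB samples rated 'good' (<= 800 ms per Google's threshold)."""
    return sum(1 for s in samples_ms if s <= threshold_ms) / len(samples_ms)

# Illustrative TTFB samples for cached vs. origin-served pages
cached = [120, 340, 90, 760, 1900]
origin = [640, 1200, 2400, 780, 950]
print(f"cache hit ratio: {hit_ratio(9170, 10000):.1%}")   # 91.7%
print(f"good TTFB (cached): {good_ttfb_share(cached):.0%}")  # 80%
print(f"good TTFB (origin): {good_ttfb_share(origin):.0%}")  # 40%
```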
    <div>
      <h2>Accurate and detailed synthetic testing</h2>
      <a href="#accurate-and-detailed-synthetic-testing">
        
      </a>
    </div>
    <p>While real-user data is our source of truth, synthetic testing and monitoring are important as well. Because tests are run in a more controlled environment (test from this location, at this time, with these criteria, etc.), the resulting data is a lot less noisy and variable. In addition, because there is no user involved and we don’t have to worry about any observer effect, synthetic tests are able to grab a lot more information about the request and page lifecycle.</p><p>As a result, synthetic data tends to work very well for arming engineers with debugging information, as well as providing a cleaner set of data for comparing and contrasting results across different platforms, releases, and other situations.</p><p>Observatory provides two different types of synthetic tests.</p><p>The first synthetic test is a browser test. A browser test will load the requested page in a headless browser, run <a href="https://developer.chrome.com/docs/lighthouse"><u>Google’s Lighthouse</u></a> on it to report on key performance metrics, and provide some light suggestions for improvement. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3cvDSWqtBTMibgYysDgEoI/43cd0c684d3705fe021f588674a91cf6/4.png" />
          </figure><p>The second type of synthetic test Observatory provides is a network test. This is a brand-new test type in Cloudflare, focused on giving you a better breakdown of the network and back-end performance of an endpoint.</p><p>Each network test will hit the provided endpoint and record the wait time, server response time, connect time, SSL negotiation time, and total load time for the endpoint response. Because these tests are much more targeted, a single test in itself is not as valuable and can be prone to variation. That variation isn’t necessarily a bad thing; in fact, variability in these results can actually give you a better understanding of the breadth of results when real users hit that same endpoint.</p><p>For that reason, network tests trigger a series of individual runs against the provided endpoint spread out over a short period of time. The data for each response is recorded, and then presented as a histogram on the test results page, letting you see not just a single datapoint, but the long and short tail of each metric. This gives you a much more accurate representation of reality than what a single test run can provide.</p>
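<p>To make the histogram idea concrete, here is a minimal sketch of binning the durations recorded by a series of probe runs into fixed-width buckets (the probing itself is stubbed out; bin width and sample values are illustrative):</p>

```python
def histogram(samples_ms: list[float], bin_width_ms: float = 50) -> dict:
    """Bin durations into fixed-width buckets keyed by (low, high) in ms."""
    bins: dict = {}
    for s in samples_ms:
        low = (s // bin_width_ms) * bin_width_ms
        key = (low, low + bin_width_ms)
        bins[key] = bins.get(key, 0) + 1
    return dict(sorted(bins.items()))

# e.g. total load times recorded by 10 probe runs spread over a short period;
# the single 410 ms outlier shows up as a visible long tail
runs = [112, 118, 123, 130, 127, 119, 410, 125, 131, 122]
for (low, high), count in histogram(runs).items():
    print(f"{low:>5.0f}-{high:<5.0f} ms  {'#' * count}")
```

<p>A single averaged number would hide the 410 ms outlier entirely; the bucketed view keeps both the typical case and the tail visible.</p>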
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gCWSp0HCTd4iJ0rTKpEpk/a610e47596eedd6b8cedf73dfcde09ca/5.png" />
          </figure><p>You are also able to compare network tests in Observatory, by selecting two network tests that have been completed. Again, all the data points for each test will be provided in a histogram, where you can easily compare the results of the two.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6mG2bRanAGzltvkucJImue/11f56a4d3c3af4cd2a65dab834a0f0af/6.png" />
          </figure><p>We are working on improving both synthetic test types in Q4 2025, focusing on making them more powerful and diagnostic.</p><p>As we mentioned before, even at its best, synthetic data is an approximation of what is actually happening. Accuracy is critical. Inaccurate data can distract teams with variability and faulty measurements.</p><p>It’s important that these tools are as accurate and true to the real world as possible. It’s also important to us that we give back to the community, both because it’s the right thing to do, and because we believe the best way to have the highest level of confidence in the measurement tools and frameworks we’re using is the rigor and scrutiny that open-source provides.</p><p>For those reasons, we’ll be working on open-sourcing many of the testing agents we’re using to power Observatory. We’ll share more on that soon, as well as more details about how we’ve built each different testing tool, and why.</p>
    <div>
      <h2>Doing something about it: Smart Suggestions</h2>
      <a href="#doing-something-about-it-smart-suggestions">
        
      </a>
    </div>
    <p>People don’t measure for the sake of having data and pretty charts. They measure because they want to be able to stay on top of the health of their application and find ways to improve it. Data is easy. Understanding what to do about the data you’re presented is both the hardest, and most important, part.</p><p>Monitoring without action is useless.</p><p>We’re building Observatory to have a <i>relentless</i> focus on actionability. Before any new metric is presented, we take some time to explore why that metric matters, when it’s something worth addressing, and what actions you should take if those metrics need improvement.</p><p>All of that leads us to our new Smart Suggestions. Wherever possible, we want to pair each metric with a set of opinionated, data-driven suggestions for how to make things better. We want to avoid vague hand-wavy advice and instead be prescriptive and specific and precise.</p><p>For example, let’s look at one particular recommendation we provide around improving Largest Contentful Paint.</p><p>Largest Contentful Paint is a core web vital metric that measures when the largest piece of content is displayed on the screen. That piece of content could be an image, video or text.</p><p>Much like TTFB, Largest Contentful Paint is a bit of a black box by itself. While it tells us how long it takes for that content to get on screen, there are a large number of potential bottlenecks that could be causing the delay. Perhaps the server response time was very slow. Or maybe there was something blocking the content from being displayed on the page. If the object was an image or video, perhaps the filesize was large and the resulting download was slow. LCP by itself doesn’t give us that level of granularity, so it’s hard to give more than hand wavy guidance on how to address it.</p><p>Thankfully, just like we can break TTFB into subparts, we can break LCP into its subparts as well. 
Specifically, we can look at:</p><ul><li><p>Time to First Byte: how quickly the server responds to the request for HTML</p></li><li><p>Resource Load Delay: how long it takes after TTFB for the browser to discover the LCP resource</p></li><li><p>Resource Load Duration: how long it takes for the browser to download the LCP resource</p></li><li><p>Render Delay: how long it takes the browser to render the content once it has the resource in hand</p></li></ul><p>Breaking it down into these subparts, we can be much more diagnostic about what to do.</p>
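<p>Since total LCP is the sum of these four subparts, finding the dominant one is straightforward. A small sketch (field names and sample values are illustrative, not Observatory’s internals):</p>

```python
def dominant_lcp_subpart(ttfb: float, load_delay: float,
                         load_duration: float, render_delay: float) -> str:
    """Return the LCP subpart contributing the most time, with its share."""
    parts = {
        "Time to First Byte": ttfb,
        "Resource Load Delay": load_delay,
        "Resource Load Duration": load_duration,
        "Render Delay": render_delay,
    }
    total = sum(parts.values())  # total LCP is the sum of its subparts
    name, value = max(parts.items(), key=lambda kv: kv[1])
    return f"{name} ({value / total:.0%} of {total:.0f} ms LCP)"

print(dominant_lcp_subpart(ttfb=600, load_delay=1400,
                           load_duration=300, render_delay=200))
# Resource Load Delay (56% of 2500 ms LCP)
```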
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7qfKPLaTGTjJjhawTVoWAi/10ce739e376cabd7c468adfa280246dd/7.png" />
          </figure><p>In the example above, our recommendation engine analyzes the site's real-user data and notices that Resource Load Delay accounts for over 10% of total LCP time. As a result, there’s a high likelihood that the resource triggering LCP is large and could potentially be compressed to reduce file size. So we make a recommendation to enable compression using <a href="https://developers.cloudflare.com/images/polish/"><u>Polish</u></a>.</p><p>We’re very excited about the impact these suggestions will have on helping everyone quickly zero in on meaningful solutions for improving performance and resiliency, without having to wade through mountains of data to get there. As we analyze data, we’ll find more and more patterns of problems and the solutions they can map to. Expanding on our Smart Suggestions will be a constant and ongoing focus as we move forward, and we are working on adding much more content about those patterns and what we find in Q4.</p>
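<p>A Smart Suggestions-style rule like the one above can be expressed as a simple predicate over the LCP subparts. This sketch mirrors the example (the function name, field names, and the 10% threshold’s encoding are illustrative, not Cloudflare’s actual recommendation engine):</p>

```python
def suggest(lcp_subparts_ms: dict) -> list:
    """If Resource Load Delay exceeds 10% of total LCP, suggest compression."""
    suggestions = []
    total = sum(lcp_subparts_ms.values())
    if lcp_subparts_ms.get("resource_load_delay", 0) > 0.10 * total:
        suggestions.append("Consider enabling Polish to compress the LCP resource")
    return suggestions

print(suggest({"ttfb": 500, "resource_load_delay": 900,
               "resource_load_duration": 400, "render_delay": 200}))
```

<p>In a real engine, each pattern-to-fix mapping like this would be validated against observed RUM improvements after the change ships.</p>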
    <div>
      <h2>Fixing the biggest pain point: Smart Shield</h2>
      <a href="#fixing-the-biggest-pain-point-smart-shield">
        
      </a>
    </div>
    <p>Observatory gives you unprecedented insight into your application's health, but insights are only half the battle. The next challenge is acting on them, which brings us to another layer of complexity: protecting your origin. For many of our customers, proper management of origin routes and connections is one of the largest drivers of aggregate overall performance. As we mentioned before, we see a clear negative impact on user-facing performance metrics when we have to go back to the origin, and we want to make it as easy as possible for our customers to improve those experiences. Achieving this requires protecting against unnecessary load while ensuring only trusted traffic reaches your servers.</p><p>Today's customers have powerful tools to protect their origins, but achieving basic use cases remains frustratingly complex:</p><ul><li><p>Making applications faster</p></li><li><p>Reducing origin load</p></li><li><p>Understanding origin health issues</p></li><li><p>Restricting IP address access to origin servers</p></li></ul><p>These fundamental needs currently require navigating multiple APIs and dashboard settings. You shouldn't need to become an expert in each feature — we should analyze your traffic patterns and provide clear, actionable solutions.</p>
    <div>
      <h2>Smart Shield: the future of origin shielding</h2>
      <a href="#smart-shield-the-future-of-origin-shielding">
        
      </a>
    </div>
    <p>Smart Shield transforms origin protection from a complex, multi-tool challenge into a streamlined, intelligent solution that works on your behalf. Our unified API and UI combine all origin protection essentials — dynamic traffic acceleration, intelligent caching, health monitoring, and dedicated egress IPs — into one place that enables single-click configuration.</p><p>But we didn't stop at simplification. Smart Shield integrates with <b>Observatory</b> to provide both the <b>“what” </b>— identifying performance bottlenecks and health issues — and the <b>“how” </b>— delivering capabilities that increase performance, availability, and security.</p><p>This creates a continuous feedback loop: Observatory identifies problems, Smart Shield provides solutions, and real-time analytics verify the impact. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2OI8AZzHo5kW4mesYsqM7Z/e08a5961deda6246a8d4fb906f2f5483/8.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6blpvetS2fS0CNAvu1lnp2/c16e1a330c2c260df4920f85b1650917/9.png" />
          </figure><p>But what does this mean for you? </p><ul><li><p>Reduce total cost of ownership (TCO)</p></li><li><p>Reduce the time-to-value (TTV) for performance, availability, and security issues pertaining to customer origins</p></li><li><p>Enable new features without guesswork and validate effectiveness in the data</p></li></ul><p>Your time stays focused on building incredible user experiences, not becoming a configuration expert. We are excited to give time back to your customers and your engineers, while making it easy to keep your origin infrastructure optimized to delight your customers. </p>
    <div>
      <h2>Protecting and accelerating origins with smart Connection Reuse</h2>
      <a href="#protecting-and-accelerating-origins-with-smart-connection-reuse">
        
      </a>
    </div>
    <p>Keeping your origins fast and stable is a big part of what we do at Cloudflare. When you experience a traffic surge, the last thing you want is for a flood of <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><u>TLS handshakes</u></a> to knock your origin down, or for those new connections to stall your requests, leaving your users to wait for slow pages to load.</p><p>This is why we’ve made significant changes to how Cloudflare’s network talks to your origins to dramatically improve the performance of our origin connections. </p><p>When Cloudflare makes requests to your origins, we make them from a subset of the available machines in every Cloudflare data center so that we can improve your connection reuse. Until now, this pool would be sized the same by default for every application within a data center, and changes to the sizing of the pool for a particular customer would need to be made manually. This often led to suboptimal connection reuse for our customers, as we might be making requests from way more machines than were actually needed, resulting in fewer warm connection pools than we otherwise could have had. This also caused issues at our data centers from time to time, as larger applications might have more traffic than the default pool size was capable of serving, resulting in production incidents where engineers were paged and had to manually increase the fanout factor for specific customers.</p><p>Now, these pool sizes are determined automatically and dynamically. By tracking domain-level traffic volume within a data center, we can automatically scale up and scale down the number of machines that serve traffic destined for customer origin servers for any particular customer, improving both the performance of customer websites and the reliability of our network. 
A massive, high-volume website with a considerable amount of API traffic will no longer be processed by the same number of machines as a smaller and more typical website. Our systems can respond to changes in customer traffic patterns within seconds, allowing us to quickly ramp up and respond to surges in origin traffic.</p><p>Thanks to these improvements, Cloudflare now uses over 30% fewer connections across the board to talk to origins. To put this into a more understandable perspective, this translates to saving approximately 402 years of handshake time every day across our global traffic, or 12,060 years of handshake time saved per month! This means just by proxying your traffic through Cloudflare, you’ll see an average 30% reduction in the number of connections to your origin, keeping it more available while serving the same traffic volume and in turn lowering your egress fees. But, in many cases, the results observed can be far greater than 30%. For example, in one data center that is particularly heavy in API traffic, we saw a reduction in origin connections of ~60%! </p><p>Many don’t realize that making more connections to an origin requires more compute and time for systems to create TCP and SSL handshakes. This takes time away from serving content requested by your end-users and can act as a hidden tax on your performance and on your application overall.<b> We are proud to reduce the Internet's hidden tax </b>by finding intelligent, innovative ways to reduce the number of connections needed while supporting the same traffic volume.</p><p>Watch out for more updates to Smart Shield at the start of 2026 — we’re working on adding self-serve support for dedicated CDN egress IP addresses, along with significant performance, reliability, and resilience improvements!</p>
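<p>The core idea of demand-driven pool sizing can be sketched in a few lines. This is only an illustration of the principle, not Cloudflare’s actual algorithm; the capacity constant and clamp bounds are invented for the example:</p>

```python
import math

def pool_size(requests_per_second: float,
              per_machine_capacity_rps: float = 500,
              min_machines: int = 2,
              max_machines: int = 64) -> int:
    """Machines needed to serve the observed rate, clamped to safe bounds.

    Fewer machines per domain means each machine holds warmer connection
    pools to the origin, improving reuse; the clamp keeps a floor for
    redundancy and a ceiling for fanout.
    """
    needed = math.ceil(requests_per_second / per_machine_capacity_rps)
    return max(min_machines, min(max_machines, needed))

print(pool_size(120))        # small site: stays at the redundancy floor -> 2
print(pool_size(9000))       # busy site: 18 machines
print(pool_size(1_000_000))  # huge spike: clamped to the ceiling -> 64
```

<p>Recomputing this from a rolling traffic counter every few seconds gives the “respond within seconds” behavior described above.</p>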
    <div>
      <h2>Charting the course: next steps for Observatory &amp; Smart Shield</h2>
      <a href="#charting-the-course-next-steps-for-observatory-smart-shield">
        
      </a>
    </div>
    <p>We’re really excited to share these two products with everyone today. Smart Shield and Observatory combine to provide a powerful one-two punch of insight and easy remediation.</p><p>As we navigate the beta launch of Observatory, we know this is just the start.</p><p>Our vision for Observatory is to be the single source of truth for your application’s health. We know that making the right decisions requires robust, accurate data, and we want to arm our customers with the most comprehensive picture available.</p><p>In the coming months, we plan to continue driving forward with our goal of providing comprehensive data, backed by a clear path to action.</p><ul><li><p><b>Deeper, more diagnostic data. </b>We’ll continue to break down data silos, bringing in more metrics to make sure you have a truly comprehensive view of your application’s health. We’ll be focused on going deeper and being more diagnostic, breaking down every aspect of both the request and page lifecycle to give you more granular data.</p></li><li><p><b>More paths to solutions. </b>People don’t measure for the sake of looking at data, they measure to solve problems. We’re going to continue to expand our suggestions, arming you with more precise, data-driven solutions to a wider range of issues, letting you fix problems with a single click through Smart Shield and bringing a tighter feedback loop to validate the impact of your configuration updates.</p></li><li><p><b>Benchmarking against other products.</b> Some of our customers split traffic between different CDNs due to regulatory or compliance requirements. Naturally, this brings up a whole series of questions about comparing the performance of the split traffic. 
In Observatory, you can compare these today, but we have a lot of things planned to make this even easier.</p></li></ul><p>Try out <a href="https://dash.cloudflare.com/?to=/:account/:zone/speed/overview"><u>Observatory</u></a> and <a href="https://www.cloudflare.com/application-services/products/smart-shield/"><u>Smart Shield</u></a> yourself today. And if you have ideas or suggestions for making Observatory and Smart Shield better, <a href="https://docs.google.com/forms/d/e/1FAIpQLScRMJVR7SmkiloMjPciaTdLzvHzKE9v3L0c418l02a1sMRj_g/viewform?usp=sharing&amp;ouid=115763007691250405767"><u>we’re all ears and would love to talk</u></a>!</p><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[Speed]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Aegis]]></category>
            <guid isPermaLink="false">tfg3NnmVPl0IoCJgQYuao</guid>
            <dc:creator>Tim Kadlec</dc:creator>
            <dc:creator>Brian Batraski</dc:creator>
            <dc:creator>Noah Maxwell Kennedy</dc:creator>
        </item>
        <item>
            <title><![CDATA[Monitoring AS-SETs and why they matter]]></title>
            <link>https://blog.cloudflare.com/monitoring-as-sets-and-why-they-matter/</link>
            <pubDate>Fri, 26 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We will cover some of the reasons why operators need to monitor the AS-SET memberships for their ASN, and how Cloudflare Radar can help.  ]]></description>
            <content:encoded><![CDATA[ 
    <div>
      <h2>Introduction to AS-SETs</h2>
      <a href="#introduction-to-as-sets">
        
      </a>
    </div>
    <p>An <a href="https://www.apnic.net/manage-ip/using-whois/guide/as-set/"><u>AS-SET</u></a>, not to be confused with the <a href="https://datatracker.ietf.org/doc/rfc9774/"><u>recently deprecated BGP AS_SET</u></a>, is an <a href="https://irr.net/overview/"><u>Internet Routing Registry (IRR)</u></a> object that allows network operators to group related networks together. AS-SETs have been used historically for multiple purposes, such as grouping together a list of downstream customers of a particular network provider. For example, Cloudflare uses the <a href="https://irrexplorer.nlnog.net/as-set/AS13335:AS-CLOUDFLARE"><u>AS13335:AS-CLOUDFLARE</u></a> AS-SET to group together our own <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>Autonomous System Numbers</u></a> (ASNs) and our downstream Bring-Your-Own-IP (BYOIP) customer networks, so we can ultimately <a href="https://www.peeringdb.com/net/4224"><u>communicate</u></a> to other networks which prefixes they should accept from us. </p><p>In other words, an AS-SET is <i>currently</i> the way on the Internet that allows someone to attest to the networks for which they are the provider. This system of provider authorization is completely trust-based, meaning it's <a href="https://www.kentik.com/blog/the-scourge-of-excessive-as-sets/"><u>not reliable at all</u></a>, and is best-effort. The future of an RPKI-based provider authorization system is <a href="https://datatracker.ietf.org/doc/draft-ietf-sidrops-aspa-verification/"><u>coming in the form of ASPA (Autonomous System Provider Authorization),</u></a> but it will take time for standardization and adoption. Until then, we are left with AS-SETs.</p><p>Because AS-SETs are so critical for BGP routing on the Internet, network operators need to be able to monitor valid and invalid AS-SET <i>memberships </i>for their networks. 
To help network operators, Cloudflare Radar now provides a transparent, public listing of these memberships on the <a href="https://radar.cloudflare.com/routing/as13335"><u>routing page</u></a> for each ASN.</p>
    <div>
      <h2>AS-SETs and building BGP route filters</h2>
      <a href="#as-sets-and-building-bgp-route-filters">
        
      </a>
    </div>
    <p>AS-SETs are a critical component of BGP policies, and often paired with the expressive <a href="https://irr.net/rpsl-guide/"><u>Routing Policy Specification Language (RPSL)</u></a> that describes how a particular BGP ASN accepts and propagates routes to other networks. Most often, networks use an AS-SET to express what other networks should accept from them, in terms of downstream customers. </p><p>Returning to the AS13335:AS-CLOUDFLARE example, this AS-SET is published clearly on <a href="https://www.peeringdb.com/net/4224"><u>PeeringDB</u></a> for other peering networks to reference and build filters against. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2590TMppv2h4SAi7uy6xS9/617ec81e2364f470c0efe243a528f695/image6.png" />
          </figure><p>When turning up a new transit provider service, we also ask the provider networks to build their route filters using the same AS-SET. Because BGP prefixes are also created in IRR <a href="https://irr.net/registry/"><u>registries</u></a> using the <i>route</i> or <i>route6 </i><a href="https://developers.cloudflare.com/byoip/concepts/irr-entries/best-practices/"><u>objects</u></a>, peers and providers now know what BGP prefixes they should accept from us and deny the rest. A popular tool for building prefix-lists based on AS-SETs and IRR databases is <a href="https://github.com/bgp/bgpq4"><u>bgpq4</u></a>, and it’s one you can easily try out yourself. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7F2QdhcZTLEJjKNtZbBWxR/92efe32dcef67aa6d51c3b1a29218843/image3.png" />
          </figure><p>For example, to generate a Juniper router’s IPv4 prefix-list containing prefixes that AS13335 could propagate for Cloudflare and its customers, you may use: </p>
            <pre><code>% bgpq4 -4Jl CLOUDFLARE-PREFIXES -m24 AS13335:AS-CLOUDFLARE | head -n 10
policy-options {
replace:
 prefix-list CLOUDFLARE-PREFIXES {
    1.0.0.0/24;
    1.0.4.0/22;
    1.1.1.0/24;
    1.1.2.0/24;
    1.178.32.0/19;
    1.178.32.0/20;
    1.178.48.0/20;</code></pre>
            <p><sup><i>Output restricted to 10 lines; the actual prefix-list would be much longer</i></sup></p><p>This prefix list would be applied within an eBGP import policy by our providers and peers to make sure AS13335 is only able to propagate announcements for ourselves and our customers.</p>
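<p>For illustration, a provider might reference the generated prefix-list from a Juniper import policy along these lines. This is only a sketch — the policy and group names are hypothetical, and real policies typically add RPKI validation and other terms:</p>

```
policy-options {
    policy-statement IMPORT-FROM-CLOUDFLARE {
        term customer-prefixes {
            /* accept only prefixes present in the bgpq4-generated list */
            from prefix-list CLOUDFLARE-PREFIXES;
            then accept;
        }
        term reject-everything-else {
            then reject;
        }
    }
}
protocols {
    bgp {
        group CLOUDFLARE-CUSTOMER {
            import IMPORT-FROM-CLOUDFLARE;
        }
    }
}
```

<p>Rebuilding the prefix-list periodically from IRR data keeps the filter in sync as the AS-SET’s membership changes.</p>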
    <div>
      <h2>How accurate AS-SETs prevent route leaks</h2>
      <a href="#how-accurate-as-sets-prevent-route-leaks">
        
      </a>
    </div>
    <p>Let’s see how accurate AS-SETs can help prevent route leaks with a simple example. In this example, AS64502 has two providers – AS64501 and AS64503. AS64502 has accidentally messed up their BGP export policy configuration toward the AS64503 neighbor, and is exporting <b>all</b> routes, including those it receives from their AS64501 provider. This is a typical <a href="https://datatracker.ietf.org/doc/html/rfc7908#section-3.1"><u>Type 1 Hairpin route leak</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/D69Fq0jXg9MaGieS0KqZ2/42fa33a433c875591b85ce9a6db91610/image5.png" />
          </figure><p>Fortunately, AS64503 has implemented an import policy that they generated using IRR data, including AS-SETs and route objects. By doing so, they will only accept the prefixes that originate from the <a href="https://www.manrs.org/wp-content/uploads/2021/11/AS-Cones-MANRS.pdf"><u>AS Cone</u></a> of AS64502, since AS64502 is their customer. Instead of causing a major reachability or latency impact for many prefixes on the Internet, the propagating route leak is stopped in its tracks thanks to the responsible filtering by the AS64503 provider network. Again, it is worth keeping in mind that the success of this strategy depends on the accuracy of the data in the fictional AS64502:AS-CUSTOMERS AS-SET.</p>
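<p>The filtering logic that stops this leak can be sketched in a few lines: expand the customer’s AS-SET into its AS Cone, then only accept announcements whose origin ASN falls inside it. All IRR data below is fictional, matching the example topology:</p>

```python
def expand_as_cone(as_set: str, members: dict) -> set:
    """Recursively resolve an AS-SET into the set of ASNs it contains."""
    cone: set = set()
    stack = [as_set]
    seen: set = set()
    while stack:
        obj = stack.pop()
        if obj in seen:
            continue  # guard against circular AS-SET references
        seen.add(obj)
        for member in members.get(obj, set()):
            if member.startswith("AS-") or ":" in member:
                stack.append(member)   # nested AS-SET: resolve it too
            else:
                cone.add(member)       # plain ASN: part of the cone
    return cone

# Fictional IRR data: AS64502's customer AS-SET, with one nested AS-SET
irr = {
    "AS64502:AS-CUSTOMERS": {"AS64502", "AS-NESTED"},
    "AS-NESTED": {"AS64510"},
}
cone = expand_as_cone("AS64502:AS-CUSTOMERS", irr)

def accept_route(origin_asn: str) -> bool:
    """AS64503's customer-session import filter: origin must be in the cone."""
    return origin_asn in cone

print(accept_route("AS64510"))  # True: inside the customer cone
print(accept_route("AS64501"))  # False: leaked route learned from the other provider
```

<p>Real filters built with tools like bgpq4 go one step further and match the route/route6 objects (prefixes) registered for each ASN in the cone, not just the origin ASN.</p>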
    <div>
      <h2>Monitoring AS-SET misuse</h2>
      <a href="#monitoring-as-set-misuse">
        
      </a>
    </div>
    <p>Besides using AS-SETs to group together one’s downstream customers, AS-SETs can also represent other types of relationships, such as peers, transits, or IXP participation.</p><p>For example, there are 76 AS-SETs that directly include one of the Tier-1 networks, Telecom Italia / Sparkle (AS6762). Judging from their names, most of these AS-SETs represent the peers and transits of particular ASNs, AS6762 among them. You can view this output yourself at <a href="https://radar.cloudflare.com/routing/as6762#irr-as-sets"><u>https://radar.cloudflare.com/routing/as6762#irr-as-sets</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/eeAA6iWaAVd6qd2rB93VM/ff37a27156f8229639a6ec377c7eb273/image7.png" />
          </figure><p>There is nothing wrong with defining AS-SETs that contain one’s peers or upstreams as long as those AS-SETs are not submitted upstream for customer-&gt;provider BGP session filtering. In fact, an AS-SET for upstreams or peer-to-peer relationships can be useful for defining a network’s policies in RPSL.</p><p>However, some AS-SETs in the AS6762 membership list, such as AS-10099, appear to attest customer relationships.</p>
            <pre><code>% whois -h rr.ntt.net AS-10099 | grep "descr"
descr:          CUHK Customer</code></pre>
            <p>We know AS6762 is transit-free, so this customer membership must be invalid: a prime example of AS-SET misuse that would ideally be cleaned up. Many Internet Service Providers and network operators are more than happy to correct an invalid AS-SET entry when asked. Each AS-SET membership like this is worth treating as a potential risk, since it widens how far a route leak can propagate to major networks and the Internet when one happens.</p>
    <div>
      <h2>AS-SET information on Cloudflare Radar</h2>
      <a href="#as-set-information-on-cloudflare-radar">
        
      </a>
    </div>
    <p><a href="https://radar.cloudflare.com/"><u>Cloudflare Radar</u></a> is a hub that showcases global Internet traffic, attack, and technology trends and insights. Today, we are adding IRR AS-SET information to Radar’s routing section, freely available to the public via both website and API access. To view all AS-SETs an AS is a member of, directly or indirectly via other AS-SETs, a user can visit the corresponding AS’s routing page. For example, the AS-SETs list for Cloudflare (AS13335) is available at <a href="https://radar.cloudflare.com/routing/as13335#irr-as-sets"><u>https://radar.cloudflare.com/routing/as13335#irr-as-sets</u></a>.</p><p>The AS-SET data in the IRR contains only limited information, such as the AS members and AS-SET members. On Radar, we enhance the AS-SET table with the following additional information:</p><ul><li><p><code>Inferred ASN</code> shows the AS number inferred to be the creator of the AS-SET. We use the PeeringDB AS-SET information when a match is available; otherwise, we parse the AS-SET name to infer the creator.</p></li><li><p><code>IRR Sources</code> shows the IRR databases in which we see the corresponding AS-SET. We currently use the following databases: <code>AFRINIC</code>, <code>APNIC</code>, <code>ARIN</code>, <code>LACNIC</code>, <code>RIPE</code>, <code>RADB</code>, <code>ALTDB</code>, <code>NTTCOM</code>, and <code>TC</code>.</p></li><li><p><code>AS Members</code> and <code>AS-SET members</code> show the counts of the corresponding member types.</p></li><li><p><code>AS Cone</code> is the count of unique ASNs included in the AS-SET, directly or indirectly.</p></li><li><p><code>Upstreams</code> is the count of unique AS-SETs that include the corresponding AS-SET.</p></li></ul><p>Users can further filter the table by searching for a specific AS-SET name or ASN. A toggle to show only direct or indirect AS-SETs is also available.</p>
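The AS Cone computation can be sketched as a recursive expansion over IRR membership data. The membership map below is toy data for illustration, not real IRR contents, and the colon-based member test is a simplification (real tooling distinguishes aut-num and as-set objects by type):

```python
# Toy IRR data: each AS-SET maps to its direct members (ASNs or other AS-SETs).
MEMBERS = {
    "AS64502:AS-CUSTOMERS": ["AS64510", "AS64511", "AS64510:AS-CUSTOMERS"],
    "AS64510:AS-CUSTOMERS": ["AS64512", "AS64510"],
}

def as_cone(as_set, seen=None):
    """Return the set of unique ASNs included directly or indirectly."""
    seen = seen if seen is not None else set()
    if as_set in seen:  # guard against circular AS-SET references
        return set()
    seen.add(as_set)
    cone = set()
    for member in MEMBERS.get(as_set, []):
        if ":" in member or member in MEMBERS:  # another AS-SET: recurse
            cone |= as_cone(member, seen)
        else:                                   # a plain ASN
            cone.add(member)
    return cone

print(sorted(as_cone("AS64502:AS-CUSTOMERS")))
# ['AS64510', 'AS64511', 'AS64512']
```

Note that the cone has three unique ASNs even though the nested AS-SET lists AS64510 again: deduplication is what makes "unique ASNs" the right unit for this metric.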
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/0ssTf7bi6yjT2m0YKWPJE/e20b18a7d3151652fecbe606bbe13346/image1.png" />
          </figure><p>In addition to listing AS-SETs, we also provide a tree view that displays how an AS-SET includes a given ASN. For example, the following screenshot shows how as-delta indirectly includes AS6762 through 7 other AS-SETs. Users can copy or download this tree view in text format, making it easy to share with others.</p>
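The chain behind such a tree view can be recovered with a breadth-first search over the same kind of membership data. Again, the membership graph here is toy data for illustration:

```python
from collections import deque

# Toy membership graph: AS-SET -> direct members (AS-SETs or ASNs).
MEMBERS = {
    "as-delta": ["AS-TRANSITS"],
    "AS-TRANSITS": ["AS-TIER1S"],
    "AS-TIER1S": ["AS6762", "AS3356"],
}

def inclusion_path(top, target):
    """Shortest chain of memberships from `top` down to `target`, if any."""
    queue = deque([[top]])
    visited = {top}
    while queue:
        path = queue.popleft()
        for member in MEMBERS.get(path[-1], []):
            if member == target:
                return path + [member]
            if member not in visited:
                visited.add(member)
                queue.append(path + [member])
    return None  # target is not reachable from this AS-SET

print(inclusion_path("as-delta", "AS6762"))
# ['as-delta', 'AS-TRANSITS', 'AS-TIER1S', 'AS6762']
```

BFS guarantees the shortest such chain, which is usually the most useful one to show when explaining why an ASN appears inside an AS-SET.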
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2hNbh2gdj2F0eLTYrzjrVN/eceb588456067a387e7cb6eb3e1e3c5e/image4.png" />
          </figure><p>We built this Radar feature using our <a href="https://developers.cloudflare.com/api/resources/radar/subresources/entities/subresources/asns/methods/as_set/"><u>publicly available API</u></a>, the same way other parts of the Radar website are built. We have also experimented with using this API to build additional features, like a full AS-SET tree visualization. We encourage developers to give <a href="https://developers.cloudflare.com/api/resources/radar/subresources/entities/subresources/asns/methods/as_set/"><u>this API</u></a> (and <a href="https://developers.cloudflare.com/api/resources/radar/"><u>other Radar APIs</u></a>) a try, and tell us what you think!</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ElaU3M5oe8xRnblrrf67u/3fa35d3a25d797c0b0cbe96f0490fa93/image8.png" />
          </figure>
    <div>
      <h2>Looking ahead</h2>
      <a href="#looking-ahead">
        
      </a>
    </div>
    <p>We know AS-SETs are hard to keep clean of error or misuse, and even though Radar is making them easier to monitor, the mistakes and misuse will continue. Because of this, we as a community need to push forward the adoption of <a href="https://datatracker.ietf.org/doc/rfc9234/"><u>RFC9234</u></a> and <a href="https://blog.apnic.net/2025/09/05/preventing-route-leaks-made-simple-bgp-roleplay-with-junos-rfc-9234/"><u>implementations</u></a> of it from the major vendors. RFC9234 embeds roles and an Only-To-Customer (OTC) attribute directly into the BGP protocol itself, helping to detect and prevent route leaks in-line. In addition to the BGP misconfiguration protection of RFC9234, Autonomous System Provider Authorization (ASPA) is still making its way <a href="https://datatracker.ietf.org/doc/draft-ietf-sidrops-aspa-verification/"><u>through the IETF</u></a> and will eventually offer an authoritative means of attesting the actual providers of each BGP Autonomous System (AS).</p><p>If you are a network operator and manage an AS-SET, you should seriously consider moving to <a href="https://manrs.org/2022/12/why-network-operators-should-use-hierarchical-as-sets/"><u>hierarchical AS-SETs</u></a> if you have not already. A hierarchical AS-SET looks like AS13335:AS-CLOUDFLARE instead of AS-CLOUDFLARE, but the difference is very important: only a proper maintainer of the AS13335 ASN can create AS13335:AS-CLOUDFLARE, whereas anyone could create AS-CLOUDFLARE in an IRR database if they wanted to. In other words, using hierarchical AS-SETs helps guarantee ownership and prevents the malicious poisoning of routing information.</p><p>While keeping track of AS-SET memberships seems like a chore, it can have significant payoffs in preventing BGP-related <a href="https://blog.cloudflare.com/cloudflare-1111-incident-on-june-27-2024/"><u>incidents</u></a> such as route leaks.
We encourage all network operators to do their part by making sure the AS-SETs they submit to their providers and peers to communicate their downstream customer cone are accurate. Every small adjustment or clean-up effort in AS-SETs could lessen the impact of a BGP incident later.</p><p>Visit <a href="https://radar.cloudflare.com/"><u>Cloudflare Radar</u></a> for additional insights into Internet disruptions, routing issues, Internet traffic trends, attacks, Internet quality, and more. Follow us on social media at <a href="https://twitter.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>https://noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky), or contact us via <a><u>e-mail</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[RPKI]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <category><![CDATA[Radar]]></category>
            <guid isPermaLink="false">6QVNgwE5ZlVbZcWQHJKsDS</guid>
            <dc:creator>Mingwei Zhang</dc:creator>
            <dc:creator>Bryton Herdes</dc:creator>
        </item>
        <item>
            <title><![CDATA[An AI Index for all our customers]]></title>
            <link>https://blog.cloudflare.com/an-ai-index-for-all-our-customers/</link>
            <pubDate>Fri, 26 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare will soon automatically create an AI-optimized search index for your domain, and expose a set of ready-to-use standard APIs and tools including an MCP server, LLMs.txt, and a search API. ]]></description>
            <content:encoded><![CDATA[ <p>Today, we’re announcing the <b>private beta</b> of <b>AI Index </b>for domains on Cloudflare, a new type of web index that gives content creators the tools to make their data discoverable by AI, and gives AI builders access to better data for fair compensation.</p><p>With AI Index enabled on your domain, we will automatically create an AI-optimized search index for your website, and expose a set of ready-to-use standard APIs and tools including an MCP server, LLMs.txt, and a search API. Our customers will own and control that index and how it’s used, and you will have the ability to monetize access through <a href="https://developers.cloudflare.com/ai-crawl-control/features/pay-per-crawl/what-is-pay-per-crawl/"><u>Pay per crawl</u></a> and the new <a href="https://blog.cloudflare.com/x402/"><u>x402 integrations</u></a>. You will be able to use it to build modern search experiences on your own site, and more importantly, interact with external AI and Agentic providers to make your content more discoverable while being fairly compensated.</p><p>For AI builders—whether developers creating agentic applications, or AI platform companies providing foundational LLM models—Cloudflare will offer a new way to discover and retrieve web content: direct <b>pub/sub connections</b> to individual websites with AI Index. Instead of indiscriminate crawling, builders will be able to subscribe to specific sites that have opted in for discovery, receive structured updates as soon as content changes, and pay fairly for each access. Access is always at the discretion of the site owner.</p><p>From the individual indexes, Cloudflare will also build an aggregated layer, the <b>Open Index</b>, that bundles together participating sites. Builders get a single place to search across collections or the broader web, while every site still retains control and can earn from participation. </p>
    <div>
      <h3>Why build an AI Index?</h3>
      <a href="#why-build-an-ai-index">
        
      </a>
    </div>
    <p>AI platforms are quickly becoming one of the main ways people discover information online. Whether asking a chatbot to summarize a news article or find a product recommendation, the path to that answer almost always starts with crawling original content and indexing or using that data for training. However, today, that process is largely controlled by the platforms: what gets crawled, how often, and whether the site owner has any input in the matter.</p><p>Although Cloudflare now offers tools to monitor and control how AI services access your content and whether they respect your access policies, it is still challenging to make new content visible. Content creators have no efficient way to signal to AI builders when a page is published or updated. For AI builders, in turn, crawling and recrawling unstructured content is costly and wastes resources, especially when the quality and cost are not known in advance.</p><p>We need a fairer and healthier ecosystem for content discovery and usage that bridges the gap between content creators and AI builders.</p>
    <div>
      <h3>How AI Index will work</h3>
      <a href="#how-ai-index-will-work">
        
      </a>
    </div>
    <p>When you onboard a domain to Cloudflare, or if you have an existing domain on Cloudflare, you will have the choice to enable an AI Index. If enabled, we will automatically create an AI-optimized search index for your domain that you own and control.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3kV7Oru6D5jPWeGeWDQDsi/7d738250f24250cf98db2e96222319ec/image1.png" />
          </figure><p>As your site updates and grows, the index will evolve with it. New or updated pages will be processed in real-time using the same technology that powers Cloudflare <a href="https://developers.cloudflare.com/ai-search/"><u>AI Search (formerly AutoRAG)</u></a> and its <a href="https://developers.cloudflare.com/ai-search/configuration/data-source/website/"><u>Website</u></a> data source. Best of all, we will manage everything; you won't have to worry about each individual component of compute, storage resources, databases, embeddings, chunking, or AI models. Everything will happen behind the scenes, automatically.</p><p>Importantly, you will have control over what content to <b>include or exclude </b>from your website's index, and <b>who</b> can access your content via <b>AI</b> <b>Crawl Control</b>, ensuring that only the data you want to expose is made searchable and accessible. You also will be able to opt out of the AI Index completely; it will all be up to you.</p><p>When your AI Index is set up, you will get a set of ready-to-use APIs:</p><ul><li><p><b>An MCP Server: </b>Agentic applications will be able to connect directly to your site using the <a href="https://www.cloudflare.com/learning/ai/what-is-model-context-protocol-mcp/"><u>Model Context Protocol (MCP)</u></a>, making your content discoverable to agents in a standardized way. This includes support for <a href="https://developers.cloudflare.com/ai-search/how-to/nlweb/"><u>NLWeb</u></a> tools, an open project developed by Microsoft that defines a standard protocol for natural language queries on websites.</p></li><li><p><b>A flexible search API: </b>This endpoint will return relevant results in structured JSON.
</p></li><li><p><b>LLMs.txt and LLMs-full.txt: </b>Standard files that provide LLMs with a machine-readable map of your site, following <a href="https://github.com/AnswerDotAI/llms-txt"><u>emerging open standards</u></a>. These will help models understand how to use your site’s content at inference time. An example of <a href="https://developers.cloudflare.com/llms.txt"><u>llms.txt</u></a> exists in the Cloudflare Developer Documentation.</p></li><li><p><b>A bulk data API: </b>An endpoint for transferring large amounts of content efficiently, available under the rules you set. Instead of querying for every document, AI providers will be able to ingest it in one shot.</p></li><li><p><b>Pub-sub subscriptions: </b>AI platforms will be able to subscribe to your site’s index and receive events and content updates directly from Cloudflare in a structured format in real-time, making it easy for them to stay current without re-crawling.</p></li><li><p><b>Discoverability directives:</b> Entries in robots.txt and at well-known URIs that let AI agents and crawlers visiting your site discover and use the available APIs automatically.</p></li></ul>
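To make the llms.txt idea concrete, here is a minimal illustrative file following the emerging format. The site name and URLs are hypothetical:

```markdown
# Example Site

> A short, plain-language summary of what this site offers, written for LLMs.

## Docs

- [Getting started](https://example.com/docs/getting-started.md): setup guide
- [API reference](https://example.com/docs/api.md): endpoints and parameters

## Optional

- [Changelog](https://example.com/changelog.md): release history
```

The format is deliberately simple: an H1 with the site name, a blockquote summary, and H2 sections listing links with brief descriptions, so a model can decide what to fetch at inference time.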
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Hr3EhsMBH0oVwMVKywwre/2a01efbe03d67a8154123b63c05c000f/image3.png" />
          </figure><p>The index will integrate directly with <a href="https://developers.cloudflare.com/ai-crawl-control/"><u>AI Crawl Control</u></a>, so you will be able to see who’s accessing your content, set rules, and manage permissions. And with <a href="https://developers.cloudflare.com/ai-crawl-control/features/pay-per-crawl/what-is-pay-per-crawl/"><u>Pay per crawl</u></a> and <a href="https://blog.cloudflare.com/x402/"><u>x402 integrations</u></a>, you can choose to directly monetize access to your content. </p>
    <div>
      <h3>A feed of the web for AI builders</h3>
      <a href="#a-feed-of-the-web-for-ai-builders">
        
      </a>
    </div>
    <p>As an AI builder, you will be able to discover and subscribe to high-quality, permissioned web data through individual sites’ AI indexes. Instead of sending crawlers blindly across the open Internet, you will connect via a pub/sub model: participating websites will expose structured updates whenever their content changes, and you will be able to subscribe to receive those updates in real-time. With this model, your new workflow may look something like this:</p><ol><li><p><b>Discover websites that have opted in: </b>Browse and filter through a directory of websites that make their indexes available through Cloudflare.</p></li><li><p><b>Evaluate content with metadata and metrics: </b>Get content metadata on various metrics (e.g., uniqueness, depth, contextual relevance, popularity) before accessing the content.</p></li><li><p><b>Pay fairly for access:</b> When content is valuable, platforms can compensate creators directly through Pay per crawl. These payments not only enable access but also support the continued creation of original content, helping to sustain a healthier ecosystem for discovery.</p></li><li><p><b>Subscribe to updates: </b>Use pub-sub subscriptions to receive events about changes made by the website, so you know when to retrieve or crawl for new content without wasting resources on constant re-crawling.</p></li></ol><p>By shifting from blind crawling to a permissioned pub/sub system for the web, AI builders save time, cut costs, and gain access to cleaner, high-quality data while content creators remain in control and are fairly compensated.</p>
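None of this API surface is final, but the pub/sub model itself is simple to sketch. The broker below is an in-memory stand-in written only to illustrate the shape of the workflow; the class, method names, and event fields are all hypothetical:

```python
# Toy in-memory stand-in for a pub/sub content feed; all names are hypothetical.
from collections import defaultdict

class ToyFeed:
    def __init__(self):
        self.subscribers = defaultdict(list)  # site -> list of handlers

    def subscribe(self, site, handler):
        """An AI builder registers interest in one site's index."""
        self.subscribers[site].append(handler)

    def publish(self, site, event):
        """The site's index emits an event when a page is added or updated."""
        for handler in self.subscribers[site]:
            handler(event)

feed = ToyFeed()
received = []
feed.subscribe("example.com", received.append)
feed.publish("example.com", {"url": "https://example.com/post-1", "action": "updated"})
feed.publish("other.org", {"url": "https://other.org/a", "action": "created"})
print(received)  # only the example.com event was delivered
```

The point of the model is visible even in this sketch: the subscriber receives only the updates it asked for, instead of re-crawling every site to discover what changed.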
    <div>
      <h3>The aggregated Open Index</h3>
      <a href="#the-aggregated-open-index">
        
      </a>
    </div>
    <p>Individual indexes provide AI platforms with the ability to access data directly from specific sites, allowing them to subscribe for updates, evaluate value, and pay for full content access on a per-site basis. But when builders need to work at a larger scale, managing dozens or hundreds of separate subscriptions can become complex. The <b>Open Index </b>will provide an additional option: a bundled, opt-in collection of those indexes, with filters for quality, uniqueness, originality, and depth of content, all accessible in one place.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6rjkK5UCh9BLSqceUuG0RI/92413aed318baced0ee8812bec511cfb/image2.png" />
          </figure><p>The Open Index is designed to make content discovery at scale easier:</p><ul><li><p><b>Get unified access: </b>Query and retrieve data across many participating sites simultaneously. This reduces integration overhead and enables builders to plug into a curated collection of data, or use it as a ready-made web search layer that can be accessed at query time.</p></li><li><p><b>Discover broader scopes: </b>Work with topic-specific bundles (e.g., news, documentation, scientific research) or a general discovery index covering the broader web. This makes it simple to explore new content sources you may not have identified individually.</p></li><li><p><b>Bottom-up monetization: </b>Results still originate from an individual site’s AI index, with monetization flowing back to that site through Pay per crawl, helping preserve fairness and sustainability at scale.</p></li></ul><p>Together, per-site AI indexes and the Open Index will provide flexibility and precise control when you want full content from individual sites (i.e., for training, AI agents, or search experiences), and broad search coverage when you need a unified search across the web.</p>
    <div>
      <h3>How you can participate in the shift</h3>
      <a href="#how-you-can-participate-in-the-shift">
        
      </a>
    </div>
    <p>With AI Index and the Cloudflare Open Index, we’re creating a model where websites decide how their content is accessed, and AI builders receive structured, reliable data at scale to build a fairer and healthier ecosystem for content discovery and usage on the Internet.</p><p>We’re starting with a <b>private beta</b>. If you want to enroll your website into the AI Index or access the pub/sub web feed as an AI builder, you can <a href="https://www.cloudflare.com/aiindex-signup/"><b><u>sign up today</u></b></a>.</p> ]]></content:encoded>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[AI Search]]></category>
            <category><![CDATA[MCP]]></category>
            <guid isPermaLink="false">7rcW6x4j6v7O6ZEHir5fmK</guid>
            <dc:creator>Celso Martinho</dc:creator>
            <dc:creator>Anni Wang</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing new regional Internet traffic and Certificate Transparency insights on Cloudflare Radar]]></title>
            <link>https://blog.cloudflare.com/new-regional-internet-traffic-and-certificate-transparency-insights-on-radar/</link>
            <pubDate>Fri, 26 Sep 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Radar now offers a Certificate Transparency dashboard for monitoring TLS certificate activity,  and new regional traffic insights for a sub-national perspective on Internet trends. ]]></description>
            <content:encoded><![CDATA[ <p>Since <a href="https://blog.cloudflare.com/introducing-cloudflare-radar/"><u>launching during Birthday Week in 2020</u></a>, Radar has announced significant new capabilities and data sets during subsequent Birthday Weeks. We continue that tradition this year with a two-part launch, adding more dimensions to Radar’s ability to slice and dice the Internet.</p><p>First, we’re adding <a href="#introducing-regional-internet-traffic-insights-on-radar"><u>regional traffic insights</u></a>. Regional traffic insights bring a more localized perspective to the traffic trends shown on Radar.</p><p>Second, we’re adding detailed <a href="#introducing-certificate-transparency-insights-on-radar"><u>Certificate Transparency (CT) data</u></a>, too. The new CT data builds on the <a href="https://blog.cloudflare.com/tag/certificate-transparency/"><u>work that Cloudflare has been doing around CT</u></a> since 2018, including <a href="https://blog.cloudflare.com/a-tour-through-merkle-town-cloudflares-ct-ecosystem-dashboard/"><u>Merkle Town</u></a>, our initial CT dashboard.</p><p>Both features extend Radar's mission of providing deeper, more granular visibility into the health and security of the Internet. Below, we dig into these new capabilities and data sets.</p>
    <div>
      <h2>Introducing regional Internet traffic insights on Radar</h2>
      <a href="#introducing-regional-internet-traffic-insights-on-radar">
        
      </a>
    </div>
    <p>Cloudflare Radar initially launched with visibility into Internet traffic trends at a national level: want to see how that Internet shutdown impacted <a href="https://radar.cloudflare.com/traffic/iq?dateStart=2025-08-28&amp;dateEnd=2025-09-03#traffic-trends"><u>traffic in Iraq</u></a>, or what <a href="https://radar.cloudflare.com/adoption-and-usage/in#ipv4-vs-ipv6"><u>IPv6 adoption looks like in India</u></a>? It’s visible on Radar. Just a year and a half later, in March 2022, <a href="https://blog.cloudflare.com/asn-on-radar/"><u>we launched Autonomous System (ASN) pages on Radar</u></a>. This has enabled us to bring more granular visibility to many of our metrics: What’s network performance like on <a href="https://radar.cloudflare.com/quality/as701"><u>AS701 (Verizon Fios)</u></a>? How thoroughly has <a href="https://radar.cloudflare.com/routing/as812#routing-statistics"><u>AS812 (Rogers Communications)</u></a> implemented routing security? Did <a href="https://radar.cloudflare.com/traffic/as58322?dateStart=2025-08-28&amp;dateEnd=2025-09-03"><u>AS58322 (Halasat)</u></a> just go offline? It’s all visible on Radar.</p><p>However, sometimes Internet usage shifts on a more local level — maybe a sporting event in a particular region drives people online to find out more information. Or maybe a storm or other natural disaster causes infrastructure damage and power outages in a given state, impacting Internet traffic.</p><p>For the last few years, the Radar team relied on internal data sets and <a href="https://jupyter.org/"><u>Jupyter</u></a> notebooks to visualize these “sub-national” traffic shifts. But today, we are bringing that insight to Cloudflare Radar, and to you, with the launch of regional traffic insights. With this new capability, you’ll be able to see traffic trends at a more local level, including bytes and requests, as well as breakouts of desktop/mobile device and bot/human traffic shares. 
And for even more granular visibility, within the Data Explorer, you’ll also be able to select an autonomous system to join with the regional selection — for example, <a href="https://radar.cloudflare.com/explorer?dataSet=netflows&amp;loc=6254926&amp;dt=7d&amp;asn=as7922"><u>looking at AS7922 (Comcast) in Massachusetts (United States)</u></a>.</p>
    <div>
      <h3>Geographic guidance</h3>
      <a href="#geographic-guidance">
        
      </a>
    </div>
    <p>In line with common industry practice, the region names displayed on Radar are sourced from GeoNames (<a href="http://geonames.org"><u>geonames.org</u></a>), a crowdsourced geographical database. Specifically, we are using the “<a href="https://www.geonames.org/export/codes.html"><u>first-order administrative divisions</u></a>” listed for each country — for example, the states of the United States, the departments of Honduras, or the provinces of Canada. Those geographical names reflect data provided by GeoNames; for more information, please refer to their <a href="https://www.geonames.org/about.html"><u>About</u></a> page.</p><p>Requests logged by Cloudflare’s services include the IP address of the device making the request. The address range (“prefix”) that includes this address is associated with a GeoNames ID within our IP address geolocation data, and we then match that GeoNames ID with the associated country and “first-order administrative division” found in the GeoNames dataset. (For example: 155.246.1.142 → 155.246.0.0/16 → GeoNames ID 5101760 → United States &gt; New Jersey.)</p>
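The lookup chain in that example can be sketched with Python's standard-library ipaddress module. The prefix-to-GeoNames table here is a toy stand-in for the real geolocation feed, which holds many prefixes and uses longest-prefix matching:

```python
import ipaddress

# Toy geolocation table: prefix -> (GeoNames ID, country, region).
# Real lookups run against a full IP geolocation feed, not this dict.
PREFIX_TO_REGION = {
    ipaddress.ip_network("155.246.0.0/16"): (5101760, "United States", "New Jersey"),
}

def locate(ip):
    """Find the covering prefix for an address and return its region tuple."""
    addr = ipaddress.ip_address(ip)
    for prefix, region in PREFIX_TO_REGION.items():
        if addr in prefix:  # membership test: is the address inside the prefix?
            return region
    return None  # address not covered by any known prefix

print(locate("155.246.1.142"))
# (5101760, 'United States', 'New Jersey')
```

At production scale this linear scan would be replaced by a radix-tree (longest-prefix-match) structure, but the mapping itself is the same: address, to covering prefix, to GeoNames ID, to country and division.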
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4DfCm0p0xIwNdgaXd5y1UF/ce843c0714c7b490fd757dc1d0d60b6c/image9.png" />
          </figure>
    <div>
      <h3>Drilling down into Radar traffic data</h3>
      <a href="#drilling-down-into-radar-traffic-data">
        
      </a>
    </div>
    <p>Within Cloudflare Radar, there are several ways to get to this regional data. If you know the name of the region of interest, you can type it into the search bar at the top of the page, and select it from the results. For example, beginning to type <b>Massachusetts</b> returns the U.S. state, linked to its regional traffic page. Typing the region name into the <b>Traffic in</b> dropdown at the top of a <b>Traffic</b> page will also return the same set of results.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7CX1gUqYX6VCpxhzI1YoIs/54977900f36dab7697f08813f6fd06be/image11.png" />
          </figure><p>Radar’s country-level pages now have a new <b>Traffic characteristics by region</b> card that includes both summary and time series views of regional traffic. The summary view is presented as a map and table, similar to the <b>Traffic characteristics</b> card in the Worldwide traffic view. After selecting a metric from the dropdown at the top right of the card, the table and map are updated to reflect the relevant summary values for the chosen time period. Within the paginated table, the region names are linked, and clicking one will take you to the relevant page. Within the map, the summary values are represented by circles placed in the centroid of each region, sized in relation to their value. Clicking a circle will take you to the relevant page.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5jJwcTEjoJfMPLuah6i1DB/aece30541e70850d52369a7997bbe064/image8.png" />
          </figure><p>Below the summary map and table, the card also includes a time series graph of traffic at a regional level for the top five highest traffic regions within the country. These graphs can reveal interesting regional differences in traffic patterns. For example, the <a href="https://radar.cloudflare.com/traffic/iq?dateStart=2025-09-02&amp;dateEnd=2025-09-08#traffic-volume-by-region"><b><u>Traffic volume by region in Iraq</u></b></a> graph for HTTP request traffic shown below highlights the differing Internet shutdown schedules (<a href="https://x.com/CloudflareRadar/status/1960324662740529354"><u>Kurdistan Region</u></a>, <a href="https://x.com/CloudflareRadar/status/1960329607892066370"><u>central and southern Iraq</u></a>) across the different governorates. On days when the schedules do not overlap, such as September 2 and 7, traffic from the Erbil and Sulaymaniyah governorates, which are located in the Kurdistan Region, does not drop concurrent with the loss in traffic observed in Baghdad and Basra.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6cW34uKtkKqMdky0RIVRia/03a961f1e39dfaad04cffe06f368bdea/image18.png" />
          </figure>
    <div>
      <h3>Mobile vs. desktop device traffic trends</h3>
      <a href="#mobile-vs-desktop-device-traffic-trends">
        
      </a>
    </div>
    <p>Over the past several years, a number of Radar blog posts have explored how human activity impacts Internet traffic, including <a href="https://blog.cloudflare.com/offline-celebrations-how-christmas-nye-and-lunar-new-year-festivities-shape-online-behavior/"><u>holiday celebrations</u></a>, <a href="https://blog.cloudflare.com/elections-2024-internet/"><u>elections</u></a>, and the <a href="https://blog.cloudflare.com/paris-2024-summer-olympics-impacted-internet-traffic/"><u>Paris 2024 Summer Olympics</u></a>. With the new regional views, this impact now becomes even clearer at a more local level. For instance, mobile devices account for, on average, just over half of the request traffic seen from <a href="https://radar.cloudflare.com/traffic/184742?dateStart=2025-08-22&amp;dateEnd=2025-09-04#mobile-vs-desktop"><u>Nairobi County in Kenya</u></a>. A clear diurnal pattern is seen on weekdays, where mobile device usage drops during workday hours, and then rises again in the evening. However, during the weekends, mobile traffic remains elevated, presumably due to fewer people using desktop computers in office environments, as well as fewer desktop computers in use at home, in line with Kenya’s <a href="https://www.ca.go.ke/mobile-data-and-digital-services-rise-ca-report-shows"><u>mobile-first</u></a> culture.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5QgT4OGpdgvXiQJX8GbzEP/62947e34d96bdf85a863f3396f95b094/image17.png" />
          </figure>
    <div>
      <h3>Bot vs. human traffic trends</h3>
      <a href="#bot-vs-human-traffic-trends">
        
      </a>
    </div>
    <p>Similar to how the mobile vs. desktop view exposes shifts in human activity, bot vs. human traffic insights do as well. One interpretation of the graph below is that <a href="https://radar.cloudflare.com/traffic/2267056?dateStart=2025-08-29&amp;dateEnd=2025-09-04#bot-vs-human"><u>overnight bot activity from Lisbon</u></a> increased significantly during the first few days of September. However, since the graph shows traffic shares, and given the timing of the apparent increases, the more likely cause is progressively larger drops in human-driven traffic – users in Lisbon appear to begin logging off around 23:00 UTC (midnight local time), and start getting back online around 05:00 UTC (06:00 local time). The shares and shifts will obviously vary by country and region, but they can provide a perspective on the nocturnal habits of users in a region.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/36GundkM2BKTWvCq7T7On2/a5028340a0e8b3a55f85df8116a6a7fe/image16.png" />
          </figure>
    <div>
      <h3>Customize regional analysis with Radar’s Data Explorer</h3>
      <a href="#customize-regional-analysis-with-radars-data-explorer">
        
      </a>
    </div>
    <p>Within the Data Explorer, you can use the breakdown options and filters to customize your analysis of regional traffic data.</p><p>At a country level, choosing to break down by region generates a stacked area graph that shows the relative traffic shares of the top 20 regions in the selected country, along with a bar graph showing summary share values. For example, the <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;loc=US&amp;dt=7d&amp;groupBy=adm1"><u>graph below</u></a> shows that in aggregate, Virginia and California are responsible for just over a quarter of the HTTP request volume in the United States.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2AVUcmEpxse9cAKx16qH07/5ad9be3e7bcaef3dedeb33ef90f95184/image27.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2vJKLjUpKGupoPB6kkmavv/5a988c99fd99324060cbdf97054f7f28/image3.png" />
          </figure><p>You can also use Data Explorer to drill down on traffic at a network (ASN) level in a given region, in both summary and timeseries views. For example, <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;loc=6254926&amp;dt=7d&amp;groupBy=ases"><u>looking at HTTP request traffic for Massachusetts by ASN</u></a>, we can see that <a href="https://radar.cloudflare.com/as7922"><u>AS7922</u></a> (Comcast) accounts for a third, followed by <a href="https://radar.cloudflare.com/as701"><u>AS701</u></a> (Verizon Fios, 15%), <a href="https://radar.cloudflare.com/as21928"><u>AS21928</u></a> (T-Mobile, 8.8%), <a href="https://radar.cloudflare.com/as6167"><u>AS6167</u></a> (Verizon Wireless, 5.1%), <a href="https://radar.cloudflare.com/as7018"><u>AS7018</u></a> (AT&amp;T, 4.7%), and <a href="https://radar.cloudflare.com/as20115"><u>AS20115</u></a> (Charter/Spectrum, 4.5%). Over 70% of the request traffic is concentrated in these six providers, with nearly half of that from one provider.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4qdfiHtKJ32IDX1loKqCvK/238d47750ab4aa13ae1c80b1b2f16e27/image2.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7qsEetiInP5TwYEBoQvWum/0c4a9d01417e67633de5f69d5c98f53f/image19.png" />
          </figure><p>Going a level deeper, you can also look at traffic trends over time for an <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>ASN</u></a> within a given region, and even compare it with another time period. The <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;loc=6254926&amp;dt=7d&amp;asn=as7922&amp;timeCompare=1"><u>graph below</u></a> shows traffic for AS7922 (Comcast) in Massachusetts over a seven-day period, compared with the prior week. While the traffic volumes on most days were largely in line with the previous week, Saturday and Sunday were noticeably higher. These differences may reflect a shift in human activity, as September 6 &amp; 7 were quite rainy in Massachusetts, so people may have spent more time indoors and online. (The prior weekend was Labor Day weekend, but those Saturday and Sunday traffic levels were in line with the preceding weekend.) You can also add another ASN to the traffic trends comparison. <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;loc=6254926&amp;dt=2025-09-04_2025-09-10&amp;timeCompare=1&amp;compAsn=as701&amp;asn=as7922&amp;compareWith=6254926"><u>Selecting Massachusetts (</u><b><u>Location</u></b><u>) and AS701 (</u><b><u>ASN)</u></b><u> (Verizon Fios)</u></a> in the <b>Compare</b> section finds that traffic on that network was higher on Saturday and Sunday as well, lending credence to the rainy weekend theory.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7yB2jNi8gqRkS4IaaqwH8c/17f74f7e9f84b0cbe2200651f32053cb/image5.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4u2EsKhCmm9QS6B7iYEXu2/7f2b626f30fc29489bf551c5c7be4623/image4.png" />
          </figure><p>Regional comparisons, whether within the same country or across different countries, are also possible in Data Explorer. For instance, if the Kansas City Chiefs and Philadelphia Eagles were to meet yet again in the Super Bowl, the configuration below could be used to <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;loc=4398678&amp;dt=1d&amp;timeCompare=1&amp;compareWith=6254927"><u>compare traffic patterns in the teams’ respective home states</u></a>, as well as comparing the trends with the previous week, showing how human activity impacted traffic over the course of the game.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/15OLkOMtK5I1YlK9uredOz/3f71d3e25a3f2f4065e9b9ac8896409a/image26.png" />
          </figure><p>As always, the data powering the visualizations described above are also available through the Radar API. The <code>timeseries_groups</code> and <code>summary</code> methods for the <a href="https://developers.cloudflare.com/api/resources/radar/subresources/netflows/"><u>NetFlows</u></a> and <a href="https://developers.cloudflare.com/api/resources/radar/subresources/http/"><u>HTTP</u></a> endpoints now have an <code>ADM1</code> dimension, allowing traffic to be broken down by first-order administrative divisions. In addition, the new <code>geoId</code> filter for the NetFlows and HTTP endpoints allows you to filter the results by a specific geolocation, using its GeoNames ID. And finally, there are new <a href="https://developers.cloudflare.com/api/resources/radar/subresources/geolocations/methods/get/"><code><u>get</u></code></a> and <a href="https://developers.cloudflare.com/api/resources/radar/subresources/geolocations/methods/list/"><code><u>list</u></code></a> endpoints for fetching geolocation details.</p>
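<p>As an illustration, a request for the new regional breakdown might be constructed as follows. This is a sketch only: the <code>geoId</code> filter and <code>ADM1</code> dimension come from the description above, while the exact base path and the other parameter names are assumptions to be checked against the API reference.</p>

```typescript
// Sketch: build a Radar API URL for HTTP traffic broken down by
// first-order administrative division (ADM1), filtered by GeoNames ID.
// The base path and the dateRange parameter are assumptions; only the
// geoId filter and the ADM1 dimension are taken from the text above.
function radarTimeseriesUrl(dimension: string, geoId: string, dateRange = "7d"): string {
  const base = "https://api.cloudflare.com/client/v4/radar/http/timeseries_groups";
  const params = new URLSearchParams({ geoId, dateRange });
  return `${base}/${dimension}?${params}`;
}

// 6254926 is the GeoNames ID the Data Explorer links above use for Massachusetts.
const url = radarTimeseriesUrl("adm1", "6254926");
// An authenticated request would then look like:
//   fetch(url, { headers: { Authorization: `Bearer ${API_TOKEN}` } });
```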
    <div>
      <h3>A note regarding data quantity and quality</h3>
      <a href="#a-note-regarding-data-quantity-and-quality">
        
      </a>
    </div>
    <p>As you’d expect, the more traffic we see from a given geography, the better the “signal”, and the clearer the associated graph is — this is generally the case when traffic is aggregated at a country level. However, for some smaller or less populous regions, especially in developing countries or countries with poor Internet connectivity, lower traffic will likely cause the signal to be weaker, resulting in graphs that appear spiky or incomplete. (Note that this will also be true for region+ASN views.) An illustrative example is shown below, for <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;loc=408666&amp;dt=2025-08-29_2025-09-04&amp;timeCompare=1#result"><u>Northern Darfur State in Sudan</u></a>. Traffic is observed somewhat inconsistently, resulting in the spikes seen in the graph. Similarly, the “Previous 7 days” line is largely incomplete, indicating a lack of traffic data for that period. In these cases, it will be hard to draw definitive conclusions from such graphs.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/76nx7WvxtkJZUQcwjQ1zT8/fb8f119576eff87219d2e6f2867225dc/image23.png" />
          </figure><p>Although the Internet arguably transcends geographical boundaries, the reality is that usage patterns can vary by location, with traffic trends that reflect more localized human activity. The new regional insights on Cloudflare Radar traffic pages, and in the Data Explorer, provide a perspective at a sub-national level. We are exploring the potential to go a level deeper in the future, providing traffic data for “second-order administrative divisions” (such as counties, cities, etc.).</p><p>If you share our regional traffic graphs on social media, be sure to tag us: <a href="https://x.com/CloudflareRadar"><u>@CloudflareRadar</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky). If you have questions or comments, you can reach out to us on social media, or contact us via <a><u>email</u></a>.</p>
    <div>
      <h2>Introducing Certificate Transparency insights on Radar</h2>
      <a href="#introducing-certificate-transparency-insights-on-radar">
        
      </a>
    </div>
    <p>Just as we're bringing more granular detail to traffic patterns, we're also shedding more light on the very foundation of trust on the Internet: TLS certificates. <a href="https://en.wikipedia.org/wiki/Certificate_authority"><u>Certificate Authorities (CAs)</u></a> serve as trusted gatekeepers for the Internet: any website that wants to prove its identity to clients must present a certificate issued by a CA that the client trusts. But how do we know that CAs themselves are trustworthy and only issue certificates they are authorized to issue?</p><p>That’s where <a href="https://blog.cloudflare.com/azul-certificate-transparency-log/#what-is-certificate-transparency"><u>Certificate Transparency (CT)</u></a> comes in. Clients that enforce CT (most major browsers) will only trust a website certificate if it is both signed by a trusted CA <i>and</i> has proof that the certificate has been added to a public, append-only CT log, so that it can be publicly audited. Only recently, CT played a key role in detecting the <a href="https://blog.cloudflare.com/unauthorized-issuance-of-certificates-for-1-1-1-1/"><u>unauthorized issuance of certificates for 1.1.1.1</u></a>, a <a href="https://one.one.one.one/"><u>public DNS resolver service</u></a> that Cloudflare operates.</p><p>In addition to its role as a vital safety mechanism for the Internet, CT has proven to be invaluable in other ways, as it provides publicly-accessible lists of <i>all website certificates used on the Internet</i>. This dataset is a treasure trove of intelligence for researchers measuring the Internet, security teams detecting malicious activity like phishing campaigns, or penetration testers mapping a target’s external attack surface.</p><p>The sheer amount of data (multiple terabytes) available in CT makes it difficult for regular Internet users to download and explore themselves. 
Instead, services like <a href="https://crt.sh"><u>crt.sh</u></a>, <a href="https://www.censys.com/"><u>Censys</u></a>, and <a href="https://www.merklemap.com/"><u>Merklemap</u></a> provide easy search interfaces to allow discoverability for specific domain names and certificates. We <a href="https://blog.cloudflare.com/a-tour-through-merkle-town-cloudflares-ct-ecosystem-dashboard/"><u>launched</u></a> <a href="https://ct.cloudflare.com/"><u>Merkle Town</u></a> in 2018 to share broad insights into the CT ecosystem using data from our own CT monitoring service.</p><p><a href="https://radar.cloudflare.com/certificate-transparency"><u>Certificate Transparency on Cloudflare Radar</u></a> is the next evolution of Merkle Town, providing integration with security and domain information already on Radar and more interactive ways to explore and analyze CT data. (For long-time Merkle Town users, we’re keeping it around until we’ve reached full feature parity.)</p><p>In the sections below, we’ll walk you through the features available in the new dashboard.</p>
    <div>
      <h3>Certificate volume and characteristics</h3>
      <a href="#certificate-volume-and-characteristics">
        
      </a>
    </div>
    <p>The <a href="https://radar.cloudflare.com/certificate-transparency"><u>CT page</u></a> leads with a view of how many certificates are being issued and logged over time. Because the same certificate can appear multiple times within a single log or be submitted to several logs, the total count can be inflated. To address this, two distinct lines are shown: one for total entries and another for unique entries. Uniqueness, however, is calculated only within the selected time range — for example, if certificate C is added to log A in one period and to log B in another, it will appear in the unique count for both periods. It is also important to note that the CT charts and date filters use the log timestamp, which is the time a certificate was added to a CT log. Additionally, the data displayed on the page was collected from the logs monitored by Cloudflare — delays, backlogs, or other inconsistencies may exist, so <a href="https://radar.cloudflare.com/about"><u>please report</u></a> any issues or discrepancies.</p><p>Alongside this chart is a comparison between <a href="https://radar.cloudflare.com/certificate-transparency#entry-type"><u>certificates and pre-certificates</u></a>. A <a href="https://datatracker.ietf.org/doc/html/rfc6962#section-3.1"><u>pre-certificate</u></a> is a special type of certificate used in CT that allows a CA to publicly log a certificate before it is officially issued. CAs are not required to log full certificates if corresponding pre-certificates have already been logged (although many CAs do anyway), so typically there are more pre-certificates logged than full certificates, as seen in the chart.</p>
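<p>The per-period uniqueness rule can be made concrete with a short sketch (hypothetical data; entries are keyed by certificate fingerprint within each period):</p>

```typescript
interface LogEntry { fingerprint: string; log: string; period: string }

// Count total vs. unique CT log entries per time period. Uniqueness is
// computed only within each period, so a certificate logged in different
// periods contributes to the unique count of each one.
function countPerPeriod(entries: LogEntry[]): Map<string, { total: number; unique: number }> {
  const buckets = new Map<string, { total: number; seen: Set<string> }>();
  for (const e of entries) {
    const b = buckets.get(e.period) ?? { total: 0, seen: new Set<string>() };
    b.total += 1;
    b.seen.add(e.fingerprint);
    buckets.set(e.period, b);
  }
  const result = new Map<string, { total: number; unique: number }>();
  buckets.forEach((b, p) => result.set(p, { total: b.total, unique: b.seen.size }));
  return result;
}

// Certificate C is added to log A in week 1 and to log B in week 2,
// where log B happens to record it twice:
const counts = countPerPeriod([
  { fingerprint: "C", log: "A", period: "week1" },
  { fingerprint: "C", log: "B", period: "week2" },
  { fingerprint: "C", log: "B", period: "week2" },
]);
// week1: total 1, unique 1; week2: total 2, unique 1
```

<p>Certificate C appears in the unique count for both periods, exactly as described above.</p>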
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3QslYrX5Ao6PI6QVXECXeW/a640c1f7959ed1bff834acdcf375fb34/image10.png" />
          </figure><p>While certificate issuance trends are interesting on their own, analyzing the <a href="https://radar.cloudflare.com/certificate-transparency?dateRange=7d#certificate-characteristics"><u>characteristics</u></a> of issued certificates provides deeper insight into the state of the web’s trust infrastructure. Starting with the <a href="https://radar.cloudflare.com/certificate-transparency?dateRange=7d#public-key-algorithm"><u>public key algorithm</u></a>, which defines how secure connections are established between clients and servers, we found that more than 65% of certificates still use <a href="https://en.wikipedia.org/wiki/RSA_cryptosystem"><u>RSA</u></a>, while the remainder use <a href="https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm"><u>ECDSA</u></a>. RSA remains dominant due to its long-standing compatibility with a wide range of clients, while ECDSA is increasingly adopted for its efficiency and smaller key sizes, which can improve performance and reduce computational overhead. In the coming years, we expect <a href="https://blog.cloudflare.com/another-look-at-pq-signatures/"><u>post-quantum signature algorithms</u></a> like ML-DSA to appear when public CAs begin to offer support.</p><p>Next, a breakdown of certificates by <a href="https://radar.cloudflare.com/certificate-transparency?dateRange=7d#signature-algorithm"><u>signature algorithm</u></a> reveals how Certificate Authorities (CAs) sign the certificates they issue. Most certificates (over 65%) use RSA with SHA-256, followed by ECDSA with SHA-384 at 19%, ECDSA with SHA-256 at 12%, and a small fraction using other algorithms. 
The choice of signature algorithm reflects a balance between widespread support, security, and performance, with stronger algorithms like ECDSA gradually gaining traction for modern deployments.</p><p>Certificates are also categorized by <a href="https://radar.cloudflare.com/certificate-transparency?dateRange=7d#validation-level"><u>validation level</u></a>, which reflects the degree to which the CA has verified the identity of the certificate requester. The <a href="https://www.cloudflare.com/learning/ssl/types-of-ssl-certificates/"><u>main validation types</u></a> are Domain Validation (DV), Organization Validation (OV), and Extended Validation (EV). DV certificates verify only control of the domain, OV certificates verify both domain control and the organization behind it, and EV certificates involve more rigorous checks and display additional identity information in browsers. The industry trend is toward simpler, automated issuance, with DV certificates now making up almost 98% of issued certificates, while EV issuance has become largely obsolete.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/77Vz97OhHE5Aoz9qBKDk88/36f419262376870592198f0348d77106/image22.png" />
          </figure><p>Finally, the chart on <a href="https://radar.cloudflare.com/certificate-transparency?dateRange=7d#certificate-duration"><u>certificate duration</u></a> shows the difference between the NotBefore and NotAfter dates embedded in each certificate, which define the period during which the certificate is valid. Currently, the majority (92%) of issued certificates have durations between 47 and 100 days. Shorter certificate lifetimes improve security by limiting exposure if a certificate is compromised, and the industry is <a href="https://cabforum.org/working-groups/server/baseline-requirements/requirements/#632-certificate-operational-periods-and-key-pair-usage-periods"><u>moving toward even shorter durations</u></a>, driven by browser policies and automated renewal systems.</p>
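<p>The duration plotted is simply the difference between those two embedded timestamps; a minimal sketch:</p>

```typescript
// Certificate validity: the difference between the NotAfter and NotBefore
// timestamps embedded in the certificate, expressed in days.
function validityDays(notBefore: string, notAfter: string): number {
  return (Date.parse(notAfter) - Date.parse(notBefore)) / 86_400_000; // ms per day
}

// A hypothetical short-lived certificate in the 47-100 day band:
const days = validityDays("2025-09-01T00:00:00Z", "2025-11-30T00:00:00Z");
// → 90
```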
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2i4oiEIAarzTDIG4x7dT9o/fe00dd0ce8770c05dbf7689367e2d957/image15.png" />
          </figure>
    <div>
      <h3>Certificate issuance</h3>
      <a href="#certificate-issuance">
        
      </a>
    </div>
    <p><a href="https://radar.cloudflare.com/certificate-transparency#certificate-issuance"><u>Certificate issuance</u></a> is the process by which CAs generate certificates for domain owners. Many CAs are operated by larger organizations that manage multiple subordinate CAs under a single corporate umbrella. The CT page highlights the distribution of certificate issuance across the <a href="https://radar.cloudflare.com/certificate-transparency#certificate-authority-owners"><u>top CA owners</u></a>. At the moment, the Internet Security Research Group (ISRG), also known as <a href="https://letsencrypt.org/"><u>Let’s Encrypt</u></a>, issues more than 66% of all certificates, followed by other widely used CA owners including Google Trust Services, Sectigo, and GoDaddy.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1wmj8AYvZIfjpzVehR72t4/73eec7d37fae4793e2303cc7ccb51944/image6.png" />
          </figure><p>The impact of events like the <a href="https://letsencrypt.status.io/pages/incident/55957a99e800baa4470002da/687e8d62b8a4e804fad85799"><u>July 21-22 Let’s Encrypt API outage</u></a>, caused by internal DNS failures, is visible in this visualization, as issuance rates dropped significantly during the two-day period.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3vUk1k5aiZghiNg6jYS0HD/497a81ff097861dc1617ac9122c675ad/image12.png" />
          </figure><p>In addition to CA owners, the page provides a breakdown of certificate issuance by <a href="https://radar.cloudflare.com/certificate-transparency#certificate-authorities"><u>individual CA certificates</u></a>. Among the top five CAs, Let’s Encrypt’s four intermediate CAs — R12, R13, E7, and E8 — represent the bulk of its issuance. The bar chart can also be filtered by CA owner to display only the certificates associated with a specified organization.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6L8Q56bPtAt593qWT7qOh3/dbf4a1a2165a7ea867c4e0d2b9184469/image13.png" />
          </figure><p>The CT section also offers dedicated CA-specific pages. By searching for a CA name or fingerprint in the top search bar, you can reach a page showing all insights and trends available on the main CT page, filtered by the selected CA. The page also includes an additional CA information card, which provides details such as the CA’s owner, revocation status, parent certificate, validity period, country, inclusion in public root stores, and a list of all CAs operated by the same owner. All of this information is derived from the <a href="https://www.ccadb.org/"><u>Common CA Database (CCADB)</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2UgeS0wen7kY2tqYIdMyEW/e43096ad73311ed66135e753ed4933de/image24.png" />
          </figure>
    <div>
      <h3>Certificate Transparency logs</h3>
      <a href="#certificate-transparency-logs">
        
      </a>
    </div>
    <p>Next on the CT page is a section focused on <a href="https://radar.cloudflare.com/certificate-transparency?dateRange=12w#certificate-transparency-logs"><u>CT logs</u></a>. This section shows the distribution of certificates across <a href="https://radar.cloudflare.com/certificate-transparency?dateRange=12w#ct-log-operators"><u>CT log operators</u></a>, identifying the organizations that manage the infrastructure behind the logs. Over the last three months, Sectigo operated the logs containing the largest number of certificates (2.8 billion), followed by Google (2.5 billion), Cloudflare (1.6 billion), and Let’s Encrypt (1.4 billion). Note that the same certificate can be logged multiple times across CT logs, so organizations that operate multiple CT logs with overlapping acceptance criteria may log certificates at an elevated rate. As such, the relative rank of the operators in this graph should not be construed as a measure of how load-bearing the logs are within the ecosystem.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3XcKb0WNUsDvTWO3PEFcRz/78d6199e415c5ac5f587dfe348de0c10/image21.png" />
          </figure><p>Below this, a bar chart displays the distribution of certificates across <a href="https://radar.cloudflare.com/certificate-transparency?dateRange=12w#ct-log-usage"><u>individual CT logs</u></a>. Among the top five logs are Google’s <a href="https://radar.cloudflare.com/certificate-transparency/log/xenon2025h2"><u>xenon2025h2</u></a> and <a href="https://radar.cloudflare.com/certificate-transparency/log/argon2025h2"><u>argon2025h2</u></a>, Cloudflare’s <a href="https://radar.cloudflare.com/certificate-transparency/log/nimbus2025"><u>nimbus2025</u></a>, and Let’s Encrypt’s <a href="https://radar.cloudflare.com/certificate-transparency/log/oak2025h2"><u>oak2025h2</u></a>. This chart can also be filtered by operator to show only the logs associated with a specific owner. Next to the chart, another view shows the distribution of certificates by <a href="https://radar.cloudflare.com/certificate-transparency?dateRange=12w#ct-log-api"><u>log API</u></a>, distinguishing between logs following the original <a href="https://datatracker.ietf.org/doc/html/rfc6962"><u>RFC 6962</u></a> API versus those compatible with the newer and more efficient <a href="https://c2sp.org/static-ct-api"><u>static CT API</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/582GIIREPmMXZULgwPPo4g/46db84cbd3cae894eb61f5014a0a942f/image14.png" />
          </figure><p>Similar to the dedicated CA pages, the CT section also provides log-specific pages. By searching for a log name in the top search bar, you can access a page showing all insights and trends available on the main CT page, filtered by the selected log. Two additional cards are included: one showing information about the log, derived from <a href="https://googlechrome.github.io/CertificateTransparency/log_lists.html"><u>Google Chrome’s log list</u></a>, including details such as the operator, API type, documentation, and a list of other logs operated by the same organization; and another displaying performance metrics with two <a href="https://en.wikipedia.org/wiki/Radar_chart"><u>radar charts</u></a> tracking uptime and response time over the past 90 days, as observed by Cloudflare’s CT monitor. These metrics are useful to determine if logs are meeting the ongoing requirements for inclusion in CT programs like <a href="https://googlechrome.github.io/CertificateTransparency/log_policy.html#ongoing-requirements-of-included-logs"><u>Google's</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5I7hFPCrkslctFyAc7OqIQ/d63881a27edd892900eb82841f63176e/image1.png" />
          </figure>
    <div>
      <h3>Certificate coverage</h3>
      <a href="#certificate-coverage">
        
      </a>
    </div>
    <p>Last but not least, the CT page includes a section on <a href="https://radar.cloudflare.com/certificate-transparency?dateRange=12w#certificate-coverage"><u>certificate coverage</u></a>. Certificates can cover multiple top-level domains (TLDs), include wildcard entries, and support IP addresses in <a href="https://developers.cloudflare.com/ssl/origin-configuration/origin-ca/#hostname-and-wildcard-coverage"><u>Subject Alternative Names (SANs)</u></a>.</p><p>The <a href="https://radar.cloudflare.com/certificate-transparency?dateRange=12w#certificate-tld-distribution"><u>distribution of pre-certificates across the top 10 TLDs</u></a> highlights the domains most commonly covered. <code>.com</code> leads with 45% of certificates, followed by other popular TLDs such as <code>.dev</code> and <code>.net</code>.</p><p>Next to this view, two half-donut charts provide further insights into certificate coverage: one shows the share of certificates that <a href="https://radar.cloudflare.com/certificate-transparency?dateRange=12w#wildcard-usage"><u>include wildcard entries</u></a> — almost 25% of certificates use wildcards to cover multiple subdomains — while the other shows certificates that <a href="https://radar.cloudflare.com/certificate-transparency?dateRange=12w#ip-address-inclusion"><u>include IP addresses</u></a>, revealing that the vast majority of certificates do not contain IPs in their SAN fields.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6wRxEpgJ1Of8Rw1O9LoQvE/badaa97eaa2017b07d617e06651e7283/image7.png" />
          </figure>
    <div>
      <h3>Expanded domain certificate data </h3>
      <a href="#expanded-domain-certificate-data">
        
      </a>
    </div>
    <p>The <a href="https://radar.cloudflare.com/domains/domain"><u>domain information</u></a> page has also been updated to provide richer details about certificates. The certificates table, which displays certificates recorded in active CT logs for the specified domain, now includes expandable rows. Expanding a row reveals further information, including the certificate’s SHA-256 fingerprint, subject and issuer details — Common Name (CN), Organization (O), and Country (C) — the validity period (<code>NotBefore</code> and <code>NotAfter</code>), and the CT log where the certificate was found.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/nqIVexwwCgY0WE0X8JAJk/6df280953ab4fdcce3bd34f476915242/image20.png" />
          </figure><p>While the charts above highlight key insights in the CT ecosystem, all underlying data is accessible via the <a href="https://developers.cloudflare.com/api/resources/radar/subresources/ct/"><u>API</u></a> and can be explored interactively across time periods, CAs, logs, and additional filters and dimensions using <a href="https://radar.cloudflare.com/explorer?dataSet=ct"><u>Radar’s Data Explorer</u></a>. And as always, Radar charts and graphs can be downloaded for sharing or embedded directly into blogs, websites, and dashboards for further analysis. Don’t hesitate to <a href="https://radar.cloudflare.com/about"><u>reach out to us</u></a> with feedback, suggestions, and feature requests — we’re already working through a list of early feedback from the CT community!</p><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Mobile]]></category>
            <category><![CDATA[Certificate Transparency]]></category>
            <guid isPermaLink="false">6Ye6iffpYFZnLxuwqVQDL</guid>
            <dc:creator>David Belson</dc:creator>
            <dc:creator>André Jesus</dc:creator>
            <dc:creator>Luke Valenta</dc:creator>
        </item>
        <item>
            <title><![CDATA[Code Mode: the better way to use MCP]]></title>
            <link>https://blog.cloudflare.com/code-mode/</link>
            <pubDate>Fri, 26 Sep 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ It turns out we've all been using MCP wrong. Most agents today use MCP by exposing the "tools" directly to the LLM. ]]></description>
            <content:encoded><![CDATA[ <p>It turns out we've all been using MCP wrong.</p><p>Most agents today use MCP by directly exposing the "tools" to the <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/"><u>LLM</u></a>.</p><p>We tried something different: Convert the MCP tools into a TypeScript API, and then ask an LLM to write code that calls that API.</p><p>The results are striking:</p><ol><li><p>We found agents are able to handle many more tools, and more complex tools, when those tools are presented as a TypeScript API rather than directly. Perhaps this is because LLMs have an enormous amount of real-world TypeScript in their training set, but only a small set of contrived examples of tool calls.</p></li><li><p>The approach really shines when an agent needs to string together multiple calls. With the traditional approach, the output of each tool call must feed into the LLM's neural network, just to be copied over to the inputs of the next call, wasting time, energy, and tokens. When the LLM can write code, it can skip all that, and only read back the final results it needs.</p></li></ol><p>In short, LLMs are better at writing code to call MCP, than at calling MCP directly.</p>
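<p>To make the idea concrete, here is a minimal sketch of the conversion step: turning an MCP tool definition into a TypeScript declaration that an LLM can write code against. The <code>McpTool</code> shape is simplified for illustration and the type mapping is deliberately naive; this is not Cloudflare's actual implementation.</p>

```typescript
// Sketch: convert a (simplified) MCP tool definition into a documented
// TypeScript function declaration for an LLM to write code against.
interface McpTool {
  name: string;
  description: string;
  inputSchema: { properties: Record<string, { type: string }> };
}

function toTypeScriptDecl(tool: McpTool): string {
  // Naive JSON Schema -> TypeScript type mapping, for illustration only.
  const params = Object.entries(tool.inputSchema.properties)
    .map(([name, p]) => `${name}: ${p.type === "number" ? "number" : "string"}`)
    .join(", ");
  return `/** ${tool.description} */\n` +
    `declare function ${tool.name}(args: { ${params} }): Promise<unknown>;`;
}

const decl = toTypeScriptDecl({
  name: "get_current_weather",
  description: "Get current weather for a location.",
  inputSchema: { properties: { location: { type: "string" } } },
});
// decl:
// /** Get current weather for a location. */
// declare function get_current_weather(args: { location: string }): Promise<unknown>;
```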
    <div>
      <h2>What's MCP?</h2>
      <a href="#whats-mcp">
        
      </a>
    </div>
    <p>For those who aren't familiar: <a href="https://modelcontextprotocol.io/docs/getting-started/intro"><u>Model Context Protocol</u></a> is a standard protocol for giving AI agents access to external tools, so that they can directly perform work, rather than just chat with you.</p><p>Seen another way, MCP is a uniform way to:</p><ul><li><p>expose an API for doing something,</p></li><li><p>along with documentation needed for an LLM to understand it,</p></li><li><p>with authorization handled out-of-band.</p></li></ul><p>MCP has been making waves throughout 2025 as it has dramatically expanded the capabilities of AI agents.</p><p>The "API" exposed by an MCP server is expressed as a set of "tools". Each tool is essentially a remote procedure call (RPC) function – it is called with some parameters and returns a response. Most modern LLMs have <a href="https://developers.cloudflare.com/workers-ai/features/function-calling/"><u>the capability to use "tools" (sometimes called "function calling")</u></a>, meaning they are trained to output text in a certain format when they want to invoke a tool. The program invoking the LLM sees this format and invokes the tool as specified, then feeds the results back into the LLM as input.</p>
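<p>Concretely, a tool pairs a name and documentation with a JSON Schema describing its parameters. A minimal, illustrative definition (the field names follow the MCP specification; the weather tool itself is hypothetical):</p>

```typescript
// An illustrative MCP tool definition: an RPC-style function described by a
// name, documentation for the LLM, and a JSON Schema for its input.
const getCurrentWeather = {
  name: "get_current_weather",
  description: "Returns current weather conditions for a location.",
  inputSchema: {
    type: "object",
    properties: {
      location: {
        type: "string",
        description: 'City and region, e.g. "Austin, TX, USA"',
      },
    },
    required: ["location"],
  },
};
```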
    <div>
      <h3>Anatomy of a tool call</h3>
      <a href="#anatomy-of-a-tool-call">
        
      </a>
    </div>
    <p>Under the hood, an LLM generates a stream of "tokens" representing its output. A token might represent a word, a syllable, some sort of punctuation, or some other component of text.</p><p>A tool call, though, involves a token that does <i>not</i> have any textual equivalent. The LLM is trained (or, more often, fine-tuned) to understand a special token that it can output that means "the following should be interpreted as a tool call," and another special token that means "this is the end of the tool call." Between these two tokens, the LLM will typically write tokens corresponding to some sort of JSON message that describes the call.</p><p>For instance, imagine you have connected an agent to an MCP server that provides weather info, and you then ask the agent what the weather is like in Austin, TX. Under the hood, the LLM might generate output like the following. Note that here we've used words wrapped in <code>&lt;|</code> and <code>|&gt;</code> to represent our special tokens, but in fact, these tokens do not represent text at all; this is just for illustration.</p>
            <pre><code>I will use the Weather MCP server to find out the weather in Austin, TX.

&lt;|tool_call|&gt;
{
  "name": "get_current_weather",
  "arguments": {
    "location": "Austin, TX, USA"
  }
}
&lt;|end_tool_call|&gt;</code></pre>
            <p>Upon seeing these special tokens in the output, the LLM's harness will interpret the sequence as a tool call. After seeing the end token, the harness pauses execution of the LLM. It parses the JSON message and returns it as a separate component of the structured API result. The agent calling the LLM API sees the tool call, invokes the relevant MCP server, and then sends the results back to the LLM API. The LLM's harness will then use another set of special tokens to feed the result back into the LLM:</p>
            <pre><code>&lt;|tool_result|&gt;
{
  "location": "Austin, TX, USA",
  "temperature": 93,
  "unit": "fahrenheit",
  "conditions": "sunny"
}
&lt;|end_tool_result|&gt;</code></pre>
            <p>The LLM reads these tokens in exactly the same way it would read input from the user – except that the user cannot produce these special tokens, so the LLM knows it is the result of the tool call. The LLM then continues generating output like normal.</p><p>Different LLMs may use different formats for tool calling, but this is the basic idea.</p>
    <div>
      <h3>What's wrong with this?</h3>
      <a href="#whats-wrong-with-this">
        
      </a>
    </div>
    <p>The special tokens used in tool calls are things LLMs have never seen in the wild. They must be specially trained to use tools, based on synthetic training data. They aren't always that good at it. If you present an LLM with too many tools, or overly complex tools, it may struggle to choose the right one or to use it correctly. As a result, MCP server designers are encouraged to present greatly simplified APIs as compared to the more traditional API they might expose to developers.</p><p>Meanwhile, LLMs are getting really good at writing code. In fact, LLMs asked to write code against the full, complex APIs normally exposed to developers don't seem to have too much trouble with it. Why, then, do MCP interfaces have to "dumb it down"? Writing code and calling tools are almost the same thing, yet LLMs can do one much better than the other.</p><p>The answer is simple: LLMs have seen a lot of code. They have not seen a lot of "tool calls". In fact, the tool calls they have seen are probably limited to a contrived training set constructed by the LLM's own developers, whereas they have seen real-world code from millions of open source projects.</p><p><b><i>Making an LLM perform tasks with tool calling is like putting Shakespeare through a month-long class in Mandarin and then asking him to write a play in it. It's just not going to be his best work.</i></b></p>
    <div>
      <h3>But MCP is still useful, because it is uniform</h3>
      <a href="#but-mcp-is-still-useful-because-it-is-uniform">
        
      </a>
    </div>
    <p>MCP is designed for tool-calling, but it doesn't actually <i>have to</i> be used that way.</p><p>The "tools" that an MCP server exposes are really just an RPC interface with attached documentation. We don't really <i>have to</i> present them as tools. We can take the tools, and turn them into a programming language API instead.</p><p>But why would we do that, when the programming language APIs already exist independently? Almost every MCP server is just a wrapper around an existing traditional API – why not expose those APIs?</p><p>Well, it turns out MCP does something else that's really useful: <b>It provides a uniform way to connect to and learn about an API.</b></p><p>An AI agent can use an MCP server even if the agent's developers never heard of the particular MCP server, and the MCP server's developers never heard of the particular agent. This has rarely been true of traditional APIs in the past. Usually, the client developer knows exactly what API they are coding for. As a result, every API handles things like basic connectivity, authorization, and documentation a little differently.</p><p>This uniformity is useful even when the AI agent is writing code. We'd like the AI agent to run in a sandbox such that it can only access the tools we give it. MCP makes it possible for the agentic framework to implement this, by handling connectivity and authorization in a standard way, independent of the AI code. We also don't want the AI to have to search the Internet for documentation; MCP provides it directly in the protocol.</p>
    <div>
      <h2>OK, how does it work?</h2>
      <a href="#ok-how-does-it-work">
        
      </a>
    </div>
    <p>We have already extended the <a href="https://developers.cloudflare.com/agents/"><u>Cloudflare Agents SDK</u></a> to support this new model!</p><p>For example, say you have an app built with ai-sdk that looks like this:</p>
            <pre><code>const stream = streamText({
  model: openai("gpt-5"),
  system: "You are a helpful assistant",
  messages: [
    { role: "user", content: "Write a function that adds two numbers" }
  ],
  tools: {
    // tool definitions 
  }
})</code></pre>
            <p>You can wrap the tools and prompt with the codemode helper, and use them in your app: </p>
            <pre><code>import { codemode } from "agents/codemode/ai";

const {system, tools} = codemode({
  system: "You are a helpful assistant",
  tools: {
    // tool definitions 
  },
  // ...config
})

const stream = streamText({
  model: openai("gpt-5"),
  system,
  tools,
  messages: [
    { role: "user", content: "Write a function that adds two numbers" }
  ]
})</code></pre>
            <p>With this change, your app will now start generating and running code that itself will make calls to the tools you defined, MCP servers included. We will introduce variants for other libraries in the very near future. <a href="https://github.com/cloudflare/agents/blob/main/docs/codemode.md"><u>Read the docs</u></a> for more details and examples. </p>
    <div>
      <h3>Converting MCP to TypeScript</h3>
      <a href="#converting-mcp-to-typescript">
        
      </a>
    </div>
    <p>When you connect to an MCP server in "code mode", the Agents SDK will fetch the MCP server's schema, and then convert it into a TypeScript API, complete with doc comments based on the schema.</p><p>For example, connecting to the MCP server at <a href="https://gitmcp.io/cloudflare/agents"><u>https://gitmcp.io/cloudflare/agents</u></a> will generate a TypeScript definition like this:</p>
            <pre><code>interface FetchAgentsDocumentationInput {
  [k: string]: unknown;
}
interface FetchAgentsDocumentationOutput {
  [key: string]: any;
}

interface SearchAgentsDocumentationInput {
  /**
   * The search query to find relevant documentation
   */
  query: string;
}
interface SearchAgentsDocumentationOutput {
  [key: string]: any;
}

interface SearchAgentsCodeInput {
  /**
   * The search query to find relevant code files
   */
  query: string;
  /**
   * Page number to retrieve (starting from 1). Each page contains 30
   * results.
   */
  page?: number;
}
interface SearchAgentsCodeOutput {
  [key: string]: any;
}

interface FetchGenericUrlContentInput {
  /**
   * The URL of the document or page to fetch
   */
  url: string;
}
interface FetchGenericUrlContentOutput {
  [key: string]: any;
}

declare const codemode: {
  /**
   * Fetch entire documentation file from GitHub repository:
   * cloudflare/agents. Useful for general questions. Always call
   * this tool first if asked about cloudflare/agents.
   */
  fetch_agents_documentation: (
    input: FetchAgentsDocumentationInput
  ) =&gt; Promise&lt;FetchAgentsDocumentationOutput&gt;;

  /**
   * Semantically search within the fetched documentation from
   * GitHub repository: cloudflare/agents. Useful for specific queries.
   */
  search_agents_documentation: (
    input: SearchAgentsDocumentationInput
  ) =&gt; Promise&lt;SearchAgentsDocumentationOutput&gt;;

  /**
   * Search for code within the GitHub repository: "cloudflare/agents"
   * using the GitHub Search API (exact match). Returns matching files
   * for you to query further if relevant.
   */
  search_agents_code: (
    input: SearchAgentsCodeInput
  ) =&gt; Promise&lt;SearchAgentsCodeOutput&gt;;

  /**
   * Generic tool to fetch content from any absolute URL, respecting
   * robots.txt rules. Use this to retrieve referenced urls (absolute
   * urls) that were mentioned in previously fetched documentation.
   */
  fetch_generic_url_content: (
    input: FetchGenericUrlContentInput
  ) =&gt; Promise&lt;FetchGenericUrlContentOutput&gt;;
};</code></pre>
            <p>This TypeScript is then loaded into the agent's context. Currently, the entire API is loaded, but future improvements could allow an agent to search and browse the API more dynamically – much like an agentic coding assistant would.</p>
    <div>
      <h3>Running code in a sandbox</h3>
      <a href="#running-code-in-a-sandbox">
        
      </a>
    </div>
    <p>Instead of being presented with all the tools of all the connected MCP servers, our agent is presented with just one tool, which simply executes some TypeScript code.</p><p>The code is then executed in a secure sandbox. The sandbox is totally isolated from the Internet. Its only access to the outside world is through the TypeScript APIs representing its connected MCP servers.</p><p>These APIs are backed by RPC invocation which calls back to the agent loop. There, the Agents SDK dispatches the call to the appropriate MCP server.</p><p>The sandboxed code returns results to the agent in the obvious way: by invoking <code>console.log()</code>. When the script finishes, all the output logs are passed back to the agent.</p>
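    <p>For illustration, the code the agent writes might look something like the sketch below. The <code>codemode</code> object stands in for the sandbox's MCP bindings; its method name follows the generated API shown above, but the data shapes are invented, and the binding is stubbed here so the sketch runs on its own:</p>

```typescript
// Illustrative sketch of a script an agent might generate in Code Mode.
// `codemode` stands in for the sandbox's MCP bindings; it is stubbed with
// invented data so the example is self-contained.
const codemode = {
  search_agents_code: async (_input: { query: string; page?: number }) => ({
    items: [{ path: "src/index.ts" }, { path: "src/mcp/client.ts" }],
  }),
};

async function main(): Promise<void> {
  // Chain "tool calls" in code: the intermediate result stays inside the
  // sandbox instead of round-tripping through the LLM's context window.
  const hits = await codemode.search_agents_code({ query: "WorkerLoader" });
  const paths = hits.items.map((item) => item.path);

  // Only the final, distilled answer is logged back to the agent.
  console.log(JSON.stringify({ matchCount: paths.length, paths }));
}

main();
```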
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6DRERHP138FSj3GG0QYj3M/99e8c09b352560b7d4547ca299482c27/image2.png" />
          </figure>
    <div>
      <h2>Dynamic Worker loading: no containers here</h2>
      <a href="#dynamic-worker-loading-no-containers-here">
        
      </a>
    </div>
    <p>This new approach requires access to a secure sandbox where arbitrary code can run. So where do we find one? Do we have to run containers? Is that expensive?</p><p>No. There are no containers. We have something much better: isolates.</p><p>The Cloudflare Workers platform has always been based on V8 isolates, that is, isolated JavaScript runtimes powered by the <a href="https://v8.dev/"><u>V8 JavaScript engine</u></a>.</p><p><b>Isolates are far more lightweight than containers.</b> An isolate can start in a handful of milliseconds using only a few megabytes of memory.</p><p>Isolates are so fast that we can just create a new one for every piece of code the agent runs. There's no need to reuse them. There's no need to prewarm them. Just create it, on demand, run the code, and throw it away. It all happens so fast that the overhead is negligible; it's almost as if you were just <code>eval()</code>ing the code directly. But with security.</p>
    <div>
      <h3>The Worker Loader API</h3>
      <a href="#the-worker-loader-api">
        
      </a>
    </div>
    <p>Until now, though, there was no way for a Worker to directly load an isolate containing arbitrary code. All Worker code instead had to be uploaded via the Cloudflare API, which would then deploy it globally, so that it could run anywhere. That's not what we want for Agents! We want the code to just run right where the agent is.</p><p>To that end, we've added a new API to the Workers platform: the <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/worker-loader/"><u>Worker Loader API</u></a>. With it, you can load Worker code on-demand. Here's what it looks like:</p>
            <pre><code>// Gets the Worker with the given ID, creating it if no such Worker exists yet.
let worker = env.LOADER.get(id, async () =&gt; {
  // If the Worker does not already exist, this callback is invoked to fetch
  // its code.

  return {
    compatibilityDate: "2025-06-01",

    // Specify the worker's code (module files).
    mainModule: "foo.js",
    modules: {
      "foo.js":
        "export default {\n" +
        "  fetch(req, env, ctx) { return new Response('Hello'); }\n" +
        "}\n",
    },

    // Specify the dynamic Worker's environment (`env`).
    env: {
      // It can contain basic serializable data types...
      SOME_NUMBER: 123,

      // ... and bindings back to the parent worker's exported RPC
      // interfaces, using the new `ctx.exports` loopback bindings API.
      SOME_RPC_BINDING: ctx.exports.MyBindingImpl({props})
    },

    // Redirect the Worker's `fetch()` and `connect()` to proxy through
    // the parent worker, to monitor or filter all Internet access. You
    // can also block Internet access completely by passing `null`.
    globalOutbound: ctx.exports.OutboundProxy({props}),
  };
});

// Now you can get the Worker's entrypoint and send requests to it.
let defaultEntrypoint = worker.getEntrypoint();
await defaultEntrypoint.fetch("http://example.com");

// You can get non-default entrypoints as well, and specify the
// `ctx.props` value to be delivered to the entrypoint.
let someEntrypoint = worker.getEntrypoint("SomeEntrypointClass", {
  props: {someProp: 123}
});</code></pre>
            <p>You can start playing with this API right now when running <code>workerd</code> locally with Wrangler (<a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/worker-loader/"><u>check out the docs</u></a>), and you can <a href="https://forms.gle/MoeDxE9wNiqdf8ri9"><u>sign up for beta access</u></a> to use it in production.</p>
    <div>
      <h2>Workers are better sandboxes</h2>
      <a href="#workers-are-better-sandboxes">
        
      </a>
    </div>
    <p>The design of Workers makes it unusually good at sandboxing, especially for this use case, for a few reasons:</p>
    <div>
      <h3>Faster, cheaper, disposable sandboxes</h3>
      <a href="#faster-cheaper-disposable-sandboxes">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/workers/reference/how-workers-works/"><u>The Workers platform uses isolates instead of containers.</u></a> Isolates are much lighter-weight and faster to start up. It takes mere milliseconds to start a fresh isolate, and it's so cheap we can just create a new one for every single code snippet the agent generates. There's no need to worry about pooling isolates for reuse, prewarming, etc.</p><p>We have not yet finalized pricing for the Worker Loader API, but because it is based on isolates, we will be able to offer it at a significantly lower cost than container-based solutions.</p>
    <div>
      <h3>Isolated by default, but connected with bindings</h3>
      <a href="#isolated-by-default-but-connected-with-bindings">
        
      </a>
    </div>
    <p>Workers are just better at handling isolation.</p><p>In Code Mode, we prohibit the sandboxed worker from talking to the Internet. The global <code>fetch()</code> and <code>connect()</code> functions throw errors.</p><p>But on most platforms, this would be a problem. On most platforms, the way you get access to private resources is that you <i>start</i> with general network access. Then, using that network access, you send requests to specific services, passing them some sort of API key to authorize private access.</p><p>But Workers has always had a better answer. In Workers, the "environment" (<code>env</code> object) doesn't just contain strings, <a href="https://blog.cloudflare.com/workers-environment-live-object-bindings/"><u>it contains live objects</u></a>, also known as "bindings". These objects can provide direct access to private resources without involving generic network requests.</p><p>In Code Mode, we give the sandbox access to bindings representing the MCP servers it is connected to. Thus, the agent can specifically access those MCP servers <i>without</i> having network access in general.</p><p>Limiting access via bindings is much cleaner than doing it via, say, network-level filtering or HTTP proxies. Filtering is hard on both the LLM and the supervisor, because the boundaries are often unclear: the supervisor may have a hard time identifying exactly what traffic is legitimately necessary to talk to an API. Meanwhile, the LLM may have difficulty guessing what kinds of requests will be blocked. With the bindings approach, it's well-defined: the binding provides a JavaScript interface, and that interface is allowed to be used. It's just better this way.</p>
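    <p>A rough sketch of the difference, with hypothetical names throughout (this is not the actual Workers runtime interface): the sandbox's <code>env</code> holds a live binding whose calls loop back to the supervisor, while generic network access simply throws.</p>

```typescript
// Hypothetical sketch of the bindings model; not the actual Workers API.
interface McpBinding {
  callTool(name: string, args: unknown): Promise<unknown>;
}

// The supervisor hands the sandbox an env of live objects. Calls on the
// binding loop back to the agent, which dispatches to the real MCP server.
function makeSandboxEnv(
  dispatch: (name: string, args: unknown) => Promise<unknown>
): { WEATHER_MCP: McpBinding } {
  return { WEATHER_MCP: { callTool: dispatch } };
}

// Inside the sandbox, generic network access throws...
const fetchDisabled = (): never => {
  throw new Error("Internet access is disabled");
};

// ...but the binding still works, because it never goes through the
// network layer at all.
const env = makeSandboxEnv(async (name, args) => ({ ok: true, name, args }));

try {
  fetchDisabled();
} catch (e) {
  console.log(String(e)); // the sandboxed code cannot reach the Internet
}

env.WEATHER_MCP.callTool("get_current_weather", { location: "Austin" })
  .then((result) => console.log(JSON.stringify(result)));
```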
    <div>
      <h3>No API keys to leak</h3>
      <a href="#no-api-keys-to-leak">
        
      </a>
    </div>
    <p>An additional benefit of bindings is that they hide API keys. The binding itself provides an already-authorized client interface to the MCP server. All calls made on it go to the agent supervisor first, which holds the access tokens and adds them into requests sent on to MCP.</p><p>This means that the AI cannot possibly write code that leaks any keys, solving a common security problem seen in AI-authored code today.</p>
    <div>
      <h2>Try it now!</h2>
      <a href="#try-it-now">
        
      </a>
    </div>
    
    <div>
      <h3>Sign up for the production beta</h3>
      <a href="#sign-up-for-the-production-beta">
        
      </a>
    </div>
    <p>The Dynamic Worker Loader API is in closed beta. To use it in production, <a href="https://forms.gle/MoeDxE9wNiqdf8ri9"><u>sign up today</u></a>.</p>
    <div>
      <h3>Or try it locally</h3>
      <a href="#or-try-it-locally">
        
      </a>
    </div>
    <p>If you just want to play around, though, Dynamic Worker Loading is fully available today when developing locally with Wrangler and <code>workerd</code> – check out the docs for <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/worker-loader/"><u>Dynamic Worker Loading</u></a> and <a href="https://github.com/cloudflare/agents/blob/main/docs/codemode.md"><u>code mode in the Agents SDK</u></a> to get started.</p> ]]></content:encoded>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[MCP]]></category>
            <guid isPermaLink="false">61nEdL3TSdS4diA4x21O5e</guid>
            <dc:creator>Kenton Varda</dc:creator>
            <dc:creator>Sunil Pai</dc:creator>
        </item>
        <item>
            <title><![CDATA[Eliminating Cold Starts 2: shard and conquer]]></title>
            <link>https://blog.cloudflare.com/eliminating-cold-starts-2-shard-and-conquer/</link>
            <pubDate>Fri, 26 Sep 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We reduced Cloudflare Workers cold starts by 10x by optimistically routing to servers with already-loaded Workers. Learn how we did it here. ]]></description>
            <content:encoded><![CDATA[ <p>Five years ago, we announced that we were <a href="https://blog.cloudflare.com/eliminating-cold-starts-with-cloudflare-workers/"><u>Eliminating Cold Starts with Cloudflare Workers</u></a>. In that episode, we introduced a technique to pre-warm Workers during the <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><u>TLS handshake</u></a> of their first request. That technique takes advantage of the fact that the <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/"><u>TLS Server Name Indication (SNI)</u></a> is sent in the very first message of the TLS handshake. Armed with that SNI, we often have enough information to pre-warm the request’s target Worker.</p><p>Eliminating cold starts by pre-warming <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Workers</u></a> during TLS handshakes was a huge step forward for us, but “eliminate” is a strong word. Back then, Workers were still relatively small, and had cold starts constrained by limits explained later in this post. We’ve relaxed those limits, and users routinely deploy complex applications on Workers, often replacing origin servers. Simultaneously, TLS handshakes haven’t gotten any slower. In fact, <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/"><u>TLS 1.3</u></a> only requires a single round trip for a handshake – compared to two round trips for TLS 1.2 – and is more widely used than it was in 2020.</p><p>Earlier this month, we finished deploying a new technique intended to keep pushing the boundary on cold start reduction. The new technique (or old, depending on your perspective) uses a consistent hash ring to take advantage of our global <a href="https://www.cloudflare.com/network"><u>network</u></a>. We call this mechanism “Worker sharding”.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3wfKoSIUzu20UtrJ3thfLh/c5821aa90f72a344962b83dbbfbb4508/image12.png" />
          </figure>
    <div>
      <h3>What’s in a cold start?</h3>
      <a href="#whats-in-a-cold-start">
        
      </a>
    </div>
    <p>A Worker is the basic unit of compute in our <a href="https://www.cloudflare.com/learning/serverless/what-is-serverless/"><u>serverless computing</u></a> platform. It has a simple lifecycle. We instantiate it from source code (typically JavaScript), make it serve a bunch of requests (often HTTP, but not always), and eventually shut it down some time after it stops receiving traffic, to re-use its resources for other Workers. We call that shutdown process “eviction”.</p><p>The most expensive part of the Worker’s lifecycle is the initial instantiation and first request invocation. We call this part a “cold start”. Cold starts have several phases: fetching the script source code, compiling the source code, performing a top-level execution of the resulting JavaScript module, and finally, performing the initial invocation to serve the incoming HTTP request that triggered the whole sequence of events in the first place.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4dsRI0IS9GmJFRzCWeaQDw/db362a6962e20976565d6ed0fc11cdf2/image11.png" />
          </figure>
    <div>
      <h3>Cold starts have become longer than TLS handshakes</h3>
      <a href="#cold-starts-have-become-longer-than-tls-handshakes">
        
      </a>
    </div>
    <p>Fundamentally, our TLS handshake technique depends on the handshake lasting longer than the cold start. This is because the duration of the TLS handshake is time that the visitor must spend waiting, regardless, so it’s beneficial to everyone if we do as much work during that time as possible. If we can run the Worker’s cold start in the background while the handshake is still taking place, and if that cold start finishes <i>before</i> the handshake, then the request will ultimately see zero cold start delay. If, on the other hand, the cold start takes <i>longer</i> than the TLS handshake, then the request will see some part of the cold start delay – though the technique still helps reduce that visible delay.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/54wEBpC1hCRXN97zMiAXu1/f1434aac21b40093bc12d8931c7bc1ec/image7.png" />
          </figure><p>In the early days, TLS handshakes lasting longer than Worker cold starts was a safe bet, and cold starts typically won the race. One of our <a href="https://blog.cloudflare.com/cloud-computing-without-containers/#cold-starts"><u>early blog posts explaining how our platform works</u></a> mentions 5 millisecond cold start times – and that was correct, at the time!</p><p>For every limit we have, our users have challenged us to relax it. Cold start times are no different.</p><p>There are two crucial limits which affect cold start time: Worker script size and the startup CPU time limit. While we didn’t make big announcements at the time, we have quietly raised both of those limits since our last <i>Eliminating Cold Starts</i> blog post:</p><ul><li><p>Worker script size (compressed) increased from <a href="https://github.com/cloudflare/cloudflare-docs/pull/6613"><u>1 MB to 5 MB</u></a>, then again from <a href="https://github.com/cloudflare/cloudflare-docs/pull/9083"><u>5 MB to 10 MB</u></a>, for paying users.</p></li><li><p>Worker script size (compressed) increased from <a href="https://github.com/cloudflare/cloudflare-docs/pull/18400"><u>1 MB to 3 MB</u></a> for free users.</p></li><li><p>Startup CPU time increased from <a href="https://github.com/cloudflare/cloudflare-docs/pull/9154"><u>200ms to 400ms</u></a>.</p></li></ul><p>We relaxed these limits because our users wanted to deploy increasingly complex applications to our platform. And deploy they did! But the increases have a cost:</p><ul><li><p>Increasing script size increases the amount of data we must transfer from script storage to the Workers runtime.</p></li><li><p>Increasing script size also increases the time complexity of the script compilation phase.</p></li><li><p>Increasing the startup CPU time limit increases the maximum top-level execution time.</p></li></ul><p>Taken together, cold starts for complex applications began to lose the TLS handshake race.</p>
    <div>
      <h3>Routing requests to an existing Worker</h3>
      <a href="#routing-requests-to-an-existing-worker">
        
      </a>
    </div>
    <p>With relaxed script size and startup time limits, optimizing cold start time directly was a losing battle. Instead, we needed to figure out how to reduce the absolute <i>number</i> of cold starts, so that requests are simply less likely to incur one.</p><p>One option is to route requests to existing Worker instances, where before we might have chosen to start a new instance.</p><p>Previously, we weren’t particularly good at routing requests to existing Worker instances. We could trivially coalesce requests to a single Worker instance if they happened to land on a machine which already hosted a Worker, because in that case it’s not a distributed systems problem. But what if a Worker already existed in our data center on a different server, and some other server received a request for the Worker? We would always choose to cold start a new Worker on the machine which received the request, rather than forward the request to the machine with the already-existing Worker, even though forwarding the request would avoid the cold start.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Yf9jgJcUGIppzhobkbG3j/4113c3a1910961b014f4f4ff9a1866be/image8.png" />
          </figure><p>To drive the point home: Imagine that a visitor sends one request per minute to a data center with 300 servers, and that the traffic is load balanced evenly across all servers. On average, each server will receive one request every five hours. In particularly busy data centers, this span of time could be long enough that we need to evict the Worker to re-use its resources, resulting in a 100% cold start rate. That’s a terrible experience for the visitor.</p><p>Consequently, we found ourselves explaining to users, who saw high latency while prototyping their applications, that their latency would counterintuitively <i>decrease</i> once they put sufficient traffic on our network. This highlighted the inefficiency in our original, simple design.</p><p>If, instead, those requests were all coalesced onto one single server, we would notice multiple benefits. The Worker would receive one request per minute, which is short enough to virtually guarantee that it won’t be evicted. This would mean the visitor may experience a single cold start, and then have a 100% “warm request rate.” We would also use 99.7% (299 / 300) less memory serving this traffic. This makes room for other Workers, decreasing their eviction rate, and increasing <i>their</i> warm request rates, too – a virtuous cycle!</p><p>There’s a cost to coalescing requests to a single instance, though, right? After all, we’re adding latency to requests if we have to proxy them around the data center to a different server.</p><p>In practice, the added time-to-first-byte is less than one millisecond, and is the subject of continual optimization by our IPC and performance teams. One millisecond is far less than a typical cold start, meaning it’s always better, in every measurable way, to proxy a request to a warm Worker than it is to cold start a new one.</p>
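    <p>The arithmetic behind that example is worth spelling out:</p>

```typescript
// The numbers from the example above: one request per minute, load
// balanced evenly across a 300-server data center.
const servers = 300;
const requestsPerMinute = 1;

// Spread evenly, each server sees one request every `servers` minutes.
const minutesBetweenRequestsPerServer = servers / requestsPerMinute; // 300
const hoursBetween = minutesBetweenRequestsPerServer / 60; // 5 hours

// Coalesced onto one server, we keep one warm instance instead of 300,
// saving 299/300 of the memory.
const memorySavedFraction = (servers - 1) / servers;

console.log(`${hoursBetween} hours between requests per server`);
console.log(`${(memorySavedFraction * 100).toFixed(1)}% less memory`);
```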
    <div>
      <h3>The consistent hash ring</h3>
      <a href="#the-consistent-hash-ring">
        
      </a>
    </div>
    <p>A solution to this very problem lies at the heart of many of our products, including one of our oldest: the HTTP cache in our <a href="https://www.cloudflare.com/application-services/products/cdn/"><u>Content Delivery Network</u></a>.</p><p>When a visitor requests a cacheable web asset through Cloudflare, the request gets routed through a pipeline of proxies. One of those proxies is a caching proxy, which stores the asset for later, so we can serve it to future requests without having to request it from the origin again.</p><p>A Worker cold start is analogous to an HTTP cache miss, in that a request to a warm Worker is like an HTTP cache hit.</p><p>When our standard HTTP proxy pipeline routes requests to the caching layer, it chooses a cache server based on the request's cache key to optimize the HTTP cache hit rate. <a href="https://developers.cloudflare.com/cache/how-to/cache-keys/"><u>The cache key is the request’s URL, plus some other details</u></a>. This technique is often called “sharding”. The servers are considered to be individual shards of a larger, logical system – in this case a data center’s HTTP cache. So, we can say things like, “Each data center contains one logical HTTP cache, and that cache is sharded across every server in the data center.”</p><p>Until recently, we could not make the same claim about the set of Workers in a data center. Instead, each server contained its own standalone set of Workers, and they could easily duplicate effort.</p><p>We borrow the cache’s trick to solve that. In fact, we even use the same type of data structure used by our HTTP cache to choose servers: a <a href="https://en.wikipedia.org/wiki/Consistent_hashing"><u>consistent hash ring</u></a>. A naive sharding implementation might use a classic hash table mapping Worker script IDs to server addresses. That would work fine for a set of servers which never changes. But servers are actually ephemeral and have their own lifecycle. 
They can crash, get rebooted, taken out for maintenance, or decommissioned. New ones can come online. When these events occur, the size of the hash table would change, necessitating a re-hashing of the whole table. Every Worker’s home server would change, and all sharded Workers would be cold started again!</p><p>A consistent hash ring improves this scenario significantly. Instead of establishing a direct correspondence between script IDs and server addresses, we map them both to a number line whose end wraps around to its beginning, also known as a ring. To look up the home server of a Worker, first we hash its script, and then we find where it lies on the ring. Next, we take the server address which comes directly on or after that position on the ring, and consider that the Worker’s home.</p>
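The ring lookup described above can be sketched in a few lines of Python. This is an illustrative toy, not the Workers runtime's actual implementation; the hash function and server names are assumptions.

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # Stable 64-bit position on the ring (illustrative choice of hash;
    # the real system's hash function is not specified here).
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

class HashRing:
    def __init__(self, servers):
        self._points = sorted((_hash(s), s) for s in servers)

    def add(self, server):
        bisect.insort(self._points, (_hash(server), server))

    def remove(self, server):
        self._points.remove((_hash(server), server))

    def home(self, script_id: str) -> str:
        # Find the first server on or after the script's ring position,
        # wrapping around to the beginning of the ring if necessary.
        pos = _hash(script_id)
        idx = bisect.bisect_left(self._points, (pos, ""))
        return self._points[idx % len(self._points)][1]
```

Adding a server only re-homes the scripts that hash between the new server and its predecessor on the ring; every other script keeps its home, which is the property that avoids a data-center-wide wave of cold starts.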
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3UJuyYc8D1CYDrRKgBmz7e/ce1da2614c2a5d712a3ab471a0ec7675/image2.png" />
          </figure><p>If a new server appears for some reason, all the Workers that lie before it on the ring get re-homed, but none of the other Workers are disturbed. Similarly, if a server disappears, all the Workers that lie before it on the ring get re-homed to the next server along the ring.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/61yVG9Cd7z9R7pCbDQKUXT/42a1d5f6f02b33b4c321dc3788a8d724/image6.png" />
          </figure><p>We refer to the Worker’s home server as the “shard server”. In request flows involving sharding, there is also a “shard client”. It’s also a server! The shard client initially receives a request, and, using its consistent hash ring, looks up which shard server it should send the request to. I’ll be using these two terms – shard client and shard server – in the rest of this post.</p>
    <div>
      <h3>Handling overload</h3>
      <a href="#handling-overload">
        
      </a>
    </div>
    <p>HTTP assets, by their nature, lend themselves well to sharding. If they are cacheable, they are static, at least for their cache <a href="https://www.cloudflare.com/learning/cdn/glossary/time-to-live-ttl/"><u>Time to Live (TTL)</u></a> duration. So, serving them requires time and space complexity which scales linearly with their size.</p><p>But Workers aren’t JPEGs. They are live units of compute which can use up to five minutes of CPU time per request. Their time and space complexity do not necessarily scale with their input size, and can vastly outstrip the amount of computing power we must dedicate to serving even a huge file from cache.</p><p>This means that individual Workers can easily get overloaded when given sufficient traffic. So, no matter what we do, we must retain the ability to scale back up without bound. We will never be able to guarantee that a data center has only one instance of a Worker, and we must always be able to horizontally scale at the drop of a hat to support burst traffic. Ideally this is all done without producing any errors.</p><p>This means that a shard server must have the ability to refuse requests to invoke Workers on it, and shard clients must always gracefully handle this scenario.</p>
    <div>
      <h3>Two load shedding options</h3>
      <a href="#two-load-shedding-options">
        
      </a>
    </div>
    <p>I am aware of two general solutions to shedding load gracefully, without serving errors.</p><p>In the first solution, the client asks politely if it may issue the request. It then sends the request if it receives a positive response. If it instead receives a “go away” response, it handles the request differently, such as serving it locally. In HTTP, this pattern can be found in <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/100"><u>Expect: 100-continue semantics</u></a>. The main downside is that this introduces one round-trip of latency to set the expectation of success before the request can be sent. (Note that a common naive solution is to just retry requests. This works for some kinds of requests, but is not a general solution, as requests may carry arbitrarily large bodies.)</p>
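The ask-first pattern can be sketched like so in Python. All names here are invented for illustration, and the capacity model is a stand-in; in a real deployment the permission check is a network exchange, which is where the extra round trip of latency comes from.

```python
class ShardServer:
    """Toy shard server that accepts invocations only while it has
    spare capacity (the capacity model is illustrative)."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.active = 0

    def may_invoke(self, worker: str) -> bool:
        # Round trip #1: the polite "may I issue this request?" question.
        return self.active < self.capacity

    def invoke(self, worker: str, request: str) -> str:
        # Round trip #2: the actual request, sent only after a "yes".
        self.active += 1
        return f"{worker} handled {request} remotely"

def invoke_with_permission(server, worker, request, run_locally):
    if server.may_invoke(worker):
        return server.invoke(worker, request)
    # "Go away": handle the request differently, e.g. serve it locally.
    return run_locally(worker, request)
```

The second solution below avoids the `may_invoke` round trip by sending the request optimistically and letting the server redirect it if needed.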
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4e7OUD3T8kT5ct6OiETfwb/595c92d02312956921d379ceb4268139/image4.png" />
          </figure><p>The second general solution is to send the request without confirming that it can be handled by the server, then count on the server to forward the request elsewhere if it needs to. This could even be back to the client. This avoids the round-trip of latency that the first solution incurs, but there is a tradeoff: It puts the shard server in the request path, pumping bytes back to the client. Fortunately, we have a trick to minimize the amount of bytes we actually have to send back in this fashion, which I’ll describe in the next section.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Cb7zWYrMfmQnMBooe8g2o/2a3d2b906b85cf8eff7666630ed78b9e/image9.png" />
          </figure>
    <div>
      <h3>Optimistically sending sharded requests</h3>
      <a href="#optimistically-sending-sharded-requests">
        
      </a>
    </div>
    <p>There are a couple of reasons why we chose to optimistically send sharded requests without waiting for permission.</p><p>The first reason of note is that we expect to see very few of these refused requests in practice. The reason is simple: If a shard client receives a refusal for a Worker, then it must cold start the Worker locally. As a consequence, it can serve all future requests locally without incurring another cold start. So, after a single refusal, the shard client won’t shard that Worker any more (until traffic for the Worker tapers off enough for an eviction, at least).</p><p>Generally, this means we expect that if a request gets sharded to a different server, the shard server will most likely accept the request for invocation. Since we expect success, it makes a lot more sense to optimistically send the entire request to the shard server than it does to incur a round-trip penalty to establish permission first.</p><p>The second reason is that we have a trick to avoid paying too high a cost for proxying the request back to the client, as I mentioned above.</p><p>We implement our cross-instance communication in the Workers runtime using <a href="https://capnproto.org/"><u>Cap’n Proto RPC</u></a>, whose distributed object model enables some incredible features, like <a href="https://blog.cloudflare.com/javascript-native-rpc/"><u>JavaScript-native RPC</u></a>. It is also the elder, spiritual sibling to the just-released <a href="https://blog.cloudflare.com/capnweb-javascript-rpc-library/"><u>Cap’n Web</u></a>.</p><p>In the case of sharding, Cap’n Proto makes it very easy to implement an optimal request refusal mechanism. When the shard client assembles the sharded request, it includes a handle (<a href="https://capnproto.org/rpc.html#distributed-objects"><u>called a </u><i><u>capability</u></i></a> in Cap’n Proto) to a lazily-loaded local instance of the Worker. 
This lazily-loaded instance has the <a href="https://github.com/cloudflare/workerd/blob/f93fd2625f8d4131d9d50762c09deddb01bb4c70/src/workerd/io/worker-interface.capnp#L582"><u>same exact interface as any other Worker exposed over RPC</u></a>. The difference is just that it’s lazy – it doesn’t get cold started until invoked. In the event the shard server decides it must refuse the request, it does not return a “go away” response, but instead returns the shard client’s own lazy capability!</p><p>The shard client’s application code only sees that it received a capability from the shard server. It doesn’t know where that capability is actually implemented. But the shard client’s <i>RPC system</i> does know where the capability lives! Specifically, it recognizes that the returned capability is actually a local capability – the same one that it passed to the shard server. Once it realizes this, it also realizes that any request bytes it continues to send to the shard server will just come looping back. So, it stops sending more request bytes, waits to receive back from the shard server all the bytes it already sent, and shortens the request path as soon as possible. This takes the shard server entirely out of the loop, preventing a “trombone effect.”</p>
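Here is a toy Python model of that refusal mechanism. The class and function names are invented for illustration; in the real system these objects are Cap'n Proto capabilities, and the path shortening happens inside the RPC layer rather than in application code.

```python
class LazyLocalWorker:
    """The shard client's lazily-loaded local instance: it is not
    cold started until the first invocation."""
    def __init__(self, script: str):
        self.script = script
        self.started = False

    def invoke(self, request: str) -> str:
        if not self.started:
            self.started = True  # the (expensive) cold start happens here
        return f"{self.script} (local): {request}"

class RemoteWorker:
    """A warm instance living on the shard server."""
    def __init__(self, script: str):
        self.script = script

    def invoke(self, request: str) -> str:
        return f"{self.script} (remote): {request}"

class ShardServer:
    def __init__(self, overloaded: bool):
        self.overloaded = overloaded

    def handle(self, script, fallback):
        if self.overloaded:
            # Refuse not with a "go away" error, but by returning the
            # client's own lazy capability.
            return fallback
        return RemoteWorker(script)

def invoke_sharded(shard_server, script, request):
    fallback = LazyLocalWorker(script)
    target = shard_server.handle(script, fallback)
    # The RPC layer would notice `target is fallback` and shorten the
    # path, cutting the shard server out of the loop entirely.
    return target.invoke(request), fallback
```

The key design point is that refusal and acceptance return the same kind of object, so the caller's code path is identical either way; only the RPC layer treats the local capability specially.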
    <div>
      <h3>Workers invoking Workers</h3>
      <a href="#workers-invoking-workers">
        
      </a>
    </div>
    <p>With load shedding behavior figured out, we thought the hard part was over.</p><p>But, of course, Workers may invoke other Workers. There are many ways this could occur, most obviously via <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/"><u>Service Bindings</u></a>. Less obviously, many of our favorite features, such as <a href="https://developers.cloudflare.com/kv/"><u>Workers KV</u></a>, are actually cross-Worker invocations. But there is one product, in particular, that stands out for its powerful ability to invoke other Workers: <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/"><u>Workers for Platforms</u></a>.</p><p>Workers for Platforms allows you to run your own functions-as-a-service on Cloudflare infrastructure. To use the product, you deploy three special types of Workers:</p><ul><li><p>a dynamic dispatch Worker</p></li><li><p>any number of user Workers</p></li><li><p>an optional, parameterized <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/configuration/outbound-workers/"><u>outbound Worker</u></a></p></li></ul><p>A typical request flow for Workers for Platforms goes like so: First, we invoke the dynamic dispatch Worker. The dynamic dispatch Worker chooses and invokes a user Worker. Then, the user Worker invokes the outbound Worker to intercept its subrequests, using arguments the dynamic dispatch Worker chose before invoking the user Worker.</p><p>To really amp up the fun, the dynamic dispatch Worker could have a <a href="https://developers.cloudflare.com/workers/observability/logs/tail-workers/"><u>tail Worker</u></a> attached to it. This tail Worker would need to be invoked with traces related to all the preceding invocations. 
Importantly, it should be invoked one single time with all events related to the request flow, not invoked multiple times for different fragments of the request flow.</p><p>You might further ask, can you nest Workers for Platforms? I don’t know the official answer, but I can tell you that the code paths do exist, and they do get exercised.</p><p>To support this nesting doll of Workers, we keep a context stack during invocations. This context includes things like ownership overrides, resource limit overrides, trust levels, tail Worker configurations, outbound Worker configurations, feature flags, and so on. This context stack was manageable-ish when everything was executed on a single thread. For sharding to be truly useful, though, we needed to be able to move this context stack around to other machines.</p><p>Our choice of Cap’n Proto RPC as our primary communications medium helped us make sense of it all. To shard Workers deep within a stack of invocations, we serialize the context stack into a Cap’n Proto data structure and send it to the shard server. The shard server deserializes it into native objects, and continues the execution where things left off.</p><p>As with load shedding, Cap’n Proto’s distributed object model provides us simple answers to otherwise difficult questions. Take the tail Worker question – how do we coalesce tracing data from invocations which got fanned out across any number of other servers back to one single place? Easy: create a capability (a live Cap’n Proto object) for a reportTraces() callback on the dynamic dispatch Worker’s home server, and put that in the serialized context stack. Now, that context stack can be passed around at will. That context stack will end up in multiple places: At a minimum, it will end up on the user Worker’s shard server and the outbound Worker’s shard server. It may also find its way to other shard servers if any of those Workers invoked service bindings! 
Each of those shard servers can call the reportTraces() callback, and be confident that the data will make its way back to the right place: the dynamic dispatch Worker’s home server. None of those shard servers need to actually know <i>where</i> that home server is. Phew!</p>
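The trace-coalescing trick can be modeled in a few lines of Python. This is a toy stand-in: here the “capability” is just an ordinary callback reference, whereas in production it is a Cap'n Proto network capability carried inside the serialized context stack.

```python
class TraceCollector:
    """Lives on the dynamic dispatch Worker's home server. Other shard
    servers hold only a handle to report_traces, not the collector."""
    def __init__(self):
        self.events = []

    def report_traces(self, event: str) -> None:
        self.events.append(event)

def invoke_on_shard_server(context_stack, worker_name):
    # The context stack travels (serialized) to whichever server hosts
    # the Worker; the callback inside it routes data home without the
    # callee knowing where "home" is.
    context_stack["report_traces"](f"{worker_name}: invoked")

collector = TraceCollector()
context_stack = {"report_traces": collector.report_traces}

# Fan out across (conceptually) different shard servers:
for worker in ["user-worker", "outbound-worker"]:
    invoke_on_shard_server(context_stack, worker)
```

However far the context stack is forwarded, every trace lands in the one collector, which mirrors how all shard servers report back to the dynamic dispatch Worker's home server.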
    <div>
      <h3>Eviction rates down, warm request rates up</h3>
      <a href="#eviction-rates-down-warm-request-rates-up">
        
      </a>
    </div>
    <p>Features like this are always satisfying to roll out, because they produce graphs showing huge efficiency gains.</p><p>Once fully rolled out, only about 4% of total requests from enterprise traffic ended up being sharded. To put that another way, 96% of all enterprise requests are to Workers which are sufficiently loaded that we <i>must</i> run multiple instances of them in a data center.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3BsNColqVqUYNpfUv1wz8W/d80edb0c93cf741eab3603a9b2ef57e8/image5.png" />
          </figure><p>Despite that low total rate of sharding, we reduced our global Worker eviction rate by 10x. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1w3sQH8iQRP6oQFCCTm9BQ/815c49d51c473aafebfb001f58762797/image10.png" />
          </figure><p>Our eviction rate is a measure of memory pressure within our system. You can think of it like garbage collection at a macro level, and it has the same implications. Fewer evictions mean our system uses memory more efficiently. This has the happy consequence of using less CPU to clean up our memory. More relevant to Workers users, the increased efficiency means we can keep Workers in memory for an order of magnitude longer, improving their warm request rate and reducing their latency.</p><p>The high leverage shown – sharding just 4% of our traffic to improve memory efficiency by 10x – is a consequence of the power-law distribution of Internet traffic.</p><p>A <a href="https://en.wikipedia.org/wiki/Power_law"><u>power law distribution</u></a> is a phenomenon which occurs across many fields of science, including linguistics, sociology, physics, and, of course, computer science. Events which follow power law distributions typically see a huge amount clustered in a small number of “buckets”, with the rest spread thinly across a large number of them. Word frequency is a classic example: A small handful of words like “the”, “and”, and “it” occur in texts with extremely high frequency, while other words like “eviction” or “trombone” might occur only once or twice in a text.</p><p>In our case, the majority of Workers requests go to a small handful of high-traffic Workers, while a very long tail goes to a huge number of low-traffic Workers. The 4% of requests which were sharded are all to low-traffic Workers, which are the ones that benefit the most from sharding.</p><p>So did we eliminate cold starts? Or will there be an <i>Eliminating Cold Starts 3</i> in our future?</p>
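That leverage is easy to reproduce with a back-of-the-envelope Zipf calculation. The exponent and Worker count below are made-up illustrative parameters, not Cloudflare traffic data.

```python
# Assume request traffic over N Workers follows a Zipf distribution
# (weight proportional to 1/rank, exponent 1 -- illustrative only).
N = 10_000
weights = [1 / rank for rank in range(1, N + 1)]
total = sum(weights)

head_share = sum(weights[:100]) / total   # the 100 highest-traffic Workers
tail_share = 1 - head_share               # the other 9,900 Workers

# The top 1% of Workers carries over half the requests, so consolidating
# the long tail touches only a small fraction of total traffic.
print(f"top 100 Workers: {head_share:.1%} of requests")
print(f"long tail:       {tail_share:.1%} of requests")
```

Under these assumptions the tail is 99% of Workers but well under half the requests, which is why sharding only a few percent of traffic can slash eviction rates.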
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/419NdQonMOOAsirFmMl885/28b0266ee311ebffb342002e5e1cf54e/image3.png" />
          </figure><p>For enterprise traffic, our warm request rate increased from 99.9% to 99.99% – that’s three 9’s to four 9’s. Equivalently, this means that the cold start rate went from 0.1% to 0.01% of requests, a 10x decrease. A moment’s thought, and you’ll realize that this is consistent with the eviction rate graph I shared above: A 10x decrease in the number of Workers we destroy over time must imply we’re creating 10x fewer to begin with.</p><p>Simultaneously, our warm request rate became less volatile throughout the course of the day.</p><p>Hmm.</p><p>I hate to admit this to you, but I still notice a little bit of space at the top of the graph. 😟</p><p><a href="https://www.cloudflare.com/careers/"><u>Can you help us get to five 9’s?</u></a></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cap'n Proto]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Engineering]]></category>
            <category><![CDATA[TLS]]></category>
            <guid isPermaLink="false">1mLzSCaF2U02LD3DDd97Mn</guid>
            <dc:creator>Harris Hancock</dc:creator>
        </item>
        <item>
            <title><![CDATA[Network performance update: Birthday Week 2025]]></title>
            <link>https://blog.cloudflare.com/network-performance-update-birthday-week-2025/</link>
            <pubDate>Fri, 26 Sep 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ On the Internet, being fast is what matters and at Cloudflare, we are committed to being the fastest network in the world. ]]></description>
            <content:encoded><![CDATA[ <p>We are committed to being the fastest network in the world because improvements in our performance translate to improvements for your application’s end users. We are excited to share that Cloudflare continues to be the fastest network for the most peered networks in the world.</p><p>We relentlessly measure our own performance and our performance against peers. We publish those results routinely, starting with our first update in <a href="https://blog.cloudflare.com/benchmarking-edge-network-performance/"><u>June 2021</u></a> and most recently with our last post in <a href="https://blog.cloudflare.com/tr-tr/network-performance-update-birthday-week-2024/"><u>September 2024</u></a>.</p><p>Today’s update breaks down where we have improved since our update last year and what our priorities are going into the next year. While we are excited to be the fastest in the greatest number of last-mile ISPs, we are never done improving and have more work to do.</p>
    <div>
      <h3>How do we measure this metric, and what are the results?</h3>
      <a href="#how-do-we-measure-this-metric-and-what-are-the-results">
        
      </a>
    </div>
    <p>We measure network performance by attempting to capture what the experience is like for Internet users across the globe. To do that we need to simulate what their connection is like from their last-mile ISP to our networks.</p><p>We start by taking the 1,000 largest networks in the world based on estimated population. We use that to give ourselves a representation of real users in nearly every geography.</p><p>We then measure performance itself with TCP connection time. TCP connection time is the time it takes for an end user to connect to the website or endpoint they are trying to reach. We chose this metric because we believe this most closely approximates what users perceive to be Internet speed, as opposed to other metrics which are either too scientific (ignoring real world challenges like congestion or distance) or too broad.</p><p>We take the trimean measurement of TCP connection times to calculate our metric. The trimean is a weighted average of three statistical values: the first quartile, the median, and the third quartile. This approach allows us to reduce some of the noise and outliers and get a comprehensive picture of quality.</p><p>For this year’s update, we examined the trimean of TCP connection times measured from August 6 to September 4: Cloudflare is the #1 provider in 40% of the top 1,000 networks. In our September 2024 update, we shared that we were the #1 provider in 44% of the top 1,000 networks.</p>
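Concretely, Tukey's trimean is (Q1 + 2 × median + Q3) / 4. A small Python sketch of how it damps outliers (the quantile interpolation method and the sample values are assumptions for illustration, not our pipeline's exact implementation):

```python
def trimean(samples):
    """Tukey's trimean: (Q1 + 2*Q2 + Q3) / 4, a weighted average of the
    first quartile, the median, and the third quartile."""
    s = sorted(samples)

    def quantile(q):
        # Linear interpolation between closest ranks.
        pos = q * (len(s) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (pos - lo)

    return (quantile(0.25) + 2 * quantile(0.5) + quantile(0.75)) / 4

# A single 400 ms outlier drags the mean far more than the trimean:
connect_times = [48, 52, 55, 61, 70, 90, 400]  # made-up sample, in ms
mean = sum(connect_times) / len(connect_times)
tm = trimean(connect_times)
```

Because the trimean ignores the extremes entirely, one slow probe cannot dominate the ranking the way it would with a plain mean.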
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6aMHnKc3pQMa8oHds3N1uZ/fce6e2eecf2e7e8c257d6a2409befcdc/image2.png" />
          </figure><p>The TCP Connection Time (Trimean) graph shows that we have the fastest TCP connection time in 383 networks, which on its own would make us the fastest in only 38% of the top 1,000. However, we exclude networks that aren’t last-mile ISPs, such as transit networks, since they don’t reflect the end user experience. That brings the number of measured networks down to 964, making Cloudflare the fastest in 40% of measured ISPs and the fastest overall across the top networks.</p>
    <div>
      <h3>How do we capture this data? </h3>
      <a href="#how-do-we-capture-this-data">
        
      </a>
    </div>
    <p>A Cloudflare-branded error page does more than just display an error; it kicks off a real-world speed test. Behind the scenes, on a selection of our error pages, we use Real User Measurements (RUM), which involves a browser retrieving a small file from multiple networks, including Cloudflare, Amazon CloudFront, Google, Fastly and Akamai.</p><p>Running these tests lets us gather performance data directly from the user's perspective, providing a genuine comparison of different network speeds. We do this to understand where our network is fastest and, more importantly, where we can make further improvements. For a deeper dive into the technical details, the <a href="https://blog.cloudflare.com/introducing-radar-internet-quality-page/"><u>Speed Week blog post</u></a> covers the full methodology.</p><p>By using RUM data, we track key metrics like TCP Connection Time, Time to First Byte (TTFB), and Time to Last Byte (TTLB). These are widely recognized, industry-standard metrics that allow us to objectively measure how quickly and efficiently a website loads for actual users. By monitoring these benchmarks, we can objectively compare our performance against other networks.</p><p>We specifically chose the top 1000 networks by estimated population from APNIC, excluding those that aren’t last-mile ISPs. Consistency is key: by analyzing the same group of networks in every cycle, we ensure our measurements and reporting remain reliable and directly comparable over time.</p>
    <div>
      <h3>How do the results compare across countries?</h3>
      <a href="#how-do-the-results-compare-across-countries">
        
      </a>
    </div>
    <p>The map below shows the fastest providers per country and Cloudflare is fastest in dozens of countries. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/b6jJn6IQTCWQDhjHdtb9P/9a324658130c08caf865a81f604b2000/image5.png" />
          </figure><p>The color coding is generated by grouping all of our measurements by the country they originate from. Then we look at the trimean measurements for each provider to identify who is the fastest in each. Akamai was measured as well, but providers only appear on the map if they ranked first in at least one country, which Akamai does not anywhere in the world.</p><p>Margins are slim: the fastest provider in a country is often determined by <a href="https://www.cloudflare.com/learning/performance/glossary/what-is-latency/"><u>latency</u></a> differences so small that the leader is ahead by less than 5%. As an example, let’s look at India, a country where we are currently the second-fastest provider.</p><table><tr><td><p><b>India (IN)</b></p></td><td><p></p></td><td><p></p></td><td><p></p></td></tr><tr><td><p><b>Rank</b></p></td><td><p><b>Entity </b></p></td><td><p><b>Connect Time (Trimean)</b></p></td><td><p><b>#1 Diff</b></p></td></tr><tr><td><p>#1</p></td><td><p>CloudFront</p></td><td><p>107 ms</p></td><td><p>-</p></td></tr><tr><td><p>#2</p></td><td><p>Cloudflare</p></td><td><p>113 ms</p></td><td><p>+4.81% (+5.16 ms)</p></td></tr><tr><td><p>#3</p></td><td><p>Google</p></td><td><p>117 ms</p></td><td><p>+8.74% (+9.39 ms)</p></td></tr><tr><td><p>#4 </p></td><td><p>Fastly</p></td><td><p>133 ms</p></td><td><p>+24% (+26 ms)</p></td></tr><tr><td><p>#5</p></td><td><p>Akamai</p></td><td><p>144 ms</p></td><td><p>+34% (+37 ms)</p></td></tr></table><p>In India, Cloudflare is 5 ms behind CloudFront, the #1 provider (to put milliseconds into perspective, the average human eye blink lasts between 100 ms and 400 ms). The competition for the number one spot in many countries is fierce and often shifts day by day. For example, in Mexico on Tuesday, August 5th, Cloudflare was the second-fastest provider by 0.73 ms, but then on Tuesday, August 12th, Cloudflare was the fastest provider by 3.72 ms. 
</p><table><tr><td><p><b>Mexico (MX)</b></p></td><td><p></p></td><td><p></p></td><td><p></p></td><td><p></p></td></tr><tr><td><p><b>Date</b></p></td><td><p><b>Rank</b></p></td><td><p><b>Entity </b></p></td><td><p><b>Connect Time (Trimean)</b></p></td><td><p><b>#1 Diff</b></p></td></tr><tr><td><p>August 5, 2025</p></td><td><p>#1</p></td><td><p>CloudFront</p></td><td><p>116 ms</p></td><td><p>-</p></td></tr><tr><td><p></p></td><td><p>#2</p></td><td><p>Cloudflare</p></td><td><p>116 ms</p></td><td><p>+0.63% (+0.73 ms)</p></td></tr><tr><td><p>August 12, 2025</p></td><td><p>#1</p></td><td><p>Cloudflare</p></td><td><p>106 ms</p></td><td><p>-</p></td></tr><tr><td><p></p></td><td><p>#2</p></td><td><p>CloudFront</p></td><td><p>109 ms</p></td><td><p>+3.52% (+3.72 ms)</p></td></tr></table><p>Because ranking reorderings are common, we also review country and network level rankings to evaluate and benchmark our performance. </p>
    <div>
      <h3>Focusing on where we are not the fastest yet</h3>
      <a href="#focusing-on-where-we-are-not-the-fastest-yet">
        
      </a>
    </div>
    <p>As mentioned above, in September 2024, Cloudflare was fastest in 44% of measured ISPs. These values can shift as providers constantly make improvements to their networks. One way we prioritize improvements is to look not just at where we are not the fastest, but at how far we are from the leader.</p><p>In these locations we tend to pace extremely close to the fastest provider, giving us an opportunity to capture the top spot as we <a href="https://blog.cloudflare.com/20-percent-internet-upgrade/">relentlessly improve</a>. In over 50% of the networks where Cloudflare is 2nd, the difference from the top provider is less than 5% (10 ms or less).</p><table><tr><td><p><b>Country</b></p></td><td><p><b>ASN</b></p></td><td><p><b>#1</b></p></td><td><p><b>Cloudflare Rank</b></p></td><td><p><b>#1 Diff (ms)</b></p></td><td><p><b>#1 Diff (%)</b></p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS36352</b></p></td><td><p>Google</p></td><td><p><b>2</b></p></td><td><p>25 ms</p></td><td><p>32%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS46475</b></p></td><td><p>Google</p></td><td><p><b>2</b></p></td><td><p>35 ms</p></td><td><p>29%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS29802</b></p></td><td><p>Google</p></td><td><p><b>2</b></p></td><td><p>8.03 ms</p></td><td><p>21%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS20473</b></p></td><td><p>Google</p></td><td><p><b>2</b></p></td><td><p>15 ms</p></td><td><p>13%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS7018</b></p></td><td><p>CloudFront</p></td><td><p><b>2</b></p></td><td><p>23 ms</p></td><td><p>13%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS4181</b></p></td><td><p>CloudFront</p></td><td><p><b>2</b></p></td><td><p>8.19 ms</p></td><td><p>11%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS62240</b></p></td><td><p>Google</p></td><td><p><b>2</b></p></td><td><p>18 
ms</p></td><td><p>9.77%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS22773</b></p></td><td><p>CloudFront</p></td><td><p><b>2</b></p></td><td><p>12 ms</p></td><td><p>9.48%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS6167</b></p></td><td><p>CloudFront</p></td><td><p><b>2</b></p></td><td><p>13 ms</p></td><td><p>7.55%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS11427</b></p></td><td><p>Google</p></td><td><p><b>2</b></p></td><td><p>9.33 ms</p></td><td><p>5.27%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS6614</b></p></td><td><p>CloudFront</p></td><td><p><b>2</b></p></td><td><p>6.68 ms</p></td><td><p>4.12%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS4922</b></p></td><td><p>Google</p></td><td><p><b>2</b></p></td><td><p>3.38 ms</p></td><td><p>3.86%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS11492</b></p></td><td><p>Fastly</p></td><td><p><b>2</b></p></td><td><p>3.73 ms</p></td><td><p>3.33%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS11351</b></p></td><td><p>Google</p></td><td><p><b>2</b></p></td><td><p>5.14 ms</p></td><td><p>3.04%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS396356</b></p></td><td><p>Google</p></td><td><p><b>2</b></p></td><td><p>4.12 ms</p></td><td><p>2.23%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS212238</b></p></td><td><p>Google</p></td><td><p><b>2</b></p></td><td><p>3.42 ms</p></td><td><p>1.35%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS20055</b></p></td><td><p>Fastly</p></td><td><p><b>2</b></p></td><td><p>1.22 ms</p></td><td><p>1.33%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS40021</b></p></td><td><p>CloudFront</p></td><td><p><b>2</b></p></td><td><p>2.06 ms</p></td><td><p>0.91%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS12271</b></p></td><td><p>Fastly</p></td><td><p><b>2</b></p></td><td><p>1.26 
ms</p></td><td><p>0.89%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS141039</b></p></td><td><p>CloudFront</p></td><td><p><b>2</b></p></td><td><p>1.26 ms</p></td><td><p>0.88%</p></td></tr></table><p>In 50% of the networks where Cloudflare is 3rd, the difference from the top provider is less than 10% (10 ms or less). Margins are small, suggesting that where Cloudflare isn’t number one, we’re extremely close to our competitors, and the top spot changes day over day. </p><table><tr><td><p><b>Country</b></p></td><td><p><b>ASN</b></p></td><td><p><b>#1</b></p></td><td><p><b>Cloudflare Rank</b></p></td><td><p><b>#1 Diff (ms)</b></p></td><td><p><b>#1 Diff (%)</b></p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS6461</b></p></td><td><p>Google</p></td><td><p><b>3</b></p></td><td><p>33 ms</p></td><td><p>39%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS81</b></p></td><td><p>Fastly</p></td><td><p><b>3</b></p></td><td><p>43 ms</p></td><td><p>35%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS14615</b></p></td><td><p>Google</p></td><td><p><b>3</b></p></td><td><p>24 ms</p></td><td><p>24%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS13977</b></p></td><td><p>CloudFront</p></td><td><p><b>3</b></p></td><td><p>21 ms</p></td><td><p>19%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS33363</b></p></td><td><p>Google</p></td><td><p><b>3</b></p></td><td><p>29 ms</p></td><td><p>18%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS63949</b></p></td><td><p>Google</p></td><td><p><b>3</b></p></td><td><p>9.56 ms</p></td><td><p>14%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS14593</b></p></td><td><p>Fastly</p></td><td><p><b>3</b></p></td><td><p>17 ms</p></td><td><p>13%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS23089</b></p></td><td><p>CloudFront</p></td><td><p><b>3</b></p></td><td><p>7.4 
ms</p></td><td><p>11%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS16509</b></p></td><td><p>Fastly</p></td><td><p><b>3</b></p></td><td><p>10 ms</p></td><td><p>9.48%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS209</b></p></td><td><p>CloudFront</p></td><td><p><b>3</b></p></td><td><p>9.69 ms</p></td><td><p>6.87%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS27364</b></p></td><td><p>CloudFront</p></td><td><p><b>3</b></p></td><td><p>8.76 ms</p></td><td><p>6.61%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS11404</b></p></td><td><p>CloudFront</p></td><td><p><b>3</b></p></td><td><p>6.11 ms</p></td><td><p>6.16%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS46690</b></p></td><td><p>CloudFront</p></td><td><p><b>3</b></p></td><td><p>5.91 ms</p></td><td><p>5.43%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS136787</b></p></td><td><p>CloudFront</p></td><td><p><b>3</b></p></td><td><p>8.23 ms</p></td><td><p>5.18%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS6079</b></p></td><td><p>Fastly</p></td><td><p><b>3</b></p></td><td><p>5.45 ms</p></td><td><p>4.49%</p></td></tr><tr><td><p><b>US</b></p></td><td><p><b>AS5650</b></p></td><td><p>Google</p></td><td><p><b>3</b></p></td><td><p>3.91 ms</p></td><td><p>3.35%</p></td></tr></table><p>Countries with an abundance of networks, like the United States, have a lot of noise we need to calibrate against. For example, the graph below represents the performance of all providers for a major ISP like AS701 (Verizon Business).</p><p><sub>AS701 (Verizon Business) Connect Time (P95) between 2025-08-09 and 2025-09-09</sub></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7kiADi8Ld1teDWjMgnE4Qq/d6d3b1ca387ac12de1aac86b415129d1/image6.png" />
          </figure><p>In this chart, the “P95” value, or 95th percentile, refers to one point of a percentile distribution. The P95 shows the value below which 95% of the data points fall and is particularly good at identifying the slowest or worst-case user experiences, such as those on poor networks or older devices. Additionally, we review the lower percentiles, shown in the table below, which tell us how performance varies across the full range of data. When we do so, the picture becomes more nuanced.</p><table><tr><td><p><b>AS701 (Verizon Business) Provider Rankings for Connect Time at P95, P75 and P50</b></p></td><td><p></p></td><td><p></p></td><td><p></p></td><td><p></p></td></tr><tr><td><p><b>Rank</b></p></td><td><p><b>Entity </b></p></td><td><p><b>Connect Time (P95)</b></p></td><td><p><b>Connect Time (P75)</b></p></td><td><p><b>Connect Time (P50)</b></p></td></tr><tr><td><p>#1</p></td><td><p>Fastly</p></td><td><p>128 ms</p></td><td><p>66 ms</p></td><td><p>48 ms</p></td></tr><tr><td><p>#2</p></td><td><p>Google</p></td><td><p>134 ms</p></td><td><p>72 ms</p></td><td><p>54 ms</p></td></tr><tr><td><p>#3</p></td><td><p>CloudFront</p></td><td><p>139 ms</p></td><td><p>67 ms</p></td><td><p>47 ms</p></td></tr><tr><td><p>#4 </p></td><td><p>Cloudflare</p></td><td><p>141 ms</p></td><td><p>68 ms</p></td><td><p>49 ms</p></td></tr><tr><td><p>#5</p></td><td><p>Akamai</p></td><td><p>160 ms</p></td><td><p>84 ms</p></td><td><p>61 ms</p></td></tr></table><p>At the 95th percentile for AS701, Cloudflare ranks 4th, but at the 75th and 50th, Cloudflare is only 2 milliseconds slower than the fastest provider. In other words, when reviewing more than one point along the distribution at the network level, Cloudflare is keeping up with the top providers for the less extreme samples.
To capture these details, it’s important to look at the range of outcomes, not just one percentile.</p><p>To better reflect the full spectrum of user experiences, we started using the trimean in July 2025 to rank providers. This metric combines values from across the distribution of data (specifically the 75th, 50th and 25th percentiles), which gives a more balanced representation of overall performance rather than only focusing on the extremes. Summarizing user experience with a single number is always challenging, but the trimean helps us compare providers in a way that better reflects how users actually experience the Internet.</p><p>By this measure, Cloudflare is the fastest provider in 40% of networks under the majority of real-world conditions, not just in worst-case scenarios. Still, the 95th percentile remains key to understanding how performance holds up in challenging conditions and where other providers might fall behind. When we review the 95th percentile across the same date range for all the networks, not just AS701, Cloudflare is still fastest in roughly the same share of networks, leading the next fastest provider by 103 networks. Being fastest in that many networks tells us that Cloudflare is particularly strong in the challenging, long-tail cases that other providers struggle with.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3jOvjHJBG1fefaz25yi8sk/4e649bdeaf743b3bbeb8cd696fe9669d/image4.png" />
          </figure><p>Our performance data shows that even when we are not the top-ranked provider, we remain exceptionally competitive, often trailing the leader by a mere handful of percentage points. Our strength at the 95th percentile also highlights our performance in the most challenging scenarios. Cloudflare’s ability to outperform other providers in the worst case is a testament to the resilience and efficiency of our network.</p><p>Moving forward, we'll continue to share multiple metrics and keep improving our network, and we’ll use this data to do it! Let’s talk about how. </p>
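<p>To make the trimean metric concrete, here is a minimal JavaScript sketch of how such a ranking value could be computed from latency samples. The nearest-rank percentile method and the sample values are illustrative assumptions, not Cloudflare’s actual measurement pipeline:</p>

```javascript
// Nearest-rank percentile of an ascending-sorted array (0 < p <= 100).
function percentile(sorted, p) {
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Tukey's trimean: (P25 + 2 * P50 + P75) / 4.
// It weights the median most heavily while still reflecting
// the spread between the quartiles.
function trimean(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  return (
    (percentile(sorted, 25) +
      2 * percentile(sorted, 50) +
      percentile(sorted, 75)) / 4
  );
}

// Hypothetical connect-time samples (ms) for one network.
const samples = [40, 42, 45, 48, 50, 55, 60, 70, 90, 140];
console.log(trimean(samples)); // 53.75
```

<p>Because the median gets double weight, a single extreme outlier (like the 140 ms sample above) moves the trimean far less than it would move a plain mean, which is why the metric reads as "typical" rather than "worst-case" performance.</p>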
    <div>
      <h3>How does Cloudflare use this data to improve?</h3>
      <a href="#how-does-cloudflare-use-this-data-to-improve">
        
      </a>
    </div>
    <p>Cloudflare applies this data to identify regions and networks that need prioritization. If we are consistently slower than other providers in a network, we want to know why, so we can fix it.</p><p>For example, the graph below shows the 95th percentile of Connect Time for AS8966. Prior to June 13, 2025, our performance was suffering, and we were the slowest provider for the network. By referencing our own measurement data, we prioritized partner data centers in the region and almost immediately performance improved for users connecting through AS8966.</p><p>Cloudflare’s partner data centers consist of collaborations with local service providers who host Cloudflare's equipment within their own facilities. This allows us to expand our network to new locations and get closer to users more quickly. In the case of AS8966, adding a new partner data center took us from being ranked last to ranked first and improved latency by roughly 150ms in one day. By using a data-driven approach, we made our network faster and most importantly, improved the end user experience.</p><p><sub>TCP Connect Time (P95) for AS8966</sub></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1kX76JFqDOZq0FF798XLRM/4dc346e0a33dd564f7d42db24f91cae1/image3.png" />
          </figure>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We are always working to build a faster network and will continue sharing our process as we go. Our approach is straightforward: identify performance bottlenecks, implement fixes, and report the results. We believe in being transparent about our methods and are committed to a continuous cycle of improvement to achieve the best possible performance. Follow our blog for the latest performance updates as we continue to optimize our network and share our progress.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Network Performance Update]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Network Services]]></category>
            <guid isPermaLink="false">6pNJpwIyHtXuRYh4AhebeR</guid>
            <dc:creator>Lai Yi Ohlsen</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Cloudflare uses the world’s greatest collection of performance data to make the world’s fastest global network even faster]]></title>
            <link>https://blog.cloudflare.com/how-cloudflare-uses-the-worlds-greatest-collection-of-performance-data/</link>
            <pubDate>Fri, 26 Sep 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare is using its vast traffic to send responses faster than ever before, by learning the characteristics of each individual network and tuning our congestion control system.    ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare operates the fastest network on the planet. <a href="https://blog.cloudflare.com/20-percent-internet-upgrade"><u>We’ve shared an update today</u></a> about how we are overhauling the software technology that accelerates every server in our fleet, improving speed globally.</p><p>That is not where the work stops, though. To improve speed even further, we also have to make sure that our network swiftly handles the Internet-scale congestion that hits it every day, routing traffic to our now-faster servers.</p><p>We have invested in congestion control for years. Today, we are excited to share how we are applying a superpower of our network, our massive Free Plan user base, to optimize performance and find the best way to route traffic across our network for all our customers globally.</p><p>Early results show performance improvements averaging 10% over the prior baseline. We achieved this by applying different algorithmic methods to improve performance based on the data we observe about the Internet each day. We are excited to begin rolling out these improvements to all customers.</p>
    <div>
      <h3>How does traffic arrive in our network?</h3>
      <a href="#how-does-traffic-arrive-in-our-network">
        
      </a>
    </div>
    <p>The Internet is a massive collection of interconnected networks, each composed of many machines (“nodes”). Data is transmitted by breaking it up into small packets, and passing them from one machine to another (over a “link”). Each one of these machines is linked to many others, and each link has limited capacity.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7fTvUVuusvuivy20i1ikSm/4a46cfc5bf3263a8712a52ed3b207af6/image5.png" />
          </figure><p>When we send a packet over the Internet, it will travel in a series of “hops” over the links from A to B.  At any given time, there will be one link (one “hop”) with the least available capacity for that path. It doesn’t matter where in the connection this hop is — it will be the bottleneck.</p><p>But there’s a challenge — when you’re sending data over the Internet, you don’t know what route it’s going to take. In fact, each node decides for itself which route to send the traffic through, and different packets going from A to B can take entirely different routes. The dynamic and decentralized nature of the system is what makes the Internet so effective, but it also makes it very hard to work out how much data can be sent.  So — how can a sender know where the bottleneck is, and how fast to send data?</p><p>Between Cloudflare nodes, our <a href="https://developers.cloudflare.com/argo-smart-routing/"><u>Argo Smart Routing</u></a> product takes advantage of our visibility into the global network to speed up communication. Similarly, when we initiate connections to customer origins, we can leverage Argo and other insights to optimize them. However, the speed of a connection from your phone or laptop (the Client below) to the nearest Cloudflare datacenter will depend on the capacity of the bottleneck hop in the chain from you to Cloudflare, which happens outside our network.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ca9pXZUfgCIlCyi5LMikk/220154f58f8dbfbfb45b49db4f068dc9/image2.png" />
          </figure>
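<p>The bottleneck idea above can be sketched in a few lines of JavaScript. The path and its link capacities are hypothetical; the point is simply that end-to-end throughput is capped by the hop with the least available capacity, wherever it sits:</p>

```javascript
// Hypothetical links along one route from a client to a Cloudflare
// data center, with available capacity in Mbps (illustrative numbers).
const path = [
  { from: "client", to: "isp-router", capacityMbps: 940 },
  { from: "isp-router", to: "peering-point", capacityMbps: 100 }, // bottleneck
  { from: "peering-point", to: "cloudflare-edge", capacityMbps: 400 },
];

// The end-to-end capacity of a path is the minimum over its links:
// no matter how fast the other hops are, data cannot flow through
// faster than the slowest hop allows.
function bottleneck(links) {
  return links.reduce((min, link) => Math.min(min, link.capacityMbps), Infinity);
}

console.log(bottleneck(path)); // 100
```

<p>Because each packet may take a different route, a real sender never sees this list of links; congestion control exists precisely to estimate that minimum from indirect signals.</p>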
    <div>
      <h3>What happens when too much data arrives at once?</h3>
      <a href="#what-happens-when-too-much-data-arrives-at-once">
        
      </a>
    </div>
    <p>If too much data arrives at any one node in a network in the path of a request being processed, the requestor will experience delays due to congestion. The data will either be queued for a while (risking <a href="https://en.wikipedia.org/wiki/Bufferbloat"><u>bufferbloat</u></a>), or some of it will simply get dropped. Protocols like TCP and QUIC respond to packets being dropped by retransmitting the data, but this introduces a delay, and can even make the problem worse by further overloading the limited capacity.</p><p>If cloud infrastructure providers like Cloudflare don’t manage congestion carefully, we risk overloading the system, slowing down the rate of data getting through. This actually <a href="https://en.wikipedia.org/wiki/Network_congestion#Congestive_collapse"><u>happened in the early days of the Internet</u></a>. To avoid this, the Internet infrastructure community has developed systems for controlling congestion, which give everyone a turn to send their data, without overloading the network. This is an evolving challenge, as the network grows ever more complicated, and finding the best method of congestion control is a constant pursuit. Many different algorithms have been developed; they draw on different sources of information and signals, optimize toward different goals, and respond to congestion in different ways.</p><p>Congestion control algorithms use a number of signals to estimate the right rate to send traffic, without knowing how the network is set up. One important signal has been <b>loss</b>. When a packet is received, the receiver sends an “ACK,” telling the sender the packet got through. If it’s dropped somewhere along the way, the sender never gets the receipt, and after a timeout will treat the packet as having been lost.</p><p>More recent algorithms have used additional data.
For example, a popular algorithm called <a href="https://blog.cloudflare.com/http-2-prioritization-with-nginx/#bbr-congestion-control"><u>BBR (Bottleneck Bandwidth and Round-trip propagation time)</u></a>, which we have been using for much of our traffic, attempts to build, during each connection, a model of the maximum amount of data that can be transmitted in a given time period, using estimates of the round trip time as well as loss information.</p><p>The best algorithm to use often depends on the workload. For example, for interactive traffic like a video call, an algorithm that biases towards sending too much traffic can cause queues to build up, leading to high latency and poor video experience. If one were to optimize solely for that use case, though, and avoid queueing by sending less traffic, the network would not make the best use of the connection for clients doing bulk downloads. The right optimization depends on a lot of different factors. But we have visibility into many of them!</p><p>BBR was an exciting development in congestion control, moving from reactive loss-based approaches to proactive model-based optimization, resulting in significantly better performance for modern networks. Our data gives us an opportunity to go further, applying different algorithmic methods to improve performance. </p>
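<p>To make the loss signal concrete, here is a minimal sketch of classic additive-increase/multiplicative-decrease (AIMD) congestion control, the reactive loss-based style (as in TCP Reno) that model-based algorithms like BBR improved upon. This is a teaching sketch with illustrative numbers, deliberately not BBR and not Cloudflare’s implementation:</p>

```javascript
// Classic loss-based congestion control (AIMD): grow the send window
// slowly while ACKs arrive, and cut it sharply when loss is detected.
class AimdWindow {
  constructor() {
    this.cwnd = 10; // congestion window, in packets (illustrative start)
  }

  // Additive increase: each ACK grows the window by 1/cwnd,
  // i.e. roughly one extra packet per round trip.
  onAck() {
    this.cwnd += 1 / this.cwnd;
  }

  // Multiplicative decrease: a missing ACK (timeout) is treated as
  // congestion somewhere on the path, so the window is halved.
  onLoss() {
    this.cwnd = Math.max(1, this.cwnd / 2);
  }
}

const win = new AimdWindow();
for (let i = 0; i < 40; i++) win.onAck(); // window grows while ACKs flow
win.onLoss();                             // one loss halves it
console.log(win.cwnd);
```

<p>The key property is that the sender never needs to know the network topology: it probes for capacity by increasing, and interprets loss as the bottleneck pushing back.</p>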
    <div>
      <h3>How can we do better?</h3>
      <a href="#how-can-we-do-better">
        
      </a>
    </div>
    <p>All the existing algorithms are constrained to use only information gathered during the lifetime of the current connection. Thankfully, we know far more about the Internet at any given moment than this!  With Cloudflare’s perspective on traffic, we see much more than any one customer or ISP might see at any given time.</p><p>Every day, we see traffic from essentially every major network on the planet. When a request comes into our system, we know what client device we’re talking to, what type of network is enabling the connection, and whether we’re talking to consumer ISPs or cloud infrastructure providers.</p><p>We know about the patterns of load across the global Internet, and the locations where we believe systems are overloaded, within our network, or externally. We know about the networks that have stable properties, which have high packet loss due to cellular data connections, and the ones that traverse low earth orbit satellite links and radically change their routes every 15 seconds. </p>
    <div>
      <h3>How does this work?</h3>
      <a href="#how-does-this-work">
        
      </a>
    </div>
    <p>We have been in the process of migrating our network technology stack to use a new platform, powered by Rust, that provides more flexibility to experiment with varying the parameters in the algorithms used to handle congestion control. Then we needed data.</p><p>The data powering these experiments needs to reflect the measure we’re trying to optimize, which is the user experience. It’s not enough that we’re sending data to nearly all the networks on the planet; we have to be able to see what experience customers actually have. So how do we do that, at our scale?</p><p>First, we have detailed “passive” logs of the rate at which data can be sent from our network, and how long it takes for the destination to acknowledge receipt. This covers all our traffic, and gives us an idea of how quickly the data was received by the client, but doesn’t necessarily tell us about the user experience.</p><p>Next, we have a system for gathering <a href="https://developers.cloudflare.com/speed/observatory/rum-beacon/"><u>Real User Measurement</u></a> (RUM) data, which records information in supported web browsers about metrics such as Page Load Time (PLT). Any Cloudflare customer can enable this and will receive detailed insights in their dashboard. In addition, we use this metadata in aggregate across all our customers and networks to understand what customers are really experiencing. </p><p>However, RUM data is only present for a small proportion of connections across our network. So, we’ve been working to find a way to predict the RUM measures by extrapolating from the data we see only in passive logs. For example, here are the results of an experiment we performed comparing two different algorithms against the CUBIC baseline.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/u3C9Otkgf46yMcc2gGQTe/adde0c39d62f2fb6c88b439d826e8f4a/image4.png" />
          </figure><p>Now, here’s the same timescale, observed through the prediction based on our passive logs. The curves are very similar, and even more importantly, the ratio between the curves is very similar. <b>This is huge!</b> We can use a relatively small amount of RUM data to validate our findings, but optimize our network in a much more fine-grained way by using the full firehose of our passive logs.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ExUljHIAUBxFiUuFccRtY/6db8fa8d3d388f484b6ac7f71eeab83f/image3.png" />
          </figure><p>Extrapolating too far becomes unreliable, so we’re also working with some of our largest customers to improve our visibility of the behaviour of the network from their clients’ point of view, which allows us to extend this predictive model even further. In return, we’ll be able to give our customers insights into the true experience of their clients, in a way that no other platform can offer.</p>
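<p>The validation idea above, that the ratio between algorithms matters more than absolute agreement between the two measurement methods, can be sketched as follows. All numbers are hypothetical, not Cloudflare data:</p>

```javascript
// Hypothetical medians (ms) of a page-load-style metric for two
// congestion control algorithms, measured two different ways.
const rum = { algoA: 1200, algoB: 1080 };      // sparse browser RUM data
const predicted = { algoA: 950, algoB: 860 };  // dense passive-log prediction

// The absolute values disagree (the prediction runs lower overall),
// but what we check is whether both methods agree on the A-vs-B ratio.
const rumRatio = rum.algoA / rum.algoB;
const predictedRatio = predicted.algoA / predicted.algoB;

console.log(rumRatio.toFixed(3));        // "1.111"
console.log(predictedRatio.toFixed(3));  // "1.105"

// If the ratios agree closely, the passive logs can stand in for RUM
// when comparing algorithms across the full volume of traffic.
const agrees = Math.abs(rumRatio - predictedRatio) < 0.02;
console.log(agrees); // true
```

<p>In other words, sparse ground truth validates the comparison, while the dense proxy metric supplies the statistical power.</p>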
    <div>
      <h3>What is next?</h3>
      <a href="#what-is-next">
        
      </a>
    </div>
    <p>We’re currently running our experiments and improved algorithms for congestion control on all of our free tier QUIC traffic. As we learn more, validate with more complex customer workloads, and expand to TCP traffic, we’ll gradually roll this out to all our customers, for all traffic, over 2026 and beyond. The results have led to as much as a 10% improvement compared to the baseline!</p><p>We’re working with a select group of enterprises to test this in an early access program. If you’re interested in learning more, <a href="https://forms.gle/MCBjEEhYQvwWFccx9"><u>contact us</u></a>! </p> ]]></content:encoded>
            <category><![CDATA[Speed]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">18yRAwafvLrtG4qnJbMd3P</guid>
            <dc:creator>Steve Goldsmith</dc:creator>
            <dc:creator>Richard Boulton</dc:creator>
        </item>
        <item>
            <title><![CDATA[Every Cloudflare feature, available to everyone]]></title>
            <link>https://blog.cloudflare.com/enterprise-grade-features-for-all/</link>
            <pubDate>Thu, 25 Sep 2025 14:05:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare is making every feature available to any customer. ]]></description>
            <content:encoded><![CDATA[ <p>Over the next year Cloudflare will make nearly every feature we offer available to any customer who wants to buy and use it, regardless of whether they have an enterprise account. No need to pick up a phone and talk to a sales team member. No requirement to find time with a solutions engineer on our team to turn on a feature. No contract necessary. We believe that if you want to use something we offer, you should just be able to buy it.</p><p>Today’s launch starts by moving dashboard Single Sign-On (SSO) out of our enterprise plan and making it available to any user. That capability is the first of many. We will be sharing updates over the next few months as more and more features become available for purchase on any plan.</p><p>We are also making a commitment to ensuring that all future releases will follow this model. The goal is not to restrict new tools to the enterprise tier for some amount of time before making them widely available. We believe helping build a better Internet means making sure the best tools are available to anyone who needs them.</p>
    <div>
      <h3>Enterprise grade for everyone</h3>
      <a href="#enterprise-grade-for-everyone">
        
      </a>
    </div>
    <p>It’s not enough to build the best tools on the web. At Cloudflare our mission is to help build a better Internet and that means making the tools we build accessible. We believe the best way to make the Internet faster and more secure is to put powerful features into the hands of as many people as possible.</p><p>We first launched an Enterprise tier years ago when larger customers came to us looking to scale their usage of Cloudflare in new ways. They needed procurement options beyond a credit card, like invoices, custom contracts, and dedicated support. This offering was a necessary and important step to bring the benefits of our network and tools to large organizations with complex needs.</p><p>This created an unintended side effect in how we shipped products. Some of our most powerful and innovative features were launched within an enterprise-only tier. This created a gap, a two-tiered system where some of the most advanced features were reserved only for the largest companies.</p><p>It also created a divergence in our product development. Features built for our self-service customers had to be incredibly simple and intuitive from day-one. Features designated “enterprise-only” didn’t always face that same pressure to scale – we could instead rely on our solutions teams or partners to help set up and support.</p><p>It’s time to fix that. Starting today, we are doing away with the concept of “enterprise-only” features. Over the coming months and quarters, we will make many of our most advanced capabilities available to all of our customers.</p><p>The change will help build a more secure Internet by removing barriers to the adoption of the most advanced tools available. The change improves the experience for all customers. Smaller teams on our self-service plans will have access to the most powerful configuration options we offer. Existing enterprise teams will have easier pathways to adopt new tools without calling their account manager. 
And our own Product teams have even more reason to continue to make all features we ship easy to use.</p><p>Today we are beginning with dashboard SSO with instructions on how to begin setting that up right now below. It is the first of many though and capabilities like apex proxying and expanded upload limits, along with many others of our most requested enterprise features, will follow.</p>
    <div>
      <h3>Starting with how you sign in to Cloudflare</h3>
      <a href="#starting-with-how-you-sign-in-to-cloudflare">
        
      </a>
    </div>
    <p>One example of a feature we launched only to enterprise customers because of the complexity in setting it up is SSO. Enterprise teams maintain their own identity provider where they can manage internal employee accounts and how their team members log into different services.</p><p>They integrate these identity providers with the tools their employees need so that team members do not need to create and remember a username and password for each and every service. More importantly, the management of identity in a single place gives enterprises the ability to control authentication policies, onboard and offboard users, and hand out licenses for tools.</p><p>We first launched our own SSO support <a href="https://blog.cloudflare.com/introducing-single-sign-on-for-the-cloudflare-dashboard/"><u>way back in 2018</u></a>. In the last seven years we have been helping thousands of enterprise customers manually set this up, but we know that teams of all sizes rely on the security and convenience of an identity provider. As part of this announcement, the first enterprise feature we are making available to everyone is dashboard SSO.</p><p>The functionality is available immediately to anyone on any plan. To get started, follow the instructions <a href="https://developers.cloudflare.com/cloudflare-one/applications/configure-apps/dash-sso-apps/"><u>here</u></a> to integrate your identity provider with Cloudflare and to then connect your domain with your account. By setting up your identity provider for dashboard SSO you will also be able to begin using the vast majority of our Zero Trust security features, as well, which are available <a href="https://www.cloudflare.com/plans/zero-trust-services/"><u>at no cost</u></a> for up to 50 users.</p><p>We also know that some teams are too early or distributed to have a full-fledged identity provider but want the convenience and security of managing logins in one place. 
To that end, we are also excited to launch support for GitHub as a social login provider to the Cloudflare dashboard as part of today’s announcement.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1BI0r2eZHaNulh9ur0L5HD/386d1f63b0ece9bea08c51c036b6584c/unnamed__35_.png" />
          </figure>
    <div>
      <h3>And extending to almost everything else over the next year</h3>
      <a href="#and-extending-to-almost-everything-else-over-the-next-year">
        
      </a>
    </div>
    <p>We prioritized dashboard SSO because just about every team that uses Cloudflare wants it. This one change helps make nearly every customer safer by allowing them to centrally manage team access. As we burn down the list of previously enterprise-only features, we will continue targeting those that have similar broad impact.</p><p>Some capabilities, like Magic Transit, have less broad appeal. The organizations that maintain their own networks and want to deploy Magic Transit tend to already want to be enterprise customers for account management reasons. That said, we still can improve their experience by making tools like Magic Transit available to all plans because we will have to remove some of the friction in the setup that we have historically just solved with people hours from our solution engineers and partners.</p><p>We also realize that the way some of these features are priced only made sense with an invoice or enterprise license agreement model. To make this work, we need to revisit how some of our usage metering and billing functions. That will continue to be a priority for us, and we are excited about how this will push us to continue making our packaging and billing even simpler for all customers.</p><p>There are some features that we can’t make available to everyone because of non-technical reasons. For example, using our China Network has complicated legal requirements in China that are impossible for us to manage for millions of customers. </p>
    <div>
      <h3>Self-service by default going forward</h3>
      <a href="#self-service-by-default-going-forward">
        
      </a>
    </div>
    <p>One thing we are not announcing today is a strategy to continue to release “enterprise-only” features for a while before they eventually make it to the self-service plans. Going forward, to launch something at Cloudflare the team will need to make sure that any customer can buy it off the shelf without talking to someone.</p><p>We expect that requirement to improve how all products are built here, not just the more advanced capabilities. We also consider it mission-critical. We have a long history of making the kinds of tools that only the largest businesses could buy available to anyone, from universal SSL <a href="https://blog.cloudflare.com/introducing-universal-ssl/"><u>over a decade ago</u></a> to newer features this week that were available for self-service plans immediately like <a href="https://blog.cloudflare.com/per-customer-bot-defenses/"><u>per-customer bot detection IDs</u></a> and security of data in transit <a href="https://blog.cloudflare.com/saas-to-saas-security/"><u>between SaaS applications</u></a>. We are excited to continue this tradition.</p>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>You can get started right now setting up dashboard SSO in your Cloudflare account using the documentation available <a href="https://developers.cloudflare.com/cloudflare-one/applications/configure-apps/dash-sso-apps/"><u>here</u></a>. We will continue to share updates as previously enterprise-only features are made available to any plan.
</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[PAYGO]]></category>
            <category><![CDATA[Enterprise]]></category>
            <category><![CDATA[Plans]]></category>
            <guid isPermaLink="false">5GlnTlWFL0ezm1fRnEzoF1</guid>
            <dc:creator>Dane Knecht</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare's developer platform keeps getting better, faster, and more powerful. Here's everything that's new.]]></title>
            <link>https://blog.cloudflare.com/cloudflare-developer-platform-keeps-getting-better-faster-and-more-powerful/</link>
            <pubDate>Thu, 25 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare's developer platform keeps getting better, faster, and more powerful. Here's everything that's new. ]]></description>
            <content:encoded><![CDATA[ <p>When you build on Cloudflare, we consider it our job to do the heavy lifting for you. That’s been true since we <a href="https://blog.cloudflare.com/introducing-cloudflare-workers/"><u>introduced Cloudflare Workers in 2017</u></a>, when we first provided a runtime for you where you could just focus on building. </p><p>That commitment is still true today, and many of today’s announcements are focused on just that — removing friction where possible to free you up to build something great. </p><p>There are only so many blog posts we can write (and that you can read)! We have been busy on a much longer list of new improvements, and many of them we’ve been rolling out consistently over the course of the year. Today’s announcement breaks down all the new capabilities in detail, in one single post. The features being released today include:</p><ul><li><p><a href="#more-node-js-apis-and-packages-just-work-on-workers"><u>Use more APIs from Node.js</u></a> — including node:fs and node:https</p></li><li><p><a href="#ai-search-formerly-autorag-now-with-more-models-to-choose-from"><u>Use models from different providers in AI Search</u></a> (formerly AutoRAG)</p></li><li><p>Deploy larger container instances and more concurrent instances to our Containers platform</p></li><li><p>Run 30 concurrent headless web browsers (previously 10), via the <a href="#playwright-in-browser-rendering-is-now-ga"><u>Browser Rendering API</u></a></p></li><li><p>Use the <a href="#playwright-in-browser-rendering-is-now-ga"><u>Playwright browser automation library</u></a> with the Browser Rendering API — now fully supported and GA</p></li><li><p>Use 4 vCPUs (prev 2) and 20GB of disk (prev 8GB) with <a href="#more-node-js-apis-and-packages-just-work-on-workers"><u>Workers Builds — now GA</u></a></p></li><li><p>Connect to production services and resources from local development with Remote Bindings — now GA</p></li><li><p><a
href="#infrequent-access-in-r2-is-now-ga"><u>R2 Infrequent Access GA</u></a> - lower-cost storage class for backups, logs, and long-tail content</p></li><li><p>Resize, clip and reformat video files on-demand with Media Transformations — now GA</p></li></ul><p>Alongside that, we’re constantly adding new building blocks, to make sure you have all the tools you need to build what you set out to. Those launches (that also went out today, but require a bit more explanation) include:</p><ul><li><p>Connect to Postgres databases <a href="http://blog.cloudflare.com/planetscale-postgres-workers"><u>running on Planetscale</u></a></p></li><li><p>Send transactional emails via the new <a href="http://blog.cloudflare.com/email-service"><u>Cloudflare Email Service</u></a></p></li><li><p>Run distributed SQL queries with the new <a href="http://blog.cloudflare.com/cloudflare-data-platform"><u>Cloudflare Data Platform</u></a></p></li><li><p>Deploy your own <a href="https://www.cloudflare.com/learning/ai/how-to-get-started-with-vibe-coding/">AI vibe coding</a> platform to Cloudflare with <a href="https://blog.cloudflare.com/deploy-your-own-ai-vibe-coding-platform"><u>VibeSDK</u></a></p></li></ul>
    <div>
      <h2>AI Search (formerly AutoRAG) — now with more models to choose from</h2>
      <a href="#ai-search-formerly-autorag-now-with-more-models-to-choose-from">
        
      </a>
    </div>
    <p>AutoRAG is now AI Search! The new name marks a bigger mission: to make world-class search infrastructure available to every developer and business. AI Search is no longer just about retrieval for LLM apps: it’s about giving you a fast, flexible index for your content that is ready to power any AI experience. With recent additions like <a href="https://blog.cloudflare.com/conversational-search-with-nlweb-and-autorag/"><u>NLWeb support</u></a>, we are expanding beyond simple retrieval to provide a foundation for top-quality search experiences that are open and built for the future of the web.</p><p>With AI Search, you can now use models from different providers like OpenAI and Anthropic. Last month during AI Week we announced <a href="https://blog.cloudflare.com/ai-gateway-aug-2025-refresh/"><u>BYO Provider Keys for AI Gateway</u></a>. That capability now extends to AI Search. By attaching your keys to the AI Gateway linked to your AI Search instance, you can use many more models for both embedding and inference.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5RUPN3CB5MOHuF0qcaJ9Nq/527f20fb8c2109c2007a5a3eeffaaadc/image2.png" />
          </figure><p>Once configured, your AI Search instance will be able to reference models available through your AI Gateway when making a <code>/ai-search</code> request:</p>
            <pre><code>export default {
  async fetch(request, env) {
    
    // Query your AI Search instance with a natural language question to an OpenAI model
    const result = await env.AI.autorag("my-ai-search").aiSearch({
      query: "What's new for Cloudflare Birthday Week?",
      model: "openai/gpt-5"
    });

    // Return only the generated answer as plain text
    return new Response(result.response, {
      headers: { "Content-Type": "text/plain" },
    });
  },
};</code></pre>
            <p>In the coming weeks we will also roll out updates to align the APIs with the new name. The existing APIs will continue to be supported for the time being. Stay tuned to the AI Search <a href="https://developers.cloudflare.com/changelog/?product=ai-search"><u>Changelog</u></a> and <a href="https://discord.cloudflare.com/"><u>Discord</u></a> for more updates!</p>
    <div>
      <h2>Connect to production services and resources from local development with Remote Bindings — now GA</h2>
      <a href="#connect-to-production-services-and-resources-from-local-development-with-remote-bindings-now-ga">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/workers/development-testing/#remote-bindings"><u>Remote bindings</u></a> for local development are generally available, supported in <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler</u></a> v4.37.0, the <a href="https://developers.cloudflare.com/workers/vite-plugin/"><u>Cloudflare Vite plugin</u></a>, and the <code>@cloudflare/vitest-pool-workers</code> package. Remote bindings are bindings configured to connect to a deployed resource on your Cloudflare account <i>instead</i> of a locally simulated resource. </p><p>For example, here’s how you can instruct Wrangler or Vite to send all requests to <code>env.MY_BUCKET</code> to the real, deployed R2 bucket instead of a locally simulated one: </p>
            <pre><code>{
  "name": "my-worker",
  "compatibility_date": "2025-09-25",

  "r2_buckets": [
    {
      "bucket_name": "my-bucket",
      "binding": "MY_BUCKET",
      "remote": true
    }
  ]
}</code></pre>
            <p>With the above configuration, all requests to <code>env.MY_BUCKET</code> will be proxied to the remote resource, but the Worker code will still execute locally. This means you get all the benefits of local development, like faster execution times, without having to seed local databases with data. </p><p>You can pair remote bindings with <a href="https://developers.cloudflare.com/workers/wrangler/environments/"><b><u>environments</u></b></a>, so that you can use staging data during local development and leave production data untouched. </p><p>For example, here’s how you could configure Wrangler or Vite to route all requests to <code>env.MY_BUCKET</code> to <code>staging-storage-bucket</code> when you run <code>wrangler dev --env staging</code> (<code>CLOUDFLARE_ENV=staging vite dev</code> if using Vite). </p>
            <pre><code>{
  "name": "my-worker",
  "compatibility_date": "2025-09-25",

  "env": {
    "staging": {
      "r2_buckets": [
        {
          "binding": "MY_BUCKET",
          "bucket_name": "staging-storage-bucket",
          "remote": true
        }
      ]
    },
    "production": {
      "r2_buckets": [
        {
          "binding": "MY_BUCKET",
          "bucket_name": "production-storage-bucket" 
        }
      ]
    }
  }
}</code></pre>
            
    <div>
      <h2>More Node.js APIs and packages “just work” on Workers</h2>
      <a href="#more-node-js-apis-and-packages-just-work-on-workers">
        
      </a>
    </div>
    <p>Over the past year, we have been hard at work to make Workers more compatible with Node.js packages and APIs.</p><p>Several weeks ago, <a href="https://blog.cloudflare.com/bringing-node-js-http-servers-to-cloudflare-workers/"><u>we shared how node:http and node:https APIs are now supported on Workers</u></a>. This means that you can run backend Express and Koa.js apps with only a few additional lines of code:</p>
            <pre><code>import { httpServerHandler } from 'cloudflare:node';
import express from 'express';

const app = express();

app.get('/', (req, res) =&gt; {
  res.json({ message: 'Express.js running on Cloudflare Workers!' });
});

app.listen(3000);
export default httpServerHandler({ port: 3000 });</code></pre>
            <p>And there’s much, much more. You can now:</p><ul><li><p>Read and write temporary files in Workers, using <code>node:fs</code></p></li><li><p>Do DNS lookups using <a href="https://one.one.one.one/"><u>1.1.1.1</u></a> with <code>node:dns</code></p></li><li><p>Use <code>node:net</code> and <code>node:tls</code> for first-class socket support</p></li><li><p>Use common hashing libraries with <code>node:crypto</code></p></li><li><p>Access environment variables in a Node-like fashion on <code>process.env</code></p></li></ul><p><a href="https://blog.cloudflare.com/nodejs-workers-2025"><u>Read our full recap of the last year’s Node.js-related changes</u></a> for all the details.</p><p>With these changes, Workers become even more powerful and easier to adopt, regardless of where you’re coming from. The APIs you’re familiar with are there, and more of the packages you need will just work.</p>
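<p>The hashing and environment-variable items above are plain Node.js APIs, so a sketch of them is just ordinary Node code; with Node.js compatibility enabled, the same calls run inside a Worker. The helper names and the <code>ENV_NAME</code> variable here are ours, for illustration only:</p>

```javascript
// node:crypto "just works": hash a payload the way a typical Node backend would.
import { createHash } from "node:crypto";

export function fingerprint(body) {
  return createHash("sha256").update(body).digest("hex");
}

// Environment variables are readable Node-style via process.env.
export function currentEnvName() {
  return process.env.ENV_NAME ?? "development";
}
```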
    <div>
      <h2>Larger Container instances, more concurrent instances</h2>
      <a href="#larger-container-instances-more-concurrent-instances">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/containers/"><u>Cloudflare Containers</u></a> now has higher limits on concurrent instances and an upcoming new, larger instance type.</p><p>Previously, you could run 50 instances of the <code>dev</code> instance type or 25 instances of the <code>basic</code> instance type concurrently. Now your concurrent containers can use up to 400 GiB of memory, 100 vCPUs, and 2 TB of disk in total. This allows you to run up to 1000 <code>dev</code> instances or 400 <code>basic</code> instances concurrently. Enterprise customers can push far beyond these limits — contact us if you need more. If you are using Containers to power your app and it goes viral, you’ll have the ability to scale on Cloudflare.</p><p>Cloudflare Containers also has a new <a href="https://developers.cloudflare.com/containers/platform-details/limits/"><u>instance type</u></a> coming soon — <code>standard-2</code>, which includes 8 GiB of memory, 1 vCPU, and 12 GB of disk. This new instance type is an ideal default for workloads that need more resources, from <a href="https://github.com/cloudflare/sandbox-sdk"><u>AI Sandboxes</u></a> to data processing jobs.</p>
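<p>Once <code>standard-2</code> ships, selecting it should look roughly like the Wrangler configuration sketch below. The <code>containers</code> fields follow the current Containers configuration docs, but treat this shape as illustrative rather than authoritative until the new type is available:</p>

```json
{
  "containers": [
    {
      "class_name": "MyContainer",
      "image": "./Dockerfile",
      "instance_type": "standard-2"
    }
  ]
}
```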
    <div>
      <h2>Workers Builds provides more disk and CPU — and is now GA</h2>
      <a href="#workers-builds-provides-more-disk-and-cpu-and-is-now-ga">
        
      </a>
    </div>
    <p>Last Birthday Week, we <a href="https://blog.cloudflare.com/builder-day-2024-announcements/#continuous-integration-and-delivery"><u>announced the launch</u></a> of our integrated <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">CI/CD pipeline</a>, Workers Builds, in open beta. We also gave you <a href="https://blog.cloudflare.com/workers-builds-integrated-ci-cd-built-on-the-workers-platform/"><u>a detailed look</u></a> into how we built this system on our <a href="https://developers.cloudflare.com/workers/"><u>Workers platform</u></a> using <a href="https://developers.cloudflare.com/containers/"><u>Containers</u></a>, <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a>, <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a>, <a href="https://developers.cloudflare.com/log-explorer/log-search/"><u>Workers Logs</u></a>, and <a href="https://developers.cloudflare.com/workers/configuration/smart-placement/"><u>Smart Placement</u></a>.</p><p>This year, we are excited to announce that Workers Builds is now Generally Available. 
Here’s what’s new:</p><ul><li><p><a href="https://developers.cloudflare.com/changelog/2025-08-04-builds-increased-disk-size/"><b><u>Increased disk space for all plans</u></b></a>: We’ve increased the disk size from 8 GB to 20 GB for both free and paid plans, giving you more space for your projects and dependencies</p></li><li><p><a href="https://developers.cloudflare.com/changelog/2025-09-07-builds-increased-cpu-paid/"><b><u>More compute for paid plans</u></b></a>: We’ve doubled the CPU power for paid plans from 2 vCPU to 4 vCPU, making your builds significantly faster</p></li><li><p><b>Faster single-core and multi-core performance</b>: To ensure consistent, high-performance builds, we now run your builds on the fastest available CPUs at the time your build runs</p></li></ul><p>Haven’t used <a href="https://developers.cloudflare.com/workers/ci-cd/builds/"><u>Workers Builds</u></a> yet? You can try it by <a href="https://developers.cloudflare.com/workers/ci-cd/builds/"><u>connecting a Git repository to an existing Worker</u></a>, or try it out on a fresh new project by clicking any <a href="https://developers.cloudflare.com/workers/platform/deploy-buttons/"><u>Deploy to Cloudflare button</u></a>, like the one below that deploys <a href="https://github.com/cloudflare/templates/tree/main/astro-blog-starter-template"><u>a blog built with Astro</u></a> to your Cloudflare account:</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/astro-blog-starter-template"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p>
    <div>
      <h2>A more consistent look and feel for the Cloudflare dashboard</h2>
      <a href="#a-more-consistent-look-and-feel-for-the-cloudflare-dashboard">
        
      </a>
    </div>
    <p><a href="https://dash.cloudflare.com/?to=/:account/workers/durable-objects"><u>Durable Objects</u></a>, <a href="https://dash.cloudflare.com/?to=/:account/r2"><u>R2</u></a>, and <a href="https://dash.cloudflare.com/?to=/:account/workers-and-pages"><u>Workers</u></a> now all have a more consistent look with the rest of our developer platform. As you explore these pages, you’ll find that they load faster, feel smoother, and are easier to use.</p><p>Across storage products, you can now customize the table that lists the resources on your account, choose which data you want to see, sort by any column, and hide columns you don’t need. In the Workers and Pages dashboard, we’ve reduced clutter and have modernized the design to make it faster for you to get the data you need.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/0NkomTAkx45wn7nF4WYip/efb7b706d0ab7df34bfe229a025f4782/image4.png" />
          </figure><p>And when you create a new <a href="https://developers.cloudflare.com/pipelines/"><u>Pipeline</u></a> or a <a href="https://developers.cloudflare.com/hyperdrive"><u>Hyperdrive</u></a> configuration, you’ll find a new interface that helps you get started and guides you through each step.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Lx2tgIBRIWsq82p3Vjj3o/3a23f7065c9f354ed66dbe311f5d1d86/image1.png" />
          </figure><p>This work is ongoing, and we’re excited to continue improving with the help of your feedback, so keep it coming!</p>
    <div>
      <h2>Resize, clip and reformat video files on-demand with Media Transformations — now GA</h2>
      <a href="#resize-clip-and-reformat-video-files-on-demand-with-media-transformations-now-ga">
        
      </a>
    </div>
    <p>In March 2025 we <a href="https://blog.cloudflare.com/media-transformations-for-video-open-beta/"><u>announced Media Transformations</u></a> in open beta, which brings the magic of <a href="https://developers.cloudflare.com/images/transform-images/"><u>Image transformations</u></a> to short-form video files — including video files stored outside of Cloudflare. Since then, we have increased input and output limits, and added support for audio-only extraction. Media Transformations is now generally available.</p><p>Media Transformations is ideal if you have a large existing volume of short videos, such as generative AI output, e-commerce product videos, social media clips, or short marketing content. Content like this should be fetched from your existing storage like R2 or S3 directly, optimized by Cloudflare quickly, and delivered efficiently as small MP4 files or used to extract still images and audio.</p>
            <pre><code>https://example.com/cdn-cgi/media/&lt;OPTIONS&gt;/&lt;SOURCE-VIDEO&gt;

EXAMPLE, RESIZE:
https://example.com/cdn-cgi/media/width=760/https://pub-d9fcbc1abcd244c1821f38b99017347f.r2.dev/aus-mobile.mp4


EXAMPLE, STILL THUMBNAIL:
https://example.com/cdn-cgi/media/mode=frame,time=3s,width=120,height=120,fit=cover/https://pub-d9fcbc1abcd244c1821f38b99017347f.r2.dev/aus-mobile.mp4</code></pre>
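<p>The URL scheme above is easy to assemble programmatically. Here is a minimal sketch — the helper name is ours, and it only uses options that appear in the examples:</p>

```javascript
// Build a /cdn-cgi/media/ transformation URL from an options object,
// following the pattern above: https://<ZONE>/cdn-cgi/media/<OPTIONS>/<SOURCE-VIDEO>
export function mediaTransformURL(zone, options, sourceVideo) {
  const opts = Object.entries(options)
    .map(([key, value]) => `${key}=${value}`)
    .join(",");
  return `https://${zone}/cdn-cgi/media/${opts}/${sourceVideo}`;
}
```

<p>For example, <code>mediaTransformURL("example.com", { width: 760 }, sourceUrl)</code> produces the resize URL shown above.</p>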
            <p>Media Transformations includes a free tier available to all customers and is included with Media Platform subscriptions. Check out the <a href="https://developers.cloudflare.com/stream/transform-videos/"><u>transform videos documentation</u></a> for all the latest, then enable transformations for your zone today!</p>
    <div>
      <h2>Infrequent Access in R2 is now GA</h2>
      <a href="#infrequent-access-in-r2-is-now-ga">
        
      </a>
    </div>
    <p>R2 Infrequent Access is now generally available. Last year, we introduced the <a href="https://blog.cloudflare.com/r2-events-gcs-migration-infrequent-access/#infrequent-access-private-beta"><u>Infrequent Access</u></a> storage class designed for data that doesn’t need to be accessed frequently. It’s a great fit for use cases including long-tail user content, logs, or data backups.</p><p>Since launch, Infrequent Access has been proven in production by our customers running these types of workloads at scale. The results confirmed our goal: a storage class that reduces storage costs while maintaining performance and durability.</p><p><a href="https://developers.cloudflare.com/r2/pricing/"><u>Pricing</u></a> is simple. You pay less on data storage, while data retrievals are billed per GB to reflect the additional compute required to serve data from underlying storage optimized for less frequent access. And as with all of R2, there are <b>no egress fees</b>, so you don’t pay for the bandwidth to move data out.

Here’s how you can upload an object to R2’s Infrequent Access storage class from a Worker:</p>
            <pre><code>export default {
  async fetch(request, env) {

    // Upload the incoming request body to R2 in Infrequent Access class
    await env.MY_BUCKET.put("my-object", request.body, {
      storageClass: "InfrequentAccess",
    });

    return new Response("Object uploaded to Infrequent Access!", {
      headers: { "Content-Type": "text/plain" },
    });
  },
};</code></pre>
            <p>You can also monitor your Infrequent Access vs. Standard storage usage directly in your R2 dashboard for each bucket. Get started with <a href="https://developers.cloudflare.com/r2/get-started/"><u>R2</u></a> today!</p>
    <div>
      <h2>Playwright in Browser Rendering is now GA</h2>
      <a href="#playwright-in-browser-rendering-is-now-ga">
        
      </a>
    </div>
    <p>We’re excited to announce three updates to Browser Rendering:</p><ol><li><p>Our support for <a href="https://developers.cloudflare.com/browser-rendering/platform/playwright/"><u>Playwright</u></a> is now Generally Available, giving developers the stability and confidence to run critical browser tasks.</p></li><li><p>We’re introducing support for <a href="https://developers.cloudflare.com/browser-rendering/platform/stagehand/"><u>Stagehand</u></a>, enabling developers to build AI agents using natural language, powered by Cloudflare Workers AI.</p></li><li><p>Finally, to help developers scale, we are tripling <a href="https://developers.cloudflare.com/browser-rendering/platform/limits/#workers-paid"><u>limits for paid plans</u></a>, with more increases to come.</p></li></ol><p>The browser is no longer only used by humans. AI agents need to be able to reliably navigate browsers in the same way a human would, whether that’s booking flights, filling in customer info, or scraping structured data. Playwright gives AI agents the ability to interact with web pages and perform complex tasks on behalf of humans. However, running browsers at scale is a significant infrastructure challenge. Cloudflare Browser Rendering solves this by providing headless browsers on-demand. With Playwright support now Generally Available and synced with the latest Playwright version, v1.55, customers have a production-ready foundation to build reliable, scalable applications on. </p><p>To help AI agents better navigate the web, we’re introducing support for Stagehand, an open-source browser automation framework. Rather than dictating exact steps or specifying selectors, Stagehand enables developers to build more reliably and flexibly by combining code with natural-language instructions powered by AI. This makes it possible for AI agents to navigate and adapt if a website changes, just like a human would. 
</p><p>To get started with Playwright and Stagehand, check out our <a href="https://developers.cloudflare.com/changelog/2025-09-25-br-playwright-ga-stagehand-limits/"><u>changelog</u></a> for code examples and more. </p><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">4HnTgcx06k7ccxS0rYwIkC</guid>
            <dc:creator>Brendan Irvine-Broque</dc:creator>
            <dc:creator>Rita Kozlov</dc:creator>
            <dc:creator>Korinne Alpers</dc:creator>
        </item>
        <item>
            <title><![CDATA[Partnering to make full-stack fast: deploy PlanetScale databases directly from Workers]]></title>
            <link>https://blog.cloudflare.com/planetscale-postgres-workers/</link>
            <pubDate>Thu, 25 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We’ve teamed up with PlanetScale to make shipping full-stack applications on Cloudflare Workers even easier.  ]]></description>
            <content:encoded><![CDATA[ <p>We’re not burying the lede on this one: you can now connect <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Cloudflare Workers</u></a> to your PlanetScale databases directly and ship full-stack applications backed by Postgres or MySQL. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3tcLGobPxPIHoDYEiGcY0X/d970a4a6b8a9e6ebc7d06ab57b168007/Frame_1321317798__1_.png" />
          </figure><p>We’ve teamed up with <a href="https://planetscale.com/"><u>PlanetScale</u></a> because we wanted to partner with a database provider that we could confidently recommend to our users: one that shares our obsession with performance, reliability and developer experience. These are all critical factors for any development team building a serious application. </p><p>Now, when connecting to PlanetScale databases, your connections are automatically configured for optimal performance with <a href="https://www.cloudflare.com/developer-platform/products/hyperdrive/"><u>Hyperdrive</u></a>, ensuring that you have the fastest access from your Workers to your databases, regardless of where your Workers are running.</p>
    <div>
      <h3>Building full-stack</h3>
      <a href="#building-full-stack">
        
      </a>
    </div>
    <p>As Workers has matured into a full-stack platform, we’ve introduced more options for connecting to your data. With <a href="https://developers.cloudflare.com/kv/"><u>Workers KV</u></a>, we made it easy to store configuration and cache unstructured data on the edge. With <a href="https://www.cloudflare.com/developer-platform/products/d1/"><u>D1</u></a> and <a href="https://www.cloudflare.com/developer-platform/products/durable-objects/"><u>Durable Objects</u></a>, we made it possible to build multi-tenant apps with simple, isolated SQL databases. And with Hyperdrive, we made connecting to external databases fast and scalable from Workers.</p><p>Today, we’re introducing a new choice for building on Cloudflare: Postgres and MySQL PlanetScale databases, directly accessible from within the Cloudflare dashboard. Link your Cloudflare and PlanetScale accounts, stop manually copying API keys back-and-forth, and connect Workers to any of your PlanetScale databases (production or otherwise!).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/71rXsGZgXWem4yvkhdtHsP/55f9433b5447c09703ef39a547881497/image3.png" />
          </figure><p><sup>Connect to a PlanetScale database — no figuring things out on your own</sup></p><p>Postgres and MySQL are the most popular options for building applications, and with good reason. Many large companies (Cloudflare included!) have built and scaled on these databases, creating a robust ecosystem. And you may want access to the power, familiarity, and functionality that these databases provide. </p><p>Importantly, all of this builds on <a href="https://blog.cloudflare.com/it-it/how-hyperdrive-speeds-up-database-access/"><u>Hyperdrive</u></a>, our distributed connection pooler and query caching infrastructure. Hyperdrive keeps connections to your databases warm to avoid incurring latency penalties for every new request, reduces the CPU load on your database by managing a connection pool, and can cache the results of your most frequent queries, removing load from your database altogether. Given that about 80% of queries for a typical transactional database are read-only, this can be substantial — we’ve observed this in reality!</p>
    <div>
      <h3>No more copying credentials around</h3>
      <a href="#no-more-copying-credentials-around">
        
      </a>
    </div>
    <p>Starting today, you can <a href="https://dash.cloudflare.com/?to=/:account/workers/hyperdrive?step=1&amp;modal=1"><u>connect to your PlanetScale databases from the Cloudflare dashboard</u></a> in just a few clicks. Connecting is now secure by default with a one-click password rotation option, without needing to copy and manage credentials back and forth. A Hyperdrive configuration will be created for your PlanetScale database, providing you with the optimal setup to start building on Workers.</p><p>And the experience spans both Cloudflare and PlanetScale dashboards: you can also create and view attached Hyperdrive configurations for your databases from the PlanetScale dashboard.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3I7WyAGXCLY8xhugPlIhl5/0ec38f0248140a628d805df7bb62dcc3/image2.png" />
          </figure><p>By automatically integrating with Hyperdrive, your PlanetScale databases are optimally configured for access from Workers. When you connect your database via Hyperdrive, Hyperdrive’s Placement system automatically determines the location of the database and places its pool of database connections in Cloudflare data centers with the lowest possible latency. </p><p>When one of your Workers connects to your Hyperdrive configuration for your PlanetScale database, Hyperdrive will ensure the fastest access to your database by eliminating the unnecessary roundtrips included in a typical database connection setup. Hyperdrive will resolve connection setup within the Hyperdrive client and use existing connections from the pool to quickly serve your queries. Better yet, Hyperdrive allows you to cache your query results in case you need to scale for high-read workloads. </p><p>This is a peek under the hood of how Hyperdrive makes access to PlanetScale as fast as possible. We’ve previously blogged about <a href="https://blog.cloudflare.com/it-it/how-hyperdrive-speeds-up-database-access/"><u>Hyperdrive’s technical underpinnings</u></a> — it’s worth a read. And with this integration with Hyperdrive, you can easily connect to your databases across different Workers applications or environments, without having to reconfigure your credentials. All in all, a perfect match.</p>
    <div>
      <h3>Get started with PlanetScale and Workers</h3>
      <a href="#get-started-with-planetscale-and-workers">
        
      </a>
    </div>
    <p>With this partnership, we’re making it trivially easy to build on Workers with PlanetScale. Want to build a new application on Workers that connects to your existing PlanetScale cluster? With just a few clicks, you can create a globally deployed app that can query your database, cache your hottest queries, and keep your database connections warmed for fast access from Workers.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3eTtJKz4sxeNvClVQMWIFg/9c91fb02b1cd4eca7ad5ef013e7ab0f0/image4.png" />
          </figure><p><sup><i>Connect directly to your PlanetScale MySQL or Postgres databases from the Cloudflare dashboard, for optimal configuration with Hyperdrive.</i></sup></p><p>To get started, you can:</p><ul><li><p>Head to the <a href="https://dash.cloudflare.com/?to=/:account/workers/hyperdrive?step=1&amp;modal=1"><u>Cloudflare dashboard</u></a> and connect your PlanetScale account</p></li><li><p>… or head to <a href="https://app.planetscale.com/"><u>PlanetScale</u></a> and connect your Cloudflare account</p></li><li><p>… and then deploy a Worker</p></li></ul><p>Review the <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive docs</u></a> and/or the <a href="https://planetscale.com/docs"><u>PlanetScale docs</u></a> to learn more about how to connect Workers to PlanetScale and start shipping.</p> ]]></content:encoded>
            <category><![CDATA[Hyperdrive]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Partnership]]></category>
            <category><![CDATA[Database]]></category>
            <guid isPermaLink="false">7ibt13YouHX6Ew1wLZn5pi</guid>
            <dc:creator>Matt Silverlock</dc:creator>
            <dc:creator>Thomas Gauvin</dc:creator>
            <dc:creator>Adrian Gracia  </dc:creator>
        </item>
        <item>
            <title><![CDATA[R2 SQL: a deep dive into our new distributed query engine]]></title>
            <link>https://blog.cloudflare.com/r2-sql-deep-dive/</link>
            <pubDate>Thu, 25 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ R2 SQL provides a built-in, serverless way to run ad-hoc analytic queries against your R2 Data Catalog. This post dives deep under the Iceberg into how we built this distributed engine. ]]></description>
            <content:encoded><![CDATA[ <p>How do you run SQL queries over petabytes of data… without a server?</p><p>We have an answer for that: <a href="https://developers.cloudflare.com/r2-sql/"><u>R2 SQL</u></a>, a serverless query engine that can sift through enormous datasets and return results in seconds.</p><p>This post details the architecture and techniques that make this possible. We'll walk through our Query Planner, which uses <a href="https://developers.cloudflare.com/r2/data-catalog/"><u>R2 Data Catalog</u></a> to prune terabytes of data before reading a single byte, and explain how we distribute the work across Cloudflare’s <a href="https://www.cloudflare.com/network"><u>global network</u></a>, <a href="https://developers.cloudflare.com/workers/"><u>Workers</u></a> and <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2</u></a> for massively parallel execution.</p>
    <div>
      <h3>From catalog to query</h3>
      <a href="#from-catalog-to-query">
        
      </a>
    </div>
    <p>During Developer Week 2025, we <a href="https://blog.cloudflare.com/r2-data-catalog-public-beta/"><u>launched</u></a> R2 Data Catalog, a managed <a href="https://iceberg.apache.org/"><u>Apache Iceberg</u></a> catalog built directly into your Cloudflare R2 bucket. Iceberg is an open table format that provides critical database features like transactions and schema evolution for petabyte-scale <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a>. It gives you a reliable catalog of your data, but it doesn’t provide a way to query it.</p><p>Until now, reading your R2 Data Catalog required setting up a separate service like <a href="https://spark.apache.org/"><u>Apache Spark</u></a> or <a href="https://trino.io/"><u>Trino</u></a>. Operating these engines at scale is not easy: you need to provision clusters, manage resource usage, and be responsible for their availability, none of which contributes to the primary goal of getting value from your data.</p><p><a href="https://developers.cloudflare.com/r2-sql/"><u>R2 SQL</u></a> removes that step entirely. It’s a serverless query engine that executes retrieval SQL queries against your Iceberg tables, right where your data lives.</p>
    <div>
      <h3>Designing a query engine for petabytes</h3>
      <a href="#designing-a-query-engine-for-petabytes">
        
      </a>
    </div>
    <p>Object storage is fundamentally different from a traditional database’s storage. A database is structured by design; R2 is an ocean of objects, where a single logical table can be composed of potentially millions of individual files, large and small, with more arriving every second.</p><p>Apache Iceberg provides a powerful layer of logical organization on top of this reality. It works by managing the table's state as an immutable series of snapshots, creating a reliable, structured view of the table by manipulating lightweight metadata files instead of rewriting the data files themselves.</p><p>However, this logical structure doesn't change the underlying physical challenge: an efficient query engine must still find the specific data it needs within that vast collection of files, and this requires overcoming two major technical hurdles:</p><p><b>The I/O problem</b>: A core challenge for query efficiency is minimizing the amount of data read from storage. A brute-force approach of reading every object is simply not viable. The primary goal is to read only the data that is absolutely necessary.</p><p><b>The Compute problem</b>: The amount of data that does need to be read can still be enormous. We need a way to give the right amount of compute power to a query, which might be massive, for just a few seconds, and then scale it down to zero instantly to avoid waste.</p><p>Our architecture for R2 SQL is designed to solve these two problems with a two-phase approach: a <b>Query Planner</b> that uses metadata to intelligently prune the search space, and a <b>Query Execution</b> system that distributes the work across Cloudflare's global network to process the data in parallel.</p>
    <div>
      <h2>Query Planner</h2>
      <a href="#query-planner">
        
      </a>
    </div>
    <p>The most efficient way to process data is to avoid reading it in the first place. This is the core strategy of the R2 SQL Query Planner. Instead of exhaustively scanning every file, the planner makes use of the metadata structure provided by R2 Data Catalog to prune the search space, that is, to avoid reading huge swathes of data irrelevant to a query.</p><p>This is a top-down investigation where the planner navigates the hierarchy of Iceberg metadata layers, using <b>stats</b> at each level to build a fast plan, specifying exactly which byte ranges the query engine needs to read.</p>
    <div>
      <h3>What do we mean by “stats”?</h3>
      <a href="#what-do-we-mean-by-stats">
        
      </a>
    </div>
<p>When we say the planner uses "stats", we are referring to summary metadata that Iceberg stores about the contents of the data files. These statistics create a coarse map of the data, allowing the planner to make decisions about which files to read, and which to ignore, without opening them.</p><p>There are two primary levels of statistics the planner uses for pruning:</p><p><b>Partition-level stats</b>: Stored in the Iceberg manifest list, these stats describe the range of partition values for all the data in a given Iceberg manifest file. For a partition on <code>day(event_timestamp)</code>, this would be the earliest and latest day present in the files tracked by that manifest.</p><p><b>Column-level stats</b>: Stored in the manifest files, these are more granular stats about each individual data file. Data files in R2 Data Catalog are stored in the <a href="https://parquet.apache.org/"><u>Apache Parquet</u></a> format. For every column of a Parquet file, the manifest stores key information like:</p><ul><li><p>The minimum and maximum values. If a query asks for <code>http_status = 500</code>, and a file’s stats show its <code>http_status</code> column has a min of 200 and a max of 404, that entire file can be skipped.</p></li><li><p>A count of null values. This allows the planner to skip files when a query specifically looks for non-null values (e.g., <code>WHERE error_code IS NOT NULL</code>) and the file's metadata reports that all values for <code>error_code</code> are null.</p></li></ul><p>Now, let's see how the planner uses these stats as it walks through the metadata layers.</p>
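<p>To make the stats concrete, here is a minimal sketch of how min/max and null-count statistics let a planner rule out a data file without opening it. This is purely illustrative, not Cloudflare's implementation; the <code>stats</code> shape and the <code>can_skip_file</code> helper are invented for this example.</p>

```python
# Illustrative sketch: deciding whether a data file can be skipped using the
# per-column min/max and null-count stats that Iceberg manifests store.

def can_skip_file(stats, column, op, value):
    """Return True if no row in the file can possibly match `column <op> value`."""
    col = stats.get(column)
    if col is None:
        return False  # no stats recorded: we must read the file
    lo, hi = col["min"], col["max"]
    nulls, rows = col["null_count"], col["row_count"]
    if op == "=":
        return value < lo or value > hi
    if op == ">":
        return hi <= value
    if op == "<":
        return lo >= value
    if op == "IS NOT NULL":
        return nulls == rows  # every value is null, nothing can match
    return False

# A file whose http_status column spans 200..404 cannot contain 500:
stats = {"http_status": {"min": 200, "max": 404, "null_count": 0, "row_count": 10_000}}
print(can_skip_file(stats, "http_status", "=", 500))  # True: skip the file
print(can_skip_file(stats, "http_status", ">", 300))  # False: must read it
```

<p>The key property is that a skip decision is conservative: stats can only prove a file is irrelevant, never that it contains matches, so a "cannot skip" answer still requires reading the file.</p>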
    <div>
      <h3>Pruning the search space</h3>
      <a href="#pruning-the-search-space">
        
      </a>
    </div>
<p>The pruning process is a top-down investigation that happens in four main steps:</p><ol><li><p><b>Table metadata and the current snapshot</b></p><p>The planner begins by asking the catalog for the location of the current table metadata. This is a JSON file containing the table's current schema, partition specs, and a log of all historical snapshots. The planner then fetches the latest snapshot to work with.</p></li><li><p><b>Manifest list and partition pruning</b></p><p>The current snapshot points to a single Iceberg manifest list. The planner reads this file and uses the partition-level stats for each entry to perform the first, most powerful pruning step, discarding any manifests whose partition value ranges don't satisfy the query. For a table partitioned by <code>day(event_timestamp)</code>, the planner can use the min/max values in the manifest list to immediately discard any manifests that don't contain data for the days relevant to the query.</p></li><li><p><b>Manifests and file-level pruning</b></p><p>For the remaining manifests, the planner reads each one to get a list of the actual Parquet data files. These manifest files contain more granular, column-level stats for each individual data file they track. This allows for a second pruning step, discarding entire data files that cannot possibly contain rows matching the query's filters.</p></li><li><p><b>File row-group pruning</b></p><p>Finally, for the specific data files that are still candidates, the Query Planner uses statistics stored inside each Parquet file's footer to skip over entire row groups.</p></li></ol><p>The result of this multi-layer pruning is a precise list of Parquet files, and of row groups within those Parquet files. These become the query work units that are dispatched to the Query Execution system for processing.</p>
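<p>The steps above can be sketched as a single top-down walk. This is a toy model with invented field names, not Iceberg's actual metadata schema: each manifest-list entry carries the min/max partition values for the files its manifest tracks, and each manifest entry carries per-file stats.</p>

```python
# Illustrative sketch of top-down pruning for WHERE day BETWEEN day_lo AND day_hi.
# Data shapes are invented for this example.

def plan(manifest_list, day_lo, day_hi):
    """Yield (file_path, row_group) work units that survive pruning."""
    for manifest in manifest_list:
        # Step 2: partition pruning using manifest-list stats.
        if manifest["max_day"] < day_lo or manifest["min_day"] > day_hi:
            continue  # whole manifest out of range: skipped without reading it
        for data_file in manifest["files"]:
            # Step 3: file-level pruning using column stats in the manifest.
            if data_file["max_day"] < day_lo or data_file["min_day"] > day_hi:
                continue
            # Step 4: row-group pruning would consult the Parquet footer here.
            for rg in data_file["row_groups"]:
                yield (data_file["path"], rg)

manifest_list = [
    {"min_day": 1, "max_day": 3, "files": [
        {"path": "a.parquet", "min_day": 1, "max_day": 1, "row_groups": [0, 1]},
        {"path": "b.parquet", "min_day": 3, "max_day": 3, "row_groups": [0]},
    ]},
    {"min_day": 7, "max_day": 9, "files": []},  # pruned at the manifest level
]
print(list(plan(manifest_list, 2, 4)))  # [('b.parquet', 0)]
```

<p>Note the asymmetry: the second manifest is discarded without ever fetching its contents, which is exactly the I/O saving the manifest-list stats exist to provide.</p>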
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7GKvgbex2vhIBqQ1G5UFjQ/2a99db7ae786b8e22a326bac0c9037d9/1.png" />
          </figure>
    <div>
      <h3>The Planning pipeline</h3>
      <a href="#the-planning-pipeline">
        
      </a>
    </div>
    <p>In R2 SQL, the multi-layer pruning we've described so far isn't a monolithic process. For a table with millions of files, the metadata can be too large to process before starting any real work. Waiting for a complete plan would introduce significant latency.</p><p>Instead, R2 SQL treats planning and execution together as a concurrent pipeline. The planner's job is to produce a stream of work units for the executor to consume as soon as they are available.</p><p>The planner’s investigation begins with two fetches to get a map of the table's structure: one for the table’s snapshot and another for the manifest list.</p>
    <div>
      <h4>Starting execution as early as possible</h4>
      <a href="#starting-execution-as-early-as-possible">
        
      </a>
    </div>
<p>From that point on, the query is processed in a streaming fashion. As the Query Planner reads through the manifest files, and subsequently the data files they point to, it prunes them and immediately emits any matching data files/row groups as work units to the execution queue.</p><p>This pipeline structure ensures the compute nodes can begin the expensive work of data I/O almost instantly, long before the planner has finished its full investigation.</p><p>On top of this pipeline model, the planner adds a crucial optimization: <b>deliberate ordering</b>. The manifest files are not streamed in an arbitrary sequence. Instead, the planner processes them in an order matching the query's <code>ORDER BY</code> clause, guided by the metadata stats. This ensures that the data most likely to contain the desired results is processed first.</p><p>These two concepts work together to address query latency from both ends of the query pipeline.</p><p>The streamed planning pipeline lets us start crunching data as soon as possible, minimizing the delay before the first byte is processed. At the other end of the pipeline, the deliberate ordering of that work lets us finish early by finding a definitive result without scanning the entire dataset.</p><p>The next section explains the mechanics behind this "finish early" strategy.</p>
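<p>The planner/executor handoff can be modelled with a generator: the consumer starts working on each work unit the moment it is emitted, rather than waiting for planning to finish. This is only a toy single-threaded model; the real pipeline streams across threads and the network.</p>

```python
# Toy model of the concurrent planning/execution pipeline. Because the planner
# is a generator, execution of each unit begins before the next is planned.

events = []

def planner(manifests):
    for m in manifests:
        events.append(f"planned {m}")
        yield m  # handed off immediately; the executor's I/O can start now

def execute(stream):
    for unit in stream:
        events.append(f"executed {unit}")

execute(planner(["m1", "m2"]))
print(events)
# ['planned m1', 'executed m1', 'planned m2', 'executed m2'] — interleaved,
# not "plan everything, then execute everything"
```

<p>The interleaved event order is the whole point: time-to-first-byte depends only on planning the first unit, not the whole table.</p>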
    <div>
      <h4>Stopping early: how to finish without reading everything</h4>
      <a href="#stopping-early-how-to-finish-without-reading-everything">
        
      </a>
    </div>
<p>Thanks to the Query Planner streaming work units in an order matching the <code>ORDER BY</code> clause, the Query Execution system first processes the data that is most likely to be in the final result set.</p><p>This prioritization happens at two levels of the metadata hierarchy:</p><p><b>Manifest ordering</b>: The planner first inspects the manifest list. Using the partition stats for each manifest (e.g., the latest timestamp in that group of files), it decides which entire manifest files to stream first.</p><p><b>Parquet file ordering</b>: As it reads each manifest, it then uses the more granular column-level stats to decide the processing order of the individual Parquet files within that manifest.</p><p>This ensures a constantly prioritized stream of work units is sent to the execution engine. This prioritized stream is what allows us to stop the query early.</p><p>For instance, with a query like ... <code>ORDER BY timestamp DESC LIMIT 5</code>, as the execution engine processes work units and sends back results, the planner does two things concurrently:</p><ul><li><p>It maintains a bounded heap of the best 5 results seen so far, constantly comparing new results to the oldest timestamp in the heap.</p></li><li><p>It keeps a "high-water mark" on the stream itself. Thanks to the metadata, it always knows the absolute latest timestamp of any data file that has not yet been processed.</p></li></ul><p>The planner is constantly comparing the state of the heap to the high-water mark of the remaining stream. The moment the oldest timestamp in our top 5 heap is newer than the high-water mark of the remaining stream, the entire query can be stopped.</p><p>At that point, we can prove no remaining work unit could possibly contain a result that would make it into the top 5. The pipeline is halted, and a complete, correct result is returned to the user, often after reading only a fraction of the potentially matching data.</p><p>Currently, R2 SQL supports ordering only on columns that are part of the table's partition key. This is a limitation we are working to lift in the future.</p>
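<p>The bounded heap plus high-water mark can be sketched as follows. This is an illustrative model rather than R2 SQL's code: it assumes each work unit arrives tagged with the latest timestamp that any not-yet-processed unit could still contain, and that units are streamed newest-first.</p>

```python
# Sketch of "finish early" for ORDER BY timestamp DESC LIMIT k: a bounded
# min-heap of the best k timestamps seen so far, compared against the
# high-water mark of the remaining stream.

import heapq

def top_k_with_early_stop(work_units, k=5):
    """work_units: (high_water_mark, timestamps_in_unit) pairs, streamed in
    the planner's deliberate order, i.e. newest data first."""
    heap = []          # min-heap: heap[0] is the oldest of the best k so far
    units_read = 0
    for high_water_mark, timestamps in work_units:
        # Stop: nothing left can beat the oldest entry in our top-k heap.
        if len(heap) == k and heap[0] >= high_water_mark:
            break
        units_read += 1
        for ts in timestamps:
            if len(heap) < k:
                heapq.heappush(heap, ts)
            elif ts > heap[0]:
                heapq.heapreplace(heap, ts)
    return sorted(heap, reverse=True), units_read

units = [
    (100, [100, 99, 98]),
    (97,  [97, 96, 95]),
    (94,  [94, 93, 92]),  # never read: the top 5 is already proven complete
]
print(top_k_with_early_stop(units, k=5))  # ([100, 99, 98, 97, 96], 2)
```

<p>After two units, the heap's oldest entry (96) is newer than anything the remaining stream can contain (94 at most), so the third unit is never read.</p>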
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5qN9TeEuRZJIidYXFictG/8a55cc6088be3abdc3b27878daa76e40/image4.png" />
          </figure>
    <div>
      <h3>Architecture</h3>
      <a href="#architecture">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3wkvnT24y5E0k5064cqu0T/939402d16583647986eec87617379900/image3.png" />
          </figure>
    <div>
      <h2>Query Execution</h2>
      <a href="#query-execution">
        
      </a>
    </div>
<p>The Query Planner streams query work in bite-sized pieces called row groups. A single Parquet file usually contains multiple row groups, but most of the time only a few of them contain relevant data. Splitting query work into row groups allows R2 SQL to read only small parts of potentially multi-GB Parquet files.</p><p>The server that receives the user’s request and performs query planning assumes the role of query coordinator. It distributes the work across query workers and aggregates results before returning them to the user.</p><p>Cloudflare’s network is vast, and many servers can be under maintenance at the same time. The query coordinator contacts Cloudflare’s internal API to make sure only healthy, fully functioning servers are picked for query execution. Connections between the coordinator and query workers go through <a href="https://www.cloudflare.com/en-gb/application-services/products/argo-smart-routing/"><u>Cloudflare Argo Smart Routing</u></a> to ensure fast, reliable connectivity.</p><p>Servers that receive query execution requests from the coordinator assume the role of query workers. Query workers serve as a point of horizontal scalability in R2 SQL. With a higher number of query workers, R2 SQL can process queries faster by distributing the work among many servers. That’s especially true for queries covering large numbers of files.</p><p>Both the coordinator and query workers run on Cloudflare’s distributed network, ensuring R2 SQL has plenty of compute power and I/O throughput to handle analytical workloads.</p><p>Each query worker receives a batch of row groups from the coordinator, as well as an SQL query to run on it. Additionally, the coordinator sends serialized metadata about the Parquet files containing the row groups. Thanks to that, query workers know the exact byte offsets at which each row group is located in the Parquet file, without needing to read this information from R2.</p>
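<p>As a rough illustration of the coordinator's role, here is one way work units might be grouped into batches and spread across workers. The round-robin policy, the data shapes, and the worker names are assumptions for this sketch, not R2 SQL's actual scheduling logic or wire format; note that each unit carries its byte range, so a worker never has to fetch Parquet footers first.</p>

```python
# Hypothetical sketch: a coordinator batching (file, row_group, byte_range)
# work units and spreading the batches round-robin across healthy workers.

def assign_batches(work_units, workers, batch_size):
    """Group work units into batches and assign them round-robin to workers."""
    assignments = {w: [] for w in workers}
    batch, i = [], 0
    for unit in work_units:
        batch.append(unit)
        if len(batch) == batch_size:
            assignments[workers[i % len(workers)]].append(batch)
            batch, i = [], i + 1
    if batch:  # flush the final, possibly partial batch
        assignments[workers[i % len(workers)]].append(batch)
    return assignments

# Five row groups of one file, each with a made-up byte range in the file.
units = [("a.parquet", rg, (rg * 4096, (rg + 1) * 4096)) for rg in range(5)]
out = assign_batches(units, ["worker-1", "worker-2"], batch_size=2)
# worker-1 receives two batches (row groups 0-1 and 4), worker-2 one (2-3)
```

<p>Real scheduling would also weigh worker load and locality, but the shape of the problem is the same: independent units, so throughput scales with the number of workers.</p>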
    <div>
      <h3>Apache DataFusion</h3>
      <a href="#apache-datafusion">
        
      </a>
    </div>
<p>Internally, each query worker uses <a href="https://github.com/apache/datafusion"><u>Apache DataFusion</u></a> to run SQL queries against row groups. DataFusion is an open-source analytical query engine written in Rust. It is built around the concept of partitions. A query is split into multiple concurrent independent streams, each working on its own partition of data.</p><p>Partitions in DataFusion are similar to partitions in Iceberg, but serve a different purpose. In Iceberg, partitions are a way to physically organize data on object storage. In DataFusion, partitions organize in-memory data for query processing. While logically they are similar – rows grouped together based on some logic – in practice, a partition in Iceberg doesn’t always correspond to a partition in DataFusion.</p><p>DataFusion partitions map perfectly to the R2 SQL query worker’s data model because each row group can be considered its own independent partition. Thanks to that, each row group is processed in parallel.</p><p>At the same time, since row groups usually contain at least 1000 rows, R2 SQL benefits from vectorized execution. Each DataFusion partition stream can execute the SQL query on multiple rows in one go, amortizing the overhead of query interpretation.</p><p>There are two ends of the spectrum when it comes to query execution: processing all rows sequentially in one big batch and processing each individual row in parallel. Sequential processing creates a so-called “tight loop”, which is usually more CPU cache friendly. In addition to that, we can significantly reduce interpretation overhead, as processing a large number of rows at a time in batches means that we go through the query plan less often. Completely parallel processing doesn’t allow us to do these things, but makes use of multiple CPU cores to finish the query faster.</p><p>DataFusion’s architecture allows us to achieve a balance on this spectrum, reaping benefits from both ends. 
For each data partition, we gain better CPU cache locality and amortized interpretation overhead. At the same time, since many partitions are processed in parallel, we distribute the workload between multiple CPUs, cutting the execution time further.</p>
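<p>A rough sketch of that balance: each row group is one batch, filtered in a single pass (the vectorized, tight-loop end of the spectrum), while the batches themselves run in parallel (the multi-core end). Here a thread pool stands in for DataFusion's partition streams; the data and threshold are made up for illustration.</p>

```python
# Sketch: batch-at-a-time processing within each partition, with partitions
# (row groups) running concurrently. Not DataFusion itself, just the idea.

from concurrent.futures import ThreadPoolExecutor

def filter_batch(batch, threshold):
    # One pass over a whole row group amortizes per-row interpretation cost.
    return [row for row in batch if row > threshold]

row_groups = [[1, 500, 200], [404, 503, 301], [999, 42]]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(lambda b: filter_batch(b, 450), row_groups))
print(results)  # [[500], [503], [999]]
```

<p>Each batch is processed sequentially (cache-friendly, low interpretation overhead), while the three batches use three cores, which is exactly the middle ground the paragraph describes.</p>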
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Tis1F5C1x3x6sIyJLL8ju/aae094818b1b7f6f8d6f857305948fbd/image1.png" />
          </figure><p>In addition to the smart query execution model, DataFusion also provides first-class Parquet support.</p><p>As a file format, Parquet has multiple optimizations designed specifically for query engines. Parquet is a column-based format, meaning that each column is physically separated from the others. This separation allows better compression ratios, but it also allows the query engine to read columns selectively. If the query only ever uses five columns, we can read only those and skip the remaining fifty. This massively reduces the amount of data we need to read from R2 and the CPU time spent on decompression.</p><p>DataFusion does exactly that. Using R2 ranged reads, it is able to read only the parts of the Parquet files that contain the requested columns, skipping the rest.</p><p>DataFusion’s optimizer also allows us to push down any filters to the lowest levels of the query plan. In other words, we can apply filters right as we are reading values from Parquet files. This allows us to skip materialization of results we know for sure won’t be returned to the user, cutting the query execution time further.</p>
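<p>The effect of selective column reads is easy to quantify with a toy example. The byte ranges below are made up, and real Parquet footers carry much richer metadata, but the arithmetic shows why skipping unused columns matters so much:</p>

```python
# Illustration of columnar ranged reads: given (made-up) per-column byte
# ranges from a file footer, a query touching only two columns issues ranged
# reads for just those spans and never downloads the rest.

footer = {  # column -> (start_byte, length) within the file; illustrative only
    "timestamp":   (0,          2_000_000),
    "http_status": (2_000_000,    500_000),
    "url":         (2_500_000, 40_000_000),
    "body":        (42_500_000, 500_000_000),
}

def ranged_reads(footer, needed_columns):
    """Byte ranges to request from object storage, plus the bytes avoided."""
    reads = {c: footer[c] for c in needed_columns}
    skipped = sum(length for c, (_, length) in footer.items() if c not in reads)
    return reads, skipped

reads, skipped = ranged_reads(footer, ["timestamp", "http_status"])
print(skipped)  # 540000000 bytes that never leave R2
```

<p>In this toy file, a query over <code>timestamp</code> and <code>http_status</code> reads 2.5 MB instead of 542.5 MB, and the skipped bytes also never need decompressing.</p>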
    <div>
      <h3>Returning query results</h3>
      <a href="#returning-query-results">
        
      </a>
    </div>
    <p>Once the query worker finishes computing results, it returns them to the coordinator through <a href="https://grpc.io/"><u>the gRPC protocol</u></a>.</p><p>R2 SQL uses <a href="https://arrow.apache.org/"><u>Apache Arrow</u></a> for internal representation of query results. Arrow is an in-memory format that efficiently represents arrays of structured data. It is also used by DataFusion during query execution to represent partitions of data.</p><p>In addition to being an in-memory format, Arrow also defines the <a href="https://arrow.apache.org/docs/format/Columnar.html#format-ipc"><u>Arrow IPC</u></a> serialization format. Arrow IPC isn’t designed for long-term storage of the data, but for inter-process communication, which is exactly what query workers and the coordinator do over the network. The query worker serializes all the results into the Arrow IPC format and embeds them into the gRPC response. The coordinator in turn deserializes results and can return to working on Arrow arrays.</p>
    <div>
      <h2>Future plans</h2>
      <a href="#future-plans">
        
      </a>
    </div>
<p>While R2 SQL is currently quite good at executing filter queries, we also plan to rapidly add new capabilities over the coming months. This includes, but is not limited to, adding:</p><ul><li><p>Support for complex aggregations in a distributed and scalable fashion;</p></li><li><p>Tools that provide visibility into query execution, helping developers improve performance;</p></li><li><p>Support for many of the configuration options Apache Iceberg supports.</p></li></ul><p>In addition to that, we have plans to improve our developer experience by allowing users to query their R2 Data Catalogs using R2 SQL from the Cloudflare Dashboard.</p><p>Given Cloudflare’s distributed compute, network capabilities, and ecosystem of developer tools, we have the opportunity to build something truly unique here. We are exploring different kinds of indexes to make R2 SQL queries even faster and provide more functionality such as full text search, geospatial queries, and more. </p>
    <div>
      <h2>Try it now!</h2>
      <a href="#try-it-now">
        
      </a>
    </div>
    <p>It’s early days for R2 SQL, but we’re excited for users to get their hands on it. R2 SQL is available in open beta today! Head over to our<a href="https://developers.cloudflare.com/r2-sql/get-started/"> <u>getting started guide</u></a> to learn how to create an end-to-end data pipeline that processes and delivers events to an R2 Data Catalog table, which can then be queried with R2 SQL.</p><p>
We’re excited to see what you build! Come share your feedback with us on our<a href="http://discord.cloudflare.com/"> <u>Developer Discord</u></a>.</p><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Data]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <category><![CDATA[Edge Computing]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[SQL]]></category>
            <guid isPermaLink="false">7znvjodLkg1AxYlR992it2</guid>
            <dc:creator>Yevgen Safronov</dc:creator>
            <dc:creator>Nikita Lapkov</dc:creator>
            <dc:creator>Jérôme Schneider</dc:creator>
        </item>
        <item>
            <title><![CDATA[Safe in the sandbox: security hardening for Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/safe-in-the-sandbox-security-hardening-for-cloudflare-workers/</link>
            <pubDate>Thu, 25 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We are further hardening Cloudflare Workers with the latest software and hardware features. We use defense-in-depth, including V8 sandboxes and the CPU's memory protection keys to keep your data safe. ]]></description>
<content:encoded><![CDATA[ <p>As a <a href="https://www.cloudflare.com/learning/serverless/what-is-serverless/"><u>serverless</u></a> cloud provider, we run your code on our globally distributed infrastructure. Being able to run customer code on our network means that anyone can take advantage of our global presence and low latency. Workers isn’t just efficient, though: we also make it simple for our users. In short: <a href="https://workers.cloudflare.com/"><u>You write code. We handle the rest</u></a>.</p><p>Part of 'handling the rest' is making Workers as secure as possible. We have previously written about our <a href="https://blog.cloudflare.com/mitigating-spectre-and-other-security-threats-the-cloudflare-workers-security-model/"><u>security architecture</u></a>. Making Workers secure is an interesting problem because the whole point of Workers is that we are running third party code on our hardware. This is one of the hardest security problems there is: any attacker crafting an attack has the full power of a programming language running on the victim's system at their disposal.</p><p>This is why we are constantly updating and improving the Workers Runtime to take advantage of the latest improvements in both hardware and software. This post shares some of the latest work we have been doing to keep Workers secure.</p><p>Some background first: <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Workers</u></a> is built around the <a href="https://v8.dev/"><u>V8</u></a> JavaScript runtime, originally developed for Chromium-based browsers like Chrome. This gives us a head start, because V8 was forged in an adversarial environment, where it has always been under intense attack and <a href="https://github.blog/security/vulnerability-research/getting-rce-in-chrome-with-incorrect-side-effect-in-the-jit-compiler/"><u>scrutiny</u></a>. Like Workers, Chromium is built to run adversarial code safely. 
That's why V8 is constantly being tested against the best fuzzers and sanitizers, and over the years, it has been hardened with new technologies like <a href="https://v8.dev/blog/oilpan-library"><u>Oilpan/cppgc</u></a> and improved static analysis.</p><p>We use V8 in a slightly different way, though, so we will be describing in this post how we have been making some changes to V8 to improve security in our use case.</p>
    <div>
      <h2>Hardware-assisted security improvements from Memory Protection Keys</h2>
      <a href="#hardware-assisted-security-improvements-from-memory-protection-keys">
        
      </a>
    </div>
    <p>Modern CPUs from Intel, AMD, and ARM have support for <a href="https://man7.org/linux/man-pages/man7/pkeys.7.html"><u>memory protection keys</u></a>, sometimes called <i>PKU</i>, Protection Keys for Userspace. This is a great security feature which increases the power of virtual memory and memory protection.</p><p>Traditionally, the memory protection features of the CPU in your PC or phone were mainly used to protect the kernel and to protect different processes from each other. Within each process, all threads had access to the same memory. Memory protection keys allow us to prevent specific threads from accessing memory regions they shouldn't have access to.</p><p>V8 already <a href="https://issues.chromium.org/issues/41480375"><u>uses memory protection keys</u></a> for the <a href="https://en.wikipedia.org/wiki/Just-in-time_compilation"><u>JIT compilers</u></a>. The JIT compilers for a language like JavaScript generate optimized, specialized versions of your code as it runs. Typically, the compiler is running on its own thread, and needs to be able to write data to the code area in order to install its optimized code. However, the compiler thread doesn't need to be able to run this code. The regular execution thread, on the other hand, needs to be able to run, but not modify, the optimized code. Memory protection keys offer a way to give each thread the permissions it needs, but <a href="https://en.wikipedia.org/wiki/W%5EX"><u>no more</u></a>. And the V8 team in the Chromium project certainly aren't standing still. They describe some of their future plans for memory protection keys <a href="https://docs.google.com/document/d/1l3urJdk1M3JCLpT9HDvFQKOxuKxwINcXoYoFuKkfKcc/edit?tab=t.0#heading=h.gpz70vgxo7uc"><u>here</u></a>.</p><p>In Workers, we have some different requirements than Chromium. 
<a href="https://developers.cloudflare.com/workers/reference/security-model/"><u>The security architecture for Workers</u></a> uses V8 isolates to separate different scripts that are running on our servers. (In addition, we have <a href="https://blog.cloudflare.com/spectre-research-with-tu-graz/"><u>extra mitigations</u></a> to harden the system against <a href="https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)"><u>Spectre</u></a> attacks). If V8 is working as intended, this should be enough, but we believe in <i>defense in depth</i>: multiple, overlapping layers of security controls.</p><p>That's why we have deployed internal modifications to V8 to use memory protection keys to isolate the isolates from each other. There are up to 15 different keys available on a modern x64 CPU and a few are used for other purposes in V8, so we have about 12 to work with. We give each isolate a random key which is used to protect its V8 <i>heap data</i>, the memory area containing the JavaScript objects a script creates as it runs. This means security bugs that might previously have allowed an attacker to read data from a different isolate would now hit a hardware trap in 92% of cases. (Assuming 12 keys, 92% is about 11/12.)</p>
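<p>The 92% figure is simply the probability that two independently, uniformly assigned keys differ, which is easy to sanity-check:</p>

```python
# Back-of-the-envelope check of the "92%" figure: with 12 usable protection
# keys assigned uniformly at random, the chance an attacker's isolate shares
# a key with a specific victim isolate is 1/12, so a cross-isolate read hits
# the hardware trap ~11/12 of the time.

keys = 12
p_trap = 1 - 1 / keys
print(round(p_trap * 100))  # 92
```

<p>Because keys are assigned at random per isolate, an attacker also cannot choose which key they land on, so repeated attempts against a specific victim keep the same odds.</p>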
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4cHaaZrAhQf759og04S63G/59ff1974dc878ec8ad7d40f1f079be37/image9.png" />
          </figure><p>The illustration shows an attacker attempting to read from a different isolate. Most of the time this is detected by the mismatched memory protection key, which kills their script and notifies us, so we can investigate and remediate. The red arrow represents the case where the attacker got lucky by hitting an isolate with the same memory protection key, represented by the isolates having the same colors.</p><p>However, we can further improve on a 92% protection rate. In the last part of this blog post we'll explain how we can lift that to 100% for a particular common scenario. But first, let's look at a software hardening feature in V8 that we are taking advantage of.</p>
    <div>
      <h2>The V8 sandbox, a software-based security boundary</h2>
      <a href="#the-v8-sandbox-a-software-based-security-boundary">
        
      </a>
    </div>
    <p>Over the past few years, V8 has been gaining another defense in depth feature: the V8 sandbox. (Not to be confused with the <a href="https://blog.cloudflare.com/sandboxing-in-linux-with-zero-lines-of-code/"><u>layer 2 sandbox</u></a> which Workers have been using since the beginning.) The V8 sandbox has been a multi-year project that has been gaining <a href="https://v8.dev/blog/sandbox"><u>maturity</u></a> for a while. The sandbox project stems from the observation that many V8 security vulnerabilities start by corrupting objects in the V8 heap memory. Attackers then leverage this corruption to reach other parts of the process, giving them the opportunity to escalate and gain more access to the victim's browser, or even the entire system.</p><p>V8's sandbox project is an ambitious software security mitigation that aims to thwart that escalation: to make it impossible for the attacker to progress from a corruption on the V8 heap to a compromise of the rest of the process. This means, among other things, removing all pointers from the heap. But first, let's explain in as simple terms as possible, what a memory corruption attack is.</p>
    <div>
      <h3>Memory corruption attacks</h3>
      <a href="#memory-corruption-attacks">
        
      </a>
    </div>
    <p>A memory corruption attack tricks a program into misusing its own memory. Computer memory is just a store of integers, where each integer is stored in a location. The locations each have an <i>address</i>, which is also just a number. Programs interpret the data in these locations in different ways, such as text, pixels, or <i>pointers</i>. Pointers are addresses that identify a different memory location, so they act as a sort of arrow that points to some other piece of data.</p><p>Here's a concrete example, which uses a buffer overflow. This is a form of attack that was historically common and relatively simple to understand: Imagine a program has a small buffer (like a 16-character text field) followed immediately by an 8-byte pointer to some ordinary data. An attacker might send the program a 24-character string, causing a "buffer overflow." Because of a vulnerability in the program, the first 16 characters fill the intended buffer, but the remaining 8 characters spill over and overwrite the adjacent pointer.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5VlcKOYtfRHwWZVDb6GOPm/517ae1987c89273e1f33eb6ca11d752d/image5.png" />
          </figure><p><sup><i>See below for how such an attack would now be thwarted.</i></sup></p><p>Now the pointer has been redirected to point at sensitive data of the attacker's choosing, rather than the normal data it was originally meant to access. When the program tries to use what it believes is its normal pointer, it's actually accessing sensitive data chosen by the attacker.</p><p>This type of attack works in steps: first create a small confusion (like the buffer overflow), then use that confusion to create bigger problems, eventually gaining access to data or capabilities the attacker shouldn't have. The attacker can then use the misdirection to either steal information or plant malicious data that the program will treat as legitimate.</p><p>This was a somewhat abstract description of memory corruption attacks using a buffer overflow, one of the simpler techniques. For some much more detailed and recent examples, see <a href="https://googleprojectzero.blogspot.com/2015/06/what-is-good-memory-corruption.html"><u>this description from Google</u></a>, or this <a href="https://medium.com/@INTfinitySG/miscellaneous-series-2-a-script-kiddie-diary-in-v8-exploit-research-part-1-5b0bab211f5a"><u>breakdown of a V8 vulnerability</u></a>.</p>
    <div>
      <h3>Compressed pointers in V8</h3>
      <a href="#compressed-pointers-in-v8">
        
      </a>
    </div>
    <p>Many attacks are based on corrupting pointers, so ideally we would remove all pointers from the memory of the program.  Since an object-oriented language's heap is absolutely full of pointers, that would seem, on its face, to be a hopeless task, but it is enabled by an earlier development. Starting in 2020, V8 has offered the option of saving memory by using <a href="https://v8.dev/blog/pointer-compression"><u>compressed pointers</u></a>. This means that, on a 64-bit system, the heap uses only 32 bit offsets, relative to a base address. This limits the total heap to maximally 4 GiB, a limitation that is acceptable for a browser, and also fine for individual scripts running in a V8 isolate on Cloudflare Workers.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/sO5ByQzR62UcxZiaxwcaq/2f2f0c04af57bb492e9ecaa321935112/image1.png" />
          </figure><p><sup><i>An artificial object with various fields, showing how the layout differs in a compressed vs. an uncompressed heap. The boxes are 64 bits wide.</i></sup></p><p>If the whole of the heap is in a single 4 GiB area then the first 32 bits of all pointers will be the same, and we don't need to store them in every pointer field in every object. In the diagram we can see that the object pointers all start with 0x12345678, which is therefore redundant and doesn't need to be stored. This means that object pointer fields and integer fields can be reduced from 64 to 32 bits.</p><p>We still need 64 bit fields for some values, like double precision floats, and for the sandbox offsets of buffers, which are typically used by the script for input and output data. See below for details.</p><p>Integers in an uncompressed heap are stored in the high 32 bits of a 64 bit field. In the compressed heap, the top 31 bits of a 32 bit field are used. In both cases the lowest bit is set to 0 to indicate integers (as opposed to pointers or offsets).</p><p>Conceptually, we have two methods for compressing and decompressing, using a base address that is divisible by 4 GiB:</p>
            <pre><code>// Decompress a 32 bit offset to a 64 bit pointer by adding a base address.
void* Decompress(uint32_t offset) { return base + offset; }
// Compress a 64 bit pointer to a 32 bit offset by discarding the high bits.
uint32_t Compress(void* pointer) { return (uintptr_t)pointer &amp; 0xffffffff; }</code></pre>
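<p>As a concrete illustration (our own sketch, not V8's actual implementation), small integers can be tagged into the top 31 bits of a compressed 32 bit field like this, leaving the low bit 0 to mark the value as an integer rather than a pointer or offset:</p>

```c
#include <stdint.h>

/* Sketch (our illustration, not V8's code): a small integer is stored in
 * the top 31 bits of a compressed 32 bit field; the low bit is left as 0
 * to distinguish integers from pointers and offsets. The value must fit
 * in 31 bits. */
static uint32_t smi_tag(int32_t value) {
    return (uint32_t)value << 1;   /* low bit becomes 0 */
}

static int32_t smi_untag(uint32_t field) {
    return (int32_t)field >> 1;    /* arithmetic shift restores the sign */
}

static int is_smi(uint32_t field) {
    return (field & 1) == 0;       /* low bit 0 means integer */
}
```

<p>Shifting left by one both moves the value into the top 31 bits and clears the tag bit in a single cheap operation.</p>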
            <p>This pointer compression feature, originally primarily designed to save memory, can be used as the basis of a sandbox.</p>
    <div>
      <h3>From compressed pointers to the sandbox</h3>
      <a href="#from-compressed-pointers-to-the-sandbox">
        
      </a>
    </div>
    <p>The biggest 32-bit unsigned integer is about 4 billion, so the <code>Decompress()</code> function cannot generate any pointer that is outside the range [base, base + 4 GiB]. You could say the pointers are trapped in this area, so it is sometimes called the <i>pointer cage</i>. V8 can reserve 4 GiB of virtual address space for the pointer cage so that only V8 objects appear in this range. By eliminating <i>all</i> pointers from this range, and following some other strict rules, V8 can contain any memory corruption by an attacker to this cage. Even if an attacker corrupts a 32 bit offset within the cage, it is still only a 32 bit offset and can only be used to create new pointers that are still trapped within the pointer cage.</p>
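<p>The containment argument can be checked with a few lines of arithmetic (the base address below is a made-up example): no matter what 32 bit value an attacker writes into an offset field, decompression can only produce an address inside the 4 GiB cage.</p>

```c
#include <stdint.h>

/* Sketch: decompressing any attacker-controlled 32 bit offset can only
 * yield an address in [base, base + 4 GiB). The base is a made-up
 * example value, aligned to 4 GiB. */
#define CAGE_BASE 0x123400000000ULL
#define CAGE_SIZE (1ULL << 32)     /* 4 GiB */

static uint64_t decompress(uint32_t offset) {
    return CAGE_BASE + offset;     /* cannot escape the cage */
}
```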
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3r5H81eDvHgaPIBFw5gG6B/65ffa220f9141a81af893183a09321ac/image7.png" />
          </figure><p><sup><i>The buffer overflow attack from earlier no longer works because only the attacker's own data is available in the pointer cage.</i></sup></p><p>To construct the sandbox, we take the 4 GiB pointer cage and add another 4 GiB for buffers and other data structures to make the 8 GiB sandbox. This is why the buffer offsets above are 33 bits, so they can reach buffers in the second half of the sandbox (40 bits in Chromium with larger sandboxes). V8 stores these buffer offsets in the high 33 bits and shifts down by 31 bits before use, in case an attacker corrupted the low bits.</p><p>Cloudflare Workers have made use of compressed pointers in V8 for a while, but for us to get the full power of the sandbox we had to make some changes. Until recently, all isolates in a process had to share one single sandbox if you were using the sandboxed configuration of V8. This would have limited the total size of all V8 heaps to be less than 4 GiB, far too little for our architecture, which relies on serving thousands of scripts at once.</p><p>That's why we commissioned <a href="https://www.igalia.com/"><u>Igalia</u></a> to add <a href="https://dbezhetskov.dev/multi-sandboxes/"><u>isolate groups</u></a> to V8. Each isolate group has its own sandbox and can have one or more isolates within it. Building on this change we have been able to start using the sandbox, eliminating a whole class of potential security issues in one stroke. Although we can place multiple isolates in the same sandbox, we are currently only putting a single isolate in each sandbox.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3jwaGI8xIAC6755vw2BWfE/d8b0cd5b36dbe8b5e628c62ef7f3d474/image2.png" />
          </figure><p><sup><i>The layout of the sandbox. In the sandbox there can be more than one isolate, but all their heap pages must be in the pointer cage: the first 4 GiB of the sandbox. Instead of pointers between the objects, we use 32 bit offsets. The offsets for the buffers are 33 bits, so they can reach the whole sandbox, but not outside it.</i></sup></p>
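<p>The buffer offset encoding described above can be sketched as follows (our illustration of the idea, not V8's exact code): the 33 bit offset lives in the high bits of a 64 bit field and is shifted down by 31 bits before use, so an attacker who corrupts the low 31 bits changes nothing.</p>

```c
#include <stdint.h>

/* Sketch: a 33 bit sandbox offset is stored in the high bits of a 64 bit
 * field and shifted down by 31 bits before use, so the low 31 bits are
 * ignored even if an attacker corrupts them. */
static uint64_t encode_buffer_offset(uint64_t offset) {
    return offset << 31;           /* offset must fit in 33 bits */
}

static uint64_t decode_buffer_offset(uint64_t field) {
    return field >> 31;            /* discards the low 31 bits */
}
```

<p>A 33 bit offset shifted up by 31 bits exactly fills the 64 bit field, which is why corrupting the low bits cannot widen the reachable range beyond the 8 GiB sandbox.</p>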
    <div>
      <h2>Virtual memory isn't infinite: there's a lot going on in a Linux process</h2>
      <a href="#virtual-memory-isnt-infinite-theres-a-lot-going-on-in-a-linux-process">
        
      </a>
    </div>
    <p>At this point, we were not quite done, though. Each sandbox reserves 8 GiB of space in the virtual memory map of the process, and it must be 4 GiB aligned <a href="https://v8.dev/blog/pointer-compression"><u>for efficiency</u></a>. It uses much less physical memory, but the sandbox mechanism requires this much virtual space for its security properties. This presents us with a problem, since a Linux process 'only' has 128 TiB of virtual address space in a 4-level page table (another 128 TiB are reserved for the kernel, not available to user space).</p><p>At Cloudflare, we want to run Workers as efficiently as possible to keep costs and prices down, and to offer a generous free tier. That means that on each machine we have so many isolates running (one per sandbox) that it becomes hard to place them all in a 128 TiB space.</p><p>Knowing this, we have to place the sandboxes carefully in memory. Unfortunately, the Linux syscall <a href="https://man7.org/linux/man-pages/man2/mmap.2.html"><u>mmap</u></a> does not allow us to specify the alignment of an allocation; the best we can do is guess a free location and request it. To get an 8 GiB area that is 4 GiB aligned, we have to ask for 12 GiB, then find the aligned 8 GiB area that must exist within that, and return the unused (hatched) edges to the OS:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Dqey3y5ZsPugD3pyRpQUY/cdadceeb96dbb01a2062dc98c7c554bc/image6.png" />
          </figure><p>If we allow the Linux kernel to place sandboxes randomly, we end up with a layout like this with gaps. Especially after running for a while, there can be both 8 GiB and 4 GiB gaps between sandboxes:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6oaIPZnjaJrLYoFK6v03oI/6c53895f1151d70f71511d8cdfa35f00/image3.png" />
          </figure><p>Sadly, because of our 12 GiB alignment trick, we can't even make use of the 8 GiB gaps. If we ask the OS for 12 GiB, it will never give us a gap like the 8 GiB gap between the green and blue sandboxes above. In addition, there are a host of other things going on in the virtual address space of a Linux process: the malloc implementation may want to grab pages at particular addresses, the executable and libraries are mapped at a random location by ASLR, and V8 has allocations outside the sandbox.</p><p>The latest generation of x64 CPUs supports a much bigger address space, which solves both the space and the fragmentation problems, and Linux kernels are able to make use of the extra bits with <a href="https://en.wikipedia.org/wiki/Intel_5-level_paging"><u>five level page tables</u></a>. A process has to <a href="https://lwn.net/Articles/717293/"><u>opt into this</u></a>, which is done by a single mmap call suggesting an address outside the 47 bit area. The reason this needs an opt-in is that some programs can't cope with such high addresses. Curiously, V8 is one of them.</p><p>This isn't hard to fix in V8, but not all of our fleet has been upgraded yet to have the necessary hardware. So for now, we need a solution that works with the existing hardware. We have modified V8 to be able to grab huge memory areas and then use <a href="https://man7.org/linux/man-pages/man2/mprotect.2.html"><u>mprotect syscalls</u></a> to create tightly packed 8 GiB spaces for sandboxes, bypassing the inflexible mmap API.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7kPgAWxoR7nDsZHUOBsNMp/15e7b2a1aac827acfce8b0d614e44cde/image8.png" />
          </figure>
    <div>
      <h2>Putting it all together</h2>
      <a href="#putting-it-all-together">
        
      </a>
    </div>
    <p>Taking control of the sandbox placement like this actually gives us a security benefit, but first we need to describe a particular threat model.</p><p>We assume for the purposes of this threat model that an attacker has an arbitrary way to corrupt data within the sandbox. This is historically the first step in many V8 exploits. So much so that there is a <a href="https://bughunters.google.com/about/rules/chrome-friends/5745167867576320/chrome-vulnerability-reward-program-rules#v8-sandbox-bypass-rewards"><u>special tier</u></a> in Google's V8 bug bounty program where you may <i>assume</i> you have this ability to corrupt memory, and they will pay out if you can leverage that to a more serious exploit.</p><p>However, we assume that the attacker does not have the ability to execute arbitrary machine code. If they did, they could <a href="https://www.usenix.org/system/files/sec20fall_connor_prepub.pdf"><u>disable memory protection keys</u></a>. Having access to the in-sandbox memory only gives the attacker access to their own data. So the attacker must attempt to escalate, by corrupting data inside the sandbox to access data outside the sandbox.</p><p>You will recall that the compressed, sandboxed V8 heap only contains 32 bit offsets. Therefore, no corruption there can reach outside the pointer cage. But there are also arrays in the sandbox — vectors of data with a given size that can be accessed with an index. In our threat model, the attacker can modify the sizes recorded for those arrays and the indexes used to access elements in the arrays. That means an attacker could potentially turn an array in the sandbox into a tool for accessing memory incorrectly. For this reason, the V8 sandbox normally has <i>guard regions</i> around it: These are 32 GiB virtual address ranges that have no virtual-to-physical address mappings. This helps guard against the worst case scenario: Indexing an array where the elements are 8 bytes in size (e.g. 
an array of double precision floats) using a maximal 32 bit index. Such an access could reach a distance of up to 32 GiB outside the sandbox: 8 times the maximal 32 bit index of four billion.</p><p>We want such accesses to trigger an alarm, rather than letting an attacker access nearby memory.  This happens automatically with guard regions, but we don't have space for conventional 32 GiB guard regions around every sandbox.</p><p>Instead of using conventional guard regions, we can make use of memory protection keys. By carefully controlling which isolate group uses which key, we can ensure that no sandbox within 32 GiB has the same protection key. Essentially, the sandboxes are acting as each other's guard regions, protected by memory protection keys. Now we only need a wasted 32 GiB guard region at the start and end of the huge packed sandbox areas.
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/53MPs8P84ayqEiTXh7gV5O/88104f74f1d51dbdda8d987e1c7df3aa/image10.png" />
          </figure><p>With the new sandbox layout, we use strictly rotating memory protection keys. Because we are not using randomly chosen memory protection keys, for this threat model the 92% problem described above disappears. Any in-sandbox security issue is unable to reach a sandbox with the same memory protection key. In the diagram, we show that there is no memory within 32 GiB of a given sandbox that has the same memory protection key. Any attempt to access memory within 32 GiB of a sandbox will trigger an alarm, just like it would with unmapped guard regions.</p>
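<p>The rotation itself is just modular arithmetic. In this sketch the key count is our own example value, not necessarily the number used in production: with N keys rotating strictly, the nearest sandbox sharing a key is N * 8 GiB away, which for any N of at least 5 exceeds the 32 GiB an out-of-bounds index can reach.</p>

```c
#include <stddef.h>

#define SANDBOX_GiB 8   /* each sandbox spans 8 GiB of address space */
#define NUM_KEYS    5   /* example key count (our assumption); any value of
                           5 or more keeps same-key sandboxes > 32 GiB apart */

/* Sketch: strictly rotating memory protection key assignment. */
static int key_for_sandbox(size_t i) {
    return (int)(i % NUM_KEYS);
}

/* Distance in GiB between a sandbox and the nearest one with the same key. */
static size_t same_key_distance_gib(void) {
    return (size_t)NUM_KEYS * SANDBOX_GiB;
}
```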
    <div>
      <h2>The future</h2>
      <a href="#the-future">
        
      </a>
    </div>
    <p>In a way, this whole blog post is about things our customers <i>don't</i> need to do. They don't need to upgrade their server software to get the latest patches, we do that for them. They don't need to worry whether they are using the most secure or efficient configuration. So there's no call to action here, except perhaps to sleep easy.</p><p>However, if you find work like this interesting, and especially if you have experience with the implementation of V8 or similar language runtimes, then you should consider coming to work for us. <a href="https://job-boards.greenhouse.io/cloudflare/jobs/6718312?gh_jid=6718312"><u>We are recruiting both in the US and in Europe</u></a>. It's a great place to work, and Cloudflare is going from strength to strength.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Attacks]]></category>
            <category><![CDATA[Engineering]]></category>
            <category><![CDATA[Linux]]></category>
            <category><![CDATA[Malicious JavaScript]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Vulnerabilities]]></category>
            <guid isPermaLink="false">7bZyPF4nBnr5gisZW2crax</guid>
            <dc:creator>Erik Corry</dc:creator>
            <dc:creator>Ketan Gupta</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing the Cloudflare Data Platform: ingest, store, and query your data directly on Cloudflare]]></title>
            <link>https://blog.cloudflare.com/cloudflare-data-platform/</link>
            <pubDate>Thu, 25 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ The Cloudflare Data Platform, launching today, is a fully-managed suite of products for ingesting, transforming, storing, and querying analytical data, built on Apache Iceberg and R2 storage. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>For Developer Week in April 2025, we announced <a href="https://blog.cloudflare.com/r2-data-catalog-public-beta/"><u>the public beta of R2 Data Catalog</u></a>, a fully managed <a href="https://iceberg.apache.org/docs/nightly/"><u>Apache Iceberg</u></a> catalog on top of <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>Cloudflare R2 object storage</u></a>. Today, we are building on that foundation with three launches:</p><ul><li><p><b>Cloudflare Pipelines</b> receives events sent via Workers or HTTP, transforms them with SQL, and ingests them into Iceberg or as files on R2</p></li><li><p><b>R2 Data Catalog</b> manages the Iceberg metadata and now performs ongoing maintenance, including compaction, to improve query performance</p></li><li><p><b>R2 SQL</b> is our in-house distributed SQL engine, designed to perform petabyte-scale queries over your data in R2</p></li></ul><p>Together, these products make up the <b>Cloudflare Data Platform</b>, a complete solution for ingesting, storing, and querying analytical data tables.</p><p>Like all <a href="https://www.cloudflare.com/developer-platform/products/"><u>Cloudflare Developer Platform products</u></a>, they run on our global compute infrastructure. They’re built around open standards and interoperability. That means that you can bring your own Iceberg query engine — whether that's PyIceberg, DuckDB, or Spark — connect with other platforms like Databricks and Snowflake — and pay no egress fees to access your data.</p><p>Analytical data is critical for modern companies. It allows you to understand your users’ behavior and your company’s performance, and alerts you to issues. But traditional data infrastructure is expensive and hard to operate, requiring fixed cloud infrastructure and in-house expertise. 
We built the Cloudflare Data Platform to be easy enough for anyone to use with affordable, usage-based pricing.</p><p>If you're ready to get started now, follow the <a href="https://developers.cloudflare.com/pipelines/getting-started/"><u>Data Platform tutorial</u></a> for a step-by-step guide through creating a <a href="https://developers.cloudflare.com/pipelines/"><u>Pipeline</u></a> that processes and delivers events to an <a href="https://developers.cloudflare.com/r2/data-catalog/"><u>R2 Data Catalog</u></a> table, which can then be queried with <a href="https://developers.cloudflare.com/r2-sql/"><u>R2 SQL</u></a>. Or read on to learn about how we got here and how all of this works.</p>
    <div>
      <h3>How did we end up building a Data Platform?</h3>
      <a href="#how-did-we-end-up-building-a-data-platform">
        
      </a>
    </div>
    <p>We <a href="https://blog.cloudflare.com/introducing-r2-object-storage/"><u>launched R2 Object Storage in 2021</u></a> with a radical pricing strategy: no <a href="https://www.cloudflare.com/learning/cloud/what-are-data-egress-fees/"><u>egress fees</u></a> — the bandwidth costs that traditional cloud providers charge to get data out — effectively ransoming your data. This was possible because we had already built one of the largest global networks, interconnecting with thousands of ISPs, cloud services, and other enterprises.</p><p>Object storage powers a wide range of use cases, from media to static assets to AI training data. But over time, we've seen an increasing number of companies using open data and table formats to store their analytical data warehouses in R2.</p><p>The technology that enables this is <a href="https://iceberg.apache.org/"><u>Apache Iceberg</u></a>. Iceberg is a <i>table format</i>, which provides database-like capabilities (including updates, ACID transactions, and schema evolution) on top of data files in object storage. In other words, it’s a metadata layer that tells clients which data files make up a particular logical table, what the schemas are, and how to efficiently query them.</p><p>The adoption of Iceberg across the industry meant users were no longer locked in to one query engine. But egress fees still make it cost-prohibitive to query data across regions and clouds. R2, with <a href="https://www.cloudflare.com/the-net/egress-fees-exit/"><u>zero-cost egress</u></a>, solves that problem — users would no longer be locked in to their clouds either. They could store their data in a vendor-neutral location and let teams use whatever query engine made sense for their data and query patterns.</p><p>But users still had to manage all of the metadata and other infrastructure themselves. We realized there was an opportunity for us to solve a major pain point and reduce the friction of storing data lakes on R2. 
This became R2 Data Catalog, our managed Iceberg catalog.</p><p>With the data stored on R2 and metadata managed, that still left a few gaps for users to solve.</p><p>How do you get data into your Iceberg tables? Once it's there, how do you optimize for query performance? And how do you actually get value from your data without needing to self-host a query engine or use another cloud platform?</p><p>In the rest of this post, we'll walk through how the three products that make up the Data Platform solve these challenges.</p>
    <div>
      <h3>Cloudflare Pipelines</h3>
      <a href="#cloudflare-pipelines">
        
      </a>
    </div>
    <p>Analytical data tables are made up of <i>events</i>, things that happened at a particular point in time. They might come from server logs, mobile applications, or IoT devices, and are encoded in data formats like JSON, Avro, or Protobuf. They ideally have a schema — a standardized set of fields — but might just be whatever a particular team thought to throw in there.</p><p>But before you can query your events with Iceberg, they need to be ingested, structured according to a schema, and written into object storage. This is the role of <a href="https://developers.cloudflare.com/pipelines/"><u>Cloudflare Pipelines</u></a>.</p><p>Built on top of <a href="https://www.arroyo.dev"><u>Arroyo</u></a>, a stream processing engine we acquired earlier this year, Pipelines receives events, transforms them with SQL queries, and sinks them to R2 and R2 Data Catalog.</p><p>Pipelines is organized around three central objects:</p><p><a href="https://developers.cloudflare.com/pipelines/streams/"><b><u>Streams</u></b></a> are how you get data into Cloudflare. They're durable, buffered queues that receive events and store them for processing. Streams can accept events in two ways: via an HTTP endpoint or from a <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/"><u>Cloudflare Worker binding</u></a>.</p><p><a href="https://developers.cloudflare.com/pipelines/sinks/"><b><u>Sinks</u></b></a> define the destination for your data. We support ingesting into R2 Data Catalog, as well as writing raw files to R2 as JSON or <a href="https://parquet.apache.org/"><u>Apache Parquet</u></a>. Sinks can be configured to frequently write files, prioritizing low-latency ingestion, or to write less frequent, larger files to get better query performance. 
In either case, ingestion is <i>exactly-once</i>, which means that we will never duplicate or drop events on their way to R2.</p><p><a href="https://developers.cloudflare.com/pipelines/pipelines/"><b><u>Pipelines</u></b></a> connect streams and sinks via <a href="https://developers.cloudflare.com/pipelines/sql-reference/"><u>SQL transformations</u></a>, which can modify events before writing them to storage. This enables you to <i>shift left</i>, pushing validation, schematization, and processing to your ingestion layer to make your queries easy, fast, and correct.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7ExEHrwqgUuYLCm2q6yUkN/bf31d44dd7b97666af37cb2c25f12808/unnamed__33_.png" />
          </figure><p>For example, here's a pipeline that ingests events from a clickstream data source and writes them to Iceberg:</p>
            <pre><code>INSERT into events_table
SELECT
  user_id,
  lower(event) AS event_type,
  to_timestamp_micros(ts_us) AS event_time,
  regexp_match(url, '^https?://([^/]+)')[1]  AS domain,
  url,
  referrer,
  user_agent
FROM events_json
WHERE event = 'page_view'
  AND NOT regexp_like(user_agent, '(?i)bot|spider');</code></pre>
            <p>SQL transformations are very powerful and give you full control over how data is structured and written into the table. For example, you can</p><ul><li><p>Schematize and normalize your data, even using <a href="https://developers.cloudflare.com/pipelines/sql-reference/scalar-functions/json/"><u>JSON functions</u></a> to extract fields from arbitrary JSON</p></li><li><p>Filter out events or split them into separate tables with their own schemas</p></li><li><p>Redact sensitive information before storage with regexes</p></li><li><p>Unroll nested arrays and objects into separate events</p></li></ul><p>Initially, Pipelines supports stateless transformations. In the future, we'll leverage more of <a href="https://www.arroyo.dev/blog/stateful-stream-processing/"><u>Arroyo's stateful processing capabilities</u></a> to support aggregations, incrementally-updated materialized views, and joins.</p><p>Cloudflare Pipelines is available today in open beta. You can create a pipeline using the dashboard, Wrangler, or the REST API. To get started, check out our <a href="https://developers.cloudflare.com/pipelines/getting-started/"><u>developer docs.</u></a></p><p>We aren’t currently billing for Pipelines during the open beta. However, R2 storage and operations incurred by sinks writing data to R2 are billed at <a href="https://developers.cloudflare.com/r2/pricing/"><u>standard rates</u></a>. When we start billing, we anticipate charging based on the amount of data read, the amount of data processed via SQL transformations, and data delivered.</p>
    <div>
      <h3>R2 Data Catalog</h3>
      <a href="#r2-data-catalog">
        
      </a>
    </div>
    <p>We launched the open beta of <a href="https://developers.cloudflare.com/r2/data-catalog/"><u>R2 Data Catalog</u></a> in April and have been amazed by the response. Query engines like DuckDB <a href="https://duckdb.org/docs/stable/guides/network_cloud_storage/cloudflare_r2_import.html"><u>have added native support</u></a>, and we've seen useful integrations like <a href="https://blog.cloudflare.com/marimo-cloudflare-notebooks/"><u>marimo notebooks</u></a>.</p><p>It makes getting started with Iceberg easy. There’s no need to set up a database cluster, connect to object storage, or manage any infrastructure. You can create a catalog with a couple of <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler</u></a> commands:</p>
            <pre><code>$ npx wrangler r2 bucket create mycatalog
$ npx wrangler r2 bucket catalog enable mycatalog</code></pre>
            <p>This provisions a data lake that can scale to petabytes of storage, queryable by whatever engine you want to use with zero egress fees.</p><p>But just storing the data isn't enough. Over time, as data is ingested, the number of underlying data files that make up a table will grow, leading to slower and slower query performance.</p><p>This is a particular problem with low-latency ingestion, where the goal is to have events queryable as quickly as possible. Writing data frequently means the files are smaller, and there are more of them. Each file needed for a query has to be listed, downloaded, and read. The overhead of too many small files can dominate the total query time.</p><p>The solution is <i>compaction</i>, a periodic maintenance operation performed automatically by the catalog. Compaction rewrites small files into larger files which reduces metadata overhead and increases query performance. </p><p>Today we are launching compaction support in R2 Data Catalog. Enabling it for your catalog is as easy as:
</p>
            <pre><code>$ npx wrangler r2 bucket catalog compaction enable mycatalog</code></pre>
            <p>We're starting with support for small-file compaction, and will expand to additional compaction strategies in the future. Check out the <a href="https://developers.cloudflare.com/r2/data-catalog/about-compaction/"><u>compaction documentation</u></a> to learn more about how it works and how to enable it.</p><p>At this time, during open beta, we aren’t billing for R2 Data Catalog. Below is our current thinking on future pricing:</p><table><tr><td><p>
</p></td><td><p><b>Pricing*</b></p></td></tr><tr><td><p>R2 storage</p><p>For standard storage class</p></td><td><p>$0.015 per GB-month (no change)</p></td></tr><tr><td><p>R2 Class A operations</p></td><td><p>$4.50 per million operations (no change)</p></td></tr><tr><td><p>R2 Class B operations</p></td><td><p>$0.36 per million operations (no change)</p></td></tr><tr><td><p>Data Catalog operations</p><p>e.g., create table, get table metadata, update table properties</p></td><td><p>$9.00 per million catalog operations</p></td></tr><tr><td><p>Data Catalog compaction data processed</p></td><td><p>$0.005 per GB processed</p><p>$2.00 per million objects processed</p></td></tr><tr><td><p>Data egress</p></td><td><p>$0 (no change, always free)</p></td></tr></table><p><i>*prices subject to change prior to General Availability</i></p><p>We will provide at least 30 days notice before billing starts or if anything changes.</p>
    <div>
      <h3>R2 SQL</h3>
      <a href="#r2-sql">
        
      </a>
    </div>
    <p>Having data in R2 Data Catalog is only the first step; the real goal is getting insights and value from it. Traditionally, that means setting up and managing DuckDB, Spark, Trino, or another query engine, adding a layer of operational overhead between you and those insights. What if instead you could run queries directly on Cloudflare?</p><p>Now you can. We’ve built a query engine specifically designed for R2 Data Catalog and Cloudflare’s edge infrastructure. We call it <a href="https://developers.cloudflare.com/r2-sql/"><u>R2 SQL</u></a>, and it’s available today as an open beta.</p><p>With Wrangler, running a query on an R2 Data Catalog table is as easy as</p>
            <pre><code>$ npx wrangler r2 sql query "{WAREHOUSE}" "\
  SELECT user_id, url FROM events \
  WHERE domain = 'mywebsite.com'"</code></pre>
            <p>Cloudflare's ability to schedule compute anywhere on its global network is the foundation of R2 SQL's design. This lets us process data directly where it lives, instead of requiring you to manage centralized clusters for your analytical workloads.</p><p>R2 SQL is tightly integrated with R2 Data Catalog and R2, which allows the query planner to go beyond simple storage scanning and make deep use of the rich statistics stored in the R2 Data Catalog metadata. This provides a powerful foundation for a new class of query optimizations, such as auxiliary indexes or enabling more complex analytical functions in the future.</p><p>The result is a fully serverless experience for users. You can focus on your SQL without needing a deep understanding of how the engine operates. If you are interested in how R2 SQL works, the team has written <a href="https://blog.cloudflare.com/r2-sql-deep-dive"><u>a deep dive into how R2 SQL’s distributed query engine works at scale.</u></a></p><p>The open beta is an early preview of R2 SQL querying capabilities, and is initially focused around filter queries. Over time, we will be expanding its capabilities to cover more SQL features, like complex aggregations.</p><p>We're excited to see what our users do with R2 SQL. To try it out, see <a href="https://developers.cloudflare.com/r2-sql/"><u>the documentation</u></a> and <a href="https://developers.cloudflare.com/r2-sql/get-started/"><u>tutorials</u></a><b>. </b>During the beta, R2 SQL usage is not currently billed, but R2 storage and operations incurred by queries are billed at standard rates. We plan to charge for the volume of data scanned by queries in the future and will provide notice before billing begins.</p>
    <div>
      <h3>Wrapping up</h3>
      <a href="#wrapping-up">
        
      </a>
    </div>
    <p>Today, you can use the Cloudflare Data Platform to ingest events into R2 Data Catalog and query them via R2 SQL. In the first half of 2026, we’ll be expanding on the capabilities in all of these products, including:</p><ul><li><p>Integration with <a href="https://developers.cloudflare.com/logs/logpush/"><u>Logpush</u></a>, so you can transform, store, and query your logs directly within Cloudflare</p></li><li><p>User-defined functions via Workers, and stateful processing support for streaming transformations</p></li><li><p>Expanding the featureset of R2 SQL to cover aggregations and joins</p></li></ul><p>In the meantime, you can get started with the Cloudflare Data Platform by following <a href="https://developers.cloudflare.com/pipelines/getting-started/"><u>the tutorial</u></a> to create an end-to-end analytical data system, from ingestion with Pipelines, through storage in R2 Data Catalog, and querying with R2 SQL. 

We’re excited to see what you build! Come share your feedback with us on our <a href="http://discord.cloudflare.com/"><u>Developer Discord</u></a>.</p><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Data Catalog]]></category>
            <category><![CDATA[Pipelines]]></category>
            <guid isPermaLink="false">1InN6nunuaGKjLU7DcoArr</guid>
            <dc:creator>Micah Wylde</dc:creator>
            <dc:creator>Alex Graham</dc:creator>
            <dc:creator>Jérôme Schneider</dc:creator>
        </item>
        <item>
            <title><![CDATA[Choice: the path to AI sovereignty]]></title>
            <link>https://blog.cloudflare.com/sovereign-ai-and-choice/</link>
            <pubDate>Thu, 25 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Championing AI sovereignty through choice: diverse tools, data control, and no vendor lock-in. We're enabling this in India, Japan, and Southeast Asia, offering local, open-source models on Workers AI ]]></description>
            <content:encoded><![CDATA[ <p>Every government is laser-focused on the potential for national transformation by AI. Many view AI as an unparalleled opportunity to solve complex national challenges, drive economic growth, and improve the lives of their citizens. Others are concerned about the risks AI can bring to their societies and economies. Some sit somewhere between these two perspectives. But as plans are drawn up by governments around the world to address the question of AI development and adoption, all are grappling with the critical question of sovereignty — how much of this technology, mostly centered in the United States and China, needs to be in their direct control? </p><p>Each nation has its own response to that question — some seek ‘self-sufficiency’ and total authority. Others, particularly those that do not have the capacity to build the full AI technology stack, are approaching it layer-by-layer, seeking to build on the capacities their country does have and then forming strategic partnerships to fill the gaps. </p><p>We believe AI sovereignty at its core is about choice. Each nation should have the ability to select the right tools for the task, to control its own data, and to deploy applications at will, all without being locked into a single provider or a single way of doing things. It's about autonomy and options, realized through a diversified, resilient digital supply chain. </p><p>Cloudflare’s mission is to help build a better Internet. We make tools for developers around the world to build Internet and AI applications that are widely, and in many cases, freely, available. We work on standards to improve interoperability and prevent <a href="https://www.cloudflare.com/learning/cloud/what-is-vendor-lock-in/"><u>vendor lock-in</u></a>. And we are global — our <a href="https://www.cloudflare.com/network/"><u>network</u></a> spans 330 cities in over 125 countries. 
By supporting local developers to build and deploy AI tools and services right where they are, Cloudflare can help each nation on their path to greater AI sovereignty. </p>
    <div>
      <h2>Creating a future that enables many AI options</h2>
      <a href="#creating-a-future-that-enables-many-ai-options">
        
      </a>
    </div>
    <p>Many nations recognize the practical challenge of realizing a robust AI-driven future that incorporates sovereignty — the significant cost and complexity of the infrastructure needed to put AI into action. Cloudflare believes that countries can achieve their objectives by creating vibrant marketplaces that allow multiple options, and we are creating a path for governments that provides maximum choice:</p><p><b>Infrastructure accessibility: </b>Countries often focus on building large data centers that have the compute capacity to train general purpose AI models, neglecting the infrastructure needed to effectively deploy AI. Because of their proximity to end users, distributed edge networks are critical to ensuring that consumers can actually use AI technologies at scale. Although some AI technologies will be designed to work on-device, many will need more power to run <a href="https://blog.cloudflare.com/best-place-region-earth-inference/"><u>AI inference</u></a>, the tasks that users ask an AI engine to complete. Distributed networks are equipped to run AI workloads at the edge, to help deliver the low latency and high performance needed for advanced technologies. Cloudflare’s distributed network gives developers a path to rapidly deploy their apps globally without massive upfront investments. </p><p><b>Inclusivity</b>: Nations want their entire economies, from small businesses to research institutions, non-profits, and enterprises, to benefit from AI transformation. <a href="https://www.cloudflare.com/learning/serverless/what-is-serverless/"><u>Serverless models</u></a> like Cloudflare’s make it easy to get started. Developers pay only for what they use, rather than being locked into paying for expensive and unnecessary compute, dramatically lowering the barrier to entry. 
Our free tier allows developers to experiment, build, and even launch applications without any cost, while our pay-as-you-go model for increased usage removes the significant financial barriers that might otherwise keep advanced AI out of reach. </p><p><b>Control over data: </b>An important part of sovereignty is the ability to control your own data. We believe countries should avoid equating this type of control with data locality, focusing instead on integrating <a href="https://blog.cloudflare.com/cloudflare-for-ai-supporting-ai-adoption-at-scale-with-a-security-first-approach/"><u>security tools that provide visibility and the ability to restrict access to data</u></a>.  Cloudflare’s global, distributed network ensures that developers can experiment, build, and deploy AI-powered applications right where they are, setting rules and controls at the Internet edge.</p><p><b>Multi-modal, dynamic markets</b>: Building new applications with closed AI models can make it challenging to switch models later, and can make developers dependent on particular providers. AI strategies must embrace diversity — developers should have access to a wide variety of both open source and closed AI models. Cloudflare’s <a href="https://www.cloudflare.com/developer-platform/products/workers-ai/"><u>Workers AI</u></a> platform, with over <a href="https://developers.cloudflare.com/workers-ai/models/"><u>50 open source models</u></a>, is model agnostic, helping to create a competitive, dynamic environment where developers can swap models in and out as better, cheaper, or more specialized options become available. Cloudflare’s <a href="https://www.cloudflare.com/en-gb/lp/dg/brand/api-security/"><u>AI Gateway</u></a> allows our customers to connect and control all their AI models, regardless of vendor, in a single, unified, interoperable platform. </p><p>Underpinning all of this is the importance of<b> open standards</b> that encourage interoperability. 
Open standards and protocols throughout the AI technology stack help prevent dependency, create dynamic and competitive markets, and create choice for governments and their developers.</p>
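<p>To make that concrete: Workers AI exposes every model behind the same <code>run()</code> call, so switching models is a one-line change. A minimal sketch; the helper function and the binding name <code>AI</code> are illustrative assumptions, not part of the platform API:</p>

```javascript
// Hypothetical helper: send the same chat request to any Workers AI model.
// Because the caller passes the model ID, swapping providers means changing
// a string, not rewriting the integration.
async function ask(ai, model, question) {
  return ai.run(model, {
    messages: [{ role: "user", content: question }],
  });
}

// Inside a Worker, the same code can target different models:
//   await ask(env.AI, "@cf/meta/llama-3.1-8b-instruct", "Summarize this report");
//   await ask(env.AI, "@cf/aisingapore/gemma-sea-lion-v4-27b-it", "Summarize this report");
```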
    <div>
      <h2>Championing regional AI innovation  </h2>
      <a href="#championing-regional-ai-innovation">
        
      </a>
    </div>
    <p>Many countries have started to put their own mark on how to spur innovation in their markets, starting with <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/"><u>large language models (LLMs)</u></a>. AI development to date has mostly centered around LLMs trained on English-centric data, and increasingly, Chinese-centric data, leaving behind those who can’t fully access this technology in these two languages. Recognizing this gap, these nations are building and freely offering AI models trained on local language datasets that are fine-tuned to the nuances of their own cultures and languages. This approach lowers the barrier to entry for local businesses, organizations, and governments to create customized AI solutions for their specific markets. Open-sourcing these LLMs recognizes that AI sovereignty is a means to an end. The goal is innovation, economic growth, and the ability to solve meaningful problems.</p><p>Cloudflare is now supporting these sovereign AI initiatives in <b>India, Japan, and Southeast Asia</b>. We are bringing these locally-developed, open-source AI models to developers around the world through our serverless inference platform, Workers AI.</p><p><b>India</b>: <a href="https://indiaai.gov.in/"><u>India’s national vision</u></a> is “AI for All”, which focuses on AI driving inclusive growth and social empowerment. India will host the momentous global <a href="https://impact.indiaai.gov.in/home"><u>AI Impact Summit</u></a> in 2026, and a key element of the summit will be showcasing empowering technological advancements that are accessible to the Global South. With its immense linguistic diversity, India is at the forefront of creating models that serve its hundreds of millions of Internet users in their native tongues. 
A cornerstone in this endeavor is the Government of India’s <a href="https://bhashini.gov.in/"><u>Bhashini</u></a>, a digital public good platform that enables all Indian citizens to access the Internet and digital services in 22 official languages.</p><p>Cloudflare is now offering <a href="https://ai4bharat.iitm.ac.in/"><u>AI4Bharat</u></a>’s <a href="https://huggingface.co/ai4bharat/indictrans2-en-indic-1B"><u>IndicTrans2 model</u></a>, a key open source language model that is also part of the Bhashini initiative. The model translates text from English into 22 Indic languages, including Bengali, Gujarati, Hindi, Tamil, Sanskrit and even traditionally low-resourced languages like Kashmiri, Manipuri and Sindhi. </p><p>You can use the <a href="https://developers.cloudflare.com/workers-ai/models/indictrans2-en-indic-1B"><u>@cf/ai4bharat/indictrans2-en-indic-1B</u></a> model on Workers AI as follows:</p>
            <pre><code>curl --request POST \
  --url https://api.cloudflare.com/client/v4/accounts/ACCOUNT_ID/ai/run/@cf/ai4bharat/indictrans2-en-indic-1B \
  --header 'Authorization: Bearer TOKEN' \
  --header 'Content-Type: application/json' \
  --data '{
    "text": ["What is your favourite food?", "I like pizza"],
    "target_language": "guj_Gujr"
}'</code></pre>
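<p>If your code already runs inside a Worker, the same request can go through a Workers AI binding instead of the REST endpoint, with no API token to manage. A sketch, assuming a binding named <code>AI</code> is configured for the Worker:</p>

```javascript
// Same request shape as the curl example, but through a Workers AI binding
// (the binding name AI is an assumption about the Worker's configuration).
async function translateToIndic(ai, texts, targetLanguage) {
  return ai.run("@cf/ai4bharat/indictrans2-en-indic-1B", {
    text: texts,
    target_language: targetLanguage,
  });
}

export default {
  async fetch(request, env) {
    const result = await translateToIndic(
      env.AI,
      ["What is your favourite food?", "I like pizza"],
      "guj_Gujr"
    );
    return Response.json(result);
  },
};
```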
            <p><b>Japan: </b>Japan has a very clear and expansive <a href="http://www8.cao.go.jp/cstp/ai/aistratagy2022en.pdf"><u>vision</u></a> of AI development. Concerned about Japan’s slow AI uptake, the Japanese government aims to make the country “the world’s most friendly AI nation” by creating the ideal conditions for AI growth, both at home and abroad. A major initiative for Japan’s government is supporting AI that deeply understands the complexities and cultural context of the Japanese language. </p><p>Cloudflare is offering Preferred Networks, Inc.’s (PFN) <a href="https://huggingface.co/pfnet/plamo-embedding-1b"><u>PLaMo-Embedding-1B</u></a>, a home-grown Japanese text embedding model, made freely and openly available. The Japanese government supported PFN through its <a href="https://www.meti.go.jp/english/policy/mono_info_service/geniac/index.html"><u>Generative AI Accelerator Challenge (GENIAC)</u></a> program, which supports local LLM development through subsidized access to compute resources for training. The PLaMo Embedding model enables users to generate high-quality embeddings for Japanese text, which is helpful for building RAG-powered applications and semantic search use cases.</p><p>You can use the <a href="https://developers.cloudflare.com/workers-ai/models/plamo-embedding-1b"><u>@cf/pfnet/plamo-embedding-1b</u></a> model on Workers AI as follows:</p>
            <pre><code>curl --request POST \
  --url https://api.cloudflare.com/client/v4/accounts/ACCOUNT_ID/ai/run/@cf/pfnet/plamo-embedding-1b \
  --header 'Authorization: Bearer TOKEN' \
  --header 'Content-Type: application/json' \
  --data '{
  	"text": [
            "PLaMo-Embedding-1Bは、Preferred Networks, Inc. によって開発された日本語テキスト埋め込みモデルです。",
            "最近は随分と暖かくなりましたね。"
        ]
}'</code></pre>
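<p>Embeddings like these can power semantic search directly in a Worker: embed your documents and the query, then rank by cosine similarity. A sketch; the binding name <code>AI</code> and the <code>{ data }</code> response shape are assumptions based on the Workers AI embeddings convention:</p>

```javascript
// Standard cosine similarity between two equal-length vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Inside a Worker, rank a document against a query (response shape assumed):
//   const { data } = await env.AI.run("@cf/pfnet/plamo-embedding-1b", {
//     text: [documentText, queryText],
//   });
//   const score = cosineSimilarity(data[0], data[1]);
```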
            <p><b>Southeast Asia: </b>As Chair of the <a href="https://www.imda.gov.sg/about-imda/international-relations/asean-working-group-on-ai-governance"><u>Association of Southeast Asian Nations (ASEAN) Working Group on AI Governance</u></a>, Singapore has set out an ambitious <a href="https://www.smartnation.gov.sg/initiatives/national-ai-strategy"><u>National AI Strategy 2.0</u></a> that aims to ensure that AI is a public good, both for Southeast Asia and the world. As a cornerstone of this strategy, Singapore is championing the development and adoption of SEA-LION, a family of open-source LLMs designed for Southeast Asia's diverse languages and cultures. The initiative aims to establish the nation as an inclusive global AI leader, ensuring the technology is both accessible and regionally relevant to its multilingual and multicultural populations. The models are proficient in numerous regional languages, including Bahasa Indonesia, Bahasa Malaysia, Thai, Vietnamese, and Tamil, unlocking AI technologies for a significant portion of the Asian and global population.</p><p><a href="https://huggingface.co/aisingapore/Gemma-SEA-LION-v4-27B-IT-FP8-Dynamic"><u>SEA-LION model v4-27B</u></a> is now available on the Workers AI platform. SEA-LION v4 stands out on the Singapore government’s leaderboard as its most powerful, efficient, multimodal and multilingual model yet. </p><p>You can use the <a href="https://developers.cloudflare.com/workers-ai/models/gemma-sea-lion-v4-27b-it"><u>@cf/aisingapore/gemma-sea-lion-v4-27b-it</u></a> model on Workers AI as follows:</p>
            <pre><code>curl --request POST \
  --url https://api.cloudflare.com/client/v4/accounts/ACCOUNT_ID/ai/run/@cf/aisingapore/gemma-sea-lion-v4-27b-it \
  --header 'Authorization: Bearer TOKEN' \
  --header 'Content-Type: application/json' \
  --data '{
  "messages": [
    {
      "role": "user",
      "content": "แล้วทำผัดไทยอย่างไร"
    }
  ]
}'</code></pre>
            
    <div>
      <h2>Bringing AI models to the world</h2>
      <a href="#bringing-ai-models-to-the-world">
        
      </a>
    </div>
    <p>Singapore, India and Japan have all chosen to open-source many of their local language models, a strategy that champions an expansive vision of AI sovereignty. This approach demonstrates a crucial understanding: true AI sovereignty means ensuring you have choices. </p><p>Supporting local language open source models is more than just supporting technology; it is a shared commitment to fostering an open, interoperable, and competitive AI ecosystem by empowering governments and developers to solve local problems, create economic opportunities, and preserve their digital and cultural heritages.</p><p>We are honored to support the initiatives of the governments of India, Japan, and Singapore on this journey. We believe that by putting their sovereign AI models into the hands of developers in their economies, we can help unlock a powerful wave of innovation that is more diverse, equitable, and representative of the world we live in. The future of AI is being built today, and we are proud to ensure that AI developers <i>everywhere</i> are at the forefront. </p><p>Choice is the foundation of AI sovereignty. We’re starting with the models from India, Japan, and Singapore on our serverless inference platform, but it’s only the beginning. Come build with us! Take the first step for free on <a href="https://www.cloudflare.com/developer-platform/products/workers-ai/"><b><u>Workers AI</u></b><u>.</u></a></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">4jLFHUrYeSeoFnGedLBZQy</guid>
            <dc:creator>Carly Ramsey</dc:creator>
            <dc:creator>Smrithi Ramesh</dc:creator>
            <dc:creator>Michelle Chen</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing Cloudflare Email Service’s private beta]]></title>
            <link>https://blog.cloudflare.com/email-service/</link>
            <pubDate>Thu, 25 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Today, we’re launching Cloudflare Email Service. Send and receive email directly from your Workers with native bindings—no API keys needed. Sign up for the private beta. ]]></description>
            <content:encoded><![CDATA[ <p>If you are building an application, you rely on email to communicate with your users. You validate their signup, notify them about events, and send them invoices through email. Email continues to find new purpose with agentic workflows and other AI-powered tools that rely on a simple email as an input or output.</p><p>Yet email is a pain for developers to manage, and frequently one of the most annoying burdens a team carries. Developers deserve a solution that is simple, reliable, and deeply integrated into their workflow. </p><p>Today, we're excited to announce just that: the private beta of Email Sending, a new capability that allows you to send transactional emails directly from Cloudflare Workers. Email Sending joins and expands our popular <a href="https://developers.cloudflare.com/email-routing/"><u>Email Routing</u></a> product, and together they form the new Cloudflare Email Service — a single, unified developer experience for all your email needs.</p><p>With Cloudflare Email Service, we’re distilling our years of experience <a href="https://developers.cloudflare.com/cloudflare-one/email-security/"><u>securing</u></a> and <a href="https://developers.cloudflare.com/email-routing/"><u>routing</u></a> emails, and combining it with the power of the developer platform. Now, sending an email is as easy as adding a binding to a Worker and calling <code>send</code>:</p>
            <pre><code>export default {
  async fetch(request, env, ctx) {

    await env.SEND_EMAIL.send({
      to: [{ email: "hello@example.com" }],
      from: { email: "api-sender@your-domain.com", name: "Your App" },
      subject: "Hello World",
      text: "Hello World!"
    });

    return new Response(`Successfully sent email!`);
  },
};</code></pre>
            
    <div>
      <h3>Email experience is user experience</h3>
      <a href="#email-experience-is-user-experience">
        
      </a>
    </div>
    <p>Email is a core part of your user experience. It’s how you stay in touch with your users when they are outside your applications. Users rely on email for password resets, purchase receipts, magic login links, and onboarding flows. When those emails fail, your application fails.</p><p>That means emails need to land in your users’ inboxes, both reliably and quickly. A magic link that arrives ten minutes late is a lost user. An email delivered to a spam folder breaks user flows and can erode trust in your product. That’s why we’re focusing on deliverability and time-to-inbox with Cloudflare Email Service. </p><p>To do this, we’re tightly integrating with DNS to automatically configure the necessary DNS records — like SPF, DKIM and DMARC — such that email providers can verify your sending domain and trust your emails. Plus, in true Cloudflare fashion, Email Service is a global service. That means that we can deliver your emails with low latency anywhere in the world, without the complexity of managing servers across regions.</p>
    <div>
      <h3>Simple and flexible for developers</h3>
      <a href="#simple-and-flexible-for-developers">
        
      </a>
    </div>
    <p>Treating email as a core piece of your application also means building for every touchpoint in your development workflow. We’re building Email Service as part of the Cloudflare stack to make developing with email feel as natural as writing a Worker. </p><p>In practice, that means solving for every part of the transactional email workflow:</p><ul><li><p>Starting with Email Service is easy. Instead of managing API keys and secrets, you can add the <code>Email</code> binding to your <code>wrangler.jsonc</code> and send emails securely with no risk of leaked credentials. </p></li><li><p>You can use Workers to process incoming mail, store attachments in R2, and add tasks to Queues to get email sending off the hot path of your application. And you can use <code>wrangler</code> to emulate Email Sending locally, allowing you to test your user journeys without jumping between tools and environments.</p></li><li><p>In production, you have clear <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> over your emails with bounce rates and delivery events. And, when a user reports a missing email, you can dive into the delivery status to debug issues quickly and help get your user back on track.</p></li></ul><p>We’re also making sure Email Service seamlessly fits into your existing applications. If you need to send emails from external services, you can do so using either REST APIs or SMTP. Likewise, if you’ve been leaning on existing email frameworks (like <a href="https://react.email/"><u>React Email</u></a>) to send rich, HTML-rendered emails to users, you can continue to use them with Email Service. Import the library, render your template, and pass it to the <code>send</code> method just as you would elsewhere.</p>
            <pre><code>import { render, pretty, toPlainText } from '@react-email/render';
import { SignupConfirmation } from './templates';

export default {
  async fetch(request, env, ctx) {

    // Convert React Email template to html
    const html = await pretty(await render(&lt;SignupConfirmation url="https://your-domain.com/confirmation-id"/&gt;));

    // Use the Email Sending binding to send emails
    await env.SEND_EMAIL.send({
      to: [{ email: "hello@example.com" }],
      from: { email: "api-sender@your-domain.com", name: "Welcome" },
      subject: "Signup Confirmation",
      html,
      text: toPlainText(html)
    });

    return new Response(`Successfully sent email!`);
  }
};</code></pre>
            
    <div>
      <h3>Email Routing and Email Sending: Better together</h3>
      <a href="#email-routing-and-email-sending-better-together">
        
      </a>
    </div>
    <p>Sending email is only half the story. Applications often need to receive and parse emails to create powerful workflows. By combining Email Sending with our existing <a href="https://developers.cloudflare.com/email-routing"><u>Email Routing</u></a> capabilities, we're providing a complete, end-to-end solution for all your application's email needs.</p><p>Email Routing allows you to create custom email addresses on your domain and handle incoming messages programmatically with a Worker, which can enable powerful application flows such as:</p><ul><li><p>Using <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> to parse, summarize and even label incoming emails: flagging security events from customers, early signs of a bug or incident, and/or generating automatic responses based on those incoming emails.</p></li><li><p>Creating support tickets in systems like JIRA or Linear from emails sent to <code>support@your-domain.com</code>.</p></li><li><p>Processing invoices sent to <code>invoices@your-domain.com</code> and storing attachments in R2.</p></li></ul><p>To use Email Routing, add the <code>email</code> handler to your Worker application and process it as needed:</p>
            <pre><code>export default {
  // Create an email handler to process emails delivered to your Worker
  async email(message, env, ctx) {

    // Classify incoming emails using Workers AI
    const [{ score, label }] = await env.AI.run("@cf/huggingface/distilbert-sst-2-int8", { text: message.raw })

    await env.PROCESSED_EMAILS.send({ score, label, message });
  },
};  </code></pre>
            <p>When you combine inbound routing with outbound sending, you can close the loop entirely within Cloudflare. Imagine a user emails your support address. A Worker can receive the email, parse its content, call a third-party API to create a ticket, and then use the Email Sending binding to send an immediate confirmation back to the user with their ticket number. That’s the power of a unified Email Service.</p><p>Email Sending will require a paid Workers subscription, and we'll be charging based on messages sent. We're still finalizing the packaging; we'll update our documentation and <a href="https://developers.cloudflare.com/changelog/"><u>changelog</u></a>, and notify users as soon as we have final pricing and long before we start charging. Email Routing limits will remain unchanged.</p>
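<p>The support-ticket loop described above can be sketched in a few lines. Here <code>createTicket</code> is a hypothetical stand-in for a call to a third-party ticketing API, and <code>SEND_EMAIL</code> is an Email Sending binding as in the earlier examples:</p>

```javascript
// Receive a support email, file a ticket, and confirm by email.
// createTicket is a hypothetical helper; message follows the Email Workers
// shape (a from address plus a Headers-like object).
async function handleSupportEmail(message, env, createTicket) {
  const ticket = await createTicket({
    reporter: message.from,
    subject: message.headers.get("Subject"),
  });

  // Close the loop: send the ticket number straight back to the user.
  await env.SEND_EMAIL.send({
    to: [{ email: message.from }],
    from: { email: "support@your-domain.com", name: "Support" },
    subject: `We received your request (ticket #${ticket.id})`,
    text: `Thanks for reaching out. Your ticket number is ${ticket.id}.`,
  });

  return ticket;
}
```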
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Email is core to your application today, and it's becoming essential for the next generation of AI agents, background tasks, and automated workflows. We built the Cloudflare Email Service to be the engine for this new era of applications, and we’ll be making it available in private beta this November.</p><ul><li><p>Interested in Email Sending? <a href="https://forms.gle/BX6ECfkar3oVLQxs7"><u>Sign up to the waitlist here.</u></a> </p></li><li><p>Want to start processing inbound emails? Get started with <a href="https://developers.cloudflare.com/email-routing/"><u>Email Routing</u></a>, which is available now, remains free, and will be folded into the new email sending APIs as they arrive.</p></li></ul><p>We’re excited to be adding Email Service to our Developer Platform, and we’re looking forward to seeing how you reimagine user experiences that increasingly rely on email!</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Email Service]]></category>
            <guid isPermaLink="false">3yl6uG1uh1UE5rplzBlLad</guid>
            <dc:creator>Thomas Gauvin</dc:creator>
            <dc:creator>Celso Martinho</dc:creator>
        </item>
        <item>
            <title><![CDATA[A year of improving Node.js compatibility in Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/nodejs-workers-2025/</link>
            <pubDate>Thu, 25 Sep 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Over the year we have greatly expanded Node.js compatibility. There are hundreds of new Node.js APIs now available that make it easier to run existing Node.js code on our platform. ]]></description>
            <content:encoded><![CDATA[ <p>We've been busy.</p><p>Compatibility with the broad JavaScript developer ecosystem has always been a key strategic investment for us. We believe in open standards and an open web. We want you to see <a href="https://workers.cloudflare.com/"><u>Workers</u></a> as a powerful extension of your development platform with the ability to just drop code in that Just Works. To deliver on this goal, the Cloudflare Workers team has spent the past year significantly expanding compatibility with the Node.js ecosystem, enabling hundreds (if not thousands) of popular <a href="https://npmjs.com"><u>npm</u></a> modules to now work seamlessly, including the ever popular <a href="https://expressjs.com"><u>express</u></a> framework.</p><p>We have implemented a <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>substantial subset of the Node.js standard library</u></a>, focusing on the most commonly used, and asked for, APIs. These include:</p>
<div><table><colgroup>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th><span>Module</span></th>
    <th><span>API documentation</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>node:console</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/console.html"><span>https://nodejs.org/docs/latest/api/console.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:crypto</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/crypto.html"><span>https://nodejs.org/docs/latest/api/crypto.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:dns</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/dns.html"><span>https://nodejs.org/docs/latest/api/dns.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:fs</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/fs.html"><span>https://nodejs.org/docs/latest/api/fs.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:http</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/http.html"><span>https://nodejs.org/docs/latest/api/http.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:https</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/https.html"><span>https://nodejs.org/docs/latest/api/https.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:net</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/net.html"><span>https://nodejs.org/docs/latest/api/net.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:process</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/process.html"><span>https://nodejs.org/docs/latest/api/process.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:timers</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/timers.html"><span>https://nodejs.org/docs/latest/api/timers.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:tls</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/tls.html"><span>https://nodejs.org/docs/latest/api/tls.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:zlib</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/zlib.html"><span>https://nodejs.org/docs/latest/api/zlib.html</span></a><span> </span></td>
  </tr>
</tbody></table></div><p>Each of these has been carefully implemented to approximate Node.js' behavior as closely as possible. Where matching <a href="http://nodejs.org"><u>Node.js</u></a>' behavior is not possible, our implementations will throw a clear error when called, rather than silently failing or not being present at all. This ensures that packages that check for the presence of these APIs will not break, even if the functionality is not available.</p><p>In some cases, we had to implement entirely new capabilities within the runtime in order to provide the necessary functionality. For <code>node:fs</code>, we added a new virtual file system within the Workers environment. In other cases, such as with <code>node:net</code>, <code>node:tls</code>, and <code>node:http</code>, we wrapped the new Node.js APIs around existing Workers capabilities such as the <a href="https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/"><u>Sockets API</u></a> and <a href="https://developers.cloudflare.com/workers/runtime-apis/fetch/"><code><u>fetch</u></code></a>.</p><p>Most importantly, <b>all of these implementations are done natively in the Workers runtime</b>, using a combination of TypeScript and C++. Whereas our earlier Node.js compatibility efforts relied heavily on polyfills and shims injected at deployment time by developer tooling such as <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler</u></a>, we are moving towards a model where future Workers will have these APIs available natively, without the need for any additional dependencies. This not only improves performance and reduces memory usage, but also ensures that the behavior is as close to Node.js as possible.</p>
    <div>
      <h2>The networking stack</h2>
      <a href="#the-networking-stack">
        
      </a>
    </div>
    <p>Node.js has a rich set of networking APIs that allow applications to create servers, make HTTP requests, work with raw TCP and UDP sockets, send DNS queries, and more. Workers do not have direct access to raw kernel-level sockets though, so how can we support these Node.js APIs so packages still work as intended? We decided to build on top of the existing <a href="https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/"><u>managed Sockets</u></a> and fetch APIs. These implementations allow many popular Node.js packages that rely on networking APIs to work seamlessly in the Workers environment.</p><p>Let's start with the HTTP APIs.</p>
    <div>
      <h3>HTTP client and server support</h3>
      <a href="#http-client-and-server-support">
        
      </a>
    </div>
    <p>From the moment we announced that we would be pursuing Node.js compatibility within Workers, users have been asking specifically for an implementation of the <code>node:http</code> module. There are countless modules in the ecosystem that depend directly on APIs like <code>http.get(...)</code> and <code>http.createServer(...)</code>.</p><p>The <code>node:http</code> and <code>node:https</code> modules provide APIs for creating HTTP clients and servers. <a href="https://blog.cloudflare.com/bringing-node-js-http-servers-to-cloudflare-workers/"><u>We have implemented both</u></a>, allowing you to create HTTP clients using <code>http.request()</code> and servers using <code>http.createServer()</code>. <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/http/"><u>The HTTP client implementation</u></a> is built on top of the Fetch API, while the HTTP server implementation is built on top of the Workers runtime’s existing request handling capabilities.</p><p>The client side is fairly straightforward:</p>
            <pre><code>import http from 'node:http';

export default {
  async fetch(request) {
    return new Promise((resolve, reject) =&gt; {
      const req = http.request('http://example.com', (res) =&gt; {
        let data = '';
        res.setEncoding('utf8');
        res.on('data', (chunk) =&gt; {
          data += chunk;
        });
        res.on('end', () =&gt; {
          resolve(new Response(data));
        });
      });
      req.on('error', (err) =&gt; {
        reject(err);
      });
      req.end();
    });
  }
}
</code></pre>
            <p>The server side is just as simple but likely even more exciting. We've often been asked about the possibility of supporting <a href="https://expressjs.com/"><u>Express</u></a>, <a href="https://koajs.com/"><u>Koa</u></a>, or <a href="https://fastify.dev/"><u>Fastify</u></a> within Workers, but it was difficult to do because these frameworks depend so heavily on the Node.js APIs. With the new additions it is now possible to use both Express and Koa within Workers, and we're hoping to add Fastify support later. </p>
            <pre><code>import { createServer } from "node:http";
import { httpServerHandler } from "cloudflare:node";

const server = createServer((req, res) =&gt; {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello from Node.js HTTP server!");
});

export default httpServerHandler(server);
</code></pre>
            <p>The <code>httpServerHandler()</code> function from the <code>cloudflare:node</code> module integrates the HTTP <code>server</code> with the Workers fetch event, allowing it to handle incoming requests.</p>
    <div>
      <h3>The <code>node:dns</code> module</h3>
      <a href="#the-node-dns-module">
        
      </a>
    </div>
    <p>The <code>node:dns</code> module provides an API for performing DNS queries. </p><p>At Cloudflare, we happen to have a <a href="https://developers.cloudflare.com/1.1.1.1/encryption/dns-over-https/"><u>DNS-over-HTTPS (DoH)</u></a> service and our own <a href="https://one.one.one.one/"><u>DNS service called 1.1.1.1</u></a>. We took advantage of this when exposing <code>node:dns</code> in Workers. When you use this module to perform a query, it will just make a subrequest to 1.1.1.1 to resolve the query. This way the user doesn’t have to think about DNS servers, and the query will just work.</p>
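    <p>A minimal sketch of a lookup with the promise-based API (the snippet uses <code>localhost</code> only so it resolves without an external dependency; in a deployed Worker the resolution happens via a DNS-over-HTTPS subrequest without any resolver configuration on your part):</p>

```javascript
import dns from 'node:dns';

// A simple lookup using the promise-based node:dns API. The calling code
// is identical in Node.js and in Workers with nodejs_compat enabled.
const { address, family } = await dns.promises.lookup('localhost');
console.log(`resolved to ${address} (IPv${family})`);
```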
    <div>
      <h3>The <code>node:net</code> and <code>node:tls</code> modules</h3>
      <a href="#the-node-net-and-node-tls-modules">
        
      </a>
    </div>
    <p>The <code>node:net</code> module provides an API for creating TCP sockets, while the <code>node:tls</code> module provides an API for creating secure TLS sockets. As we mentioned before, both are built on top of the existing <a href="https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/"><u>Workers Sockets API</u></a>. Note that not all features of the <code>node:net</code> and <code>node:tls</code> modules are available in Workers. For instance, it is not yet possible to create a TCP server using <code>net.createServer()</code> (but maybe soon!), but we have implemented enough of the APIs to allow many popular packages that rely on these modules to work in Workers.</p>
            <pre><code>import net from 'node:net';

export default {
  async fetch(request) {
    const { promise, resolve, reject } = Promise.withResolvers();
    const socket = net.connect({ host: 'example.com', port: 80 }, () =&gt; {
      let buf = '';
      socket.setEncoding('utf8');
      socket.on('data', (chunk) =&gt; buf += chunk);
      socket.on('end', () =&gt; resolve(new Response('ok')));
      socket.end();
    });
    socket.on('error', reject);
    return promise;
  }
}
</code></pre>
            
    <div>
      <h2>A new virtual file system and the <code>node:fs</code> module</h2>
      <a href="#a-new-virtual-file-system-and-the-node-fs-module">
        
      </a>
    </div>
    <p>What does supporting filesystem APIs mean in a serverless environment? When you deploy a Worker, it runs in Region:Earth, and we don’t want you to have to think about individual servers with individual file systems. There are, however, countless existing applications and modules in the ecosystem that use the file system to store configuration data, read and write temporary data, and more.</p><p>Workers do not have access to a traditional file system like a Node.js process does, and for good reason! A Worker does not run on a single machine; a single request to one Worker can run on any one of thousands of servers anywhere in Cloudflare's global <a href="https://www.cloudflare.com/network"><u>network</u></a>. Coordinating and synchronizing access to a shared physical resource such as a traditional file system poses major technical challenges, including the risk of deadlocks, that are inherent in any massively distributed system. Fortunately, Workers provide powerful tools like <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> that offer a solution for coordinating access to shared, durable state at scale. To address the need for a file system in Workers, we built on what already makes Workers great.</p><p>We implemented a virtual file system that allows you to use the <code>node:fs</code> APIs to read and write temporary, in-memory files. This virtual file system is specific to each Worker. When using a stateless Worker, files created in one request are not accessible in any other request. However, when using a Durable Object, this temporary file space can be shared across multiple requests from multiple users. This file system is ephemeral (for now), meaning that files are not persisted across Worker restarts or deployments, so it does not replace the <a href="https://developers.cloudflare.com/durable-objects/api/storage-api/"><u>Durable Object Storage</u></a> mechanism, but it provides a powerful new tool that greatly expands the capabilities of your Durable Objects.</p><p>The <code>node:fs</code> module provides a rich set of APIs for working with files and directories:</p>
            <pre><code>import fs from 'node:fs';

export default {
  async fetch(request) {
    // Write a temporary file
    await fs.promises.writeFile('/tmp/hello.txt', 'Hello, world!');

    // Read the file
    const data = await fs.promises.readFile('/tmp/hello.txt', 'utf-8');

    return new Response(`File contents: ${data}`);
  }
}
</code></pre>
            <p>The virtual file system supports a wide range of file operations, including reading and writing files, creating and removing directories, and working with file descriptors. It also supports standard input/output/error streams via <code>process.stdin</code>, <code>process.stdout</code>, and <code>process.stderr</code>, symbolic links, streams, and more.</p><p>While the current implementation of the virtual file system is in-memory only, we are exploring options for adding persistent storage in the future that would link to existing Cloudflare storage solutions like <a href="https://www.cloudflare.com/developer-platform/products/r2/">R2</a> or Durable Objects. But you don't have to wait on us! When combined with powerful tools like Durable Objects and <a href="https://developers.cloudflare.com/workers/runtime-apis/rpc/"><u>JavaScript RPC</u></a>, it's certainly possible to create your own general-purpose, durable file system abstraction backed by SQLite storage.</p>
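            <p>Directory operations work as well. A small sketch, using a unique scratch directory under <code>/tmp</code>:</p>

```javascript
import fs from 'node:fs';
import path from 'node:path';

// Create a unique scratch directory, write two files, then list them.
// The same node:fs calls work against the Workers virtual file system.
const dir = await fs.promises.mkdtemp('/tmp/demo-');
await fs.promises.writeFile(path.join(dir, 'a.txt'), 'A');
await fs.promises.writeFile(path.join(dir, 'b.txt'), 'B');
const entries = (await fs.promises.readdir(dir)).sort();
console.log(entries); // [ 'a.txt', 'b.txt' ]
```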
    <div>
      <h2>Cryptography with <code>node:crypto</code></h2>
      <a href="#cryptography-with-node-crypto">
        
      </a>
    </div>
    <p>The <code>node:crypto</code> module provides a comprehensive set of cryptographic functionality, including hashing, encryption, decryption, and more. We have implemented a full version of the <code>node:crypto</code> module, allowing you to use familiar cryptographic APIs in your Workers applications. There will be some differences in behavior compared to Node.js due to the fact that Workers uses <a href="https://github.com/google/boringssl/blob/main/README.md"><u>BoringSSL</u></a> under the hood, while Node.js uses <a href="https://github.com/openssl"><u>OpenSSL</u></a>. However, we have strived to make the APIs as compatible as possible, and many popular packages that rely on <code>node:crypto</code> now work seamlessly in Workers.</p><p>To accomplish this, we didn't just copy the implementation of these cryptographic operations from Node.js. Rather, we worked within the Node.js project to extract the core crypto functionality into a separate dependency called <a href="https://github.com/nodejs/ncrypto"><code><u>ncrypto</u></code></a> that is used not only by Workers but also by Bun to implement Node.js-compatible functionality by simply running the exact same code that Node.js runs.</p>
            <pre><code>import crypto from 'node:crypto';

export default {
  async fetch(request) {
    const hash = crypto.createHash('sha256');
    hash.update('Hello, world!');
    const digest = hash.digest('hex');

    return new Response(`SHA-256 hash: ${digest}`);
  }
}
</code></pre>
            <p>All major capabilities of the <code>node:crypto</code> module are supported, including:</p><ul><li><p>Hashing (e.g., SHA-256, SHA-512)</p></li><li><p>HMAC</p></li><li><p>Symmetric encryption/decryption</p></li><li><p>Asymmetric encryption/decryption</p></li><li><p>Digital signatures</p></li><li><p>Key generation and management</p></li><li><p>Random byte generation</p></li><li><p>Key derivation functions (e.g., PBKDF2, scrypt)</p></li><li><p>Cipher and Decipher streams</p></li><li><p>Sign and Verify streams</p></li><li><p>KeyObject class for managing keys</p></li><li><p>Certificate handling (e.g., X.509 certificates)</p></li><li><p>Support for various encoding formats (e.g., PEM, DER, base64)</p></li><li><p>and more…</p></li></ul>
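            <p>These capabilities compose just as they do in Node.js. For instance, a small sketch combining PBKDF2 key derivation with an HMAC:</p>

```javascript
import crypto from 'node:crypto';

// Derive a 32-byte key from a password with PBKDF2, then use it to
// compute an HMAC-SHA256 tag over a message.
const key = crypto.pbkdf2Sync('correct horse battery staple', 'salt', 100_000, 32, 'sha256');
const tag = crypto.createHmac('sha256', key).update('Hello, world!').digest('hex');
console.log(tag); // 64 hex characters
```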
    <div>
      <h2>Process &amp; Environment</h2>
      <a href="#process-environment">
        
      </a>
    </div>
    <p>In Node.js, the <code>node:process</code> module provides a global object that gives information about, and control over, the current Node.js process. It includes properties and methods for accessing environment variables, command-line arguments, the current working directory, and more. It is one of the most fundamental modules in Node.js, and many packages rely on it for basic functionality and simply assume its presence. There are, however, some aspects of the <code>node:process</code> module that do not make sense in the Workers environment, such as process IDs and user/group IDs which are tied to the operating system and process model of a traditional server environment and have no equivalent in the Workers environment.</p><p>When <code>nodejs_compat</code> is enabled, the <code>process</code> global will be available in your Worker scripts or you can import it directly via <code>import process from 'node:process'</code>. Note that the <code>process</code> global is only available when the <code>nodejs_compat</code> flag is enabled. If you try to access <code>process</code> without the flag, it will be <code>undefined</code> and the import will throw an error.</p><p>Let's take a look at the <code>process</code> APIs that do make sense in Workers, and that have been fully implemented, starting with <code>process.env</code>.</p>
    <div>
      <h3>Environment variables</h3>
      <a href="#environment-variables">
        
      </a>
    </div>
    <p>Workers have had <a href="https://developers.cloudflare.com/workers/configuration/environment-variables/"><u>support for environment variables</u></a> for a while now, but previously they were only accessible via the env argument passed to the Worker function. Accessing the environment at the top-level of a Worker was not possible:</p>
            <pre><code>export default {
  async fetch(request, env) {
    const config = env.MY_ENVIRONMENT_VARIABLE;
    // ...
  }
}
</code></pre>
            <p> With the <a href="https://developers.cloudflare.com/workers/configuration/environment-variables/"><code><u>new process.env</u></code><u> implementation</u></a>, you can now access environment variables in a more familiar way, just like in Node.js, and at any scope, including the top-level of your Worker:</p>
            <pre><code>import process from 'node:process';
const config = process.env.MY_ENVIRONMENT_VARIABLE;

export default {
  async fetch(request, env) {
    // You can still access env here if you need to
    const configFromEnv = env.MY_ENVIRONMENT_VARIABLE;
    // ...
  }
}
</code></pre>
            <p><a href="https://developers.cloudflare.com/workers/configuration/environment-variables/"><u>Environment variables</u></a> are set in the same way as before, via the <code>wrangler.toml</code> or <code>wrangler.jsonc</code> configuration file, or via the Cloudflare dashboard or API. They may be set as simple key-value pairs or as JSON objects:</p>
            <pre><code>{
  "name": "my-worker-dev",
  "main": "src/index.js",
  "compatibility_date": "2025-09-15",
  "compatibility_flags": [
    "nodejs_compat"
  ],
  "vars": {
    "API_HOST": "example.com",
    "API_ACCOUNT_ID": "example_user",
    "SERVICE_X_DATA": {
      "URL": "service-x-api.dev.example",
      "MY_ID": 123
    }
  }
}
</code></pre>
            <p>When accessed via <code>process.env</code>, all environment variable values are strings, just like in Node.js.</p><p>Because <code>process.env</code> is accessible at the global scope, it is important to note that environment variables are accessible from anywhere in your Worker script, including third-party libraries that you may be using. This is consistent with Node.js behavior, but it is something to be aware of from a security and configuration management perspective. The <a href="https://developers.cloudflare.com/secrets-store/"><u>Cloudflare Secrets Store</u></a> can provide enhanced handling around secrets within Workers as an alternative to using environment variables.</p>
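            <p>Because every value surfaces as a string, JSON-valued vars such as <code>SERVICE_X_DATA</code> in the configuration above need to be parsed before use. A sketch (the inline fallback is only there so the snippet runs outside a configured Worker):</p>

```javascript
import process from 'node:process';

// SERVICE_X_DATA is the JSON-valued variable from the configuration above;
// via process.env it arrives as a string, so parse it before use.
const raw = process.env.SERVICE_X_DATA
  ?? '{"URL":"service-x-api.dev.example","MY_ID":123}';
const serviceX = JSON.parse(raw);
console.log(serviceX.URL, serviceX.MY_ID);
```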
    <div>
      <h4>Importable environment and waitUntil</h4>
      <a href="#importable-environment-and-waituntil">
        
      </a>
    </div>
    <p>Going a step further, we also made it possible to import both the environment and the <a href="https://developers.cloudflare.com/workers/configuration/environment-variables/"><u>waitUntil mechanism</u></a> as a module, even when not using the <code>nodejs_compat</code> flag, rather than forcing users to always access them via the <code>env</code> and <code>ctx</code> arguments passed to the Worker function. This can make it easier to access the environment in a more modular way, and can help avoid passing the <code>env</code> argument through multiple layers of function calls. This is not a Node.js-compatibility feature, but we believe it is a useful addition to the Workers environment:</p>
            <pre><code>import { env, waitUntil } from 'cloudflare:workers';

const config = env.MY_ENVIRONMENT_VARIABLE;

export default {
  async fetch(request) {
    // You can still access env here if you need to
    const configFromEnv = env.MY_ENVIRONMENT_VARIABLE;
    // ...
  }
}

function doSomething() {
  // Bindings and waitUntil can now be accessed without
  // passing the env and ctx through every function call.
  waitUntil(env.RPC.doSomethingRemote());
}
</code></pre>
            <p>One important note about <code>process.env</code>: changes to environment variables via <code>process.env</code> will not be reflected in the <code>env</code> argument passed to the Worker function, and vice versa. The <code>process.env</code> is populated at the start of the Worker execution and is not updated dynamically. This is consistent with Node.js behavior, where changes to <code>process.env</code> do not affect the actual environment variables of the running process. We did this to minimize the risk that a third-party library, originally meant to run in Node.js, could inadvertently modify the environment assumed by the rest of the Worker code.</p>
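            <p>In other words, writes go to the <code>process.env</code> snapshot only. A minimal illustration:</p>

```javascript
import process from 'node:process';

// Mutations are visible through process.env itself...
process.env.DEMO_FLAG = 'on';
console.log(process.env.DEMO_FLAG); // 'on'
// ...but inside a Worker, the env argument passed to fetch() would
// still not contain DEMO_FLAG, and vice versa.
```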
    <div>
      <h3>Stdin, stdout, stderr</h3>
      <a href="#stdin-stdout-stderr">
        
      </a>
    </div>
    <p>Workers do not have traditional standard input/output/error streams like a Node.js process does. However, we have implemented <code>process.stdin</code>, <code>process.stdout</code>, and <code>process.stderr</code> as stream-like objects that can be used similarly. These streams are not connected to any actual process stdin and stdout, but output written to them is captured in the same way as <code>console.log</code> and friends and, just like them, will show up in <a href="https://developers.cloudflare.com/workers/observability/logs/workers-logs/"><u>Workers Logs</u></a>.</p><p>The <code>process.stdout</code> and <code>process.stderr</code> objects are Node.js writable streams:</p>
            <pre><code>import process from 'node:process';

export default {
  async fetch(request) {
    process.stdout.write('This will appear in the Worker logs\n');
    process.stderr.write('This will also appear in the Worker logs\n');
    return new Response('Hello, world!');
  }
}
</code></pre>
            <p>Support for <code>stdin</code>, <code>stdout</code>, and <code>stderr</code> is also integrated with the virtual file system, allowing you to write to the standard file descriptors <code>0</code>, <code>1</code>, and <code>2</code> (representing <code>stdin</code>, <code>stdout</code>, and <code>stderr</code> respectively) using the <code>node:fs</code> APIs:</p>
            <pre><code>import fs from 'node:fs';
import process from 'node:process';

export default {
  async fetch(request) {
    // Write to stdout
    fs.writeSync(process.stdout.fd, 'Hello, stdout!\n');
    // Write to stderr
    fs.writeSync(process.stderr.fd, 'Hello, stderr!\n');

    return new Response('Check the logs for stdout and stderr output!');
  }
}
</code></pre>
            
    <div>
      <h3>Other process APIs</h3>
      <a href="#other-process-apis">
        
      </a>
    </div>
    <p>We cannot cover every <code>node:process</code> API in detail here, but here are some of the other notable APIs that we have implemented:</p><ul><li><p><code>process.nextTick(fn)</code>: Schedules a callback to be invoked after the current execution context completes. Our implementation uses the same microtask queue as promises so that it behaves exactly the same as <code>queueMicrotask(fn)</code>.</p></li><li><p><code>process.cwd()</code> and <code>process.chdir()</code>: Get and change the current virtual working directory. The current working directory is initialized to <code>/bundle</code> when the Worker starts, and every request has its own isolated view of the current working directory. Changing the working directory in one request does not affect the working directory in other requests.</p></li><li><p><code>process.exit()</code>: Immediately terminates the current Worker request execution. This is unlike Node.js, where <code>process.exit()</code> terminates the entire process. In Workers, calling <code>process.exit()</code> will stop execution of the current request and return an error response to the client.</p></li></ul>
    <div>
      <h2>Compression with <code>node:zlib</code></h2>
      <a href="#compression-with-node-zlib">
        
      </a>
    </div>
    <p>The <code>node:zlib</code> module provides APIs for compressing and decompressing data using various algorithms such as gzip, deflate, and brotli. We have implemented the <code>node:zlib</code> module, allowing you to use familiar compression APIs in your Workers applications. This enables a wide range of use cases, including data compression for network transmission, response optimization, and archive handling.</p>
            <pre><code>import zlib from 'node:zlib';

export default {
  async fetch(request) {
    const input = 'Hello, world! Hello, world! Hello, world!';
    const compressed = zlib.gzipSync(input);
    const decompressed = zlib.gunzipSync(compressed).toString('utf-8');

    return new Response(`Decompressed data: ${decompressed}`);
  }
}
</code></pre>
            <p>While Workers has had built-in support for gzip and deflate compression via the <a href="https://compression.spec.whatwg.org/"><u>Web Platform Standard Compression API</u></a>, the <code>node:zlib</code> module adds support for the Brotli compression algorithm, as well as a more familiar API for Node.js developers.</p>
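            <p>A brief sketch of a Brotli round trip with the synchronous APIs:</p>

```javascript
import zlib from 'node:zlib';

// Compress with Brotli, then round-trip the data back out.
const input = 'Hello, world! Hello, world! Hello, world!';
const compressed = zlib.brotliCompressSync(input);
const output = zlib.brotliDecompressSync(compressed).toString('utf-8');
console.log(output === input); // true
```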
    <div>
      <h2>Timing &amp; scheduling</h2>
      <a href="#timing-scheduling">
        
      </a>
    </div>
    <p>Node.js provides a set of timing and scheduling APIs via the <code>node:timers</code> module. We have implemented these in the runtime as well.</p>
            <pre><code>import timers from 'node:timers';

export default {
  async fetch(request) {
    timers.setInterval(() =&gt; {
      console.log('This will log every half-second');
    }, 500);

    timers.setImmediate(() =&gt; {
      console.log('This will log immediately after the current event loop');
    });

    return new Promise((resolve) =&gt; {
      timers.setTimeout(() =&gt; {
        resolve(new Response('Hello after 1 second!'));
      }, 1000);
    });
  }
}
</code></pre>
            <p>The Node.js implementations of the timers APIs are very similar to the standard Web Platform with one key difference: the Node.js timers APIs return <code>Timeout</code> objects that can be used to manage the timers after they have been created. We have implemented the <code>Timeout</code> class in Workers to provide this functionality, allowing you to clear or re-fire timers as needed.</p>
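            <p>For example, the returned <code>Timeout</code> object can be refreshed or cleared after creation; a small sketch:</p>

```javascript
import { setTimeout, clearTimeout } from 'node:timers';

// The Node-style setTimeout returns a Timeout object rather than a number.
const t = setTimeout(() => console.log('fired'), 1000);
t.refresh();     // restart the countdown from now
clearTimeout(t); // or cancel the timer entirely
console.log(typeof t.refresh); // 'function'
```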
    <div>
      <h2>Console</h2>
      <a href="#console">
        
      </a>
    </div>
    <p>The <code>node:console</code> module provides a set of console logging APIs that are similar to the standard <code>console</code> global, but with some additional features. We have implemented the <code>node:console</code> module as a thin wrapper around the existing <code>globalThis.console</code> that is already available in Workers.</p>
    <div>
      <h2>How to enable the Node.js compatibility features</h2>
      <a href="#how-to-enable-the-node-js-compatibility-features">
        
      </a>
    </div>
    <p>To enable the Node.js compatibility features as a whole within your Workers, you can set the <code>nodejs_compat</code> <a href="https://developers.cloudflare.com/workers/configuration/compatibility-flags/"><u>compatibility flag</u></a> in your <a href="https://developers.cloudflare.com/workers/wrangler/configuration/"><code><u>wrangler.jsonc or wrangler.toml</u></code></a> configuration file. If you are not using Wrangler, you can also set the flag via the <a href="https://dash.cloudflare.com"><u>Cloudflare dashboard</u></a> or API:</p>
            <pre><code>{
  "name": "my-worker",
  "main": "src/index.js",
  "compatibility_date": "2025-09-21",
  "compatibility_flags": [
    // Get everything Node.js compatibility related
    "nodejs_compat",
  ]
}
</code></pre>
            <p><b>The compatibility date here is key! Update that to the most current date, and you'll always be able to take advantage of the latest and greatest features.</b></p><p>The <code>nodejs_compat</code> flag is an umbrella flag that enables all the Node.js compatibility features at once. This is the recommended way to enable Node.js compatibility, as it ensures that all features are available and work together seamlessly. However, if you prefer, you can also enable or disable some features individually via their own compatibility flags:</p>
<div><table><thead>
  <tr>
    <th><span>Module</span></th>
    <th><span>Enable Flag (default)</span></th>
    <th><span>Disable Flag</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>node:console</span></td>
    <td><span>enable_nodejs_console_module</span></td>
    <td><span>disable_nodejs_console_module</span></td>
  </tr>
  <tr>
    <td><span>node:fs</span></td>
    <td><span>enable_nodejs_fs_module</span></td>
    <td><span>disable_nodejs_fs_module</span></td>
  </tr>
  <tr>
    <td><span>node:http (client)</span></td>
    <td><span>enable_nodejs_http_modules</span></td>
    <td><span>disable_nodejs_http_modules</span></td>
  </tr>
  <tr>
    <td><span>node:http (server)</span></td>
    <td><span>enable_nodejs_http_server_modules</span></td>
    <td><span>disable_nodejs_http_server_modules</span></td>
  </tr>
  <tr>
    <td><span>node:os</span></td>
    <td><span>enable_nodejs_os_module</span></td>
    <td><span>disable_nodejs_os_module</span></td>
  </tr>
  <tr>
    <td><span>node:process</span></td>
    <td><span>enable_nodejs_process_v2</span></td>
    <td></td>
  </tr>
  <tr>
    <td><span>node:zlib</span></td>
    <td><span>nodejs_zlib</span></td>
    <td><span>no_nodejs_zlib</span></td>
  </tr>
  <tr>
    <td><span>process.env</span></td>
    <td><span>nodejs_compat_populate_process_env</span></td>
    <td><span>nodejs_compat_do_not_populate_process_env</span></td>
  </tr>
</tbody></table></div><p>By separating these features, you can have more granular control over which Node.js APIs are available in your Workers. At first, we had started rolling out these features under the one <code>nodejs_compat</code> flag, but we quickly realized that some users perform feature detection based on the presence of certain modules and APIs and that by enabling everything all at once we were risking breaking some existing Workers. Users who are checking for the existence of these APIs manually can ensure new changes don’t break their workers by opting out of specific APIs:</p>
            <pre><code>{
  "name": "my-worker",
  "main": "src/index.js",
  "compatibility_date": "2025-09-15",
  "compatibility_flags": [
    // Get everything Node.js compatibility related
    "nodejs_compat",
    // But disable the `node:zlib` module if necessary
    "no_nodejs_zlib",
  ]
}
</code></pre>
            <p>But, to keep things simple, <b>we recommend starting with the </b><code><b>nodejs_compat</b></code><b> flag, which will enable everything. You can always disable individual features later if needed.</b> There is no performance penalty to having the additional features enabled.</p>
    <div>
      <h3>Handling end-of-life'd APIs</h3>
      <a href="#handling-end-of-lifed-apis">
        
      </a>
    </div>
    <p>One important difference between Node.js and Workers is that Node.js has a <a href="https://nodejs.org/en/eol"><u>defined long term support (LTS) schedule</u></a> that allows it to make breaking changes at certain points in time. More specifically, Node.js can remove APIs and features when they reach end-of-life (EOL). On Workers, however, we have a rule that once a Worker is deployed, <a href="https://blog.cloudflare.com/backwards-compatibility-in-cloudflare-workers/"><u>it will continue to run as-is indefinitely</u></a>, without any breaking changes as long as the compatibility date does not change. This means that we cannot simply remove APIs when they reach EOL in Node.js, since this would break existing Workers. To address this, we have introduced a new set of compatibility flags that allow users to specify that they do not want the <code>nodejs_compat</code> features to include end-of-life APIs. These flags are based on the Node.js major version in which the APIs were removed:</p><p>The <code>remove_nodejs_compat_eol</code> flag will remove all APIs that have reached EOL up to your current compatibility date:</p>
            <pre><code>{
  "name": "my-worker",
  "main": "src/index.js",
  "compatibility_date": "2025-09-15",
  "compatibility_flags": [
    // Get everything Node.js compatibility related
    "nodejs_compat",
    // Remove Node.js APIs that have reached EOL up to your
    // current compatibility date
    "remove_nodejs_compat_eol",
  ]
}
</code></pre>
            <ul><li><p>The <code>remove_nodejs_compat_eol_v22</code> flag will remove all APIs that reached EOL in Node.js v22. When using <code>remove_nodejs_compat_eol</code>, this flag will be automatically enabled if your compatibility date is set to a date after Node.js v22's EOL date (April 30, 2027).</p></li><li><p>The <code>remove_nodejs_compat_eol_v23</code> flag will remove all APIs that reached EOL in Node.js v23. When using <code>remove_nodejs_compat_eol</code>, this flag will be automatically enabled if your compatibility date is set to a date after Node.js v24's EOL date (April 30, 2028).</p></li><li><p>The <code>remove_nodejs_compat_eol_v24</code> flag will remove all APIs that reached EOL in Node.js v24. When using <code>remove_nodejs_compat_eol</code>, this flag will be automatically enabled if your compatibility date is set to a date after Node.js v24's EOL date (April 30, 2028).</p></li></ul><p>If you look at the date for <code>remove_nodejs_compat_eol_v23</code> you'll notice that it is the same as the date for <code>remove_nodejs_compat_eol_v24</code>. That is not a typo! Node.js v23 is not an LTS release, and as such it has a very short support window: it was released in October 2024 and reached EOL in June 2025. Accordingly, we have decided to group the end-of-life handling of non-LTS releases into the next LTS release. This means that when you set your compatibility date to a date after the EOL date for Node.js v24, you will also be opting out of the APIs that reached EOL in Node.js v23. Importantly, these flags will not be automatically enabled until your compatibility date is set to a date after the relevant Node.js version's EOL date, ensuring that existing Workers will have plenty of time to migrate before any APIs are removed, or can simply keep using the older APIs indefinitely via reverse compatibility flags like <code>add_nodejs_compat_eol_v24</code>.</p>
    <div>
      <h2>Giving back</h2>
      <a href="#giving-back">
        
      </a>
    </div>
    <p>One other important bit of work that we have been doing is expanding Cloudflare's investment back into the Node.js ecosystem as a whole. There are now five members of the Workers runtime team (plus one summer intern) actively contributing to the <a href="https://github.com/nodejs/node"><u>Node.js project</u></a> on GitHub, two of whom are members of Node.js' Technical Steering Committee. While we have made a number of new feature contributions, such as an implementation of the Web Platform Standard <a href="https://blog.cloudflare.com/improving-web-standards-urlpattern/"><u>URLPattern</u></a> API and an improved implementation of <a href="https://github.com/nodejs/ncrypto"><u>crypto</u></a> operations, our primary focus has been on improving the ability of other runtimes to interoperate and be compatible with Node.js, fixing critical bugs, and improving performance. As we continue to grow our efforts around Node.js compatibility, we will also grow our contributions back to the project and ecosystem as a whole.</p>
<div><table><thead>
  <tr>
    <th><span>Aaron Snell</span></th>
    <th><span>2025 Summer Intern, Cloudflare Containers</span><br /><span>Node.js Web Infrastructure Team</span></th>
    <th><img src="https://images.ctfassets.net/zkvhlag99gkb/2ud1DF6HOI3ha2ySAhPOve/803132cf224695a48698afb806bf147b/Aaron.png?h=250" /></th>
  </tr>
  <tr>
    <th><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></th>
    <th><a href="https://github.com/flakey5"><span>flakey5</span></a></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Dario Piotrowicz</span></td>
    <td><span>Senior System Engineer</span><br /><span>Node.js Collaborator</span></td>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/4K17bsjek1z4u2KRTtZ8uS/d7058dea515cb057a1727bcd01a0f5d2/Dario.png?h=250" /></td>
  </tr>
  <tr>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></td>
    <td><a href="https://github.com/dario-piotrowicz"><span>dario-piotrowicz</span></a></td>
  </tr>
  <tr>
    <td><span>Guy Bedford</span></td>
    <td><span>Principal Systems Engineer</span><br /><span>Node.js Collaborator</span></td>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/iYM8oWWSK89MesmQwctfc/4d86847238b1f10e18717771e2ad5ee8/Guy.png?h=250" /></td>
  </tr>
  <tr>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></td>
    <td><a href="https://github.com/guybedford"><span>guybedford</span></a></td>
  </tr>
  <tr>
    <td><span>James Snell</span></td>
    <td><span>Principal Systems Engineer</span><br /><span>Node.js TSC</span></td>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/4vN2YAqsEBlSnWtXRM0pTT/5e9130753ed71933fc94bc2c634425f3/James.png?h=250" /></td>
  </tr>
  <tr>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></td>
    <td><a href="https://github.com/jasnell"><span>jasnell</span></a></td>
  </tr>
  <tr>
    <td><span>Nicholas Paun</span></td>
    <td><span>Systems Engineer</span><br /><span>Node.js Contributor</span></td>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/4ePtfLAzk4pKYi4hU4dRLX/e4dcdfe86a4e54c4d02e356e2078d214/Nicholas.png?h=250" /></td>
  </tr>
  <tr>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></td>
    <td><a href="https://github.com/npaun"><span>npaun</span></a></td>
  </tr>
  <tr>
    <td><span>Yagiz Nizipli</span></td>
    <td><span>Principal Systems Engineer</span><br /><span>Node.js TSC</span></td>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nvpEqU0VHi3Se9fxJ5vE8/0f5628bc1756c7e3e363760be9c493ae/Yagiz.png?h=250" /></td>
  </tr>
  <tr>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></td>
    <td><a href="https://github.com/anonrig"><span>anonrig</span></a></td>
  </tr>
</tbody></table></div><p>Cloudflare is also proud to continue supporting critical infrastructure for the Node.js project through its <a href="https://openjsf.org/blog/openjs-cloudflare-partnership"><u>ongoing strategic partnership</u></a> with the OpenJS Foundation, providing the project with free access to services such as Workers, R2, DNS, and more.</p>
    <div>
      <h2>Give it a try!</h2>
    </div>
    <p>Our vision for Node.js compatibility in Workers is not just about implementing individual APIs, but about creating a comprehensive platform that allows developers to run existing Node.js code seamlessly in the Workers environment. This involves not only implementing the APIs themselves, but also ensuring that they work together harmoniously, and that they integrate well with the unique aspects of the Workers platform.</p><p>In some cases, such as with <code>node:fs</code> and <code>node:crypto</code>, we have had to implement entirely new capabilities that were not previously available in Workers and did so at the native runtime level. This allows us to tailor the implementations to the unique aspects of the Workers environment and ensure both performance and security.</p><p>And we're not done yet. We are continuing to work on implementing additional Node.js APIs, as well as improving the performance and compatibility of the existing implementations. We are also actively engaging with the community to understand their needs and priorities, and to gather feedback on our implementations. If there are specific Node.js APIs or npm packages that you would like to see supported in Workers, <a href="https://github.com/cloudflare/workerd/"><u>please let us know</u></a>! If there are any issues or bugs you encounter, please report them on our <a href="https://github.com/cloudflare/workerd/"><u>GitHub repository</u></a>. While we might not be able to implement every single Node.js API, nor match Node.js' behavior exactly in every case, we are committed to providing a robust and comprehensive Node.js compatibility layer that meets the needs of the community.</p><p>All the Node.js compatibility features described in this post are <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>available now</u></a>. 
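</p><p>As a minimal sketch (assuming a <code>wrangler.jsonc</code> project; the worker name, entry point, and compatibility date below are illustrative placeholders), enabling the <code>nodejs_compat</code> flag looks like this:</p>

```jsonc
{
  // Illustrative placeholders except "compatibility_flags", which is the point:
  "name": "my-worker",
  "main": "src/index.js",
  "compatibility_date": "2025-09-23",
  "compatibility_flags": ["nodejs_compat"]
}
```

<p>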
To get started, simply enable the <code>nodejs_compat</code> compatibility flag in your <code>wrangler.toml</code> or <code>wrangler.jsonc</code> file, or via the Cloudflare dashboard or API. You can then start using the Node.js APIs in your Workers applications right away.</p> ]]></content:encoded>
            <category><![CDATA[Node.js]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Servers]]></category>
            <guid isPermaLink="false">rMNgTNdCcEh6MjAlrKkL3</guid>
            <dc:creator>James M Snell</dc:creator>
        </item>
        <item>
            <title><![CDATA[A simpler path to a safer Internet: an update to our CSAM scanning tool]]></title>
            <link>https://blog.cloudflare.com/a-simpler-path-to-a-safer-internet-an-update-to-our-csam-scanning-tool/</link>
            <pubDate>Wed, 24 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare has made it even easier to enable our free child safety tooling for all customers. ]]></description>
            <content:encoded><![CDATA[ <p>Launching a website or an online community brings people together to create and share. The operators of these platforms, sadly, also have to navigate what happens when bad actors attempt to misuse those destinations to spread the most heinous content, like child sexual abuse material (CSAM).</p><p>We are committed to helping anyone on the Internet protect their platform from this kind of misuse. We <a href="https://blog.cloudflare.com/the-csam-scanning-tool/"><u>first launched</u></a> a CSAM Scanning Tool several years ago to give any website on the Internet the ability to programmatically scan content uploaded to their platform for instances of CSAM, in partnership with the National Center for Missing and Exploited Children (NCMEC), Interpol, and dozens of other organizations committed to protecting children. That release took technology that was previously available only to the largest social media platforms and provided it to any website.</p><p>However, the tool we offered still required setup work that added friction to its adoption. To help our customers file reports to NCMEC, they needed to create their own credentials. That step of creating and sharing credentials was too confusing or too much work for small site owners. We did our best to help them with secondary reports, but we needed a method that made this seamless to encourage adoption.</p><p>Today’s announcement makes that process significantly easier for site owners, helping them contribute to keeping the Internet safer with even less manual effort. The tool no longer requires website operators to create and provide their own unique NCMEC credentials. The result is that we have seen monthly adoption of the tool increase by 1,600% since the introduction of this change in February.</p>
    <div>
      <h3>How does it work?</h3>
    </div>
    <p>Services that attempt to flag and stop the spread of CSAM rely on partner organizations, like NCMEC, who maintain lists of hashes of known CSAM. These hashes are numerical representations of images: an algorithm creates a kind of digital fingerprint for a photo. Partners who operate these tools, like Cloudflare, check hashes of submitted content against the lists maintained by organizations like NCMEC to see if there is a match. You can read about the process in detail in our previous announcement <a href="https://blog.cloudflare.com/the-csam-scanning-tool/#finding-similar-images"><u>here</u></a>.</p><p>We rely on fuzzy hashing, a technique that goes beyond simple one-to-one matches. With a traditional hash, if a photo of CSAM is altered even slightly (by adding a filter, cropping it, or adding some noise), the fingerprint changes completely.</p><p>A fuzzy hash, on the other hand, creates a "perceptual fingerprint." Even if an image is modified, its fuzzy hash will remain similar to the original. This allows our tool to identify matches with a high degree of confidence, even if the abuser tries to disguise the content.</p><p>Removing the requirement to share credentials with Cloudflare eliminates one more step in deploying and enabling our tool, but site operators are still expected to continue filing their own reports with NCMEC or their regional equivalent.</p>
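<p>As a concrete toy illustration of how a perceptual hash tolerates small edits (a sketch only, not the production algorithm Cloudflare or NCMEC use): an "average hash" maps each pixel to a bit based on whether it is brighter than the image's mean, and two fingerprints are compared by counting differing bits.</p>

```javascript
// Toy sketch of perceptual ("fuzzy") hashing -- an illustration only, not
// the algorithm Cloudflare or NCMEC actually use. An "average hash" turns
// each grayscale pixel into a bit depending on whether it is brighter than
// the image's mean, so small edits flip few or no bits.
function averageHash(pixels) {
  const mean = pixels.reduce((sum, p) => sum + p, 0) / pixels.length;
  return pixels.map((p) => (p > mean ? 1 : 0));
}

// Hamming distance: the number of bit positions where two hashes differ.
function hammingDistance(a, b) {
  return a.reduce((count, bit, i) => count + (bit === b[i] ? 0 : 1), 0);
}

// A "known" image (as grayscale pixel values) and a slightly brightened copy.
const original = [10, 200, 30, 220, 15, 240, 25, 210];
const altered = original.map((p) => Math.min(255, p + 10));

const distance = hammingDistance(averageHash(original), averageHash(altered));
// A distance at or below a small threshold counts as a match; here the
// brightening leaves every bit unchanged.
console.log(distance); // 0
```

<p>With a hash like this, the slightly brightened copy produces the same fingerprint as the original, whereas a cryptographic hash of the modified file would differ completely.</p>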
    <div>
      <h3>What is the process now?</h3>
    </div>
    <p>The process for using the tool is now simple:</p><p><b>Enable the tool:</b> Activate the CSAM Scanning Tool on your Cloudflare zone and verify your notification email address.</p><p><b>Scan and detect: </b>Our tool scans your cached content for potential CSAM, creating a fuzzy hash of each image. If a match is found with a known bad hash, a detection event is created.</p><p><b>Remediate: </b>Cloudflare blocks the URL of any identified matches and notifies you so that you may take further action.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6cTjykOBheTnzbmcwjKoSI/63fb00a39807897c8b2feda9af373ec0/unnamed.png" />
          </figure>
    <div>
      <h3>What is next?</h3>
    </div>
    <p>We believe that the tools for a safer Internet should be available to everyone, not just a few large companies.</p><p>We invite you to enable the CSAM Scanning Tool on your website today. For more technical details on how it works, please visit our <a href="https://developers.cloudflare.com/cache/reference/csam-scanning/"><u>developer documentation</u></a>. We also welcome you to join our community to discuss the technology and help us continue to build a better Internet.</p>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Trust & Safety]]></category>
            <category><![CDATA[Abuse]]></category>
            <category><![CDATA[Legal]]></category>
            <guid isPermaLink="false">4SD2BwOE3yemddmMT25cnO</guid>
            <dc:creator>Rachael Truong</dc:creator>
        </item>
    </channel>
</rss>