
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 22:53:36 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Automatically Secure: how we upgraded 6,000,000 domains by default to get ready for the Quantum Future]]></title>
            <link>https://blog.cloudflare.com/automatically-secure/</link>
            <pubDate>Wed, 24 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ After a year since we started enabling Automatic SSL/TLS, we want to talk about these results, why they matter, and how we’re preparing for the next leap in Internet security. ]]></description>
            <content:encoded><![CDATA[ <p>The Internet is in constant motion. Sites scale, traffic shifts, and attackers adapt. Security that worked yesterday may not be enough tomorrow. That’s why the technologies that protect the web — such as Transport Layer Security (TLS) and emerging post-quantum cryptography (PQC) — must also continue to evolve. We want to make sure that everyone benefits from this evolution automatically, so we enabled the strongest protections by default.</p><p>During <a href="https://blog.cloudflare.com/introducing-automatic-ssl-tls-securing-and-simplifying-origin-connectivity/"><u>Birthday Week 2024</u></a>, we announced Automatic SSL/TLS: a service that scans origin server configurations of domains behind Cloudflare, and automatically upgrades them to the most secure encryption mode they support. In the past year, <b>this system has quietly strengthened security for more than 6 million domains </b>— ensuring Cloudflare can always connect to origin servers over the safest possible channel, without customers lifting a finger.</p><p>Now, a year after we started enabling Automatic SSL/TLS, we want to talk about these results, why they matter, and how we’re preparing for the next leap in Internet security.</p>
    <div>
      <h2>The Basics: TLS protocol</h2>
      <a href="#the-basics-tls-protocol">
        
      </a>
    </div>
    <p>Before diving in, let’s review the basics of Transport Layer Security (<a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>TLS</u></a>). The protocol allows two strangers (like a client and server) to communicate securely.</p><p>Every secure web session begins with a TLS handshake. Before a single byte of your data moves across the Internet, servers and clients need to agree on a shared secret key that will protect the confidentiality and integrity of your data. The key agreement handshake kicks off with a TLS <i>ClientHello</i> message. This message is the browser/client announcing, “Here’s who I want to talk to (via <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/"><u>SNI</u></a>), and here are the key agreement methods I understand.” The server then proves who it is with its own credentials in the form of a certificate, and together they establish a shared secret key that will protect everything that follows. </p><p>TLS 1.3 added a clever shortcut: instead of waiting to be told which method to use for the shared key agreement, the browser can guess what key agreement the server supports, and include one or more <a href="https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/"><u>keyshares</u></a> right away. If the guess is correct, the handshake skips an extra round trip and the secure connection is established more quickly. If the guess is wrong, the server responds with a <i>HelloRetryRequest</i> (HRR), telling the browser which key agreement method to retry with. This speculative guessing is a major reason TLS 1.3 is so much faster than TLS 1.2.</p>
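The keyshare-guessing flow just described can be sketched as plain decision logic. This is a toy model, not a real TLS stack; the group names and the one-round-trip cost of a retry are the only details carried over from the text above.

```python
# Toy sketch of TLS 1.3 keyshare guessing (not a real TLS stack): the client
# speculatively includes keyshares in its ClientHello; if none match what the
# server supports, the server answers with a HelloRetryRequest and the
# handshake pays one extra round trip.
def extra_round_trips(client_keyshares, server_supported_groups):
    for group in client_keyshares:
        if group in server_supported_groups:
            return 0  # guess was right: no HelloRetryRequest needed
    return 1          # HelloRetryRequest: server names the group to retry with

# A browser guessing X25519 against a server that supports it connects without
# a retry; a server that only accepts P-256 forces one extra round trip.
print(extra_round_trips(["x25519"], {"x25519", "secp256r1"}))  # 0
print(extra_round_trips(["x25519"], {"secp256r1"}))            # 1
```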
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/W2t0sZjiliwZ0FGfFFG6k/113c44b54da1c0355d5bf76fba3080fa/1-2.png" />
          </figure><p>Once both sides agree, the chosen keyshare is used to create a shared secret that encrypts the messages they exchange and allows only the right parties to decrypt them.</p>
    <div>
      <h3>The nitty-gritty details of key agreement</h3>
      <a href="#the-nitty-gritty-details-of-key-agreement">
        
      </a>
    </div>
    <p>Up until recently, most of these handshakes have relied on <a href="https://blog.cloudflare.com/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/"><u>elliptic curve cryptography</u></a> (ECC) using a curve known as X25519. But looming on the horizon are quantum computers, which could one day break ECC algorithms like X25519 and others. To prepare, the industry is shifting toward post-quantum key agreement with <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.203.pdf"><u>ML-KEM</u></a>, deployed in a hybrid mode (<a href="https://datatracker.ietf.org/doc/draft-ietf-tls-ecdhe-mlkem/"><u>X25519 + ML-KEM</u></a>). This ensures that even if quantum machines arrive, <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvested traffic today</u></a> can’t be decrypted tomorrow. X25519 + ML-KEM is <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption"><u>steadily rising to become the most popular</u></a> key agreement for connections to Cloudflare.</p><p>The TLS handshake model is the foundation for how we encrypt web communications today. The history of TLS is really the story of <i>iteration under pressure</i>. It’s a protocol that had to keep evolving, so trust on the web could keep pace with how Internet traffic has changed. It’s also what makes technologies like <b>Cloudflare’s Automatic SSL/TLS</b> possible, by abstracting decades of protocol battles and crypto engineering into a single click, so customer websites can be secured by default without requiring every operator to be a cryptography expert.</p>
    <div>
      <h2>History Lesson: Stumbles and Standards</h2>
      <a href="#history-lesson-stumbles-and-standards">
        
      </a>
    </div>
    <p>Early versions of TLS (then called SSL) in the 1990s suffered from weak keys, limited protection against attacks like <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack"><u>man-in-the-middle</u></a>, and low adoption on the Internet. To stabilize things, the <a href="https://www.ietf.org/"><u>IETF</u></a> stepped in and released <a href="https://www.ietf.org/rfc/rfc2246.txt"><u>TLS 1.0</u></a>, followed by TLS <a href="https://datatracker.ietf.org/doc/html/rfc4346"><u>1.1</u></a> and <a href="https://datatracker.ietf.org/doc/html/rfc5246"><u>1.2</u></a> through the 2000s. These versions added stronger ciphers and patched new attack vectors, but years of fixes and extensions left the protocol bloated and hard to evolve.</p><p>The early 2010s marked a turning point. After the <a href="https://iapp.org/news/a/the-snowden-disclosures-10-years-on"><u>Snowden disclosures</u></a>, the Internet doubled down on encryption by default. Initiatives like <a href="https://en.wikipedia.org/wiki/Let%27s_Encrypt"><u>Let’s Encrypt</u></a>, the mass adoption of <a href="https://en.wikipedia.org/wiki/HTTPS"><u>HTTPS</u></a>, and Cloudflare’s own commitment to offer <a href="https://www.cloudflare.com/application-services/products/ssl/"><u>SSL/TLS for free</u></a> turned encryption from optional, expensive, and complex into an easy baseline requirement for a safer Internet.</p><p>All of this momentum led to <a href="https://datatracker.ietf.org/doc/html/rfc8446"><u>TLS 1.3</u></a> (2018), which cut away legacy baggage, locked in modern cipher suites, and made encrypted connections nearly as fast as the underlying transport protocols like TCP—and sometimes even faster with <a href="https://en.wikipedia.org/wiki/QUIC"><u>QUIC</u></a>.</p>
    <div>
      <h2>The CDN Twist</h2>
      <a href="#the-cdn-twist">
        
      </a>
    </div>
    <p>As Content Delivery Networks (CDNs) rose to prominence, they reshaped how TLS was deployed. Instead of a browser talking directly to a distant server hosting content (what Cloudflare calls an origin), it now speaks to the nearest edge data center, which may in turn speak to an origin server on the client’s behalf.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7CTywdNaDxUXcGHVg5i1MP/975f9b0a74b2b5c5fb59ecb64d3268bb/2.png" />
          </figure><p>This created <b>two distinct TLS layers</b>:</p><ul><li><p><b>Edge ↔ Browser TLS:</b> The front door, built to quickly take on new improvements in security and performance. Edges and browsers adopt modern protocols (TLS 1.3, QUIC, session resumption) to cut down on latency.</p></li><li><p><b>Edge ↔ Origin TLS:</b> The backhaul, which must be more flexible. Origins might be older or poorly maintained, run legacy TLS stacks, or require custom certificate handling.</p></li></ul><p>In practice, CDNs became <i>translators</i>: modernizing encryption at the edge while still bridging to legacy origins. It’s why you can have a blazing-fast TLS 1.3 session from your phone, even if the origin server behind the CDN hasn’t been upgraded in years. </p><p>This is where Automatic SSL/TLS sits in the story of how we secure Internet communications. </p>
    <div>
      <h2>Automatic SSL/TLS </h2>
      <a href="#automatic-ssl-tls">
        
      </a>
    </div>
    <p>Automatic SSL/TLS grew out of Cloudflare’s mission to ensure the web was as encrypted as possible. While we had initially spent an incredibly long time developing secure connections for the “front door” (from browsers to Cloudflare’s edge) with <a href="https://blog.cloudflare.com/introducing-universal-ssl/"><u>Universal SSL</u></a>, we knew that the “back door” (from Cloudflare’s edge to origin servers) would be slower and harder to upgrade. </p><p>One option we offered was <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnel</u></a>, where a lightweight agent runs near the origin server and tunnels traffic securely back to Cloudflare. This approach ensures the connection always uses modern encryption, without requiring changes on the origin itself.</p><p>But not every customer uses Tunnel. Many connect origins directly to Cloudflare’s edge, where encryption depends on the origin server’s configuration. Traditionally this meant customers had to either manually select an <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>encryption mode</u></a> that worked for their origin server or rely on the default chosen by Cloudflare. </p><p>To improve the experience of choosing an encryption mode, we introduced our <a href="https://blog.cloudflare.com/ssl-tls-recommender/"><u>SSL/TLS Recommender</u></a> in 2021.</p><p>The Recommender scanned customer origin servers and then provided recommendations for their most secure encryption mode. 
For example, if the Recommender detected that an origin server was using a certificate signed by a trusted Certificate Authority (CA) such as Let’s Encrypt, rather than a self-signed certificate, it would recommend upgrading from <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full/"><b><u>Full</u></b><u> encryption mode</u></a> to <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/"><b><u>Full (Strict)</u></b><u> encryption mode</u></a>.</p><p>Based on how the origin responded, the Recommender would tell customers if they could improve their SSL/TLS encryption mode to be more secure. The following encryption modes represent what the SSL/TLS Recommender could recommend to customers based on their origin responses: </p><table><tr><td><p><b>SSL/TLS mode</b></p></td><td><p><b>HTTP from visitor</b></p></td><td><p><b>HTTPS from visitor</b></p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/off/"><u>Off</u></a></p></td><td><p>HTTP to Origin</p></td><td><p>HTTP to Origin</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/flexible/"><u>Flexible</u></a></p></td><td><p>HTTP to Origin</p></td><td><p>HTTP to Origin</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full/"><u>Full</u></a></p></td><td><p>HTTP to Origin</p></td><td><p>HTTPS to Origin without certificate validation check</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/"><u>Full (strict)</u></a></p></td><td><p>HTTP to Origin</p></td><td><p>HTTPS to Origin with certificate validation check</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/ssl-only-origin-pull/"><u>Strict (SSL-only origin pull)</u></a></p></td><td><p>HTTPS to Origin with certificate validation 
check</p></td><td><p>HTTPS to Origin with certificate validation check</p></td></tr></table><p>However, in the three years after launching our Recommender we discovered something troubling: of the over two million domains using Recommender, <b>only 30% of the recommendations that the system provided were followed</b>. A significant number of users would not complete the next step of pushing the button to inform Cloudflare that we could communicate with their origin over a more secure setting. </p><p>We were seeing sub-optimal settings that our customers could upgrade from without risk of breaking their site, but for various reasons, our users did not follow through with the recommendations. So we pushed forward by building a system that worked with Recommender and actioned the recommendations by default. </p>
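The mode table above can be read as a small lookup: for each mode, what the edge does on the way to the origin when the visitor arrives over HTTPS. A sketch, where the keys are our illustrative labels rather than Cloudflare API values:

```python
# Sketch of the encryption-mode table: for each mode, the scheme used to reach
# the origin and whether the origin certificate is validated. The dictionary
# keys are illustrative labels, not Cloudflare API values.
ORIGIN_BEHAVIOR = {
    #  mode:        (scheme to origin for HTTPS visitors, certificate validated?)
    "off":          ("http",  False),
    "flexible":     ("http",  False),
    "full":         ("https", False),  # encrypted, but certificate not checked
    "full_strict":  ("https", True),   # encrypted and certificate checked
}

scheme, validated = ORIGIN_BEHAVIOR["full_strict"]
print(scheme, validated)  # https True
```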
    <div>
      <h2>How does Automatic SSL/TLS work? </h2>
      <a href="#how-does-automatic-ssl-tls-work">
        
      </a>
    </div>
    <p>Automatic SSL/TLS works by crawling websites, looking for content over both HTTP and HTTPS, then comparing the results for compatibility. It also performs checks against the TLS certificate presented by the origin and looks at the type of content that is served to ensure it matches. If the downloaded content matches, Automatic SSL/TLS elevates the encryption level for the domain to the compatible and stronger mode, without risk of breaking the site.</p>
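The crawl-and-compare check described above can be sketched as a small decision function. This is an illustrative model with our own names and simplified content hashing, not Cloudflare's actual scanner:

```python
import hashlib

# Illustrative sketch (not Cloudflare's actual scanner): upgrade only when the
# origin serves matching content over HTTP and HTTPS; choose Full (strict)
# when the origin certificate also validates, plain Full otherwise.
def safe_to_upgrade(http_body: bytes, https_body: bytes, cert_valid: bool):
    same = hashlib.sha256(http_body).digest() == hashlib.sha256(https_body).digest()
    if not same:
        return None  # content differs: keep the current mode, don't risk breakage
    return "full_strict" if cert_valid else "full"

print(safe_to_upgrade(b"<html>ok</html>", b"<html>ok</html>", True))  # full_strict
```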
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/49AaKdddEOgWXk1Oxlg2Qp/be44b863e2f4c797fa58c8b81f93f51a/3.png" />
          </figure><p>More specifically, these are the steps that Automatic SSL/TLS takes to upgrade a domain’s security: </p><ol><li><p>Each domain is scheduled for a scan <b>once per month</b> (until it reaches the maximum supported encryption mode).</p></li><li><p>The scan evaluates the current <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>encryption mode</u></a> for the domain. If it’s lower than what the Recommender thinks the domain can support based on the <a href="https://blog.cloudflare.com/introducing-automatic-ssl-tls-securing-and-simplifying-origin-connectivity/#:~:text=When%20the%20Recommender,recommendation%20is%20followed."><u>results</u></a> of its probes and content scans, the system begins a gradual upgrade.</p></li><li><p>Automatic SSL/TLS begins to upgrade the domain by connecting with origins over the more secure mode starting with just 1% of its traffic.</p></li><li><p>If connections to the origin succeed, the result is logged as successful.</p><ol><li><p>If they fail, the system records the failure to Cloudflare’s control plane and aborts the upgrade. 
Traffic is immediately downgraded back to the previous SSL/TLS setting to ensure seamless operation.</p></li></ol></li><li><p>If no issues are found, the new SSL/TLS encryption mode is applied to traffic in 10% increments until 100% of traffic uses the recommended mode.</p></li><li><p>Once 100% of traffic has been successfully upgraded with no TLS-related errors, the domain’s SSL/TLS setting is permanently updated.</p></li><li><p><b>Special handling for Flexible → Full/Strict:</b> These upgrades are more cautious because customers’ <a href="https://developers.cloudflare.com/cache/how-to/cache-keys/"><u>cache keys</u></a> are changed (from <code>http</code> to <code>https</code> origin scheme).</p><ol><li><p>In this situation, traffic ramps up from 1% to 10% in 1% increments, allowing customers’ cache to warm up.</p></li><li><p>After 10%, the system resumes the standard 10% increments until 100%.</p></li></ol></li></ol><p>We know that transparency and visibility are critical, especially when automated systems make changes. To keep customers informed, Automatic SSL/TLS sends a weekly digest to account <a href="https://developers.cloudflare.com/fundamentals/manage-members/roles/"><u>Super Administrators</u></a> whenever updates are made to domain encryption modes. This way, you always have visibility into what changed and when. </p><p>In short, Automatic SSL/TLS automates what used to be trial and error: finding the strongest SSL/TLS mode your site can support while keeping everything working smoothly.</p>
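The ramp described in the steps above can be sketched as a schedule of traffic percentages. The exact step values are one plausible reading of the text, our assumption rather than a documented contract:

```python
# Sketch of the gradual rollout (assumed step values, one plausible reading):
# a 1% canary, then 10% increments up to 100%. Flexible -> Full/Strict
# upgrades first creep from 1% to 10% in 1% increments so the new https
# cache keys can warm up before the standard increments resume.
def ramp_schedule(flexible_upgrade=False):
    steps = list(range(1, 11)) if flexible_upgrade else [1]
    nxt = steps[-1] + 10
    while nxt <= 100:
        steps.append(nxt)
        nxt += 10
    if steps[-1] != 100:
        steps.append(100)  # always finish at full traffic
    return steps

print(ramp_schedule(flexible_upgrade=True))
```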
    <div>
      <h2>How are we doing so far?  </h2>
      <a href="#how-are-we-doing-so-far">
        
      </a>
    </div>
    <p>So far we have onboarded <b>all Free, Pro, and Business domains to use Automatic SSL/TLS</b>. We also have enabled this for <b>all new domains</b> that will onboard onto Cloudflare regardless of plan type. Soon, we will start onboarding Enterprise customers as well. If you already have an Enterprise domain and want to try out Automatic SSL/TLS, we encourage you to <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/#automatic-ssltls-default"><u>enable it in the SSL/TLS section</u></a> of the dashboard or via the API. </p><p>As of the publishing of this blog, we’ve upgraded over <b>6 million domains</b> to be more secure without the website operators needing to manually configure anything on Cloudflare. </p><table><tr><td><p><b>Previous Encryption Mode</b></p></td><td><p><b>Upgraded Encryption Mode</b></p></td><td><p><b>Number of domains</b></p></td></tr><tr><td><p>Flexible</p></td><td><p>Full</p></td><td><p>~ 2,200,000</p></td></tr><tr><td><p>Flexible</p></td><td><p>Full (strict)</p></td><td><p>~ 2,000,000</p></td></tr><tr><td><p>Full </p></td><td><p>Full (strict)</p></td><td><p>~ 1,800,000</p></td></tr><tr><td><p>Off</p></td><td><p>Full</p></td><td><p>~ 7,000</p></td></tr><tr><td><p>Off</p></td><td><p>Full (strict)</p></td><td><p>~ 5,000</p></td></tr></table><p>We’re most excited about the over 4 million domains that moved from <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/flexible/"><u>Flexible</u></a> or <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/off/"><u>Off</u></a>, which use HTTP to origin servers, to <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full/"><u>Full</u></a> or <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/"><u>Full (strict)</u></a>, which use HTTPS. 
</p><p>If you have a reason to use a particular encryption mode (e.g., on a test domain that isn’t production ready) you can always disable Automatic SSL/TLS and manually set the encryption mode that works best for your use case.</p><p>Today, SSL/TLS mode works on a domain-wide level, which can feel blunt. This means that one suboptimal subdomain can keep the entire domain in a less secure TLS setting, to ensure availability. Our long-term goal is to make these controls more precise, so that Automatic SSL/TLS and encryption modes can optimize security per origin or subdomain, rather than treating every hostname the same.</p>
    <div>
      <h2>Impact on origin-facing connections</h2>
      <a href="#impact-on-origin-facing-connections">
        
      </a>
    </div>
    <p>Since we began onboarding domains to <b>Automatic SSL/TLS</b> in late 2024 and early 2025, we’ve been able to measure how origin connections across our network are shifting toward stronger security. Looking at the ratios across all origin requests, the trends are clear:</p><ul><li><p><b>Encryption is rising.</b> Plaintext connections are steadily declining, a reflection of Automatic SSL/TLS helping millions of domains move to HTTPS by default. We’ve seen <b>a correlated 7-8% reduction in plaintext origin-bound connections.</b> Still, some origins remain on outdated configurations, and these should be upgraded to keep pace with modern security expectations.</p></li><li><p><b>TLS 1.3 is surging.</b> Since late 2024, TLS 1.3 adoption has climbed sharply, now making up the majority of encrypted origin traffic (almost 60%). While Automatic SSL/TLS doesn’t control which TLS version an origin supports, this shift is an encouraging sign for both performance and security.</p></li><li><p><b>Older versions are fading.</b> Month after month, TLS 1.2 continues to shrink, while TLS 1.0 and 1.1 are now so rare they barely register.</p></li></ul><p>The decline in plaintext connections is encouraging, but it also highlights a long tail of servers still relying on outdated packages or configurations. Sites like <a href="https://www.ssllabs.com/ssltest/"><u>SSL Labs</u></a> can be used, for instance, to check a server’s TLS configuration. However, simply copy-pasting settings to achieve a high rating can be risky, so we encourage customers to review their origin TLS configurations carefully. In addition, <a href="https://developers.cloudflare.com/ssl/origin-configuration/origin-ca/"><u>Cloudflare origin CA</u></a> or <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnel</u></a> can help provide guidance for upgrading origin security.</p>
    <div>
      <h3>Upgraded domain results</h3>
      <a href="#upgraded-domain-results">
        
      </a>
    </div>
    <p>Instead of focusing on the entire network of origin-facing connections from Cloudflare, we’re now going to drill into specific changes that we’ve seen from domains that have been upgraded by <b>Automatic SSL/TLS</b>. </p><p>By January 2025, most domains had been enrolled in Automatic SSL/TLS, and the results were dramatic: a near 180-degree shift from plaintext to encrypted communication with origins. After that milestone, traffic patterns leveled off into a steady plateau, reflecting a more stable baseline of secure connections across the network. There is some drop in encrypted traffic, which may represent some of the originally upgraded domains manually turning off Automatic SSL/TLS.</p><p>But the story doesn’t end there. In the past two months (July and August 2025), we’ve observed another noticeable uptick in encrypted origin traffic. This likely reflects customers upgrading outdated origin packages and enabling stronger TLS support—evidence that Automatic SSL/TLS not only raised the floor on encryption but continues nudging the long tail of domains toward better security.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6nJe12swMSMXBQsgzEhXtq/78debf8e0c3efbaf66bce8cf6e623c80/4.png" />
          </figure><p>To further explore the “encrypted” line above, we wanted to see the delta between TLS 1.2 and 1.3. We originally wanted to include all TLS versions we support, but the levels of 1.0 and 1.1 were so small that they skewed the graph, so we removed them. We see a noticeable rise in support for both TLS 1.2 and 1.3 between Cloudflare and origin servers. It is also interesting to note that while TLS 1.2 is decreasing network-wide, it has generally increased among the automatically upgraded domains, potentially signifying origin TLS stacks that could be updated further.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/BoRlq4irKWuvuXs5E4e8l/3971165f5029a03ae64dac79235a8671/5.png" />
          </figure><p>Finally, for Full (Strict) mode, we wanted to investigate the number of successful certificate validations we performed. This line shows a dramatic increase of approximately 40% in successful certificate validations for customers upgraded by Automatic SSL/TLS. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5nNyiMNQ4xtOubbrhnDuRY/af16c0792a73de71fa0176e6c1cfeb0b/6.png" />
          </figure><p>We’ve seen a largely successful rollout of Automatic SSL/TLS so far, with millions of domains upgraded to stronger encryption by default. We’ve seen Automatic SSL/TLS improve origin-facing security, safely pushing connections to stronger modes whenever possible, without risking site breakage. Looking ahead, we’ll continue to expand this capability to more customer use cases as we help to build a more encrypted Internet.</p>
    <div>
      <h2>What will we build next for Automatic SSL/TLS? </h2>
      <a href="#what-will-we-build-next-for-automatic-ssl-tls">
        
      </a>
    </div>
    <p>We’re expanding Automatic SSL/TLS with new features that give customers more visibility and control, while keeping the system safe by default. First, we’re building an <b>ad-hoc scan</b> option that lets you rescan your origin earlier than the standard monthly cadence. This means if you’ve just rotated certificates, upgraded your origin’s TLS configuration, or otherwise changed how your server handles encryption, you won’t need to wait for the next scheduled pass—Cloudflare will be able to re-evaluate and move you to a stronger mode right away.</p><p>In addition, we’re working on <b>error surfacing</b> that will highlight origin connection problems directly in the dashboard and provide actionable guidance for remediation. Instead of discovering after the fact that an upgrade failed, or a change on the origin resulted in a less secure setting than what was set previously, customers will be able to see where the issue lies and how to fix it. </p><p>Finally, for <b>newly onboarded domains</b>, we plan to add clearer guidance on when to finish configuring the origin before Cloudflare runs its first scan and sets an encryption mode. Together, these improvements are designed to reduce surprises, give customers more agency, and ensure smoother upgrades. We expect all three features to roll out by June 2026.</p>
    <div>
      <h2>Post Quantum Era</h2>
      <a href="#post-quantum-era">
        
      </a>
    </div>
    <p>Looking ahead, quantum computers introduce a serious risk: data encrypted today can be harvested and decrypted years later once quantum attacks become practical. To counter this <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest-now, decrypt-later</u></a> threat, the industry is moving towards post-quantum cryptography (PQC)—algorithms designed to withstand quantum attacks. We have extensively written on this subject <a href="https://blog.cloudflare.com/tag/post-quantum/"><u>in our previous blogs</u></a>.</p><p>In August 2024, NIST finalized its PQC standards: <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.203.pdf"><u>ML-KEM</u></a> for key agreement, and <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.204.pdf"><u>ML-DSA</u></a> and <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.205.pdf"><u>SLH-DSA</u></a> for digital signatures. In collaboration with industry partners, Cloudflare has helped drive the development and deployment of PQC. We have deployed the hybrid key agreement, combining ML-KEM (post-quantum secure) and X25519 (classical), to secure TLS 1.3 traffic to our servers and internal systems. As of mid-September 2025, <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption"><u>around 43%</u></a> of human-generated connections to Cloudflare are already protected with the hybrid post-quantum secure key agreement – a huge milestone in preparing the Internet for the quantum era.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2hgIUNO8TM50kvAOvzQ8rg/cdbe5b3d64390fc4b946036e2f37471d/6.png" />
          </figure><p>But things look different on the other side of the network. When Cloudflare connects to origins, we act as the client, navigating a fragmented landscape of hosting providers, software stacks, and middleboxes. Each origin may support a different set of cryptographic features, and not all are ready for hybrid post-quantum handshakes.</p><p>To manage this diversity without the risk of breaking connections, we relied on <i>HelloRetryRequest</i>. Instead of sending a post-quantum keyshare immediately in the <i>ClientHello</i>, we only advertise support for it. If the origin server supports the post-quantum key agreement, it uses <i>HelloRetryRequest</i> to request it from Cloudflare, and creates the post-quantum connection. The downside is that this extra round trip (from the retry) cancels out the performance gains of TLS 1.3 and makes the connection feel closer to TLS 1.2 for uncached requests.</p><p>Back in 2023, <a href="https://developers.cloudflare.com/ssl/post-quantum-cryptography/pqc-to-origin/"><u>we launched an API endpoint</u></a> so customers could manually opt their origins into preferring post-quantum connections. If set, we avoid the extra round trip and try to create a post-quantum connection at the start of the TLS session. Similarly, we extended post-quantum protection to <a href="https://blog.cloudflare.com/post-quantum-tunnel/"><u>Cloudflare Tunnel</u></a>, making it one of the easiest ways to get origin-facing PQ today.</p><p><b>Starting Q4 2025, we’re taking the next step – making it </b><b><i>automatic</i></b><b>. </b>Just as we’ve done with SSL/TLS upgrades, Automatic SSL/TLS will begin testing, ramping, and enabling post-quantum handshakes with origins—without requiring customers to change a thing, as long as their origins support post-quantum key agreement.</p><p>Behind the scenes, we’re already scanning active origins about every 24 hours to test support and preferences for both classical and post-quantum key agreements. 
We’ve worked directly with vendors and customers to identify compatibility issues, and this new scanning system will be fully integrated into <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/#automatic-ssltls-default"><u>Automatic SSL/TLS</u></a>.</p><p>And the benefits won’t stop at post-quantum. Even for classical handshakes, optimization matters. Today, the X25519 algorithm is used by default, but <b>our scanning data shows that more than 6% of origins currently prefer a different key agreement algorithm,</b> which leads to unnecessary <i>HelloRetryRequests</i> and wasted round trips. By folding this scanning data into Automatic SSL/TLS, we’ll improve connection establishment for classical TLS as well—squeezing out extra speed and reliability across the board.</p><p>As enterprises and hosting providers adopt PQC, our preliminary scanning pipeline has already found that <b>around 4% of origins could benefit from a post-quantum-preferred key agreement even today</b>, as shown below. This is an 8x increase since <a href="https://blog.cloudflare.com/post-quantum-to-origins/"><u>we started our scans in 2023</u></a>. We expect this number to grow at a steady pace as the industry continues to migrate to post-quantum protocols.</p>
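The scan-driven keyshare choice described above can be sketched like this. `X25519MLKEM768` is the hybrid group name from the draft-ietf-tls-ecdhe-mlkem specification; the shape of the scan result is our assumption:

```python
# Toy sketch of scan-driven keyshare selection (scan-result shape is assumed):
# if a recent scan says the origin prefers the post-quantum hybrid group, send
# that keyshare directly in the ClientHello and skip the HelloRetryRequest
# round trip; otherwise default to classical X25519.
def initial_keyshare(scan_result):
    if scan_result.get("preferred_group") == "X25519MLKEM768":
        return "X25519MLKEM768"  # hybrid PQ key agreement, no extra round trip
    return "X25519"              # classical default

print(initial_keyshare({"preferred_group": "X25519MLKEM768"}))  # X25519MLKEM768
print(initial_keyshare({}))                                     # X25519
```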
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3g2Um1vTz6cuCtoYWtMU4C/7551fb50305a8512fa7cc22844024b99/8.png" />
          </figure><p>As part of this change, we will also<b> phase out</b> support for the pre-standard version X25519Kyber768 in favor of the final ML-KEM standard (again used in a hybrid) for edge-to-origin connections.</p><p>With Automatic SSL/TLS, we will soon proactively scan your origins by default and send their most preferred keyshare directly, removing the need for any extra round trip and improving both the security and performance of your origin connections.</p><p>At Cloudflare, we’ve always believed security is a right, not a privilege. From Universal SSL to post-quantum cryptography, our mission has been to make the strongest protections free and available to everyone. <b>Automatic SSL/TLS</b> is the next step—upgrading every domain to the best protocols automatically. Check the SSL/TLS section of your dashboard to ensure it’s enabled and join the millions of sites already secured for today and ready for tomorrow.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[CDN]]></category>
            <guid isPermaLink="false">7nO4wFW304Eh2r48934ugz</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Suleman Ahmad</dc:creator>
            <dc:creator>Yawar Jamal</dc:creator>
        </item>
        <item>
            <title><![CDATA[The RUM Diaries: enabling Web Analytics by default]]></title>
            <link>https://blog.cloudflare.com/the-rum-diaries-enabling-web-analytics-by-default/</link>
            <pubDate>Wed, 17 Sep 2025 19:21:27 GMT</pubDate>
            <description><![CDATA[ On October 15th 2025, Cloudflare is enabling Web Analytics for all free domains by default—helping you see how your site performs around the world in real time, without ever collecting personal data. ]]></description>
            <content:encoded><![CDATA[ <p>Measuring and improving performance on the Internet can be a daunting task because it spans multiple layers: from the user’s device and browser, to DNS lookups and the network routes, to edge configurations and origin server location. Each layer introduces its own variability, such as last-mile bandwidth constraints, third-party scripts, or limited CPU resources, which is often invisible unless you have robust <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability tooling</a> in place. Even if you gather data from most of these Internet hops, performance engineers still need to correlate different metrics like front-end events, network processing times, and server-side logs in order to pinpoint where and why elusive “latency” occurs, and how to fix it. </p><p>We want to solve this problem by providing a powerful, in-depth monitoring solution that helps you debug and optimize applications, so you can understand and trace performance issues across the Internet, end to end.</p><p>That’s why we’re excited to announce the <b><i>start</i></b> of a major upgrade to Cloudflare’s performance analytics suite: Web Analytics as part of our real user monitoring (RUM) tools will soon be combined with network-level insights to help you pinpoint performance issues anywhere on a packet’s journey — from a visitor’s browser, through Cloudflare’s network, to your origin.</p><p>Some popular web performance monitoring tools have sacrificed user privacy in order to achieve depth of visibility. We’re going to remove that tradeoff. 
By correlating client-side metrics (like <a href="https://web.dev/articles/vitals#core_web_vitals"><u>Core Web Vitals</u></a>) with detailed network and origin data, developers can see where slowdowns occur — and why — all while preserving end user privacy (by dropping client-specific information and aggregating data by visits, as explained in greater detail below).</p><p>Over the next several months we’ll share:</p><ul><li><p>How Web Analytics works</p></li><li><p>Real-world debugging examples from across the Internet</p></li><li><p>Tips to get the most value from Cloudflare’s analytics tools</p></li></ul><p>The journey starts on <b>October 15, 2025</b>, when Cloudflare will enable <a href="https://www.cloudflare.com/web-analytics/"><u>Web Analytics</u></a> <b>for all free domains by default</b> — helping you see how your site actually performs for visitors around the world in real time, without ever collecting any personal data (not applicable to traffic originating from the EU or UK, <a href="#what-does-privacy-first-mean">see below</a>). By the middle of 2026, we’ll deliver something nobody has ever had before: a comprehensive, <a href="https://blog.cloudflare.com/privacy-first-web-analytics/"><u>privacy-first platform</u></a> for performance monitoring and debugging. Unlike many other tools, this platform won’t just show you where latency lives; it will help you fix it, all in one place. From untangling the trickiest bottlenecks, to getting a crystal-clear view of global performance, this new tool will change how you see your web application and experiment with new performance features. And we’re not building it behind closed doors: we want to bring you along as we launch it in public. Follow along in this series, <i>The RUM Diaries</i>, as we share the journey.</p>
    <div>
      <h2>Why this matters</h2>
      <a href="#why-this-matters">
        
      </a>
    </div>
    <p>Performance monitoring is only as good as the detail you can see — and the trust your users have that while you’re watching traffic performance, you aren’t watching <i>them</i>. As we explain below, by combining <b>real user metrics</b> with <b>deep, in-network instrumentation</b>, we’ll give developers the visibility to debug any layer of the stack while maintaining Cloudflare’s zero-compromise stance on privacy.</p>
    <div>
      <h2>What problem are we solving? </h2>
      <a href="#what-problem-are-we-solving">
        
      </a>
    </div>
    <p>Many performance monitoring solutions provide only a narrow slice of the performance layer cake, focusing on either the client or the origin while lumping everything in between under a vague “processing time” due to lack of visibility. But as web applications get more complex and user expectations continue to rise, traditional analytics alone don’t cut it. Knowing <i>what</i> happened is just the tip of the iceberg; modern teams need to understand <i>why</i> a bottleneck occurred and <i>how</i> network conditions, code changes, or even a single external script can degrade load times. Moreover, the available tools can often only <i>observe</i> performance rather than help optimize it, leaving teams unsure of what to try in order to move the needle on latency.</p><p>We want to pull back the curtain so you can understand the performance implications of the services you use on our platform and how you can make sure you’re getting the best performance possible. </p><p>Consider Shannon in Detroit, Michigan. She operates an e-commerce site selling hard-to-find watches to horology enthusiasts around the globe. Shannon knows that her customers are impatient (she pictures them frequently checking their wrists). If her site loads slowly, she loses sales, her SEO drops, and her customers go to a different store where they have a better online shopping experience. </p><p>As a result, Shannon continually monitors her site performance, but she frequently runs into problems trying to understand how her site is experienced by customers in different parts of the world. After updating her site, she spot-checks its performance using her browser on her office Wi-Fi in Detroit, but she continually hears complaints about slow load times from her customers in Germany. So Shannon shops around for a solution that monitors performance around the globe. 
</p><p>This off-the-shelf performance monitoring solution offers her the ability to run similar tests from virtual machines situated around the world, across various desktops, mobile devices, and even ISPs, close to her customers. Shannon receives data from these tests, ranging from how quickly these synthetic clients’ DNS lookups resolved, to how fast they connected to a particular server, to when a response was on its way back to a client. Thankfully for Shannon, the off-the-shelf performance monitoring solution identified “server processing time” as the latency culprit in Germany. However, she can’t help but wonder: is it my server that is slow, or the transit connection of my users in Germany? Can I make my site faster by adding another server in Germany, or by updating my CDN configuration? It’s a three-option head-scratcher: is it a networking problem, a server problem, or something else?</p><p>Cloudflare can help Shannon (and others!) because we sit in a unique position to provide richer performance analytics. As a reverse proxy positioned between the client and the origin, we are often the first web server a user connects to when requesting content. In addition to moving what’s important closer to your customers, our product suite can generate responses at our edge (e.g. <a href="https://developers.cloudflare.com/learning-paths/workers/get-started/first-worker/"><u>Workers</u></a>), steer traffic through our <a href="https://blog.cloudflare.com/backbone2024/"><u>dedicated backbone</u></a> (e.g. cloudflared and more), and route around Internet traffic jams (e.g. <a href="https://blog.cloudflare.com/argo-v2/"><u>Argo</u></a>). 
By tailoring a solution that brings together: </p><ul><li><p>client performance data, </p></li><li><p>real-time network metrics,</p></li><li><p>customer configuration settings, and</p></li><li><p>origin performance measurements</p></li></ul><p>we can provide more insightful information about what’s happening inside the vague “processing time.” This will allow developers like Shannon to understand what to tweak to make their sites more performant, grow their businesses, and make their customers happier. </p>
    <div>
      <h2>What is Web Analytics? </h2>
      <a href="#what-is-web-analytics">
        
      </a>
    </div>
    <p>Turning back to what’s happening on <b>October 15, 2025</b>: We’re enabling Web Analytics so teams can track down performance bottlenecks. Web Analytics works by adding a lightweight JavaScript snippet to your website, which helps monitor performance metrics from visitors to your site. In the Web Analytics dashboard you can see aggregate performance data related to: how a browser has painted the page (via <a href="https://web.dev/articles/lcp"><u>LCP</u></a>, <a href="https://web.dev/articles/inp"><u>INP</u></a>, and <a href="https://web.dev/articles/cls"><u>CLS</u></a>), general load time metrics associated with server processing, and aggregate counts of visitors.</p><p>If you’ve ever popped open DevTools in your browser and stared at the waterfall chart of a slow-loading page, you’ve had a taste of what Web Analytics is doing, except instead of measuring <i>your</i> load times from <i>your</i> laptop, it’s measuring them directly from the browsers of real visitors.</p><p>Here’s the high-level architecture:</p><p><b>A lightweight beacon in the browser
</b>Every page that you track with Cloudflare’s Web Analytics includes a tiny JavaScript snippet, optimized to load asynchronously so it won’t block rendering.</p><ul><li><p>This snippet hooks into modern browser APIs such as the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Performance"><u>Performance API</u></a> and Resource Timing</p></li><li><p>This is how Cloudflare collects Core Web Vitals metrics like <b>Largest Contentful Paint</b> and <b>Interaction to Next Paint</b>, plus data about resource load times and TLS handshake duration from the client’s perspective.</p></li></ul><p><b>Aggregation at the edge
</b>When the browser sends performance data, it goes to the nearest Cloudflare data center. Instead of pushing raw events straight to a database, we pre-process at the edge. This reduces storage needs, minimizes latency, and removes personal information like IP addresses. After this pre-processing, the data is sent to a core data center, where it is processed and made available for users to query.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6QLjAwnkmYM5tXv9hbVv79/98684d34b3555532b3c2bc94039aacc2/BLOG-2675_2.png" />
          </figure><p><b>Web Analytics </b>sits under the <b>Analytics &amp; Logs</b> section of the dashboard (at both the account and domain level). Starting on October 15, 2025, free domains will begin to see Web Analytics enabled by default and will be able to view their visitors’ performance data in the dashboard. Pro, Business, and Enterprise accounts can enable Web Analytics by choosing the hostname of the website to add the snippet to and selecting <b>Automatic Setup</b>. Alternatively, you can manually paste the JavaScript beacon before the closing <code>&lt;/body&gt;</code> tag on any HTML page you’d like to track from your origin. Just select “manage site” from the Web Analytics tab in the dashboard. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ucGMd53CtM2Y5pGVPpaSa/8444898164ee7c45afa7755960000d38/BLOG-2675_3.png" />
          </figure><p>Once enabled, the JS snippet works with visitors’ browsers to measure how the user experienced page load times and reports on critical client-side metrics. Below these metrics are resource attribution tables that help users understand which assets take the most time to load for each metric, so they can better optimize their site performance. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/RrhjEuT91lp4OfEKi9dxm/490f270eebebd5cbd648c315d222d3d6/BLOG-2675_4.png" />
          </figure>
    <div>
      <h2>What does privacy-first mean?</h2>
      <a href="#what-does-privacy-first-mean">
        
      </a>
    </div>
    <p>From the beginning, our Web Analytics tools have centered on providing insights without compromising privacy. Being privacy-first means we don’t track individual users for analytics. We don’t use any client-side state (like cookies or localStorage) for analytics purposes, and we don’t track users over time by IP address, User Agent, or any other fingerprinting technique.</p><p>Moreover, when enabling Web Analytics, you can choose to drop requests from European and UK visitors if you so desire (the specific countries are listed <a href="https://developers.cloudflare.com/speed/speed-test/rum-beacon/#rum-excluding-eeaeu"><u>here</u></a>), meaning we will not collect any RUM metrics from traffic that passes through our European and UK data centers. <b>The version of Web Analytics that will be enabled by default excludes data from EU visitors (this can be changed in the dashboard if you want). </b></p><p>The concept of a <i>visit</i> is key to our privacy approach. Rather than count unique IP addresses (which would require storing state about each visitor), we simply count page views that originate from a distinct referral or navigation event, avoiding the need to store information that might be considered personal data. We believe this same concept that we’ve used for years in providing our privacy-first Web Analytics can be logically extended to network and origin metrics. This will allow customers to gain the insights they need to debug and solve performance issues while ensuring they are not collecting unneeded data on visitors.</p>
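<p>As an illustration of this stateless approach, here is a simplified, hypothetical sketch (the helper and host name are invented for illustration) of counting visits from referrer information alone, with no cookies or IP addresses involved:</p>

```python
# Count "visits" without per-user state: a page view starts a new visit
# when it arrives via an external referral or direct navigation, while
# internal clicks continue an existing visit.
from urllib.parse import urlparse

def count_visits(page_views, own_host="example.com"):
    visits = 0
    for referrer in page_views:  # each entry is the Referer of one page view
        host = urlparse(referrer).netloc if referrer else ""
        if host != own_host:     # direct navigation or external referral
            visits += 1
    return visits

views = [
    "",                                  # direct navigation -> new visit
    "https://search.example/?q=watches", # external referral -> new visit
    "https://example.com/",              # internal click -> same visit
    "https://example.com/catalog",       # internal click -> same visit
]
assert count_visits(views) == 2
```

<p>Nothing about the visitor needs to be remembered between page views, which is what makes the metric compatible with a privacy-first design.</p>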
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4UdLc8qugqv29lZUYyB41d/c4def741c23a6cbf2937d3b05a804c03/BLOG-2675_5.png" />
          </figure>
    <div>
      <h2>Opting-out</h2>
      <a href="#opting-out">
        
      </a>
    </div>
    <p>We built our Web Analytics service to give you the insights you need to run your website, all while maintaining a privacy-first approach. However, if you do want to opt out, here are the steps to do so.</p>
    <div>
      <h3>Via Dashboard</h3>
      <a href="#via-dashboard">
        
      </a>
    </div>
    <p>If you have a free domain and do not want Web Analytics automatically enabled for your zone, do the following before October 15, 2025: </p><ol><li><p>Navigate to the zone in the Cloudflare dashboard</p></li><li><p>In the list on the left of the screen, navigate to Web Analytics
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/lWwBak29Cmv1UijeKGhH6/14c3980ddcf9845cd4e97571b362a8e4/Screenshot_2025-09-17_at_11.48.13%C3%A2__AM.png" />
          </figure><p></p></li><li><p>On the next page, select either <code>Enable Globally</code> or <code>Exclude EU</code> to activate the feature
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4M8Gb1cqDkCmC1u45Xn1iG/bda1ffe64212b3a2e10befd7a01c9eb3/BLOG-2675_7.png" />
          </figure><p></p></li><li><p>Once Web Analytics has been activated, navigate to <code>Manage RUM Settings</code> in the Web Analytics dashboard
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5LXl9FnYS2JRnfl4fsMXle/a5e74ed39dfd888514ed6e489db911f0/Screenshot_2025-09-17_at_11.47.46%C3%A2__AM.png" />
          </figure><p></p></li><li><p>Then, on the next page, select <code>Disable</code> to disable Web Analytics for the zone
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6JCslLOmHqnqw7BXR4JHZf/fa9a391f399e70c525c2b947a8ed16a0/BLOG-2675_9.png" />
          </figure><p></p></li><li><p>OR, to remove Web Analytics from the zone entirely, delete the configs by clicking <code>Advanced Options</code> and then <code>Delete
</code></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/GYyPsNL6mXt1SIVWsrm5M/ecd627e14ab398db1e1cc87edbb66030/BLOG-2675_10.png" />
          </figure><p>Once you have disabled the product, we will not re-enable it. You can re-enable it yourself whenever you want, however.</p></li></ol>
    <div>
      <h3>Via API</h3>
      <a href="#via-api">
        
      </a>
    </div>
    <ol><li><p>Create a Web Analytics configuration with the following API call:
</p>
            <pre><code>curl https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/rum/site_info \
    -H 'Content-Type: application/json' \
    -H "X-Auth-Email: $CLOUDFLARE_EMAIL" \
    -H "X-Auth-Key: $CLOUDFLARE_API_KEY" \
    -d '{
          "auto_install": false,
          "host": "example.com",
          "zone_tag": "023e105f4ecef8ad9ca31a8372d0c353"
        }'
</code></pre>
            <p><sub><i>Note: This will not cause your zone to collect RUM data because auto_install is set to `false`</i></sub></p></li><li><p>Collect the <code>site_tag</code> and <code>zone_tag</code> fields from the response to this call</p><ol><li><p><code>site_tag</code> in this response will correspond to <code>$SITE_ID</code> in the following calls</p></li></ol></li><li><p>EITHER Disable the Web Analytics configuration with the following API call:
</p>
            <pre><code>curl https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/rum/site_info/$SITE_ID \
    -X PUT \
    -H 'Content-Type: application/json' \
    -H "X-Auth-Email: $CLOUDFLARE_EMAIL" \
    -H "X-Auth-Key: $CLOUDFLARE_API_KEY" \
    -d '{
          "auto_install": true,
          "enabled": false,
          "host": "example.com",
          "zone_tag": "023e105f4ecef8ad9ca31a8372d0c353"
        }'

</code></pre>
            <p></p></li><li><p>OR Delete the Web Analytics configuration with the following API call:
</p>
            <pre><code>curl https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/rum/site_info/$SITE_ID \
    -X DELETE \
    -H "X-Auth-Email: $CLOUDFLARE_EMAIL" \
    -H "X-Auth-Key: $CLOUDFLARE_API_KEY"</code></pre>
            <p></p></li></ol>
    <div>
      <h2>Where We’re Going Next</h2>
      <a href="#where-were-going-next">
        
      </a>
    </div>
    <p>Today, Web Analytics gives you visibility into how <i>people</i> experience your site in the browser. Next, we’re expanding that lens to show <i>what’s happening across the entire request path</i>, from the click in a user’s browser, through Cloudflare’s global network, to your origin servers, and back.</p><p>Here’s what’s coming:</p><ol><li><p><b>Correlating Across Layers
</b>We’ll match RUM data from the client with network timing, Cloudflare edge processing, and origin response latency, allowing you to pinpoint whether a spike in TTFB comes from a slow script, a cache miss, or an origin bottleneck.</p></li><li><p><b>Proactive Alerting
</b>Configurable alerts will tell you when performance regresses in specific geographies, when a data center underperforms, or when origin latency spikes.</p></li><li><p><b>Actionable Insights
</b>We’ll go beyond “processing time” as a single number, breaking it into the real-world steps that make up the journey: proxy routing, security checks, cache lookups, origin fetches, and more.</p></li><li><p><b>Unified View
</b>All of this will live in one place (your Cloudflare dashboard) alongside your analytics, logs, firewall events, and configuration settings, so you can see cause and effect in one workflow.</p></li></ol>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Stay tuned as we work alongside you, in public, to build the most comprehensive, privacy-focused performance analytics platform. Together, we will illuminate every corner of the request journey so you can optimize, innovate, and deliver the best experiences to your users, every time.</p><p>The next chapters of this journey will unlock proactive alerts, cross-layer correlation, and actionable insights you can’t get anywhere else. Follow along; the RUM Diaries are just getting started.</p> ]]></content:encoded>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[Application Services]]></category>
            <guid isPermaLink="false">6R0B3dMIIePvBoBb8TzKNG</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Tim Kadlec</dc:creator>
        </item>
        <item>
            <title><![CDATA[“You get Instant Purge, and you get Instant Purge!” — all purge methods now available to all customers]]></title>
            <link>https://blog.cloudflare.com/instant-purge-for-all/</link>
            <pubDate>Tue, 01 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Following up on having the fastest purge in the industry, we have now increased Instant Purge quotas across all Cloudflare plans.  ]]></description>
            <content:encoded><![CDATA[ <p>There's a tradition at Cloudflare of launching real products on April 1, instead of the usual joke product announcements circulating online today. In previous years, we've introduced impactful products like <a href="https://blog.cloudflare.com/announcing-1111/"><u>1.1.1.1</u></a> and <a href="https://blog.cloudflare.com/introducing-1-1-1-1-for-families/"><u>1.1.1.1 for Families</u></a>. Today, we're excited to continue this tradition by <b>making every purge method available to all customers, regardless of plan type.</b></p><p>During Birthday Week 2024, we <a href="https://blog.cloudflare.com/instant-purge/"><u>announced our intention</u></a> to bring the full suite of purge methods — including purge by URL, purge by hostname, purge by tag, purge by prefix, and purge everything — to all Cloudflare plans. Historically, methods other than "purge by URL" and "purge everything" were exclusive to Enterprise customers. However, we've been openly rebuilding our purge pipeline over the past few years (hopefully you’ve read <a href="https://blog.cloudflare.com/part1-coreless-purge/"><u>some of our</u></a> <a href="https://blog.cloudflare.com/rethinking-cache-purge-architecture/"><u>blog</u></a> <a href="https://blog.cloudflare.com/instant-purge/"><u>series</u></a>), and we're thrilled to share the results more broadly. We've spent recent months ensuring the new Instant Purge pipeline performs consistently under 150 ms, even during increased load scenarios, making it ready for every customer.  </p><p>But that's not all — we're also significantly raising the default purge rate limits for Enterprise customers, allowing even greater purge throughput thanks to the efficiency of our newly developed <a href="https://blog.cloudflare.com/instant-purge/"><u>Instant Purge</u></a> system.</p>
    <div>
      <h2>Building a better purge: a two-year journey</h2>
      <a href="#building-a-better-purge-a-two-year-journey">
        
      </a>
    </div>
    <p>Stepping back, today's announcement represents roughly two years of focused engineering. Near the end of 2022, our team went heads down rebuilding Cloudflare’s purge pipeline with a clear yet challenging goal: dramatically increase our throughput while maintaining near-instant invalidation across our global network.</p><p>Cloudflare operates <a href="https://www.cloudflare.com/network"><u>data centers in over 335 cities worldwide</u></a>. Popular cached assets can reside across all of our data centers, meaning each purge request must quickly propagate to every location caching that content. Upon receiving a purge command, each data center must efficiently locate and invalidate cached content, preventing stale responses from being served. The amount of content that must be invalidated can vary drastically, from a single file, to all cached assets associated with a particular hostname. After the content has been purged, any subsequent requests will trigger retrieval of a fresh copy from the origin server, which will be stored in Cloudflare’s cache during the response. </p><p>Ensuring consistent, rapid propagation of purge requests across a vast network introduces substantial technical challenges, especially when accounting for occasional data center outages, maintenance, or network interruptions. Maintaining consistency under these conditions requires robust distributed systems engineering.</p>
    <div>
      <h2>How did we scale purge?</h2>
      <a href="#how-did-we-scale-purge">
        
      </a>
    </div>
    <p>We've <a href="https://blog.cloudflare.com/instant-purge/"><u>previously discussed</u></a> how our new Instant Purge system was architected to achieve sub-150 ms purge times. It’s worth noting that the performance improvements were only part of what our new architecture achieved, as it also helped us solve significant scaling challenges around storage and throughput that allowed us to bring Instant Purge to all users. </p><p>Initially, our purge system scaled well, but with rapid customer growth, the storage consumed by the millions of purge keys generated daily began to reduce available caching space. Early attempts to manage this storage and throughput demand involved <a href="https://www.boltic.io/blog/kafka-queue"><u>queues</u></a> and batching to smooth traffic spikes, but this introduced latency and underscored the tight coupling between increased usage and rising storage costs.</p><p>We needed to revisit our thinking on how to better store purge keys and when to remove purged content so we could reclaim space. Historically, when a customer purged by tag, prefix, or hostname, Cloudflare would mark the content as expired and allow it to be evicted later. This is known as lazy purge, because nothing is actively removed from disk. Lazy purge is fast, but not necessarily efficient, because it consumes storage for expired but not-yet-evicted content. After examining global and data center-level indexing for purge keys, we decided those weren't viable due to the added system complexity and the latency such indices could introduce given our network's size. So instead, we opted for per-machine indexing, integrating indices directly alongside our cache proxies. 
This minimized network complexity, simplified reliability, and provided predictable scaling.</p><p>After careful analysis and benchmarking, we selected <a href="https://rocksdb.org/"><u>RocksDB</u></a>, an embedded key-value store that we could optimize for our needs, which formed the basis of <a href="https://blog.cloudflare.com/instant-purge/#putting-it-all-together"><u>CacheDB</u></a>, our Rust-based service running alongside each cache proxy. CacheDB manages indexing and immediate purge execution (active purge), significantly reducing storage needs and freeing space for caching.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4FZ0bQSx5MUhx3x3hwlRuk/91a27af7db5e629cd6d5fbe692397eaf/image2.png" />
          </figure><p>Local queues within CacheDB buffer purge operations to ensure consistent throughput without latency spikes, while the cache proxies consult CacheDB to guarantee rapid, active purges. Our updated distribution pipeline broadcasts purges directly to CacheDB instances across machines, dramatically improving throughput and purge speed.</p><p>Using CacheDB, we've reduced storage requirements 10x by eliminating lazy purge storage accumulation, instantly freeing valuable disk space. The freed storage enhances cache retention, boosting cache HIT ratios and minimizing origin egress. These savings in storage and increased throughput allowed us to scale to the point where we can offer Instant Purge to more customers.</p><p>For more information on how we designed the new Instant Purge system, please see the previous <a href="https://blog.cloudflare.com/instant-purge/"><u>installment</u></a> of our Purge series blog posts. </p>
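<p>The storage difference between the two approaches can be sketched with a toy model (invented names, not CacheDB's actual code): lazy purge leaves expired bytes on disk until eviction, while active purge frees them immediately:</p>

```python
# Toy cache contrasting lazy purge (mark expired, reclaim space only at
# eviction time) with active purge (delete immediately, as a per-machine
# index like CacheDB makes practical).

class Cache:
    def __init__(self):
        self.store = {}  # key -> (body, expired flag)

    def put(self, key, body):
        self.store[key] = (body, False)

    def lazy_purge(self, key):
        body, _ = self.store[key]
        self.store[key] = (body, True)   # stale, but still occupying disk

    def active_purge(self, key):
        del self.store[key]              # disk space freed immediately

    def bytes_used(self):
        return sum(len(body) for body, _ in self.store.values())

c = Cache()
c.put("/a", b"x" * 100)
c.put("/b", b"x" * 100)
c.lazy_purge("/a")
assert c.bytes_used() == 200   # lazy purge reclaims nothing yet
c.active_purge("/b")
assert c.bytes_used() == 100   # active purge frees space right away
```

<p>At network scale, the bytes a lazy purge leaves behind are cache capacity that could be serving fresh content, which is the storage pressure described above.</p>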
    <div>
      <h2>Striking the right balance: what to purge and when</h2>
      <a href="#striking-the-right-balance-what-to-purge-and-when">
        
      </a>
    </div>
    <p>Moving on to practical considerations of using these new purge methods, it’s important to use the right method for what you want to invalidate. Purging too aggressively can overwhelm origin servers with unnecessary requests, driving up egress costs and potentially causing downtime. Conversely, insufficient purging leaves visitors with outdated content. Balancing precision and speed is vital.</p><p>Cloudflare supports multiple targeted purge methods to help customers achieve this balance.</p><ul><li><p><a href="https://developers.cloudflare.com/cache/how-to/purge-cache/purge-everything/"><b><u>Purge Everything</u></b></a>: Clears all cached content associated with a website.</p></li><li><p><a href="https://developers.cloudflare.com/cache/how-to/purge-cache/purge_by_prefix/"><b><u>Purge by Prefix</u></b></a>: Targets URLs sharing a common prefix.</p></li><li><p><a href="https://developers.cloudflare.com/cache/how-to/purge-cache/purge-by-hostname/"><b><u>Purge by Hostname</u></b></a>: Invalidates content by specific hostnames.</p></li><li><p><a href="https://developers.cloudflare.com/cache/how-to/purge-cache/purge-by-single-file/"><b><u>Purge by URL (single-file purge</u></b></a><b>)</b>: Precisely targets individual URLs.</p></li><li><p><a href="https://developers.cloudflare.com/cache/how-to/purge-cache/purge-by-tags/"><b><u>Purge by Tag</u></b></a>: Uses <a href="https://developers.cloudflare.com/cache/how-to/purge-cache/purge-by-tags/#add-cache-tag-http-response-headers"><u>Cache-Tag</u></a> headers to invalidate grouped assets, offering flexibility for complex cache management scenarios.</p></li></ul><p>Starting today, all of these methods are available to every Cloudflare customer.    </p>
    <div>
      <h2>How to purge </h2>
      <a href="#how-to-purge">
        
      </a>
    </div>
    <p>Users can select their purge method directly in the Cloudflare dashboard, located under the Cache tab in the <a href="https://dash.cloudflare.com/?to=/:account/:zone/caching/configuration"><u>configurations section</u></a>, or via the <a href="https://developers.cloudflare.com/api/resources/cache/"><u>Cloudflare API</u></a>. Each purge request should clearly specify the targeted URLs, hostnames, prefixes, or cache tags relevant to the selected purge type (known as purge keys). For instance, a prefix purge request might specify a directory such as example.com/foo/bar. To maximize efficiency and throughput, batching multiple purge keys in a single request is recommended over sending individual purge requests each with a single key.</p>
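<p>To illustrate batching, here is a hypothetical Python sketch (function and tag names are illustrative, not from this post) that splits a list of purge keys into payloads of at most 100 keys each, the per-request key limit, in the general shape the purge API expects for cache tags:</p>

```python
# Hypothetical sketch: split purge keys into batched payloads of at most
# 100 keys each (the per-request limit), rather than sending one request
# per key. The payload field ("tags" here) matches the purge key type.

def batch_purge_payloads(keys, key_type="tags", max_keys=100):
    """Split a list of purge keys into payloads of at most max_keys each."""
    return [{key_type: keys[i:i + max_keys]}
            for i in range(0, len(keys), max_keys)]

# 250 cache tags become 3 batched requests instead of 250 single-key ones.
payloads = batch_purge_payloads([f"product-{n}" for n in range(250)])
print(len(payloads))             # 3
print(len(payloads[0]["tags"]))  # 100
```

<p>Each payload would then be sent as a single purge request, consuming one rate-limit token per batch instead of one per key.</p>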
    <div>
      <h2>How much can you purge?</h2>
      <a href="#how-much-can-you-purge">
        
      </a>
    </div>
<p>The new rate limits for Cloudflare's purge by tag, prefix, hostname, and purge everything are different for each plan type. We use a <a href="https://en.wikipedia.org/wiki/Token_bucket"><u>token bucket</u></a> rate limit system, so each account has a token bucket with a maximum size based on plan type. When we receive a purge request, we first add tokens to the account’s bucket equal to the time elapsed since the account’s last purge request multiplied by the refill rate for its plan type (which can yield a fraction of a token). Then we check if there’s at least one whole token in the bucket, and if so we remove it and process the purge request. If not, the purge request will be rate limited. An easy way to think about this rate limit is that the refill rate represents the consistent rate of requests a user can send in a given period, while the bucket size represents the maximum burst of requests available.</p><p>For example, a free user starts with a bucket size of 25 requests and a refill rate of 5 requests per minute (one request per 12 seconds). If the user were to send 26 requests all at once, the first 25 would be processed, but the last request would be rate limited. They would need to wait 12 seconds and retry their last request for it to succeed. 
</p><p>The current limits are applied per <b>account</b>: </p><table><tr><td><p><b>Plan</b></p></td><td><p><b>Bucket size</b></p></td><td><p><b>Request refill rate</b></p></td><td><p><b>Max keys per request</b></p></td><td><p><b>Total keys</b></p></td></tr><tr><td><p><b>Free</b></p></td><td><p>25 requests</p></td><td><p>5 per minute</p></td><td><p>100</p></td><td><p>500 per minute</p></td></tr><tr><td><p><b>Pro</b></p></td><td><p>25 requests</p></td><td><p>5 per second</p></td><td><p>100</p></td><td><p>500 per second</p></td></tr><tr><td><p><b>Biz</b></p></td><td><p>50 requests</p></td><td><p>10 per second</p></td><td><p>100</p></td><td><p>1,000 per second</p></td></tr><tr><td><p><b>Enterprise</b></p></td><td><p>500 requests</p></td><td><p>50 per second</p></td><td><p>100</p></td><td><p>5,000 per second</p></td></tr></table><p>More detailed documentation on all purge rate limits can be found in our <a href="https://developers.cloudflare.com/cache/how-to/purge-cache/"><u>documentation</u></a>.</p>
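<p>The token bucket mechanics described above can be sketched in a few lines of Python. This is an illustrative model, not Cloudflare's implementation; the numbers use the Free plan's bucket size of 25 and refill rate of 5 per minute from the table:</p>

```python
# Illustrative token-bucket model (not Cloudflare's implementation) of the
# rate limit described above: tokens accrue with elapsed time, possibly in
# fractions, capped at the bucket size; a purge request is processed only
# if at least one whole token is available.

class TokenBucket:
    def __init__(self, bucket_size, refill_per_second):
        self.capacity = bucket_size
        self.rate = refill_per_second
        self.tokens = float(bucket_size)  # bucket starts full
        self.last = 0.0                   # timestamp of the last request

    def allow(self, now):
        # Refill based on time elapsed since the last request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True      # process the purge request
        return False         # rate limited

# Free plan: bucket of 25 requests, refill of 5 per minute.
free = TokenBucket(25, 5 / 60)
results = [free.allow(now=0.0) for _ in range(26)]
print(results.count(True))   # 25: the 26th burst request is rate limited
print(free.allow(now=12.0))  # True: one whole token refilled after 12 s
```

<p>The bucket size caps the burst, while the refill rate caps the sustained request rate, matching the two middle columns of the table.</p>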
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We’ve spent a lot of time optimizing our purge platform. But we’re not done yet. Looking forward, we will continue to enhance the performance of Cloudflare’s single-file purge. The current P50 performance is around 250 ms, and we suspect that we can optimize it further to bring it under 200 ms. We will also build out our ability to allow for greater purge throughput for all of our systems, and will continue to find ways to implement filtering techniques to ensure we can continue to scale effectively and allow customers to purge whatever and whenever they choose. </p><p>We invite you to try out our new purge system today and deliver an instant, seamless experience to your visitors.</p> ]]></content:encoded>
            <category><![CDATA[Cache]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Cache Purge]]></category>
            <guid isPermaLink="false">4LTq8Utw6K58W4ojKxsqw8</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator> Connor Harwood</dc:creator>
            <dc:creator>Zaidoon Abd Al Hadi</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Speed Brain: helping web pages load 45% faster]]></title>
            <link>https://blog.cloudflare.com/introducing-speed-brain/</link>
            <pubDate>Wed, 25 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Speed Brain uses the Speculation Rules API to prefetch content for the user's likely next navigations. The goal is to download a web page to the browser before a user navigates to it. 
 ]]></description>
            <content:encoded><![CDATA[ <p>Each time a user visits your web page, they are initiating a race to receive content as quickly as possible. Performance is a critical <a href="https://www.speedcurve.com/blog/psychology-site-speed/"><u>factor</u></a> that influences how visitors interact with your site. Some might think that moving content across the globe introduces significant latency, but for a while, network transmission speeds have approached their <a href="https://blog.cloudflare.com/fastest-internet/"><u>theoretical limits</u></a>. To put this into perspective, data on Cloudflare can traverse the 11,000 kilometer round trip between New York and London in about 76 milliseconds – faster than the <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4043155/#:~:text=one%20blink%20lasts%20about%201/3%20s"><u>blink of an eye</u></a>.</p><p>However, delays in loading web pages persist due to the complexities of processing requests, responses, and configurations. In addition to pushing advancements in <a href="https://blog.cloudflare.com/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/"><u>connection establishment</u></a>, <a href="https://blog.cloudflare.com/this-is-brotli-from-origin/"><u>compression</u></a>, <a href="https://blog.cloudflare.com/cloudflare-gen-12-server-bigger-better-cooler-in-a-2u1n-form-factor/"><u>hardware</u></a>, and <a href="https://blog.cloudflare.com/pingora-open-source/"><u>software</u></a>, we have built a new way to reduce page load latency by anticipating how visitors will interact with a given web page. </p><p>Today we are very excited to share the latest leap forward in speed: <b>Speed Brain</b>. It relies on the <a href="https://developer.chrome.com/docs/web-platform/prerender-pages"><u>Speculation Rules API </u></a>to <a href="https://developer.mozilla.org/en-US/docs/Glossary/Prefetch"><u>prefetch</u></a> the content of the user's likely next navigations. 
The main goal of Speed Brain is to download a web page to the browser cache before a user navigates to it, allowing pages to load almost instantly when the actual navigation takes place. </p><p>Our initial approach uses a <a href="https://developer.chrome.com/blog/speculation-rules-improvements#eagerness"><u>conservative</u></a> model that prefetches static content for the next page when a user starts a touch or <a href="https://developer.mozilla.org/en-US/docs/Web/API/Element/click_event"><u>click event</u></a>. Through the fourth quarter of 2024 and into 2025, we will offer more aggressive speculation models, such as speculatively <a href="https://developer.mozilla.org/en-US/docs/Glossary/Prerender"><u>prerendering</u></a> (not just fetching the page before the navigation happens but rendering it completely) for an even faster experience. Eventually, Speed Brain will learn how to eliminate latency for your static website, without any configuration, and work with browsers to make sure that it loads as fast as possible.  </p><p>To illustrate, imagine an ecommerce website selling clothing. Using the insights from our global request logs, we can predict with high accuracy that a typical visitor is likely to click on ‘Shirts’ when viewing the parent page ‘Mens &gt; Clothes’. Based on this, we can start delivering static content, like images, before the shopper even clicks the ‘Shirts’ link. As a result, when they inevitably click, the page loads instantly. <b>Recent lab testing of our aggressive loading model implementation has shown up to a 75% reduction in </b><a href="https://developer.mozilla.org/en-US/docs/Glossary/Largest_contentful_paint"><b><u>Largest Contentful Paint (LCP)</u></b></a>, the time it takes for the largest visible element (like an image, video, or text block) to load and render in the browser.</p><p>The best part? We are making Speed Brain available to all plan types immediately and at no cost. 
Simply toggle on the Speed Brain feature for your website from the <a href="https://dash.cloudflare.com/?to=/:account/:zone/speed/optimization/content"><u>dashboard</u></a> or the <a href="https://developers.cloudflare.com/api/operations/zone-settings-get-speed-brain-setting"><u>API</u></a>. It’ll feel like magic, but behind the scenes it's a lot of clever engineering. </p><p>We have already enabled Speed Brain by default on <u>all</u> free domains and are seeing a <b>reduction in LCP of 45% on successful prefetches.</b> Pro, Business, and Enterprise domains need to enable Speed Brain manually. If you have not done so already, we <b>strongly</b> recommend also <a href="https://developers.cloudflare.com/web-analytics/get-started/#sites-proxied-through-cloudflare"><u>enabling Real User Measurements (RUM)</u></a> via your dashboard so you can see your new and improved web page performance. As a bonus, enabling RUM for your domain will help us provide <b>improved</b> and <b>customized</b> prefetching and prerendering rules for your website in the near future!</p>
    <div>
      <h2>How browsers work at a glance</h2>
      <a href="#how-browsers-work-at-a-glance">
        
      </a>
    </div>
    <p>Before discussing how Speed Brain can help load content exceptionally fast, we need to take a step back to review the complexity of loading content on browsers. Every time a user navigates to your web page, a series of request and response cycles must be completed. </p><p>After the browser <a href="https://www.debugbear.com/blog/http-server-connections"><u>establishes a secure connection</u></a> with a server, it sends an HTTP request to retrieve the base document of the web page. The server processes the request, constructs the necessary HTML document and sends it back to the browser in the response. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6hxlOWcShKyU4y6sqNYQs1/5717e4ebc130887376d629f9926d7a98/BLOG-2422_2.png" />
          </figure><p>When the browser receives an HTML document, it immediately begins <a href="https://developer.mozilla.org/en-US/docs/Web/Performance/How_browsers_work#parsing"><u>parsing</u></a> the content. During this process, it may encounter references to external resources such as CSS files, JavaScript, images, and fonts. These subresources are essential for rendering the page correctly, so the browser issues additional HTTP requests to fetch them. However, if these resources are available in the browser’s cache, the browser can retrieve them locally, significantly reducing <a href="https://www.cloudflare.com/learning/performance/glossary/what-is-latency/"><u>network latency</u></a> and improving page load times.</p><p>As the browser processes HTML, CSS, and JavaScript, the rendering engine begins to display content on the screen. Once the page’s visual elements are displayed, user interactions — like clicking a link — prompt the browser to restart much of this process to fetch new content for the next page. This workflow is typical of every browsing session: as users navigate, the browser continually fetches and renders new or uncached resources, introducing a delay before the new page fully loads.</p><p>Take the example of a user navigating the shopping site described above. As the shopper moves from the homepage to the ‘men's’ section of the site to the ‘clothing’ section to the ‘shirts’ section, the time spent on retrieving each of those subsequent pages can add up and contribute to the shopper leaving the site before they complete the transaction.  </p><p>Ideally, having prefetched and prerendered pages present in the browser at the time each of those links are clicked would eliminate much of the network latency impact, allowing the browser to load content instantly and providing a smoother user experience. </p>
    <div>
      <h2>Wait, I’ve heard this story before (how did we get to Speed Brain?)</h2>
      <a href="#wait-ive-heard-this-story-before-how-did-we-get-to-speed-brain">
        
      </a>
    </div>
    <p>We know what you’re thinking. We’ve had prefetching for years. There have even been several speculative prefetching efforts in the past. You’ve heard this all before. How is this different now?</p><p>You’re right, of course. Over the years, there has been a constant effort by developers and browser vendors to optimize page load times and enhance user experience across the web. Numerous techniques have been developed, spanning various layers of the Internet stack — from optimizing network layer connectivity to preloading application content closer to the client.</p>
    <div>
      <h4>Early prefetching: lack of data and flexibility</h4>
      <a href="#early-prefetching-lack-of-data-and-flexibility">
        
      </a>
    </div>
    <p>Web prefetching has been one such technique that has existed for more than a decade. It is based on the assumption that certain subresources are likely to be needed in the near future, so why not fetch them proactively? This could include anything from HTML pages to images, stylesheets, or scripts that the user might need as they navigate through a website. In fact, the core concept of speculative execution is not new, as it's a general technique that's been employed in various areas of computer science for years, with <a href="https://en.wikipedia.org/wiki/Branch_predictor"><u>branch prediction</u></a> in CPUs as a prime example.</p><p>In the early days of the web, several custom prefetching solutions emerged to enhance performance. For example, in 2005, Google introduced the <a href="https://google.fandom.com/wiki/Google_Web_Accelerator"><u>Google Web Accelerator</u></a>, a client-side application aimed at speeding up browsing for broadband users. Though innovative, the project was short-lived due to privacy and compatibility issues (we will describe how Speed Brain is different below). Predictive prefetching at that time lacked the data insights and API support for capturing user behavior, especially those handling sensitive actions like deletions or purchases.</p>
    <div>
      <h4>Static lists and manual effort</h4>
      <a href="#static-lists-and-manual-effort">
        
      </a>
    </div>
    <p>Traditionally, prefetching has been accomplished through the use of the <code>&lt;link rel="prefetch"&gt;</code> attribute as one of the <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/rel/prefetch"><u>Resource Hints</u></a>. Developers had to manually specify the attribute on each page for each resource they wanted the browser to preemptively fetch and cache in memory. This manual effort has not only been laborious but developers often lacked insight into what resources should be prefetched, which reduced the quality of their specified hints.</p><p>In a similar vein, <a href="https://developers.cloudflare.com/speed/optimization/content/prefetch-urls/"><u>Cloudflare has offered a URL prefetching feature since 2015</u></a>. Instead of prefetching in browser cache, Cloudflare allows customers to prefetch a static list of resources into the CDN cache. The feature allows prefetching resources in advance of when they are actually needed, usually during idle time or when network conditions are favorable. However, similar concerns apply for CDN prefetching, since customers have to manually decide on what resources are good candidates for prefetching for each page they own. If misconfigured, static link prefetching can be a <a href="https://en.wiktionary.org/wiki/footgun"><u>footgun</u></a>, causing the web page load time to actually slow down.</p>
    <div>
      <h4>Server Push and its struggles</h4>
      <a href="#server-push-and-its-struggles">
        
      </a>
    </div>
    <p><a href="https://datatracker.ietf.org/doc/html/rfc9113#name-server-push"><u>HTTP/2’s "server push"</u></a> was another attempt to improve web performance by pushing resources to the client before they were requested. In theory, this would reduce latency by eliminating the need for additional round trips for future assets. However, the server-centric dictatorial nature of "pushing" resources to the client raised significant challenges, primarily due to lack of context about what was already cached in the browser. This not only wasted bandwidth but had the potential to slow down the delivery of critical resources, like base HTML and CSS, due to race conditions on browser fetches when rendering the page. The <a href="https://datatracker.ietf.org/doc/html/draft-kazuho-h2-cache-digest-01"><u>proposed solution of cache digests</u></a>, which would have informed servers about client cache contents, never gained widespread implementation, leaving servers to push resources blindly. <a href="https://developer.chrome.com/blog/removing-push"><u>In October 2022, Google Chrome removed Server Push support</u></a>, and in September 2024, <a href="https://groups.google.com/a/mozilla.org/g/dev-platform/c/vU9hJg343U8/m/4cZsHz7TAQAJ"><u>Firefox followed suit</u></a>.</p>
    <div>
      <h4>A step forward with Early Hints</h4>
      <a href="#a-step-forward-with-early-hints">
        
      </a>
    </div>
<p>As a successor, <a href="https://datatracker.ietf.org/doc/html/rfc8297"><u>Early Hints</u></a> was specified in 2017 but not widely adopted until 2022, when <a href="https://blog.cloudflare.com/early-hints"><u>we partnered with browsers and key customers to deploy it</u></a>. It offers a more efficient alternative by "hinting" to clients which resources to load, allowing better prioritization based on what the browser needs. Specifically, the server sends a 103 Early Hints HTTP status code with a list of key page assets that the browser should start loading while the main response is still being prepared. This gives the browser a head start in fetching essential resources and avoids redundant preloading if assets are already cached. However, Early Hints doesn't (yet) adapt to user behaviors or dynamic page conditions, and its use is primarily limited to preloading specific assets rather than full web pages — in particular, cases where there is a long server “think time” to produce HTML.</p><p>As the web evolves, tools that can handle complex, dynamic user interactions will become increasingly important to balance the performance gains of speculative execution with its potential drawbacks for end-users. For years Cloudflare has offered performance-based solutions that adapt to user behavior and balance speed and correctness decisions across the Internet, like <a href="https://www.cloudflare.com/application-services/products/argo-smart-routing/"><u>Argo Smart Routing</u></a>, <a href="https://blog.cloudflare.com/introducing-smarter-tiered-cache-topology-generation/"><u>Smart Tiered Cache</u></a>, and <a href="https://developers.cloudflare.com/workers/configuration/smart-placement/"><u>Smart Placement</u></a>. Today we take another step forward toward an adaptable framework for serving content lightning-fast. </p>
    <div>
      <h2>Enter Speed Brain: what makes it different?</h2>
      <a href="#enter-speed-brain-what-makes-it-different">
        
      </a>
    </div>
    <p>Speed Brain offers a robust approach for implementing predictive prefetching strategies directly within the browser based on the ruleset returned by our servers. By building on lessons from previous attempts, it shifts the responsibility for resource prediction to the client, enabling more dynamic and personalized optimizations based on user interaction – like hovering over a link, for example – and their device capabilities. Instead of the browser sitting idly waiting for the next web page to be requested by the user, it takes cues from how a user is interacting with a page and begins asking for the next web page before the user finishes clicking on a link.</p><p>Behind the scenes, all of this magic is made possible by the <a href="https://developer.chrome.com/docs/web-platform/prerender-pages#speculation-rules-api"><u>Speculation Rules API</u></a>, which is an emerging standard in the web performance space from Google. When Cloudflare’s Speed Brain feature is enabled, an HTTP header called Speculation-Rules is added to web page responses. The value for this header is a URL that hosts an opinionated Rules configuration. This configuration instructs the browser to initiate prefetch requests for future navigations. Speed Brain does not improve page load time for the first page that is visited on a website, but it can improve it for subsequent web pages that are visited on the same site.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Fu2ADDum0Pp9e5kq94Jd4/3eba0ffc4d94c42af67d4e9c1c708a29/BLOG-2422_3.png" />
          </figure><p>The idea seems simple enough, but <a href="https://developer.mozilla.org/en-US/docs/Web/API/Speculation_Rules_API#unsafe_prefetching"><u>prefetching comes with challenges</u></a>, as some prefetched content may never end up being used. With the initial release of Speed Brain, we have designed a solution with guardrails that addresses two important but distinct issues that limited previous speculation efforts — <i>stale prefetch configuration</i> and <i>incorrect prefetching. </i>The Speculation Rules API configuration we have chosen for this initial release has been carefully designed to balance safety of prefetching while still maintaining broad applicability of rules for the entire site.</p>
    <div>
      <h4>Stale prefetch configuration</h4>
      <a href="#stale-prefetch-configuration">
        
      </a>
    </div>
    <p>As websites inevitably change over time, static prefetch configurations often become outdated, leading to <a href="https://www.cloudflare.com/learning/cdn/common-cdn-issues/">inefficient or ineffective prefetching</a>. This has been especially true for techniques like the rel=prefetch attribute or static CDN prefetching URL sets, which have required developers to manually maintain relevant prefetchable URL lists for each page of their website. Most static prefetch lists are based on developer intuition rather than real user navigation data, potentially missing important prefetch opportunities or wasting resources on unnecessary prefetches. </p>
    <div>
      <h4>Incorrect prefetching</h4>
      <a href="#incorrect-prefetching">
        
      </a>
    </div>
<p>Since prefetch requests are just like normal requests except with a <code>sec-purpose</code> HTTP request header, they incur the same overhead on the client, network, and server. However, the crucial difference is that prefetch requests anticipate user behavior and the response might not end up being used, <a href="https://developer.mozilla.org/en-US/docs/Web/API/Speculation_Rules_API#unsafe_prefetching"><u>so all that overhead might be wasted</u></a>. This makes prefetch accuracy extremely important — that is, maximizing the percentage of prefetched pages that end up being viewed by the user. Incorrect prefetching can lead to inefficiencies and unneeded costs, such as caching resources that aren't requested, or wasting bandwidth and network resources, which is especially critical on metered mobile networks or in low-bandwidth environments.</p>
    <div>
      <h3>Guardrails</h3>
      <a href="#guardrails">
        
      </a>
    </div>
    <p>With the initial release of Speed Brain, we have designed a solution with important side effect prevention guardrails that completely removes the chance of <i>stale prefetch configuration</i>, and minimizes the risk of<i> incorrect prefetching</i>. This opinionated configuration is achieved by leveraging the <a href="https://developer.chrome.com/blog/speculation-rules-improvements"><u>document rules</u></a> and <a href="https://developer.chrome.com/blog/speculation-rules-improvements#eagerness"><u>eagerness</u></a> settings from the <a href="https://developer.chrome.com/docs/web-platform/prerender-pages#speculation-rules-api"><u>Speculation Rules API</u></a>. Our chosen configuration looks like the following:</p>
            <pre><code>{
  "prefetch": [{
    "source": "document",
    "where": {
      "and": [
        { "href_matches": "/*", "relative_to": "document" }
      ]
    },
    "eagerness": "conservative"
  }]
}
</code></pre>
            
    <div>
      <h5>Document Rules</h5>
      <a href="#document-rules">
        
      </a>
    </div>
<p><a href="https://developer.chrome.com/blog/speculation-rules-improvements"><u>Document Rules</u></a>, indicated by "source": "document" and the "where" key in the configuration, allows prefetching to be applied dynamically over the entire web page. This eliminates the need for a predefined static URL list for prefetching. Hence, we remove the problem of <i>stale prefetch configuration</i> as prefetch candidate links are determined based on the active page structure.</p><p>Our use of "relative_to": "document" in the where clause instructs the browser to limit prefetching to same-site links. This has the added bonus of avoiding cross-origin prefetches and any privacy implications for users, as prefetching doesn’t follow them around the web. </p>
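<p>As a rough illustration (a simplification, not the browser's actual URLPattern engine), the effect of matching "/*" relative to the document is to select only links that resolve to the same origin as the current page:</p>

```python
from urllib.parse import urljoin, urlsplit

# Simplified illustration (not the browser's URLPattern engine) of what
# "href_matches": "/*" with "relative_to": "document" effectively selects:
# links that resolve to the same origin as the current page.

def is_prefetch_candidate(document_url, href):
    """Return True if href resolves to the same origin as document_url."""
    doc = urlsplit(document_url)
    link = urlsplit(urljoin(document_url, href))
    return (link.scheme, link.netloc) == (doc.scheme, doc.netloc)

page = "https://shop.example/mens/clothes"
print(is_prefetch_candidate(page, "/mens/shirts"))            # True
print(is_prefetch_candidate(page, "https://tracker.example")) # False
```

<p>Relative and absolute same-origin links qualify; anything pointing at another origin is never speculatively fetched.</p>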
    <div>
      <h5>Eagerness</h5>
      <a href="#eagerness">
        
      </a>
    </div>
<p><a href="https://developer.chrome.com/docs/web-platform/prerender-pages#eagerness"><u>Eagerness</u></a> controls how aggressively the browser prefetches content. There are four possible settings:</p><ul><li><p><b><i>immediate</i></b><i>: Used as soon as possible on page load — generally as soon as the rule value is seen by the browser, it starts prefetching the next page.</i></p></li><li><p><b><i>eager</i></b><i>: Identical to the immediate setting above, but the prefetch trigger additionally relies on slight user interaction events, such as moving the cursor towards the link (coming soon).</i></p></li><li><p><b><i>moderate</i></b><i>: Prefetches if you hold the pointer over a link for more than 200 milliseconds (or on the </i><a href="https://developer.mozilla.org/docs/Web/API/Element/pointerdown_event"><i><u>pointerdown</u></i></a><i> event if that is sooner, and on mobile where there is no hover event).</i></p></li><li><p><b><i>conservative</i></b><i>: Prefetches on pointer or touch down on the link.</i></p></li></ul><p>Our initial release of Speed Brain makes use of the <b><u>conservative</u></b> eagerness value to minimize the risk of incorrect prefetching and the resource waste it can cause, while still making your websites noticeably faster. While we lose out on the potential performance improvements that the more aggressive eagerness settings offer, we chose this cautious approach to prioritize safety for our users. Looking ahead, we plan to explore more dynamic eagerness settings for sites that could benefit from a more liberal setting, and we'll also expand our rules to include <a href="https://developer.mozilla.org/en-US/docs/Glossary/Prerender"><u>prerendering</u></a>.</p><p>Another important safeguard we implement is to only accept prefetch requests for static content that is already stored in our CDN cache. If the content isn't in the cache, we reject the prefetch request. 
Retrieving content directly from our CDN cache for prefetching requests lets us bypass concerns about their cache eligibility. The rationale for this is straightforward: if a page is not eligible for caching, we don't want it to be prefetched in the browser cache, as it could lead to unintended consequences and increased origin load. For instance, prefetching a logout page might log the user out prematurely before the user actually finishes their action. Stateful prefetching or prerendering requests can have unpredictable effects, potentially altering the server's state for actions the client has not confirmed. By only allowing prefetching for pages already in our CDN cache, we have confidence those pages will not negatively impact the user experience.</p><p>These guardrails were implemented to work in performance-sensitive environments. We measured the impact of our baseline conservative deployment model on all pages across <a href="https://developers.cloudflare.com/"><u>Cloudflare’s developer documentation</u></a> in early July 2024. We found that we were able to prefetch the correct content, content that would in fact be navigated to by the users, <b>94</b>% of the time. We did this while improving the performance of the navigation by reducing LCP at p75 quantile by <b>40</b>% without inducing any unintended side effects. The results were amazing!</p>
    <div>
      <h2>Explaining Cloudflare’s implementation </h2>
      <a href="#explaining-cloudflares-implementation">
        
      </a>
    </div>
    <p>Our global <a href="https://www.cloudflare.com/network"><u>network</u></a> spans over 330 cities and operates within 50 milliseconds of 95% of the Internet-connected population. This extensive reach allows us to significantly improve the performance of cacheable assets for our customers. By leveraging this network for smart prefetching with Speed Brain, Cloudflare can serve prefetched content directly from the CDN cache, reducing network latency to practically instant.</p><p>Our unique position on the network provides us the leverage to automatically enable Speed Brain without requiring any changes from our customers to their origin server configurations. It's as simple as flipping a switch! Our first version of Speed Brain is now live.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/O3RwzUlOj1GlJuMSrq5TW/55d9ef31a2de21034eb036b8c029d5b6/BLOG-2422_4.png" />
</figure><ul><li><p>Upon receiving a request for a web page with Speed Brain enabled, the Cloudflare server returns an additional "Speculation-Rules" HTTP response header. The value for this header is a URL that hosts an opinionated Rules configuration (as mentioned above).</p></li><li><p>When the browser begins parsing the response header, it fetches our Speculation-Rules configuration, and loads it as part of the web page.</p></li><li><p>The configuration guides the browser on when to prefetch the next likely page from Cloudflare that the visitor may navigate to, based on how the visitor is engaging with the page.</p></li><li><p>When a user action (such as a mouse down event on the next page link) triggers the Rules application, the browser sends a prefetch request for that page with the "sec-purpose: prefetch" HTTP request header.</p></li><li><p>Our server parses the request header to identify the prefetch request. If the requested content is present in our cache, we return it; otherwise,<b> we return a 503 HTTP status </b>code and deny the prefetch request. This removes the risk of unsafe side-effects of sending requests to origins or Cloudflare Workers that are unaware of prefetching. Only content already present in the cache is ever returned to a prefetch request.</p></li><li><p>On a successful response, the browser stores the prefetched content in memory, and when the visitor navigates to that page, the browser loads it directly from the browser cache for immediate rendering.</p></li></ul>
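<p>The cache-only guardrail in the flow above can be sketched as a small decision function. This is illustrative Python, not Cloudflare's implementation; a dict stands in for the CDN cache and <code>fetch_origin</code> for the request path to the origin server:</p>

```python
# Sketch of the cache-only prefetch guardrail from the flow above
# (illustrative Python; a dict stands in for the CDN cache, and
# fetch_origin for the request path to the origin server).

def handle_request(headers, path, cdn_cache, fetch_origin):
    is_prefetch = "prefetch" in headers.get("sec-purpose", "")
    if path in cdn_cache:
        return 200, cdn_cache[path]   # cache hit: safe to serve
    if is_prefetch:
        return 503, None              # miss: deny; never touch the origin
    return 200, fetch_origin(path)    # normal request: go to origin

cache = {"/mens/shirts": "<html>shirts</html>"}
origin = lambda p: f"<html>origin {p}</html>"

print(handle_request({"sec-purpose": "prefetch"}, "/mens/shirts", cache, origin)[0])  # 200
print(handle_request({"sec-purpose": "prefetch"}, "/checkout", cache, origin)[0])     # 503
print(handle_request({}, "/checkout", cache, origin)[0])                              # 200
```

<p>Speculative requests can never reach the origin, so a stateful page like a logout or checkout endpoint cannot be triggered by a prefetch.</p>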
    <div>
      <h2>Common troubleshooting patterns </h2>
      <a href="#common-troubleshooting-patterns">
        
      </a>
    </div>
    <p>Support for Speed Brain relies on the <a href="https://developer.chrome.com/docs/web-platform/prerender-pages"><u>Speculation Rules API</u></a>, an emerging web standard. As of September 2024, <a href="https://caniuse.com/mdn-http_headers_speculation-rules"><u>support for this standard</u></a> is limited to <b>Chromium-based browsers (version 121 or later)</b>, such as Google Chrome and Microsoft Edge. As the web community reaches consensus on API standardization, we hope to see wider adoption across other browser vendors.</p><p>Prefetching by nature does not apply to dynamic content, as the state of such content can change, potentially leading to stale or outdated data being delivered to the end user, as well as increased origin load. Therefore, Speed Brain only works for non-dynamic pages of your website that are cached on our network. It has no impact on the loading of dynamic pages. To get the most benefit out of Speed Brain, we suggest making use of <a href="https://developers.cloudflare.com/cache/how-to/cache-rules/"><u>cache rules</u></a> to ensure that all static content (<b>especially HTML content</b>) on your site is eligible for caching.</p><p>When the browser receives a 503 HTTP status code in response to a speculative prefetch request (marked by the sec-purpose: prefetch header), it cancels the prefetch attempt. Although a 503 error appearing in the browser's console may seem alarming, it is completely harmless: it simply indicates that the prefetch attempt was cancelled. Even so, in our early tests the 503 response code has caused some site owners concern. 
We are working with our partners to iterate on this and improve the client experience, but for now we follow the specification guidance, <a href="https://developer.mozilla.org/en-US/docs/Web/API/Speculation_Rules_API#:~:text=Using%20a%20non%2Dsuccess%20code%20(for%20example%20a%20503)%20is%20the%20easiest%20way%20to%20prevent%20speculative%20loading"><u>which suggests a 503 response</u></a> so that the browser safely discards the speculative request. We're in active discussions with Chrome, based on feedback from early beta testers, and believe a dedicated non-error response code would be more appropriate and cause less confusion. In the meantime, 503 response logs for prefetch requests related to Speed Brain are harmless. If your tooling makes ignoring these requests difficult, you can temporarily disable Speed Brain until we work out something better with the Chrome team.</p><p>Additionally, when a website uses both its own custom Speculation Rules and Cloudflare's Speed Brain feature, both rule sets can operate simultaneously. Cloudflare’s guardrails will limit speculation rules to cacheable pages, which may be an unexpected limitation for those with existing implementations. If you observe such behavior, consider disabling one of the implementations for your site to ensure consistent behavior. Note that if your origin server responses include the Speculation-Rules header, it will not be overridden. Therefore, the potential for ruleset conflicts primarily applies to predefined in-line speculation rules.</p>
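<p>The server-side rule described above can be sketched as follows (a simplified model, not Cloudflare's actual implementation; the cache is a plain dictionary here):</p>

```python
# Simplified sketch of the prefetch-handling rule: prefetch requests are
# served only from cache; a prefetch that misses the cache is refused with
# a 503 so the browser silently discards the speculation, and no prefetch
# ever reaches the origin.

def handle_request(headers: dict, cache: dict, path: str):
    is_prefetch = headers.get("sec-purpose", "").startswith("prefetch")
    if path in cache:
        return 200, cache[path]            # cache HIT: safe to serve
    if is_prefetch:
        return 503, None                   # cache MISS on a prefetch: deny
    return 200, "fetched from origin"      # normal request: forward to origin
```

<p>The key property is that the prefetch branch can never trigger an origin fetch, which is what removes the side-effect risk for prefetch-unaware origins.</p>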
    <div>
      <h2>How can I see the impact of Speed Brain?</h2>
      <a href="#how-can-i-see-the-impact-of-speed-brain">
        
      </a>
    </div>
    <p>In general, we suggest that you use Speed Brain and most other Cloudflare performance <a href="https://developers.cloudflare.com/speed/"><u>features</u></a> with our <a href="https://developers.cloudflare.com/web-analytics/get-started/#sites-proxied-through-cloudflare"><u>RUM performance measurement tool</u></a> enabled. Our RUM feature helps developers and website operators understand how their end users are experiencing the performance of their application, providing visibility into:</p><ul><li><p><b>Loading</b>: How long did it take for content to become available?</p></li><li><p><b>Interactivity</b>: How responsive is the website when users interact with it?</p></li><li><p><b>Visual stability</b>: How much does the page move around while loading?</p></li></ul><p>With RUM enabled, you can navigate to the Web Analytics section in the dashboard to see important information about how Speed Brain is helping reduce latency in your <a href="https://www.cloudflare.com/learning/performance/what-are-core-web-vitals/"><u>core web vitals</u></a> metrics like Largest Contentful Paint (LCP) and load time. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6zv2kRvsot12PSNwwaWCad/25e9e56d6f3769b04a8752f99c656f3b/BLOG-2422_5.png" />
          </figure><p><sub><i>Example RUM dashboard for a website with a high amount of prefetchable content that enabled Speed Brain around September 16.</i></sub></p>
    <div>
      <h2>What have we seen in our rollout so far? </h2>
      <a href="#what-have-we-seen-in-our-rollout-so-far">
        
      </a>
    </div>
    <p>We have enabled this feature by default on all free plans and have observed the following:</p>
    <div>
      <h3>Domains</h3>
      <a href="#domains">
        
      </a>
    </div>
    <p>Cloudflare currently has tens of millions of domains using Speed Brain. We measured the LCP at the 75th percentile (p75) for these sites and found an improvement of between 40% and 50% (averaging around 45%). </p><p>We found this improvement by comparing navigational prefetches to normal (non-prefetched) page loads for the same set of domains. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/fNrRh84I3iZpHiEieNPg8/a789d9d9a736a8eb76fa6120a84cdf10/BLOG-2422_6.png" />
          </figure>
    <div>
      <h3>Requests</h3>
      <a href="#requests">
        
      </a>
    </div>
    <p>Before Speed Brain is enabled, free websites on Cloudflare experience a p75 LCP of around 2.2 seconds. With Speed Brain enabled, these sites see significant latency savings on LCP. In aggregate, Speed Brain saves between about 0.88 seconds and 1.1 seconds on each successful prefetch! </p>
    <div>
      <h3>Applicable browsers</h3>
      <a href="#applicable-browsers">
        
      </a>
    </div>
    <p>Currently, the Speculation Rules API is only available in Chromium browsers. From Cloudflare Radar, we can see that approximately <a href="https://radar.cloudflare.com/adoption-and-usage"><u>70% of requests</u></a> from visitors are from <a href="https://en.wikipedia.org/wiki/Chromium_(web_browser)"><u>Chromium</u></a> (Chrome, Edge, etc) browsers.</p>
    <div>
      <h3>Across the network</h3>
      <a href="#across-the-network">
        
      </a>
    </div>
    <p>Cloudflare sees hundreds of billions of requests for HTML content each day. Of these requests, about half are cached (make sure your HTML is cacheable!). Around 1% of those requests are navigational prefetches made by visitors. This represents significant savings every day for visitors to websites with Speed Brain enabled. Every 24 hours, <b>Speed Brain can save more than 82 years worth of latency!</b></p>
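<p>As a back-of-envelope check of that figure (the exact daily request counts aren't published, so the volume below is an assumption consistent with "hundreds of billions"):</p>

```python
# Back-of-envelope estimate of aggregate latency savings.
# All traffic numbers are illustrative assumptions, not published figures.

baseline_lcp = 2.2                       # seconds, p75 LCP before Speed Brain
saved_low = baseline_lcp * 0.40          # 40% improvement -> 0.88 s saved
saved_high = baseline_lcp * 0.50         # 50% improvement -> 1.1 s saved

html_requests_per_day = 500e9            # assumed: "hundreds of billions"
cached_fraction = 0.5                    # about half are cached
prefetch_fraction = 0.01                 # ~1% are navigational prefetches

prefetches_per_day = html_requests_per_day * cached_fraction * prefetch_fraction
seconds_per_year = 365 * 24 * 3600

years_low = prefetches_per_day * saved_low / seconds_per_year    # roughly 70 years
years_high = prefetches_per_day * saved_high / seconds_per_year  # roughly 87 years
```

<p>With those assumptions, the daily savings land in a range of roughly 70 to 87 years, the same order as the 82-year figure above.</p>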
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5CIpcdPC17LY2EGjNxCjNo/cf1a915d16384e32d88951a48febed47/BLOG-2422_7.png" />
          </figure>
    <div>
      <h2>What’s next? </h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>What we’re offering today for Speed Brain is only the beginning. Heading into 2025, we have a number of exciting additions to explore and ship. </p>
    <div>
      <h3>Leveraging Machine Learning</h3>
      <a href="#leveraging-machine-learning">
        
      </a>
    </div>
    <p>Our unique position on the Internet provides us valuable insights into web browsing patterns, which we can leverage for improving web performance while maintaining individual user privacy. By employing a generalized data-driven machine learning approach, we can define more accurate and site-specific prefetch predictors for users’ pages. </p><p>We are in the process of developing an adaptive speculative model that significantly improves upon our current conservative offering. This model uses a privacy-preserving method to generate a user traversal graph for each site based on same-site Referrer headers. For any two pages connected by a navigational hop, our model predicts the likelihood of a typical user moving between them, using insights extracted from our aggregated traffic data.</p><p>This model enables us to tailor rule sets with custom eagerness values to each relevant next page link on your site. For pages where the model predicts high confidence in user navigation, the system will aggressively prefetch or prerender them. If the model does not provide a rule for a page, it defaults to our existing conservative approach, maintaining the benefits of the baseline Speed Brain model. These signals guide browsers in prefetching and prerendering the appropriate pages, which helps speed up navigation for users, while maintaining our current safety guardrails.</p><p>In lab tests, our ML model improved LCP latency by 75% and predicted visitor navigation with ~98% accuracy, ensuring the correct pages were prefetched and avoiding wasted resources for users. As we move toward scaling this solution, we are focused on periodically retraining the model to adapt to varying user behaviors and evolving websites. Using an online machine learning approach will drastically reduce the need for manual updates as content drifts, while maintaining high accuracy: a Speed Brain solution that gets smarter over time!</p>
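<p>To make the idea concrete, here is a toy sketch (not Cloudflare's actual model) of turning aggregated same-site navigation pairs into per-link confidence scores, which could then be mapped to Speculation Rules eagerness levels; the probability thresholds are arbitrary illustrations:</p>

```python
from collections import Counter, defaultdict

# Toy traversal-graph predictor: count (referrer_path, next_path) pairs,
# convert them to transition probabilities, and bucket each link into an
# eagerness level. The 0.8 / 0.3 thresholds are made up for illustration.

def build_rules(nav_pairs):
    counts = defaultdict(Counter)
    for src, dst in nav_pairs:
        counts[src][dst] += 1

    rules = {}
    for src, dsts in counts.items():
        total = sum(dsts.values())
        for dst, n in dsts.items():
            p = n / total
            if p >= 0.8:
                rules[(src, dst)] = "eager"         # high confidence: prefetch aggressively
            elif p >= 0.3:
                rules[(src, dst)] = "moderate"
            else:
                rules[(src, dst)] = "conservative"  # fall back to baseline behavior
    return rules
```

<p>Because the input is only aggregate (referrer, destination) counts, a predictor of this shape needs no per-user state, which is consistent with the privacy-preserving framing above.</p>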
    <div>
      <h3>Finer observability via RUM</h3>
      <a href="#finer-observability-via-rum">
        
      </a>
    </div>
    <p>As we’ve mentioned, we believe that our RUM tools offer the best insights for how Speed Brain is helping the performance of your website. In the future, we plan on offering the ability to filter RUM tooling by navigation type so that you can compare the browser rendering of prefetched content versus non-prefetched content. </p>
    <div>
      <h3>Prerendering</h3>
      <a href="#prerendering">
        
      </a>
    </div>
    <p>We currently offer prefetching of cacheable content. Prefetching downloads the main document resource of the page before the user’s navigation, but it does not instruct the browser to prerender the page or download any additional subresources.</p><p>In the future, Cloudflare’s Speed Brain offering will prefetch content into our CDN cache and then work with browsers to identify the best prospects for prerendering. This will bring static content even closer to instant rendering. </p>
    <div>
      <h3>Argo Smart Browsing: Speed Brain &amp; Smart Routing</h3>
      <a href="#argo-smart-browsing-speed-brain-smart-routing">
        
      </a>
    </div>
    <p>Speed Brain, in its initial implementation, provides an incredible performance boost while remaining conservative in both its eagerness and its resource consumption.</p><p>As outlined earlier in the post, lab testing of a more aggressive, machine-learning-powered model with higher eagerness yielded a <b>75% reduction in LCP.</b> We are investigating bundling this more aggressive implementation of Speed Brain with Argo Smart Routing into a product called <b>“Argo Smart Browsing”. </b></p><p>Cloudflare customers will be free to continue using Speed Brain, but those who want even more performance improvement will be able to enable Argo Smart Browsing with a single button click. With Argo Smart Browsing, not only will cacheable static content load up to 75% faster in the browser thanks to the more aggressive models, but when content can’t be cached and the request must go forward to an origin server, it will be sent over the most performant network path, resulting in an average <b>33% performance increase.</b> Performance optimizations are applied to almost every segment of the request lifecycle, regardless of whether the content is static or dynamic, cached or not. </p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>To get started with Speed Brain, navigate to <a href="https://dash.cloudflare.com/?to=/:account/:zone/speed/optimization/content"><b><u>Speed</u></b><u> &gt; Optimization &gt; Content Optimization &gt; </u><b><u>Speed Brain</u></b></a> in the Cloudflare Dashboard and enable it. That's all! The feature can also be enabled via <a href="https://developers.cloudflare.com/api/operations/zone-settings-get-speed-brain-setting"><u>API</u></a>.  Free plan domains have had Speed Brain enabled by default.</p><p>We strongly recommend that customers also <a href="https://developers.cloudflare.com/web-analytics/get-started/#sites-proxied-through-cloudflare"><b><u>enable RUM</u></b></a>, found in the same section of the dashboard, to give visibility into the performance improvements provided by Speed Brain and other Cloudflare features and products. </p><p>We’re excited to continue to build products and features that make web performance reliably fast. If you’re an engineer interested in improving the performance of the web for all, <a href="http://cloudflare.com/jobs">come join us</a>!</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/OaN8mrrAKAatOUCqGAsqu/15ac363626180a0f0ed1a0cdb8146e5f/BLOG-2422_8.png" />
          </figure>
 ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cache]]></category>
            <category><![CDATA[Speed Brain]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">sC3G9YR2M9IIoRMg8slDl</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Suleman Ahmad</dc:creator>
            <dc:creator>William Woodhead</dc:creator>
        </item>
        <item>
            <title><![CDATA[Instant Purge: invalidating cached content in under 150ms]]></title>
            <link>https://blog.cloudflare.com/instant-purge/</link>
            <pubDate>Tue, 24 Sep 2024 23:00:00 GMT</pubDate>
            <description><![CDATA[ We’ve built the fastest cache purge in the industry by offering a global purge latency for purge by tags, hostnames, and prefixes of less than 150ms on average (P50), representing a 90% improvement.  ]]></description>
            <content:encoded><![CDATA[ <p><sup>(part 3 of the Coreless Purge </sup><a href="https://blog.cloudflare.com/rethinking-cache-purge-architecture/"><sup>series</sup></a><sup>)</sup></p><p>Over the past 14 years, Cloudflare has evolved far beyond a Content Delivery Network (CDN), expanding its offerings to include a comprehensive <a href="https://developers.cloudflare.com/cloudflare-one/"><u>Zero Trust</u></a> security portfolio, network security &amp; performance <a href="https://www.cloudflare.com/network-services/products/"><u>services</u></a>, application security &amp; performance <a href="https://www.cloudflare.com/application-services/products/"><u>optimizations</u></a>, and a powerful <a href="https://www.cloudflare.com/developer-platform/products/"><u>developer platform</u></a>. But customers also continue to rely on Cloudflare for caching and delivering static website content. CDNs are often judged on their ability to return content to visitors as quickly as possible. However, the speed at which content is removed from a CDN's global cache is just as crucial.</p><p>When customers frequently update content such as news, scores, or other data, it is essential they <a href="https://www.cloudflare.com/learning/cdn/common-cdn-issues/">avoid serving stale, out-of-date information</a> from cache to visitors. This can lead to a <a href="https://www.cloudflare.com/learning/cdn/common-cdn-issues/">subpar experience</a> where users might see invalid prices or incorrect news. The goal is to remove the stale content and cache the new version of the file on the CDN as quickly as possible. And that starts by issuing a “purge.”</p><p>In May 2022, we released the <a href="https://blog.cloudflare.com/part1-coreless-purge/"><u>first part</u></a> of the series detailing our efforts to rebuild and publicly document the steps taken to improve the system our customers use to purge their cached content. 
Our goal was to increase scalability and, importantly, the speed of our customers’ purges. In that initial post, we explained how our purge system worked and the design constraints we found when scaling. We outlined how, after more than a decade, we had outgrown our purge system and started building an entirely new one, and provided the purge performance benchmarks users experienced at the time. We set ourselves a lofty goal: to be the fastest.</p><p><b>Today, we’re excited to share that we’ve built the fastest cache purge in the industry.</b> We now offer a global purge latency for purge by tags, hostnames, and prefixes of less than 150ms on average (P50), representing a 90% improvement since May 2022. Users can now purge from anywhere, (almost) <i>instantly</i>. By the time you hit enter on a purge request and your eyes blink, the file is removed from our global network — including data centers in <a href="https://www.cloudflare.com/network/"><u>330 cities</u></a> and <a href="https://blog.cloudflare.com/backbone2024/"><u>120+ countries</u></a>.</p><p>But that’s not all. It wouldn’t be Birthday Week if we stopped at just being the fastest purge. We are <b><i>also</i></b> announcing that we’re opening up more purge options to Free, Pro, and Business plans. Historically, only Enterprise customers had access to the full arsenal of <a href="https://developers.cloudflare.com/cache/how-to/purge-cache/"><u>cache purge methods</u></a> supported by Cloudflare, such as purge by cache-tags, hostnames, and URL prefixes. As part of rebuilding our purge infrastructure, we’re not only fast but also able to scale well beyond our current capacity. This enables more customers to use different types of purge. We are excited to offer these new capabilities to all plan types once we finish rolling out our new purge infrastructure, and expect to begin doing so in early 2025. </p>
    <div>
      <h3>Why cache and purge? </h3>
      <a href="#why-cache-and-purge">
        
      </a>
    </div>
    <p>Caching content is like pulling off a spectacular magic trick. It makes loading website content lightning-fast for visitors, slashes the load on origin servers and the cost to operate them, and enables global scalability with a single button press. But here's the catch: for the magic to work, caching requires predicting the future. The right content needs to be cached in the right data center, at the right moment when requests arrive, and in the ideal format. This guarantees astonishing performance for visitors and game-changing scalability for web properties.</p><p>Cloudflare helps make this caching magic trick easy. But regular users of our cache know that getting content into cache is only part of what makes it useful. When content is updated on an origin, it must also be updated in the cache. The beauty of caching is that it holds content until it expires or is evicted. To update the content, it must be actively removed and updated across the globe quickly and completely. If data centers are not uniformly updated or are updated at drastically different times, visitors risk getting different data depending on where they are located. This is where cache “purging” (also known as “cache invalidation”) comes in.</p>
    <div>
      <h3>One-to-many purges on Cloudflare</h3>
      <a href="#one-to-many-purges-on-cloudflare">
        
      </a>
    </div>
    <p>Back in <a href="https://blog.cloudflare.com/rethinking-cache-purge-architecture/"><u>part 2 of the blog series</u></a>, we touched on how there are multiple ways of purging cache: by URL, cache-tag, hostname, URL prefix, and “purge everything”, and discussed a necessary distinction between purging by URL and the other four kinds of purge — referred to as flexible purges — based on the scope of their impact.</p><blockquote><p><i>The reason flexible purge isn’t also fully coreless yet is because it’s a more complex task than “purge this object”; flexible purge requests can end up purging multiple objects – or even entire zones – from cache. They do this through an entirely different process that isn’t coreless compatible, so to make flexible purge fully coreless we would have needed to come up with an entirely new multi-purge mechanism on top of redesigning distribution. We chose instead to start with just purge by URL, so we could focus purely on the most impactful improvements, revamping distribution, without reworking the logic a data center uses to actually remove an object from cache.</i></p></blockquote><p>We said our next steps included a redesign of flexible purges at Cloudflare, and today we’d like to walk you through the resulting system. But first, a brief history of flexible cache purges at Cloudflare and elaboration on why the old flexible purge system wasn’t “coreless compatible”.</p>
    <div>
      <h3>Just in time</h3>
      <a href="#just-in-time">
        
      </a>
    </div>
    <p>“Cache” within a given data center is made up of many machines, all contributing disk space to store customer content. When a request comes in for an asset, the URL and headers are used to calculate a <a href="https://developers.cloudflare.com/cache/how-to/cache-keys/"><u>cache key</u></a>, which is the filename for that content on disk and also determines which machine in the datacenter that file lives on. The filename is the same for every data center, and every data center knows how to use it to find the right machine to cache the content. A <a href="https://developers.cloudflare.com/cache/how-to/purge-cache/"><u>purge request</u></a> for a URL (plus headers) therefore contains everything needed to generate the cache key — the pointer to the response object on disk — and getting that key to every data center is the hardest part of carrying out the purge.</p><p>Purging content based on response properties has a different hardest part. If a customer wants to purge all content with the cache-tag “foo”, for example, there’s no way for us to generate all the cache keys that will point to the files with that cache-tag at request time. Cache-tags are response headers, and the decision of where to store a file is based on request attributes only. To find all files with matching cache-tags, we would need to look at every file in every cache disk on every machine in every data center. That’s thousands upon thousands of machines we would be scanning for each purge-by-tag request. There are ways to avoid actually continuously scanning all disks worldwide (foreshadowing!) but for our first implementation of our flexible purge system, we hoped to avoid the problem space altogether.</p>
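<p>The property that makes purge-by-URL tractable can be sketched like this (the hashing scheme and machine-selection details are illustrative, not Cloudflare's actual implementation):</p>

```python
import hashlib

# The cache key is derived purely from request attributes, so a purge
# request for a URL (plus relevant headers) contains everything needed
# to locate the file, and every data center computes the same mapping.

def cache_key(url: str, vary_headers: tuple = ()) -> str:
    material = url + "|" + "|".join(vary_headers)
    return hashlib.sha256(material.encode()).hexdigest()

def owning_machine(key: str, machines_in_dc: int) -> int:
    # deterministic: the same key always lands on the same machine index
    return int(key, 16) % machines_in_dc
```

<p>By contrast, a cache-tag lives in the response, not the request, so no function of this shape can be computed from a purge-by-tag request, which is exactly the problem the rest of this section works through.</p>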
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7n56ZDJwdBbaTNPJII6s2S/db998973efdca121536a932bc50dd842/image5.png" />
            
            </figure><p>An alternative approach to going to every machine and looking for all files that match some criteria to actively delete from disk was something we affectionately referred to as “lazy purge”. Instead of deleting all matching files as soon as we process a purge request, we wait to do so when we get an end user request for one of those files. Whenever a request comes in, and we have the file in cache, we can compare the timestamp of any recent purge requests from the file owner to the insertion timestamp of the file we have on disk. If the purge timestamp is fresher than the insertion timestamp, we pretend we didn’t find the file on disk. For this to work, we needed to keep track of purge requests going back further than a data center’s maximum cache eviction age to be sure that any file a customer sends a matching flex purge to clear from cache will either be <a href="https://developers.cloudflare.com/cache/concepts/retention-vs-freshness/#retention"><u>naturally evicted</u></a>, or forced to cache MISS and get refreshed from the origin. With this approach, we just needed a distribution and storage system for keeping track of flexible purges.</p>
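<p>The timestamp comparison at the heart of lazy purge reduces to a few lines (a simplified sketch; the real system's bookkeeping of purge history per customer and purge type is more involved):</p>

```python
# Lazy purge: a cached file may be served only if it was inserted
# *after* the most recent matching purge; otherwise we pretend the
# file isn't on disk and force a cache MISS.

def may_serve(inserted_at: float, matching_purge_times: list) -> bool:
    latest_purge = max(matching_purge_times, default=float("-inf"))
    return inserted_at > latest_purge
```

<p>Nothing is deleted at purge time; the check runs on each request, which is why purge history must be retained past the maximum eviction age.</p>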
    <div>
      <h3>Purge looks a lot like a nail</h3>
      <a href="#purge-looks-a-lot-like-a-nail">
        
      </a>
    </div>
    <p>At Cloudflare there is a lot of configuration data that needs to go “everywhere”: cache configuration, load balancer settings, firewall rules, host metadata — countless products, features, and services that depend on configuration data that’s managed through Cloudflare’s control plane APIs. This data needs to be accessible by every machine in every datacenter in our network. The vast majority of that data is distributed via <a href="https://blog.cloudflare.com/introducing-quicksilver-configuration-distribution-at-internet-scale/"><u>a system introduced several years ago called Quicksilver</u></a>. The system works <i>very, very well</i> (sub-second p99 replication lag, globally). It’s extremely flexible and reliable, and reads are lightning fast. The team responsible for the system has done such a good job that Quicksilver has become a hammer that when wielded, makes everything look like a nail… like flexible purges.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/BFntOlvapsYYjYTcxMs7o/d73ad47b63b9d4893b46aeddc28a8698/image2.png" />
            
            </figure><p><sup><i>Core-based purge request entering a data center and getting backhauled to a core data center where Quicksilver distributes the request to all network data centers (hub and spoke).</i></sup><sup> </sup></p><p>Our first version of the flexible purge system used Quicksilver’s spoke-hub distribution to send purges from a core data center to every other data center in our network. It took less than a second for flexible purges to propagate, and once in a given data center, the purge key lookups in the hot path to force cache misses were in the low hundreds of microseconds. We were quite happy with this system at the time, especially because of the simplicity. Using well-supported internal infrastructure meant we weren’t having to manage database clusters or worry about transport between data centers ourselves, since we got that “for free”. Flexible purge was a new feature set and the performance seemed pretty good, especially since we had no predecessor to compare against.</p>
    <div>
      <h3>Victims of our own success</h3>
      <a href="#victims-of-our-own-success">
        
      </a>
    </div>
    <p>Our first version of flexible purge didn’t start showing cracks for years, but eventually both our network and our customer base grew large enough that our system was reaching the limits of what it could scale to. As mentioned above, we needed to store purge requests beyond our maximum eviction age. Purge requests are relatively small, and compress well, but thousands of customers using the API millions of times a day adds up to quite a bit of storage that Quicksilver needed on each machine to maintain purge history, and all of that storage cut into disk space we could otherwise be using to cache customer content. We also found the limits of Quicksilver in terms of how many writes per second it could handle without replication slowing down. We bought ourselves more runway by putting <a href="https://www.boltic.io/blog/kafka-queue#:~:text=Apache%20Kafka%20Queues%3F-,Apache%20Kafka%20queues,-are%20a%20powerful"><u>Kafka queues</u></a> in front of Quicksilver to buffer and throttle ourselves to even out traffic spikes, and increased batching, but all of those protections introduced latency. We knew we needed to come up with a solution without such a strong correlation between usage and operational costs.</p><p>Another pain point exposed by our growing user base that we mentioned in <a href="https://blog.cloudflare.com/rethinking-cache-purge-architecture/"><u>Part 2</u></a> was the excessive round trip times experienced by customers furthest away from our core data centers. A purge request sent by a customer in Australia would have to cross the Pacific Ocean and back before local customers would see the new content.</p><p>To summarize, three issues were plaguing us:</p><ol><li><p>Latency corresponding to how far a customer was from the centralized ingest point.</p></li><li><p>Latency due to the bottleneck for writes at the centralized ingest point.</p></li><li><p>Storage needs in all data centers correlating strongly with throughput demand.</p></li></ol>
    <div>
      <h3>Coreless purge proves useful</h3>
      <a href="#coreless-purge-proves-useful">
        
      </a>
    </div>
    <p>The first two issues affected all types of purge. The spoke-hub distribution model was problematic for purge-by-URL just as much as it was for flexible purges. So we embarked on the path to peer-to-peer distribution for purge-by-URL to address the latency and throughput issues, and the results of that project were good enough that we wanted to propagate flexible purges through the same system. But doing so meant we’d have to replace our use of Quicksilver; it was so good at what it does (fast/reliable replication network-wide, extremely fast/high read throughput) in large part because of the core assumption of spoke-hub distribution it could optimize for. That meant there was no way to write to Quicksilver from “spoke” data centers, and we would need to find another storage system for our purges.</p>
    <div>
      <h3>Flipping purge on its head</h3>
      <a href="#flipping-purge-on-its-head">
        
      </a>
    </div>
    <p>We decided if we’re going to replace our storage system we should dig into exactly what our needs are and find the best fit. It was time to revisit some of our oldest conclusions to see if they still held true, and one of the earlier ones was that proactively purging content from disk would be difficult to do efficiently given our storage layout.</p><p>But was that true? Or could we make active cache purge fast and efficient (enough)? What would it take to quickly find files on disk based on their metadata? “Indexes!” you’re probably screaming, and for good reason. Indexing files’ hostnames, cache-tags, and URLs would undoubtedly make querying for relevant files trivial, but a few aspects of our network make it less straightforward.</p><p>Cloudflare has hundreds of data centers that see trillions of unique files, so any kind of global index — even ignoring the networking hurdles of aggregation — would suffer the same type of bottlenecking issues with our previous spoke-hub system. Scoping the indices to the data center level would be better, but they vary in size up to several hundred machines. Managing a database cluster in each data center scaled to the appropriate size for the aggregate traffic of all the machines was a daunting proposition; it could easily end up being enough work on its own for a separate team, not something we should take on as a side hustle.</p><p>The next step down in scope was an index per machine. Indexing on the same machine as the cache proxy had some compelling upsides: </p><ul><li><p>The proxy could talk to the index over <a href="https://en.wikipedia.org/wiki/Unix_domain_socket"><u>UDS</u></a> (Unix domain sockets), avoiding networking complexities in the hottest paths.</p></li><li><p>As a sidecar service, the index just had to be running anytime the machine was accepting traffic. 
If a machine died, so would the index, but that didn’t matter, so there wasn’t any need to deal with the complexities of distributed databases.</p></li><li><p>While data centers were frequently adding and removing machines, machines weren’t frequently adding and removing disks. An index could reasonably count on its maximum size being predictable and constant based on overall disk size.</p></li></ul><p>But we wanted to make sure it was feasible on our machines. We analyzed representative cache disks from across our fleet, gathering data like the number of cached assets per terabyte and the average number of cache-tags per asset. We looked at cache MISS, REVALIDATED, and EXPIRED rates to estimate the required write throughput.</p><p>After conducting a thorough analysis, we were convinced the design would work. With a clearer understanding of the anticipated read/write throughput, we started looking at databases that could meet our needs. After benchmarking several relational and non-relational databases, we ultimately chose <a href="https://github.com/facebook/rocksdb"><u>RocksDB</u></a>, a high-performance embedded key-value store. We found that with proper tuning, it could be extremely good at the types of queries we needed.</p>
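<p>As a rough illustration of the per-machine design, here is a minimal sketch in Rust. It is not CacheDB's actual schema: the key layout, type names, and methods are invented for the example, and an in-memory <code>BTreeMap</code> range scan stands in for the tuned RocksDB prefix iterator a real index would use.</p>

```rust
use std::collections::BTreeMap;
use std::ops::Bound::{Excluded, Included};

// Toy per-machine cache index (illustrative only). Keys are laid out as
// "<kind>:<value>:<file-id>", so a range scan over the "<kind>:<value>:"
// prefix finds every matching file, the same shape of query a RocksDB
// prefix iterator would answer on disk.
#[derive(Default)]
struct CacheIndex {
    entries: BTreeMap<String, String>, // index key -> on-disk path
}

impl CacheIndex {
    // Index one cached file under its hostname, its cache-tags, and its URL.
    fn insert(&mut self, file_id: &str, path: &str, host: &str, tags: &[&str], url: &str) {
        let mut keys = vec![format!("host:{host}"), format!("url:{url}")];
        keys.extend(tags.iter().map(|t| format!("tag:{t}")));
        for key in keys {
            self.entries.insert(format!("{key}:{file_id}"), path.to_string());
        }
    }

    // All on-disk paths matching a purge key, e.g. ("tag", "sale").
    fn lookup(&self, kind: &str, value: &str) -> Vec<String> {
        let lower = format!("{kind}:{value}:");
        let upper = format!("{kind}:{value};"); // ';' sorts just after ':'
        self.entries
            .range((Included(lower), Excluded(upper)))
            .map(|(_, path)| path.clone())
            .collect()
    }
}
```

<p>The important property is that each purge kind becomes a single contiguous key range, so lookup cost scales with the number of matching files rather than the size of the whole index.</p>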
    <div>
      <h3>Putting it all together</h3>
      <a href="#putting-it-all-together">
        
      </a>
    </div>
    <p>And so CacheDB was born — a service written in Rust and built on RocksDB, which operates on each machine alongside the cache proxy to manage the indexing and purging of cached files. We integrated the cache proxy with CacheDB to ensure that indices are stored whenever a file is cached or updated, and deleted when a file is removed due to eviction or purging. In addition to indexing data, CacheDB maintains a local queue for buffering incoming purge operations. A background process reads purge operations from the queue, looks up all matching files using the indices, and deletes the matched files from disk. Once all matched files for an operation have been deleted, the process clears the indices and removes the purge operation from the queue.</p><p>To further optimize the speed of purges taking effect, the cache proxy was updated to check with CacheDB — similar to the previous lazy purge approach — when a cache HIT occurs, before returning the asset. CacheDB does a quick scan of its local queue to see if any pending purge operations match the asset in question, dictating whether the cache proxy should respond with the cached file or fetch a new copy. This means a purge will prevent the cache proxy from returning a matching cached file as soon as it reaches the machine, even if millions of files correspond to the purge key and it takes a while to actually delete them all from disk.</p>
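<p>The HIT-time check can be sketched like this: a queue of pending purge operations that the proxy consults before serving a cached asset. All type and method names here are hypothetical, not CacheDB's real interface, and the purge variants are limited to the flexible purge kinds discussed above.</p>

```rust
use std::collections::VecDeque;

// A pending purge operation, matched against an asset's metadata.
enum Purge {
    Host(String),   // purge by hostname
    Tag(String),    // purge by cache-tag
    Prefix(String), // purge by URL prefix
}

// Metadata the proxy already has for a cached asset on a HIT.
struct Asset<'a> {
    host: &'a str,
    tags: &'a [&'a str],
    url: &'a str,
}

impl Purge {
    fn matches(&self, a: &Asset) -> bool {
        match self {
            Purge::Host(h) => a.host == h,
            Purge::Tag(t) => a.tags.contains(&t.as_str()),
            Purge::Prefix(p) => a.url.starts_with(p.as_str()),
        }
    }
}

// Local queue of purges not yet applied to disk. The proxy consults it on
// every HIT, so a purge takes effect the moment it is enqueued, even if
// deleting every matching file takes much longer.
#[derive(Default)]
struct PurgeQueue {
    pending: VecDeque<Purge>,
}

impl PurgeQueue {
    fn enqueue(&mut self, p: Purge) {
        self.pending.push_back(p);
    }

    // Called on a cache HIT: serve only if no pending purge matches.
    fn may_serve(&self, a: &Asset) -> bool {
        !self.pending.iter().any(|p| p.matches(a))
    }
}
```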
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5W0ZIGBBbG5Cnc3DSbCPGT/a1572b0b67d844d4e5b7cc7899d320b1/image3.png" />
            
            </figure><p><sup><i>Coreless purge using CacheDB and Durable Objects to distribute purges without needing to first stop at a core data center.</i></sup></p><p>The last piece to change was the distribution pipeline, updated to broadcast flexible purges not just to every data center, but to the CacheDB service running on every machine. We opted for CacheDB to handle the last-mile fan-out from machine to machine within a data center, using <a href="https://www.consul.io/"><u>Consul</u></a> to keep each machine informed of the health of its peers. This choice let us keep the Workers largely the same for purge-by-URL (more <a href="https://blog.cloudflare.com/rethinking-cache-purge-architecture/"><u>here</u></a>) and flexible purge handling, despite the difference in termination points.</p>
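<p>A toy version of that last-mile fan-out decision: given a Consul-style health view of the data center's machines, pick every healthy peer as a forwarding target. The catalog shape and names are invented for illustration; a real implementation would also handle retries and peers that rejoin later.</p>

```rust
use std::collections::HashMap;

// A simplified health view of data-center peers, as a Consul-style
// catalog might report it: machine name -> currently passing checks.
struct HealthCatalog {
    passing: HashMap<String, bool>,
}

// Last-mile fan-out: the machine that receives a purge forwards it to
// every healthy peer in its data center, skipping itself and any machine
// failing health checks.
fn fan_out_targets(catalog: &HealthCatalog, self_name: &str) -> Vec<String> {
    let mut targets: Vec<String> = catalog
        .passing
        .iter()
        .filter(|(name, healthy)| **healthy && name.as_str() != self_name)
        .map(|(name, _)| name.clone())
        .collect();
    targets.sort(); // deterministic order makes retries and auditing easier
    targets
}
```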
    <div>
      <h3>The payoff</h3>
      <a href="#the-payoff">
        
      </a>
    </div>
    <p>Our new approach eliminated the long tail of lazy purges, yielding a 10x storage saving. Better yet, we can now delete purged content immediately instead of waiting for it to be lazily purged or to expire. This newfound storage will improve cache retention on disk for all users, leading to higher cache HIT ratios and reduced egress from your origin.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6B0kVX9Q6qA2JshmcTZSrt/80a845904adf8ba69c121bb54923959e/image1.png" />
            
            </figure><p><sup><i>The shift from lazy content purging (</i></sup><sup><i><u>left</u></i></sup><sup><i>) to the new Coreless Purge architecture allows us to actively delete content (</i></sup><sup><i><u>right</u></i></sup><sup><i>). This helps reduce storage needs and increase cache retention times across our service.</i></sup></p><p>With the new coreless cache purge, we can now get a purge request into any data center, distribute the keys to purge, and instantly purge the content from the cache database. This all occurs in less than 150 milliseconds at P50 for tags, hostnames, and prefix URLs, covering all <a href="https://www.cloudflare.com/network/"><u>330 cities</u></a> in <a href="https://blog.cloudflare.com/backbone2024/"><u>120+ countries</u></a>.</p>
    <div>
      <h3>Benchmarks</h3>
      <a href="#benchmarks">
        
      </a>
    </div>
    <p>To measure Instant Purge, we wanted to make sure that we were looking at real user metrics — that these were purges customers were actually issuing and performance that was representative of what we were seeing under real conditions, rather than marketing numbers.</p><p>The time we measure starts when a request enters the local data center and ends when the purge has been executed in every data center. When the local data center receives the request, one of the first things we do is add a timestamp to the purge request. When all data centers have completed the purge action, another timestamp is added to “stop the clock.” Each purge request generates this performance data, which is then sent to a database so we can measure the appropriate quantiles and understand how we can improve further.</p><p>In August 2024, we segmented our collected purge performance data by region, based on where the local data center receiving the request was located.</p><table><tr><td><p><b>Region</b></p></td><td><p><b>P50 Aug 2024 (Coreless)</b></p></td><td><p><b>P50 May 2022 (Core-based)</b></p></td><td><p><b>Improvement</b></p></td></tr><tr><td><p>Africa</p></td><td><p>303ms</p></td><td><p>1,420ms</p></td><td><p>78.66%</p></td></tr><tr><td><p>Asia Pacific Region (APAC)</p></td><td><p>199ms</p></td><td><p>1,300ms</p></td><td><p>84.69%</p></td></tr><tr><td><p>Eastern Europe (EEUR)</p></td><td><p>140ms</p></td><td><p>1,240ms</p></td><td><p>88.70%</p></td></tr><tr><td><p>Eastern North America (ENAM)</p></td><td><p>119ms</p></td><td><p>1,080ms</p></td><td><p>88.98%</p></td></tr><tr><td><p>Oceania</p></td><td><p>191ms</p></td><td><p>1,160ms</p></td><td><p>83.53%</p></td></tr><tr><td><p>South America (SA)</p></td><td><p>196ms</p></td><td><p>1,250ms</p></td><td><p>84.32%</p></td></tr><tr><td><p>Western Europe (WEUR)</p></td><td><p>131ms</p></td><td><p>1,190ms</p></td><td><p>88.99%</p></td></tr><tr><td><p>Western North America 
(WNAM)</p></td><td><p>115ms</p></td><td><p>1,000ms</p></td><td><p>88.5%</p></td></tr><tr><td><p><b>Global</b></p></td><td><p><b>149ms</b></p></td><td><p><b>1,570ms</b></p></td><td><p><b>90.5%</b></p></td></tr></table><p><sup>Note: Global latency numbers on the core-based measurements (May 2022) may be larger than the regional numbers because it represents all of our data centers instead of only a regional portion, so outliers and retries might have an outsized effect.</sup></p>
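<p>Mechanically, each purge contributes one duration (the "stop the clock" timestamp minus the ingress timestamp), and the reported figures are quantiles over those samples. A minimal nearest-rank quantile sketch, using made-up sample values; the real pipeline aggregates far larger sample sets in a database rather than in memory:</p>

```rust
// End-to-end purge latency samples in milliseconds: the clock starts when
// a request enters the local data center and stops when every data center
// has confirmed the purge. Nearest-rank method: return the smallest sample
// with at least a q fraction of the distribution at or below it.
fn quantile(samples: &mut [u64], q: f64) -> u64 {
    assert!(!samples.is_empty() && (0.0..=1.0).contains(&q));
    samples.sort_unstable();
    let rank = ((q * samples.len() as f64).ceil() as usize).max(1);
    samples[rank - 1]
}
```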
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We are currently wrapping up the roll-out of the last throughput changes, which allow us to scale purge requests efficiently. As that happens, we will revise our rate limits and open up purge by tag, hostname, and prefix to all plan types! We expect to roll out the additional purge types to all plans and users beginning in early <b>2025</b>.</p><p>In addition, in the process of implementing this new approach, we have identified improvements that will shave a few more milliseconds off our single-file purge. Currently, single-file purges have a P50 of 234ms. However, we want to, and can, bring that number down to below 200ms.</p><p>If you want to come work on the world's fastest purge system, check out <a href="http://www.cloudflare.com/careers">our open positions</a>.</p>
 ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Cache]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">11EWaw0wCUNwPTM30w7oUN</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Tim Kornhammar</dc:creator>
            <dc:creator>Connor Harwood</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Automatic SSL/TLS: securing and simplifying origin connectivity]]></title>
            <link>https://blog.cloudflare.com/introducing-automatic-ssl-tls-securing-and-simplifying-origin-connectivity/</link>
            <pubDate>Thu, 08 Aug 2024 14:05:00 GMT</pubDate>
            <description><![CDATA[ This new Automatic SSL/TLS setting will maximize and simplify the encryption modes Cloudflare uses to communicate with origin servers by using the SSL/TLS Recommender. ]]></description>
            <content:encoded><![CDATA[ 
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4YIQCIdM9Td1RJfdCkg3o5/6fc5cd824f819658e00007c61f69ce71/1885-1-Hero.png" />
          </figure><p>During Birthday Week 2022, we <a href="https://blog.cloudflare.com/securing-origin-connectivity"><u>pledged</u></a> to provide our customers with the most secure connection possible from Cloudflare to their origin servers automatically. I’m thrilled to announce we will begin rolling this experience out to customers who have the <a href="https://blog.cloudflare.com/ssl-tls-recommender"><u>SSL/TLS Recommender</u></a> enabled on <b>August 8, 2024</b>. Following this, remaining Free and Pro customers can use this feature beginning <b>September 16, 2024</b>, with Business and Enterprise customers to follow.</p><p>Although it took longer than anticipated to roll out, our priority was to achieve an automatic configuration both transparently and without risking any site downtime. Taking this additional time allowed us to balance enhanced security with seamless site functionality, especially since origin server security configuration and capabilities are beyond Cloudflare's direct control. The new Automatic SSL/TLS setting will maximize and simplify the <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>encryption modes</u></a> Cloudflare uses to communicate with origin servers by using the <a href="https://blog.cloudflare.com/ssl-tls-recommender"><u>SSL/TLS Recommender</u></a>. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/53WNT2fwr0HuN2L0M5PKnJ/c005f100b455fd699d32d2f602ebf447/1885-2.png" />
          </figure><p>We first talked about this process in <a href="https://blog.cloudflare.com/introducing-universal-ssl"><u>2014</u></a>: at that time, securing connections was hard to configure, prohibitively expensive, and required specialized knowledge to set up correctly. To help alleviate these pains, Cloudflare introduced Universal SSL, which allowed web properties to obtain a <a href="https://www.cloudflare.com/application-services/products/ssl/"><u>free SSL/TLS certificate</u></a> to enhance the security of connections between browsers and Cloudflare. </p><p>This worked well and was easy because Cloudflare could manage the certificates and connection security from incoming browsers. As a result of that work, the number of encrypted HTTPS connections on the entire Internet <a href="https://blog.cloudflare.com/introducing-universal-ssl#:~:text=we%27ll%20have%20doubled%20that"><u>doubled</u></a> at that time. However, the connections made from Cloudflare to origin servers still required <i>manual</i> configuration of the encryption <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>modes</u></a> to let Cloudflare know the capabilities of the origin. </p><p>Today we’re excited to begin the sequel to Universal SSL and make security between Cloudflare and origins automatic and easy for everyone.</p>
    <div>
      <h2>History of securing origin-facing connections</h2>
      <a href="#history-of-securing-origin-facing-connections">
        
      </a>
    </div>
    <p>Ensuring that more bytes flowing across the Internet are automatically encrypted strengthens the barrier against interception, throttling, and censorship of Internet traffic by third parties. </p><p>Generally, two communicating parties (often a client and server) establish a secure connection using the <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><u>TLS</u></a> protocol. For a simplified breakdown: </p><ul><li><p>The client advertises the list of encryption parameters it supports (along with some metadata) to the server.</p></li><li><p>The server responds back with its own preference of the chosen encryption parameters. It also sends a digital certificate so that the client can authenticate its identity.</p></li><li><p>The client validates the server identity, confirming that the server is who it says it is.</p></li><li><p>Both sides agree on a <a href="https://www.cloudflare.com/learning/ssl/what-is-asymmetric-encryption/#:~:text=What%20is-,symmetric,-encryption%3F"><u>symmetric</u></a> secret key for the session that is used to encrypt and decrypt all transmitted content over the connection.</p></li></ul><p>Because Cloudflare acts as an intermediary between the client and our customer’s origin server, two separate TLS connections are established. One between the user’s browser and our network, and the other from our network to the origin server. This allows us to manage and optimize the security and performance of both connections independently.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6s0NxfVR5tCXuAzhI8pYdw/f1f48be437de48bf1b60495647016fbb/1885-3.png" />
          </figure><p>Unlike securing connections between clients and Cloudflare, the security capabilities of origin servers are not under our direct control. For example, we can manage the <a href="https://www.cloudflare.com/en-gb/learning/ssl/what-is-an-ssl-certificate/"><u>certificate</u></a> (the file used to verify identity and provide context on establishing encrypted connections) between clients and Cloudflare because it’s our job in that connection to provide it to clients, but when talking to origin servers, Cloudflare <i>is</i> the client.</p><p>Customers need to <a href="https://developers.cloudflare.com/ssl/origin-configuration/origin-ca/"><u>acquire and provision</u></a> an origin certificate on their host. They then have to configure Cloudflare to expect the new certificate from the origin when opening a connection. Needing to manually configure connection security across multiple different places requires effort and is prone to human error. </p><p>This issue was discussed in the original <a href="https://blog.cloudflare.com/introducing-universal-ssl"><u>Universal SSL blog</u></a>:</p><blockquote><p><i>For a site that did not have SSL before, we will default to our </i><a href="https://support.cloudflare.com/hc/en-us/articles/200170416-What-do-the-SSL-options-Off-Flexible-SSL-Full-SSL-Full-SSL-Strict-mean-"><i><u>Flexible SSL mode</u></i></a><i>, which means traffic from browsers to Cloudflare will be encrypted, but traffic from Cloudflare to a site's origin server will not. 
We strongly recommend site owners install a certificate on their web servers so we can encrypt traffic to the origin … Once you've installed a certificate on your web server, you can enable the </i><a href="https://support.cloudflare.com/hc/en-us/articles/200170416-What-do-the-SSL-options-Off-Flexible-SSL-Full-SSL-Full-SSL-Strict-mean-"><i><u>Full or Strict SSL modes</u></i></a><i> which encrypt origin traffic and provide a higher level of security.</i></p></blockquote><p>Over the years Cloudflare has introduced numerous products to help customers configure how Cloudflare should talk to their origin. These products include a <a href="https://blog.cloudflare.com/universal-ssl-encryption-all-the-way-to-the-origin-for-free/"><u>certificate authority</u></a> to help customers obtain a certificate to verify their origin server’s identity and encryption capabilities, <a href="https://developers.cloudflare.com/ssl/origin-configuration/authenticated-origin-pull/"><u>Authenticated Origin Pulls</u></a> that ensures only HTTPS (encrypted) requests from Cloudflare will receive a response from the origin server, and <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnels</u></a> that can be configured to proactively establish secure and private tunnels to the nearest Cloudflare data center. Additionally, the <a href="https://datatracker.ietf.org/doc/html/rfc8555/"><u>ACME</u></a> protocol and its corresponding <a href="https://certbot.eff.org/"><u>Certbot</u></a> tooling make it easier than ever to obtain and manage publicly-trusted certificates on customer origins. While these technologies help customers configure how Cloudflare should communicate with their origin server, they still require manual configuration changes on the origin and to Cloudflare settings. 
</p><p>Ensuring certificates are configured appropriately on origin servers and informing Cloudflare about how we should communicate with origins can be anxiety-inducing because misconfiguration can lead to downtime if something isn’t deployed or configured correctly. </p><p>To simplify this process and help identify the most secure options that customers could be using without any misconfiguration risk, <b>Cloudflare introduced the </b><a href="https://blog.cloudflare.com/ssl-tls-recommender"><b><u>SSL/TLS Recommender</u></b></a><b> in 2021.</b> The Recommender works by probing customer origins with different SSL/TLS settings to provide a recommendation whether the SSL/TLS <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>encryption mode</u></a> for the web property can be improved. The Recommender has been in production for three years and has consistently managed to provide high quality origin-security recommendations for Cloudflare’s customers. </p><p>The SSL/TLS Recommender system serves as the brain of the automatic origin connection service that we are announcing today. </p>
    <div>
      <h2>How does SSL/TLS Recommendation work?</h2>
      <a href="#how-does-ssl-tls-recommendation-work">
        
      </a>
    </div>
    <p>The Recommender works by actively comparing content on web pages that have been downloaded using different SSL/TLS modes to see if it is safe and risk-free to update the <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>mode</u></a> Cloudflare uses to connect to origin servers.</p><p>Cloudflare currently offers five SSL/TLS modes:</p><ol><li><p><b>Off</b>: No encryption is used for traffic between browsers and Cloudflare or between Cloudflare and origins. Everything is cleartext HTTP.</p></li><li><p><b>Flexible</b>: Traffic from browsers to Cloudflare can be encrypted via HTTPS, but traffic from Cloudflare to the origin server is not. This mode is common for origins that do not support TLS, though upgrading the origin configuration is recommended whenever possible. A guide for upgrading is available <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full/#required-setup"><u>here</u></a>.</p></li><li><p><b>Full</b>: Cloudflare matches the browser request protocol when connecting to the origin. If the browser uses HTTP, Cloudflare connects to the origin via HTTP; if HTTPS, Cloudflare uses HTTPS without validating the origin’s certificate. This mode is common for origins that use self-signed or otherwise invalid certificates.</p></li><li><p><b>Full (Strict)</b>: Similar to Full Mode, but with added validation of the origin server’s certificate, which can be issued by a public CA like Let’s Encrypt or by Cloudflare Origin CA.</p></li><li><p><b>Strict (SSL-only origin pull)</b>: Regardless of whether the browser-to-Cloudflare connection uses HTTP or HTTPS, Cloudflare always connects to the origin over HTTPS with certificate validation.</p></li></ol><table><tr><th><p>
</p></th><th><p><b>HTTP from visitor</b></p></th><th><p><b>HTTPS from visitor</b></p></th></tr><tr><td><p><b>Off</b></p></td><td><p>HTTP to origin</p></td><td><p>HTTP to origin</p></td></tr><tr><td><p><b>Flexible</b></p></td><td><p>HTTP to origin</p></td><td><p>HTTP to origin</p></td></tr><tr><td><p><b>Full</b></p></td><td><p>HTTP to origin</p></td><td><p>HTTPS without cert validation to origin</p></td></tr><tr><td><p><b>Full (strict)</b></p></td><td><p>HTTP to origin</p></td><td><p>HTTPS with cert validation to origin</p></td></tr><tr><td><p><b>Strict (SSL-only origin pull)</b></p></td><td><p>HTTPS with cert validation to origin</p></td><td><p>HTTPS with cert validation to origin</p></td></tr></table><p>
The SSL/TLS Recommender works by crawling customer sites and collecting links on the page (like any web crawler). The Recommender downloads content over both HTTP and HTTPS, making GET requests to avoid modifying server resources. It then uses a content similarity algorithm, adapted from the research paper "<a href="https://www.cs.umd.edu/~dml/papers/https_tma20.pdf"><u>A Deeper Look at Web Content Availability and Consistency over HTTP/S"</u></a> (TMA Conference 2020), to determine if content matches. If the content does match, the Recommender makes a determination for whether the <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>SSL/TLS mode</u></a> can be increased without misconfiguration risk. </p><p>The recommendations are currently delivered to customers via email. </p><p>When the Recommender is making security recommendations, it errs on the side of maintaining current site functionality to avoid breakage and usability issues. If a website is non-functional, blocks all bots, or has SSL/TLS-specific Page Rules or Configuration Rules, the Recommender may not complete its scans and provide a recommendation. It was designed to maximize <a href="https://www.cloudflare.com/application-services/solutions/domain-protection-services/">domain security</a>, but will not help resolve website or domain functionality issues.</p><p>The crawler uses the user agent "<code>Cloudflare-SSLDetector</code>" and is included in Cloudflare’s list of known <a href="https://bots-directory.cfdata.org/bot/cloudflare-ssl-detector"><u>good bots</u></a>. 
It ignores <code>robots.txt</code> (except for rules specifically targeting its user agent) to ensure accurate recommendations.</p><p>When downloading content from your origin server over both HTTP and HTTPS and comparing the content, the Recommender understands the current SSL/TLS encryption mode that your website uses and what risk there might be to the site functionality if the recommendation is followed.</p>
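<p>The encryption-mode table above can be expressed as a small lookup: given the configured mode and whether the visitor arrived over HTTPS, it determines the connection Cloudflare opens to the origin. The sketch below mirrors the documented behavior; the enum and function names are invented for illustration.</p>

```rust
// How Cloudflare connects to the origin (names invented for this sketch).
#[derive(Debug, PartialEq)]
enum OriginScheme {
    Http,              // cleartext HTTP to the origin
    HttpsNoValidation, // encrypted, origin certificate not validated
    HttpsValidated,    // encrypted, origin certificate validated
}

// The five SSL/TLS modes described above.
enum Mode {
    Off,
    Flexible,
    Full,
    FullStrict,
    StrictOriginPull,
}

// One row-and-column lookup of the mode table: mode plus visitor protocol
// determines the origin-facing connection.
fn origin_connection(mode: Mode, visitor_https: bool) -> OriginScheme {
    match (mode, visitor_https) {
        (Mode::Off, _) | (Mode::Flexible, _) => OriginScheme::Http,
        (Mode::Full, false) | (Mode::FullStrict, false) => OriginScheme::Http,
        (Mode::Full, true) => OriginScheme::HttpsNoValidation,
        (Mode::FullStrict, true) => OriginScheme::HttpsValidated,
        (Mode::StrictOriginPull, _) => OriginScheme::HttpsValidated,
    }
}
```

<p>Note how only Full (Strict) and Strict (SSL-only origin pull) ever validate the origin certificate, which is why those modes are the recommended targets.</p>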
    <div>
      <h2>Using SSL/TLS Recommender to automatically manage SSL/TLS settings </h2>
      <a href="#using-ssl-tls-recommender-to-automatically-manage-ssl-tls-settings">
        
      </a>
    </div>
    <p>Previously, signing up for the SSL/TLS Recommender provided a good experience for customers, but only resulted in an email recommendation in the event that a zone’s current SSL/TLS modes could be updated. To Cloudflare, this was a positive signal that customers wanted their websites to have more secure connections to their origin servers – over 2 million domains have enabled the SSL/TLS Recommender. However, we found that a significant number of users would not complete the next step of pushing the button to inform Cloudflare that we could communicate over the upgraded settings. <b>Only 30% of the recommendations that the system provided were followed. </b></p><p>With the system designed to increase security while avoiding any breaking changes, we wanted to provide an option for customers to allow the Recommender to help upgrade their site security, without requiring further manual action from the customer. <b>Therefore, we are introducing a new option for managing SSL/TLS configuration on Cloudflare: Automatic SSL/TLS. </b></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/21q0D6rvhXHQxRe2ko4ITA/d5ca2f9a7139a2f55a16ca8bcf783ee0/1885-4.png" />
          </figure><p></p><p>Automatic SSL/TLS uses the SSL/TLS Recommender to make the determination as to what encryption mode is the most secure and safest for a website to be set to. If there is a <b>more secure</b> option for your website (based on your origin certification or capabilities), Automatic SSL/TLS will find it and apply it for your domain. The other option, <b>Custom SSL/TLS,</b> will work exactly like the setting the encryption mode does today. If you know what setting you want, just select it using Custom SSL/TLS, and we’ll use it. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/jFTSsmxG2WH0FqTklAJwb/eff9f692cdec3d199d32996bb0111441/1885-5.png" />
          </figure><p></p><p>Automatic SSL/TLS is currently meant to service an entire website, which typically works well for those with a single origin. For those concerned that they have more complex setups which use multiple origin servers with different security capabilities, don’t worry. Automatic SSL/TLS will still avoid breaking site functionality by looking for the best setting that works for all origins serving a part of the site’s traffic. </p><p>If customers want to segment the SSL/TLS mode used to communicate with the numerous origins that service their domain, they can achieve this by using <a href="https://developers.cloudflare.com/rules/configuration-rules/"><u>Configuration Rules</u></a>. These rules allow you to set more precise modes that Cloudflare should respect (based on path or subdomain or even IP address) to maximize the security of the domain based on your desired Rules criteria. If your site uses SSL/TLS-specific settings in a Configuration Rule or Page rule, those settings will <b>override the zone-wide Automatic and Custom settings.</b></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6PCXOjFBtEucRUOP3BoMGQ/6ba2700c18cf4c49782bdf2d0ee33435/1885-6.png" />
          </figure><p></p><p>The goal of Automatic SSL/TLS<b> </b>is to simplify and maximize the origin-facing security for customers on Cloudflare. We want this to be the new default for all websites on Cloudflare, but we understand that not everyone wants this new default, and we will respect your decision for how Cloudflare should communicate with your origin server. If you block the Recommender from completing its crawls, the origin server is non-functional or can’t be crawled, or if you want to opt out of this default and just continue using the same encryption mode you are using today, we will make it easy for you to tell us what you prefer. </p>
    <div>
      <h2>How to onboard to Automatic SSL/TLS</h2>
      <a href="#how-to-onboard-to-automatic-ssl-tls">
        
      </a>
    </div>
    <p>To improve the security settings for everyone by default, we are making the following default changes to how Cloudflare configures the SSL/TLS level for all zones: </p><p>Starting on <b>August 8, 2024</b>, websites with the <b>SSL/TLS Recommender currently enabled</b> will have the Automatic SSL/TLS setting enabled by default. Enabling does not mean that the Recommender will begin scanning and applying new settings immediately, though. There will be a <b><u>one-month grace period</u></b> before the first scans begin and the recommended settings are applied. Enterprise (ENT) customers will get a <b><u>six-week grace period</u></b>. Origin scans will start getting scheduled by <b>September 9, 2024</b> for non-Enterprise customers and <b>September 23, 2024</b> for ENT customers with the SSL/TLS Recommender enabled. This will give customers the ability to opt out by removing Automatic SSL/TLS and selecting the Custom mode that they want to use instead.</p><p>Further, during the second week of September, <b>all new zones signing up for Cloudflare</b> will start seeing the Automatic SSL/TLS setting enabled by default.</p><p>Beginning <b>September 16, 2024</b>, remaining <b>Free and Pro</b> customers will start to see the new Automatic SSL/TLS setting. They will also have a one-month grace period to opt out before the scans start taking effect. </p><p>Customers in the cohort having the new Automatic SSL/TLS setting applied will receive an email communication with the date they are slated for this migration, as well as a dashboard banner mentioning the transition. If they do not wish for Cloudflare to change anything in their configurations, the process for opting out of this migration is outlined below. </p><p>Following the successful migration of Free and Pro customers, we will proceed to Business and Enterprise customers with a similar cadence. 
These customers will get email notifications and information in the dashboard when they are in the migration cohort.</p><p>The Automatic SSL/TLS setting will not impact users that are already in Strict or Full (strict) mode nor will it impact websites that have opted-out. </p>
    <div>
      <h2>Opting out</h2>
      <a href="#opting-out">
        
      </a>
    </div>
    <p>There are a number of reasons why someone might want to configure a lower-than-optimal security setting for their website. Some may want to set a lower security setting for testing purposes or to debug some behavior. Whatever the reason, the options to opt-out of the Automatic SSL/TLS setting during the migration process are available in the dashboard and API.</p><p>To opt-out, simply select <b>Custom SSL/TLS</b> in the dashboard (instead of the enabled Automatic SSL/TLS) and we will continue to use the previously set encryption mode that you were using prior to the migration. Automatic and Custom SSL/TLS modes can be found in the <b>Overview</b> tab of the SSL/TLS section of the dashboard. To enable your preferred mode, select <b>configure</b>.  </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4meNmREGaXd1FJfxUKr5NN/bdbe1e07a2121d2f9ec2a11e64c77b7f/1885-7.png" />
          </figure><p></p><p>If you want to opt-out via the API you can make this API call on or before the grace period expiration date. </p>
            <pre><code>    curl --request PATCH \
        --url https://api.cloudflare.com/client/v4/zones/&lt;insert_zone_tag_here&gt;/settings/ssl_automatic_mode \
        --header 'Authorization: Bearer &lt;insert_api_token_here&gt;' \
        --header 'Content-Type: application/json' \
        --data '{"value":"custom"}'
</code></pre>
            <p></p><p>If an opt-out is triggered, there will not be a change to the currently configured SSL/TLS setting. You are also able to change the security level at any time by going to the SSL/TLS section of the dashboard and choosing the Custom setting you want (similar to how this is accomplished today). </p><p>If at a later point you’d like to opt-in to Automatic SSL/TLS, that option is available by changing your setting from Custom to Automatic.</p>
    <div>
      <h2>What if I want to be more secure now?</h2>
      <a href="#what-if-i-want-to-be-more-secure-now">
        
      </a>
    </div>
    <p>We will begin to roll out this change to customers with the SSL/TLS Recommender enabled on <b>August 8, 2024</b>. If you want to enroll in that group, we recommend enabling the Recommender as soon as possible. </p><p>If you read this and want to make sure you’re at the highest level of backend security already, we recommend <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/"><u>Full (strict)</u></a> or <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/ssl-only-origin-pull/"><u>Strict mode</u></a>. Directions on how to make sure you’re correctly configured in either of those settings are available <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/#required-setup"><u>here</u></a> and <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/ssl-only-origin-pull/#required-setup"><u>here</u></a>. </p><p>If you prefer to wait for us to automatically upgrade your connection to the maximum encryption mode your origin supports, please watch your inbox for the date we will begin rolling out this change for you.</p> ]]></content:encoded>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Network Services]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">2lhAhlWMei6M2NkhzAuULC</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Suleman Ahmad</dc:creator>
            <dc:creator>J Evans</dc:creator>
            <dc:creator>Yawar Jamal</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cache Reserve goes GA: enhanced control to minimize egress costs]]></title>
            <link>https://blog.cloudflare.com/cache-reserve-goes-ga/</link>
            <pubDate>Wed, 25 Oct 2023 13:00:11 GMT</pubDate>
            <description><![CDATA[ We're excited to announce the graduation of Cache Reserve from beta to GA, accompanied by the introduction of several exciting new features. These new features include adding Cache Reserve into the analytics shown on the Cache overview section of the Cloudflare dashboard ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1oBNAX3rdlN340ASOgjLDD/027b66abb0ecc1d9d742796a6dcdbdfb/Cache-Reserve.png" />
            
            </figure><p>Everyone is chasing the highest cache ratio possible. Serving more content from Cloudflare’s cache means it loads faster for visitors, saves website operators money on <a href="https://www.cloudflare.com/learning/cloud/what-are-data-egress-fees/">egress fees</a> from origins, and provides multiple layers of resiliency and protection to make sure that content is available to be served and websites scale effortlessly. A year ago we introduced <a href="/introducing-cache-reserve/">Cache Reserve</a> to help customers serve as much content as possible from Cloudflare’s cache.</p><p>Today, we are thrilled to announce the <b>graduation of Cache Reserve from beta to General Availability (GA)</b>, accompanied by the introduction of several exciting new features. These new features include adding Cache Reserve into the analytics shown on the <a href="http://dash.cloudflare.com/caching"><i>Cache overview</i></a> section of the Cloudflare dashboard, giving customers the ability to see how they are using Cache Reserve over time. We have also added the ability for customers to delete all data in Cache Reserve without losing content in the edge cache. This is useful for customers who are no longer using Cache Reserve storage.</p><p>We’re also introducing new tools that give organizations more granular control over which files are saved to Cache Reserve, based on valuable feedback we received during the beta. The default configuration of Cache Reserve is to cache all available cacheable files, but some beta customers reported that they didn’t want certain rapidly-changing files cached. Based on their feedback, we’ve added the ability to define Cache Reserve eligibility within <a href="/cache-rules-go-ga/">Cache Rules</a>. 
This new rule lets users be very specific about which traffic is admitted to Cache Reserve.</p><p>To experience Cache Reserve firsthand visit the <a href="http://dash.cloudflare.com/caching/cache-reserve">Cache Reserve</a> section on the Cloudflare dashboard, press a single button to enable Cache Reserve, and experience cost-efficient, high-performance content delivery.</p>
    <div>
      <h3>Caching background</h3>
      <a href="#caching-background">
        
      </a>
    </div>
    <p>Content delivery begins when a client or browser makes a request, be it for a webpage, video, application, or even a cat picture. This request travels to an origin server, aka the host of the requested content. The origin assembles the necessary data, packages it, and dispatches it back to the client. It's at this moment that website operators often incur a fee for transferring the content from their host to the requesting visitor. This per-GB fee for data “transferred” is a frequent line item on monthly hosting bills for website operators; we refer to these charges as <a href="https://www.cloudflare.com/learning/cloud/what-are-data-egress-fees/">egress fees</a> or an “egress tax,” and have blogged previously on why we think it is <a href="/aws-egregious-egress/">bad practice</a>.</p><p>During its return voyage to the client, Cloudflare has the ability to cache the origin’s response. Caching enables subsequent visitors, who are requesting the same content, to receive it from one of our cache servers rather than the origin server. Since the file is now served from Cloudflare's servers it saves the website operator from egress fees. It also means better performance, due to Cloudflare’s cache servers typically being physically situated much closer to end users than the customer’s own origin servers.</p><p>Serving files from cache is a fundamental, and often essential, strategy for delivering content over the Internet efficiently. We can evaluate the efficacy of a cache by looking at its “hit/miss” ratio: when website content is served from a cache server it’s known as a cache <b>hit</b>. But when content is not in cache, and we need to go back to the origin server to get a fresh copy of the content, we call it a cache <b>miss</b>.</p>
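<p>As a back-of-the-envelope illustration (our own toy example, not Cloudflare code), the hit ratio is simply hits divided by total requests:</p>

```javascript
// Illustrative only: compute a cache hit ratio from hit and miss counts.
function hitRatio(hits, misses) {
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}

// 900 of 1,000 requests served from cache means a 90% hit ratio.
console.log(hitRatio(900, 100)); // 0.9
```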
    <div>
      <h3>Why cache misses happen</h3>
      <a href="#why-cache-misses-happen">
        
      </a>
    </div>
    <p>Sometimes eligible content may not be served from cache for a variety of reasons. One scenario occurs when Cloudflare must <a href="/introducing-smart-edge-revalidation/#:~:text=So%20What%20Is%20Revalidation%3F">revalidate</a> with the origin to see if a fresh copy is available. This situation arises when a customer has configured a resource’s <a href="https://www.cloudflare.com/learning/cdn/glossary/time-to-live-ttl/">time-to-live (TTL)</a> to specify how long cached content should be served to visitors, and when to consider it outdated (stale). How long a <i>user</i> specifies something is safe to be served from cache is only a part of the story, though. <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">Content delivery networks (CDNs)</a> also need to consider how to best utilize storage for all of their customers and perform network optimizations to ensure the right assets are cached in the right locations.</p><p>CDNs must decide whether to evict content before its TTL expires in order to optimize storage for other assets when cache space nears full capacity. At Cloudflare, our eviction strategy prioritizes content based on its popularity, employing an algorithm known as "least recently used" or LRU. This means that even if the content’s TTL specifies that content should be cached for a long time, we may still need to evict it earlier if it's less frequently requested than other resources, to make room for more frequently accessed content.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4dzhq6o3KQPVXLIzMFVdJ9/90d489a541d8921ff9607f9c201c2c9c/Cache-Reserve-Response-Flow.png" />
            
            </figure><p>This approach can sometimes perplex users who wonder why a cache miss occurs unexpectedly. Without eviction, we'd be forced to store content in data centers farther from the requesting visitors, hindering asset performance and introducing inefficiencies into Cloudflare's network operations.</p><p>Some customers, however, possess large content libraries that may not all be requested very frequently but which they’d still like to shield from being served by their origin. In a traditional caching setup, these assets might be evicted as they become less popular and, when requested again, fetched from the origin, resulting in egress fees. Cache Reserve is the solution for scenarios like this one, allowing customers to deliver assets from Cloudflare’s network, rather than their origin server — avoiding any associated egress tax, and providing better performance.</p>
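<p>A rough sketch of how LRU eviction behaves (a toy model, not Cloudflare’s implementation; capacity here is measured in entries, whereas a real cache tracks bytes):</p>

```javascript
// Illustrative LRU cache: evicts the least recently used entry when full.
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map(); // Map preserves insertion order; the oldest key comes first.
  }
  get(key) {
    if (!this.map.has(key)) return undefined; // cache miss
    const value = this.map.get(key);
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value; // cache hit
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.capacity) {
      // Evict the least recently used entry, even if its TTL hasn't expired.
      this.map.delete(this.map.keys().next().value);
    }
    this.map.set(key, value);
  }
}

const cache = new LRUCache(2);
cache.set('/logo.png', 'bytes-a');
cache.set('/report.pdf', 'bytes-b');
cache.get('/logo.png');              // touch the logo: now most recently used
cache.set('/index.html', 'bytes-c'); // evicts /report.pdf, not /logo.png
console.log(cache.get('/report.pdf')); // undefined, i.e. a cache miss
```

<p>This is why a rarely requested asset can miss “unexpectedly”: it was simply the least recently used entry when space was needed.</p>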
    <div>
      <h3>Cache Reserve basics</h3>
      <a href="#cache-reserve-basics">
        
      </a>
    </div>
    <p>Cache Reserve combines several Cloudflare technologies, including <a href="https://developers.cloudflare.com/cache/how-to/tiered-cache/">tiered cache</a> and <a href="https://developers.cloudflare.com/r2/">R2</a> storage, to seamlessly provide organizations with a way to ensure their assets are never evicted from Cloudflare’s network, even if they are infrequently accessed by users. Once <a href="https://developers.cloudflare.com/cache/about/cache-reserve/#cache-reserve-asset-eligibility">admitted</a> to Cache Reserve, content can be stored for a much longer period of time — 30 days by <a href="https://developers.cloudflare.com/cache/about/cache-reserve/">default</a> — without being subjected to LRU eviction. If another request for the content arrives during that period, it can be extended for another 30-day period (and so on) or until the TTL signifies that we should no longer serve that content from cache. Cache Reserve serves as a safety net to backstop all cacheable content, so customers can sleep well at night without having to worry about unwanted cache eviction and origin egress fees.</p><p>Configuring Cache Reserve is simple and efficient: it takes just seconds to enable, and hit ratios increase dramatically soon after. By simply pressing a <a href="http://dash.cloudflare.com/caching/cache-reserve">single button</a> in the Cache Reserve section of Cloudflare’s dashboard, all <a href="https://developers.cloudflare.com/cache/advanced-configuration/cache-reserve/#cache-reserve-asset-eligibility">eligible content</a> will be written to Cache Reserve on a miss and retrieved before Cloudflare would otherwise ask the origin for the resource. 
For more information about what’s required to use Cache Reserve, please review the <a href="https://developers.cloudflare.com/cache/advanced-configuration/cache-reserve/">documentation</a>.</p><p>Customers are also seeing significant savings when using Cache Reserve, often paying only a fraction of what they would otherwise pay for egress from their hosting provider. As <a href="https://www.cloudflare.com/case-studies/docker/">Docker</a> put it,</p><blockquote><p>“The 2% cache hit ratio improvement enabled by Cache Reserve has eliminated roughly two-thirds of our S3 egress. The reduction in egress charges is almost an order of magnitude larger than the price we paid for Cache Reserve.”</p><p><b>Brett Inman</b>, Docker | Senior Manager of Engineering</p></blockquote>
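<p>The retention behavior described above works like a sliding window: each hit pushes the eviction deadline out by another retention period. A simplified model (illustrative only, not Cloudflare’s code, with time measured in days for readability):</p>

```javascript
// Simplified model of Cache Reserve retention: an asset stays stored as long
// as it was accessed within the retention window (30 days by default) and its
// TTL has not yet expired.
const RETENTION_DAYS = 30;

function shouldRetain(lastAccessDay, nowDay, ttlExpiryDay) {
  const withinWindow = nowDay - lastAccessDay < RETENTION_DAYS;
  const ttlStillValid = nowDay < ttlExpiryDay;
  return withinWindow && ttlStillValid;
}

console.log(shouldRetain(0, 29, 365));  // true: accessed 29 days ago, window still open
console.log(shouldRetain(0, 31, 365));  // false: window lapsed with no new hit
console.log(shouldRetain(25, 40, 365)); // true: a hit on day 25 reset the window
```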
    <div>
      <h3>What’s new with Cache Reserve?</h3>
      <a href="#whats-new-with-cache-reserve">
        
      </a>
    </div>
    <p>Since we’ve last <a href="/cache-reserve-open-beta/">blogged</a> about Cache Reserve we have made three important updates to the product that improve the quality of life for users.</p>
    <div>
      <h4>New analytics</h4>
      <a href="#new-analytics">
        
      </a>
    </div>
    <p>Previously, Cache Reserve analytics provided views of how much storage had been used by a particular website and estimates of the number of operations used in a particular time period. We’ve improved analytics to be more similar to traditional cache analytics, allowing customers to view storage and operations in a customized time series from the cache analytics dashboard.</p><p>Additionally, the updated Cache Reserve analytics will provide you an estimate of how much egress you’re saving by using the product.</p><p>In the coming months we will also provide greater visibility into the largest and most requested items being served from Cache Reserve.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5iCtq5lq4aebz2tPsNfn1g/032549ab5220306641ed6177f9572507/Screenshot-2023-10-08-at-7.45.35-PM.png" />
            
            </figure>
    <div>
      <h4>Cache Reserve delete storage</h4>
      <a href="#cache-reserve-delete-storage">
        
      </a>
    </div>
    <p>Cache Reserve users who want to change, remove, or stop using their Reserve altogether have asked for a simple way to wipe their storage without impacting their use of Cloudflare’s traditional edge cache. Previously, clearing Cache Reserve was achieved by purging content. This could be problematic because purging also wipes content cached in the traditional edge cache, which could lead to additional origin fetches and egress fees.</p><p>We’ve built in a new way for customers to completely remove their Cache Reserve storage with the push of a button, which can be found in the Cache Reserve <a href="http://dash.cloudflare.com/caching/cache-reserve">dashboard</a>. When performing this action you will need to wait until Cache Reserve is cleared before re-enabling. This period can differ depending on how much is stored in your Cache Reserve, but in general can take around 24 hours.  </p><p>The Cache Reserve delete button differs from purging. <b>Purge</b> still allows you to invalidate resources across <i>all</i> of Cloudflare’s caches, including both Cache Reserve and the edge cache, with a single request. The Cache Reserve delete button removes only the storage in the Reserve. Currently, this action can be performed for the entire Cache Reserve storage associated with a zone.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3wDOQHqMtcZNilIEwbhM6e/52d8fe60c0aec1ddc97cbe0d53b98ce6/Screenshot-2023-10-08-at-7.46.10-PM.png" />
            
            </figure>
    <div>
      <h4>Integration into Cache Rules</h4>
      <a href="#integration-into-cache-rules">
        
      </a>
    </div>
    <p>One of the most requested Cache Reserve features we heard from early adopters is the ability to specify what parts of their website should be eligible for storage in Cache Reserve. Previously, when a user enabled Cache Reserve, all of a website’s assets that were <a href="https://developers.cloudflare.com/cache/advanced-configuration/cache-reserve/#cache-reserve-asset-eligibility">eligible</a> for Cache Reserve could be stored in the Reserve. For egress-sensitive customers, this is the path we still recommend. However, for customers that want to customize what is eligible for Cache Reserve, you can now use <a href="https://dash.cloudflare.com/caching/cache-rules">Cache Rules</a> to specify assets that should be stored in Cache Reserve based on the usual Cache Rules fields (hostnames, paths, URLs, etc.) and also by using specific new rules configurations like the minimum size of a resource. For example, you can specify that only assets of at least 100 KB should be written to Cache Reserve. By using the new rules functionality, Cache Reserve customers can customize how their Reserve is built while still making full use of the edge cache, and saving even more money.</p>
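<p>As a sketch, such a rule could be expressed as a Rulesets Engine rule with a <code>set_cache_settings</code> action. The <code>action_parameters</code> field names below (<code>cache_reserve</code>, <code>minimum_file_size</code>) and the example expression are assumptions for illustration; consult the Cache Rules documentation for the authoritative schema:</p>

```javascript
// Hypothetical sketch of a Cache Reserve eligibility rule payload.
// Field names under action_parameters are assumptions, not verified API schema.
function buildCacheReserveRule(minBytes) {
  return {
    expression: 'http.request.uri.path matches "^/downloads/"', // illustrative match
    action: 'set_cache_settings',
    action_parameters: {
      cache: true,
      cache_reserve: {
        eligible: true,              // admit matching assets to Cache Reserve
        minimum_file_size: minBytes, // e.g. only store assets of at least 100 KB
      },
    },
  };
}

const rule = buildCacheReserveRule(100 * 1024);
console.log(JSON.stringify(rule, null, 2));
// The rule would then be deployed to the zone's cache-settings ruleset
// (the http_request_cache_settings phase) via the Rulesets API.
```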
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5FHCh7P9iTCDWWvnMqXqXr/bdda7a73e6bb9613fa5d564e5675d753/Screenshot-2023-10-08-at-4.14.48-PM--1-.png" />
            
            </figure>
    <div>
      <h3>Try out Cache Reserve today!</h3>
      <a href="#try-out-cache-reserve-today">
        
      </a>
    </div>
    <p>You can easily sign up for Cache Reserve in the Cloudflare Dashboard by navigating to the Cache section, clicking on <a href="https://dash.cloudflare.com/caching/cache-reserve">Cache Reserve</a>, and pushing enable storage sync. Try it out and let us know what you think!</p> ]]></content:encoded>
            <category><![CDATA[Cache Reserve]]></category>
            <category><![CDATA[General Availability]]></category>
            <category><![CDATA[Application Services]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <guid isPermaLink="false">3w0ytpRXFrTiKqDD7VRxHk</guid>
            <dc:creator>Alex Krivit</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cache Rules are now GA: precision control over every part of your cache]]></title>
            <link>https://blog.cloudflare.com/cache-rules-go-ga/</link>
            <pubDate>Tue, 24 Oct 2023 13:00:40 GMT</pubDate>
            <description><![CDATA[ Today, we're thrilled to share that Cache Rules, along with several other Rules products, are generally available (GA). But that’s not all — we're also introducing new configuration options for Cache Rules ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/23yqkja301zV4TfUtTrL0D/1619ec8778be0841f28ddd0fc07a03ba/Cache-Rules-GA-1.png" />
            
            </figure><p>One year ago we introduced Cache Rules, a new way to customize cache settings on Cloudflare. Cache Rules provide greater flexibility for how users cache content, offering precise controls, a user-friendly API, and seamless Terraform integrations. Since it was released in late September 2022, over 100,000 websites have used Cache Rules to fine-tune their cache settings.</p><p>Today, we're thrilled to announce that Cache Rules, along with several other <a href="https://developers.cloudflare.com/rules/">Rules products</a>, are <b>generally available (GA)</b>. But that’s not all — we're also introducing new configuration options for Cache Rules that provide even more options to customize how you cache on Cloudflare. These include functionality to define what resources are eligible for <a href="https://developers.cloudflare.com/cache/advanced-configuration/cache-reserve/">Cache Reserve</a>, what <a href="https://developers.cloudflare.com/support/troubleshooting/cloudflare-errors/troubleshooting-cloudflare-5xx-errors/#error-524-a-timeout-occurred">timeout values</a> should be respected when receiving data from your origin server, which <a href="https://developers.cloudflare.com/fundamentals/reference/network-ports/#network-ports-compatible-with-cloudflares-proxy">custom ports</a> we should use when we cache content, and whether we should bypass Cloudflare’s cache in the absence of a <a href="https://developers.cloudflare.com/cache/concepts/cache-control/#cache-control-directives">cache-control</a> header.</p><p>Cache Rules give users full control and the ability to tailor their content delivery strategy for almost any use case, without needing to write code. As Cache Rules go GA, we are incredibly excited to see how fast customers can achieve their perfect cache strategy.</p>
    <div>
      <h3>History of Customizing Cache on Cloudflare</h3>
      <a href="#history-of-customizing-cache-on-cloudflare">
        
      </a>
    </div>
    <p>The journey of <a href="https://www.cloudflare.com/learning/cdn/what-is-caching/">cache</a> customization on Cloudflare began more than a decade ago, right at the beginning of the company. From the outset, one of the most frequent requests from our customers involved simplifying their configurations. Customers wanted to easily implement precise cache policies, apply robust security measures, manipulate headers, set up redirects, and more for any page on their website. Using Cloudflare to set these controls was especially crucial for customers utilizing origin servers that only provided convoluted configuration options to add headers or policies to responses, which could later be applied downstream by <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDNs</a>.</p><p>In response to this demand, we introduced Page Rules, a product that has since witnessed remarkable growth in both its popularity and functionality. Page Rules became the preferred choice for customers seeking granular control over how Cloudflare caches their content. Currently, there are over 5 million active cache-related Page Rules, assisting websites in tailoring their content delivery strategies.</p><p>However, behind the scenes, Page Rules encountered a scalability issue.</p><p>Whenever a Page Rule is encountered by Cloudflare we must transform all rule conditions for a customer into a single regex pattern. This pattern is then applied to requests for the website to achieve the desired cache configuration. When thinking about how all the regexes from all customers are then compared against tens of millions of requests per second, spanning across more than 300 data centers worldwide, it’s easy to see that the computational demands for applying Page Rules can be immense. This pressure is directly tied to the number of rules we could offer our users. 
For example, Page Rules would only allow for 125 rules to be deployed on a given website.</p><p>To address this challenge, we rebuilt all the Page Rule functionality on the new <a href="https://developers.cloudflare.com/ruleset-engine/">Rulesets Engine</a>. Not only do ruleset engine-based products give users more rules to play with, they also offer greater flexibility on when these rules should run. Part of the magic of the Rulesets engine is that rather than combine all of a page's rules into a single regular expression, rule logic can be evaluated on a conditional basis. For example, if <a href="https://developers.cloudflare.com/dns/manage-dns-records/how-to/create-subdomain/">subdomain</a> A and B have different caching policies, a request from subdomain A can be evaluated using regex logic specific to A (while omitting any logic that applies to B). This yields meaningful benefits to performance, and reduces the computational demands of applying Page Rules across Cloudflare's network.</p><p>Over the past year, Cache Rules, along with Origin Rules, Configuration Rules, and Single Redirect Rules, have been in beta. Thanks to the invaluable support of our early adopters, we have successfully fine-tuned our product, reaching a stage where it is ready to transition from beta to GA. These products can now accomplish everything that Page Rules could and more. This also marks the beginning of the <a href="https://en.wikipedia.org/wiki/End-of-life_product">EOL</a> process for Page Rules. In the coming months we will announce timelines and information regarding how customers will replace their Page Rules with specific Rules products. We will automate this as much as possible and provide simple steps to ensure a smooth transition away from Page Rules for everyone.</p>
    <div>
      <h3>How to use Cache Rules and What’s New</h3>
      <a href="#how-to-use-cache-rules-and-whats-new">
        
      </a>
    </div>
    <p>Those who have used Cache Rules know that they are intuitive and work similarly to our other <a href="https://developers.cloudflare.com/ruleset-engine/">ruleset engine</a> products. User-defined criteria like URLs or request headers are evaluated, and if they match a specified value, the corresponding Cloudflare caching configuration is applied. Each Cache Rule depends on fields, operators, and values. For all the available options, see our Cache Rules <a href="https://developers.cloudflare.com/cache/about/cache-rules/">documentation</a>.</p><p>Below are two examples of how to deploy different strategies to customize your cache. These examples only show the tip of the iceberg of what’s possible with Cache Rules, so we encourage you to try them out and let us know what you think.</p>
    <div>
      <h4>Example: Cached content is updated at a regular cadence</h4>
      <a href="#example-cached-content-is-updated-at-a-regular-cadence">
        
      </a>
    </div>
    <p>As an example, let’s say that Acme Corp wants to update their caching strategy. They want to customize their cache to take advantage of certain request headers and use the presence of those request headers to be the criteria that decides when to apply different cache rules. The first thing they’d need to decide is what information should be used to trigger the specific rule. This is defined in the <a href="https://developers.cloudflare.com/ruleset-engine/rules-language/expressions/">expression</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2kfN0DJOJ9myEPesXCu6fw/7218e39e9ffbaa96d6ae0e96e6a4862d/Screenshot-2023-10-18-at-6.27.59-PM.png" />
            
            </figure><p>Once the triggering criteria are defined, Acme Corp should next determine how they want to customize their cache.</p>
    <div>
      <h4>Content changing quickly</h4>
      <a href="#content-changing-quickly">
        
      </a>
    </div>
    <p>The most common cache strategy is to update the <a href="https://developers.cloudflare.com/cache/how-to/cache-rules/#create-cache-rules-in-the-dashboard">Edge Cache TTL</a>. If Acme Corp thinks a particular piece of content on their website might change quickly, they can alter the time Cloudflare should consider a resource eligible to be served from cache to be shorter. This way Cloudflare would go back to the origin more frequently to <a href="https://developers.cloudflare.com/cache/concepts/cache-control/#:~:text=If%20the%20content%20is%20stale%20in%20Cloudflare%E2%80%99s%20cache%2C%20Cloudflare%20attempts%20to%20revalidate%20the%20content%20with%20the%20origin%20before%20serving%20the%20response%20to%20the%20client.">revalidate and update the content</a>. The Edge Cache TTL section is also where Acme Corp can define a resource’s TTL based on the status code Cloudflare gets back from their origin, and what Cloudflare should cache if there aren’t any cache-control instructions sent from Acme’s origin server.</p>
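<p>Conceptually, status-code-based TTLs work like the sketch below (hypothetical values chosen for illustration, not a recommendation or Cloudflare’s defaults):</p>

```javascript
// Illustrative TTL selection by origin status code, mirroring the
// status-code-based Edge Cache TTL settings described above.
function edgeCacheTtlSeconds(statusCode) {
  if (statusCode === 200) return 3600; // cache successful responses for an hour
  if (statusCode === 404) return 60;   // cache "not found" only briefly
  if (statusCode >= 500) return 0;     // don't cache origin errors at all
  return 300;                          // default for everything else
}

console.log(edgeCacheTtlSeconds(200)); // 3600
console.log(edgeCacheTtlSeconds(503)); // 0
```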
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2swqQzRX7v3UropZSNRbjH/86bac883296e8c5502e6b5a08e5b5df9/Screenshot-2023-10-08-at-4.14.17-PM.png" />
            
            </figure>
    <div>
      <h4>Content changing slowly</h4>
      <a href="#content-changing-slowly">
        
      </a>
    </div>
    <p>On the other hand, if Acme Corp had a lot of content that did not change very frequently (like a favicon or logo) and they preferred to serve that from Cloudflare’s cache instead of their origin, they can define which content should be eligible for <a href="https://developers.cloudflare.com/cache/advanced-configuration/cache-reserve/">Cache Reserve</a> with a new Cache Rule. Cache Reserve reduces egress fees by storing assets persistently in Cloudflare's cache for an extended period of time.</p><p>Traditionally when a user would enable Cache Reserve, their entire zone would be eligible to be written to Cache Reserve. For customers that care about saving origin <a href="https://www.cloudflare.com/learning/cloud/what-are-data-egress-fees/">egress fees</a> on all resources on their website, this is still the best path forward. But for customers that want to have additional control over precisely what assets should be part of their Cache Reserve or even what size of assets should be eligible, the Cache Reserve Eligibility Rule provides additional knobs so that customers can precisely increase their cache hits and reduce origin egress in a customized manner. Note that this rule requires a Cache Reserve subscription.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7b22znoAtsYBjMbufZhP9q/04b5ab28becee5d25d87f558efc1cb44/Screenshot-2023-10-08-at-4.14.48-PM.png" />
            
            </figure>
    <div>
      <h3>Example: Origin is slow</h3>
      <a href="#example-origin-is-slow">
        
      </a>
    </div>
    <p>Let’s consider a hypothetical example. Recently, Acme Corp has been seeing an increase in errors in their Cloudflare logs. These errors are related to a new report that Acme is providing its users based on Acme’s proprietary data. This report requires that their origin access several databases, perform some calculations and generate the report based on these calculations. The origin generating this report needs to wait to respond until all of this background work is completed. Acme’s report is a success, generating an influx of traffic from visitors wanting to see it. But their origin is struggling to keep up. A lot of the errors they are seeing are 524s, which indicate that Cloudflare did not receive an origin response before a timeout occurred.</p><p>Acme has plans to improve this by scaling their origin infrastructure but it’s taking a long time to deploy. In the meantime, they can turn to Cache Rules to configure a longer timeout. Historically, the timeout between two successive origin reads was 100 seconds: if the origin went longer than 100 seconds without successfully sending a response, the result could be a 524 error. By using a Cache Rule to extend this timeout, Acme Corp can rely more heavily on Cloudflare's cache.</p>
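<p>The 524 behavior can be modeled as a check on the gap between successive reads from the origin (a simplified illustration of the concept, not Cloudflare’s implementation):</p>

```javascript
// Simplified model of the proxy read timeout: a 524 occurs when the gap
// between two successive origin reads exceeds the configured timeout.
function wouldTimeout(readGapsSeconds, timeoutSeconds) {
  return readGapsSeconds.some((gap) => gap > timeoutSeconds);
}

const gaps = [10, 40, 120]; // the origin went quiet for 120s while computing a report
console.log(wouldTimeout(gaps, 100)); // true: the historical 100s timeout yields a 524
console.log(wouldTimeout(gaps, 180)); // false: a longer Cache Rule timeout avoids it
```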
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1XqipB1JsifcQLUyGn40BB/5bc7e324c62ab607e2c30c665849f67b/Screenshot-2023-10-18-at-6.21.32-PM.png" />
            
            </figure><p>The above cache strategies focus on how often a resource is changed on an origin, and the origin’s performance. But there are numerous other rules that allow for other strategies, like <a href="https://developers.cloudflare.com/cache/how-to/cache-keys/">custom cache keys</a>, which allow customers to determine how their cache should be defined on Cloudflare; respecting <a href="https://developers.cloudflare.com/cache/reference/etag-headers/">strong ETags</a>, which help customers determine when Cloudflare should revalidate particular cached assets; and custom ports, which allow customers to define <a href="https://developers.cloudflare.com/fundamentals/reference/network-ports/#network-ports-compatible-with-cloudflares-proxy">non-standard ports</a> that Cloudflare should use when making caching decisions about content.</p><p>The full list of Cache Rules can be found <a href="https://developers.cloudflare.com/cache/how-to/cache-rules/">here</a>.</p>
    <div>
      <h3>Try Cache Rules today!</h3>
      <a href="#try-cache-rules-today">
        
      </a>
    </div>
    <p>We will continue to build and release additional rules that provide powerful, easy to enable control for anyone using Cloudflare’s cache. If you have feature requests for additional Cache Rules, please let us know in the <a href="https://community.cloudflare.com/">Cloudflare Community</a>.</p><p>Go to the <a href="https://dash.cloudflare.com/caching/cache-rules">dashboard</a> and try Cache Rules out today!</p> ]]></content:encoded>
            <category><![CDATA[General Availability]]></category>
            <category><![CDATA[Cache Rules]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Application Services]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Cache]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <guid isPermaLink="false">4GOWR9PBxeYrH9OGO3zxjZ</guid>
            <dc:creator>Alex Krivit</dc:creator>
        </item>
        <item>
            <title><![CDATA[Speeding up your (WordPress) website is a few clicks away]]></title>
            <link>https://blog.cloudflare.com/speeding-up-your-website-in-a-few-clicks/</link>
            <pubDate>Thu, 22 Jun 2023 13:00:58 GMT</pubDate>
            <description><![CDATA[ In this blog, we will explain where the opportunities exist to improve website performance, how to check if a specific site can improve performance, and provide a small JavaScript snippet which can be used with Cloudflare Workers to do this optimization for you ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/h8hH7qWrYChGMYyrktaXj/163bb1bb8d730c069de5865285db1697/image1-31.png" />
            
</figure><p>Every day, website visitors spend far too much time waiting for websites to load in their browsers. This waiting is partially due to browsers not knowing which resources are critically important, and therefore being unable to prioritize them ahead of less-critical resources. In this blog we will outline how millions of websites across the Internet can improve their performance by specifying which critical content loads first with Cloudflare Workers, and what Cloudflare will do to make this easier by default in the future.</p><p>Popular Content Management Systems (CMS) like WordPress have made attempts to influence website resource priority, for example through techniques like <a href="https://make.wordpress.org/core/2020/07/14/lazy-loading-images-in-5-5/">lazy loading images</a>. When done correctly, the results are magical. Performance is optimized between the CMS and browser without needing to implement any changes or code new prioritization strategies. However, we’ve seen that these default priorities leave significant room for improvement.</p><p>In this blog, co-authored with Google’s Patrick Meenan, we will explain where the opportunities exist to improve website performance, how to check if a specific site can improve performance, and provide a small JavaScript snippet which can be used with Cloudflare Workers to do this optimization for you.</p>
    <div>
      <h3>What happens when a browser receives the response?</h3>
      <a href="#what-happens-when-a-browser-receives-the-response">
        
      </a>
    </div>
<p>Before we dive into where the opportunities are to <a href="https://www.cloudflare.com/learning/performance/speed-up-a-website/">improve website performance</a>, let’s take a step back to understand how browsers load website assets by default.</p><p>After the browser sends an <a href="https://www.cloudflare.com/learning/ddos/glossary/hypertext-transfer-protocol-http/">HTTP request</a> to a server, it receives an HTTP response containing information like status codes, headers, and the requested content. The browser carefully analyzes the response's status code and response headers to ensure proper handling of the content.</p><p>Next, the browser processes the content itself. For HTML responses, the browser extracts important information from the &lt;head&gt; section of the HTML, such as the page title, stylesheets, and scripts. Once this information is parsed, the browser moves on to the response &lt;body&gt;, which has the actual page content. During this stage, the browser begins to present the webpage to the visitor.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/QXSvFomT1av14RMVga2Qf/e379b539ca63b484d47e9f14f128cada/image2-28.png" />
            
</figure><p>If the response includes additional third-party resources like CSS, JavaScript, or other content, the browser may need to fetch and integrate them into the webpage. Typically, browsers like Google Chrome delay loading images until after the resources in the HTML &lt;head&gt; have loaded. This is also known as “<a href="https://developer.chrome.com/en/docs/lighthouse/performance/render-blocking-resources/">blocking</a>” the render of the webpage. However, developers can override this blocking behavior using <a href="https://web.dev/fetch-priority/">fetch priority</a> or other methods to boost other content’s priority in the browser. By adjusting an important image's fetch priority, it can be loaded earlier, which can lead to significant improvements in crucial performance metrics like LCP (<a href="https://web.dev/optimize-lcp/#:~:text=LCP%20measures%20the%20time%20from%20when%20the%20user%20initiates%20loading%20the%20page%20until%20the%20largest%20image">Largest Contentful Paint</a>).</p><p>Images are so central to web pages that they have become an essential element in measuring website performance from <a href="https://www.cloudflare.com/learning/performance/what-are-core-web-vitals/">Core Web Vitals</a>. LCP measures the time it takes for the largest visible element, often an image, to be fully rendered on the screen. Optimizing the loading of critical images (like <a href="https://web.dev/optimize-lcp/#:~:text=LCP%20measures%20the%20time%20from%20when%20the%20user%20initiates%20loading%20the%20page%20until%20the%20largest%20image">LCP images</a>) can greatly enhance performance, improving the overall user experience.</p><p>But here's the challenge: a browser may not know which images are the most important for the visitor experience (like the LCP image) until rendering begins. 
If the developer can identify the LCP image or critical elements before the page reaches the browser, their priority can be increased at the server to boost website performance instead of waiting for the browser to discover the critical images naturally.</p><p>In our Smart Hints <a href="/smart-hints">blog</a>, we describe how Cloudflare will soon be able to automatically prioritize content on behalf of website developers, but what happens if there’s a need to optimize the priority of the images right now? How do you know if a website is in a suboptimal state, and what can you do to improve it?</p><p>Using Cloudflare, developers should be able to improve image performance with heuristics that identify likely-important images before the browser parses them, so these images can have increased priority and be loaded sooner.</p>
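For example, a developer who already knows which image is the LCP element can mark it up directly with the `fetchpriority` attribute (the filenames here are placeholders):

```html
<!-- Boost the likely LCP image and make sure it is not lazy-loaded -->
<img src="hero.jpg" fetchpriority="high" width="1200" height="600" alt="Hero image">

<!-- Leave below-the-fold images lazy-loaded at default priority -->
<img src="gallery-thumb.jpg" loading="lazy" width="320" height="180" alt="Gallery thumbnail">
```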
    <div>
      <h3>Identifying Image Priority opportunities</h3>
      <a href="#identifying-image-priority-opportunities">
        
      </a>
    </div>
<p>Just increasing the fetch priority of all images won't help if they are lazy-loaded or not critical/LCP images. <a href="https://www.cloudflare.com/learning/performance/what-is-lazy-loading/">Lazy-loading</a> is a method that developers use to improve the initial load of a webpage that includes numerous out-of-view elements. For example, on Instagram, when you continually scroll down the application to see more images, it only makes sense to load those images when the user arrives at them; otherwise, the page load would be needlessly delayed by the browser eagerly loading these out-of-view images. Instead, the highest priority should be given to the LCP image in the viewport to improve performance.</p><p>So developers are left in a situation where they need to know which images are on users' screens/viewports to increase their priority, and which are off their screens to lazy-load them.</p><p>Recently, we’ve seen attempts to influence image priority on behalf of developers. For example, by <a href="https://make.wordpress.org/core/2020/07/14/lazy-loading-images-in-5-5/">default</a>, in WordPress 5.5 all images with an <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/img">IMG tag</a> and defined <a href="https://en.wikipedia.org/wiki/Aspect_ratio_(image)">aspect ratios</a> were directed to be lazy-loaded. 
While there are plugins and other methods WordPress developers can use to boost the priority of LCP images, lazy-loading all images in a default manner and not knowing which are LCP images can cause artificial performance delays in website performance (they’re <a href="https://make.wordpress.org/core/2023/05/02/proposal-for-enhancing-lcp-image-performance-with-fetchpriority/">working on this</a> though, and have partially resolved this for <a href="https://make.wordpress.org/core/2023/04/05/wordpress-6-2-performance-improvements-for-all-themes/">block themes</a>).</p><p><i>So how do we identify the LCP image and other critical assets before they get to the browser?</i></p><p>To evaluate the opportunity to improve image performance, we turned to the <a href="https://httparchive.org/">HTTP Archive</a>. Out of the approximately 22 million desktop pages tested in February 2023, 46% had an <a href="https://web.dev/optimize-lcp/">LCP element</a> with an <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/img">IMG tag</a>. Meaning that for page load metrics, LCP had an image included about half the time. Though, among these desktop pages, 8.5 million had the image in the <a href="https://developer.mozilla.org/en-US/docs/Web/HTML">static HTML</a> delivered with the page, indicating a <b>total potential improvement opportunity of approximately 39% of the desktop pages</b> within the dataset.</p><p>In the case of mobile pages, out of the ~28.5 million tested, 40% had an LCP element as an IMG tag. Among these mobile pages, 10.3 million had the image in the static HTML delivered with the page, suggesting a potential <b>improvement opportunity in around 36% of the mobile pages</b> within the dataset.</p><p>However, as previously discussed, prioritizing an image won't be effective if the image is lazy-loaded because the directives are contradictory. 
In the dataset, approximately 1.8 million LCP desktop images and 2.4 million LCP mobile images were lazy-loaded.</p><p>Therefore, across the Internet, the opportunity to improve image performance covers about 30% of pages that have an LCP image in the original HTML markup that wasn’t lazy-loaded, and with a more advanced Cloudflare Worker, the additional 9% of lazy-loaded LCP images can also be improved by removing the lazy-load attribute.</p><p>If you’d like to determine which element on your website serves as the <a href="https://web.dev/lcp/#what-elements-are-considered">LCP element</a> so you can increase the priority or remove any lazy-loading, you can use browser <a href="https://developer.chrome.com/docs/devtools/">developer tools</a>, or <a href="https://www.cloudflare.com/learning/performance/test-the-speed-of-a-website/">speed tests</a> like <a href="https://www.webpagetest.org/">Webpagetest</a> or <a href="https://dash.cloudflare.com?to=/:account/:zone/speed/test">Cloudflare Observatory</a>.</p><p>39% of desktop pages seems like a lot of opportunity to improve image performance. So the next question is: how can Cloudflare identify LCP images across our network and automatically prioritize them?</p>
    <div>
      <h3>Image Index</h3>
      <a href="#image-index">
        
      </a>
    </div>
<p>We thought that how soon the LCP image showed up in the HTML would serve as a useful indicator. So we analyzed the HTTP Archive dataset to see the cumulative percentage of LCP images discovered at each position in the HTML, including lazy-loaded images.</p><p>We found that approximately 25% of the pages had the LCP image as the first image in the HTML (around 10% of all pages). Another 25% had the LCP image as the second image. WordPress seems to have arrived at a similar conclusion and recently <a href="https://make.wordpress.org/core/2023/04/05/wordpress-6-2-performance-improvements-for-all-themes/">released</a> a change that removes the default lazy-load attribute from the first image on block themes, but there are opportunities to go further.</p><p>Our analysis revealed that implementing a straightforward rule like "do not lazy-load the first four images," whether through the browser, a content management system (CMS), or a Cloudflare Worker, could address approximately 75% of the issue of lazy-loading LCP images (example Worker below).</p>
    <div>
      <h3>Ignoring small images</h3>
      <a href="#ignoring-small-images">
        
      </a>
    </div>
<p>In trying to find other ways to identify likely LCP images, we next turned to the size of the image. To increase the likelihood of getting the LCP image early in the HTML, we looked into ignoring “small” images, as they are unlikely to be big enough to be an LCP element. We explored several sizes, and 10,000 pixels (less than 100x100) was a pretty reliable threshold that didn’t skip many LCP images and avoided a good chunk of the non-LCP images.</p><p>By ignoring small images (&lt;10,000px), we found that the first image became the LCP image in approximately 30-34% of cases. Adding the second image increased this percentage to 56-60% of pages.</p><p>Therefore, to improve image priority, a potential approach could involve assigning a higher priority to the first four "not-small" images.</p>
    <div>
      <h3>Chrome 114 Image Prioritization Experiment</h3>
      <a href="#chrome-114-image-prioritization-experiment">
        
      </a>
    </div>
<p>An experiment running in Chrome 114 does exactly what we described above. Within the browser there are a few different prioritization knobs to play with that aren’t web-exposed, so we have the opportunity to assign a “medium” priority to images that we want to boost automatically (directly controlling priority with “fetch priority” lets you set high or low). This will let us move the images ahead of other images, async scripts, and parser-blocking scripts late in the body, but still keep the boosted image priority below any high-priority requests, particularly dynamically-injected blocking scripts.</p><p>We are experimenting with boosting the priority of varying numbers of images (2, 5 and 10) and with allowing one of those medium-priority images to load at a time during Chrome’s “tight” mode (when it is loading the render-blocking resources in the head) to increase the likelihood that the LCP image will be available when the first paint is done.</p><p>The data is still coming in and no “ship” decisions have been made yet, but the early results are very promising, improving the LCP time across the entire web for all arms of the experiment (not by massive amounts, but moving the metrics of the whole web is notoriously difficult).</p>
    <div>
      <h3>How to use Cloudflare Workers to boost performance</h3>
      <a href="#how-to-use-cloudflare-workers-to-boost-performance">
        
      </a>
    </div>
<p>Now that we’ve seen that there is a large opportunity across the Internet for helping prioritize images for performance, and how to identify images on individual pages that are likely LCP images, the question becomes: what would the results be of implementing a network-wide rule that boosts image priority based on this study?</p><p>We built a test worker and deployed it on some WordPress test sites with our friends at <a href="https://rocket.net/">Rocket.net</a>, a WordPress hosting platform focused on performance. This worker boosts the priority of the first four images while removing the lazy-load attribute, if present. When deployed, we saw good performance results and the expected image prioritization.</p>
<pre><code>export default {
  async fetch(request) {
    const response = await fetch(request);

    // Only transform HTML responses; pass everything else through untouched
    const contentType = response.headers.get('Content-Type');
    if (!contentType || !contentType.includes('text/html')) {
      return response;
    }

    return transformResponse(response);
  },
};

function transformResponse(response) {
  // Create an HTMLRewriter instance and define the image transformation logic
  const rewriter = new HTMLRewriter()
    .on('img', new ImageElementHandler());

  // HTMLRewriter rewrites the body as it streams, so the transformed
  // response starts flowing to the browser without buffering the full page
  return rewriter.transform(response);
}
 
class ImageElementHandler {
  constructor() {
    this.imageCount = 0;
    this.processedImages = new Set();
  }
 
  element(element) {
    const imgSrc = element.getAttribute('src');
 
    // Boost the first four unique, not-small images and remove lazy-loading
    if (imgSrc &amp;&amp; this.imageCount &lt; 4 &amp;&amp; !this.processedImages.has(imgSrc) &amp;&amp; !isImageSmall(element)) {
      element.removeAttribute('loading');
      element.setAttribute('fetchpriority', 'high');
      this.processedImages.add(imgSrc);
      this.imageCount++;
    }
  }
}
 
function isImageSmall(element) {
  // Check if the element has width and height attributes
  const width = element.getAttribute('width');
  const height = element.getAttribute('height');
 
  // If width or height is 0, or width * height &lt; 10000, consider the image as small
  if ((width &amp;&amp; parseInt(width, 10) === 0) || (height &amp;&amp; parseInt(height, 10) === 0)) {
    return true;
  }
 
  if (width &amp;&amp; height) {
    const area = parseInt(width, 10) * parseInt(height, 10);
    if (area &lt; 10000) {
      return true;
    }
  }
 
  return false;
}</code></pre>
            <p>When testing the Worker, we saw that default image priority was boosted into “high” for the first four images and the fifth image remained “low.” This resulted in an LCP range of “<a href="https://web.dev/lcp/#:~:text=first%20started%20loading.-,What%20is%20a%20good%20LCP%20score%3F,across%20mobile%20and%20desktop%20devices.">good</a>” from a speed test. While this initial test is not a dispositive indicator that the Worker will boost performance in every situation, the results are promising and we look forward to continuing to experiment with this idea.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4jHIrp0orKEbGAppkSWVXq/1d0b5704b4ae310010e25a99599dfa49/image3-21.png" />
            
            </figure><p>While we’ve experimented with WordPress sites to illustrate the issues and potential performance benefits, this issue is present across the Internet.</p><p>Website owners can help us experiment with the Worker above to improve the priority of images on their websites or edit it to be more specific by targeting likely LCP elements. Cloudflare will continue experimenting using a very similar process to understand how to safely implement a network-wide rule to ensure that images are correctly prioritized across the Internet and performance is boosted without the need to configure a specific Worker.</p>
    <div>
      <h3>Automatic Platform Optimization</h3>
      <a href="#automatic-platform-optimization">
        
      </a>
    </div>
<p>Cloudflare’s <a href="https://developers.cloudflare.com/automatic-platform-optimization/">Automatic Platform Optimization</a> (APO) is a plugin for WordPress which allows Cloudflare to deliver your entire WordPress site from our network, ensuring consistent, fast performance for visitors. By serving cached sites, APO can improve performance metrics. APO does not currently have a way to prioritize images over other assets to improve browser render metrics or dynamically rewrite HTML, techniques we’ve discussed in this post. Although this presents a potential opportunity for future development, it requires thorough testing to ensure safe and reliable support.</p><p>In the future we’ll look to include the techniques discussed today as part of APO; however, in the meantime, we recommend using <a href="/snippets-announcement/">Snippets</a> (and <a href="/performance-experiments-with-cloudflare/">Experiments</a>) to test with the code example above to see the performance impact on your website.</p>
    <div>
      <h3>Get in touch!</h3>
      <a href="#get-in-touch">
        
      </a>
    </div>
    <p>If you are interested in using the JavaScript above, we recommended testing with <a href="https://workers.cloudflare.com/">Workers</a> or using <a href="/snippets-announcement/">Cloudflare Snippets</a>. We’d love to hear from you on what your results were. Get in touch via social media and share your experiences.</p> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Automatic Platform Optimization]]></category>
            <category><![CDATA[WordPress]]></category>
            <guid isPermaLink="false">5VIbkWZzUAIMJDqTVlKR8i</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Patrick Meenan (Guest Author)</dc:creator>
        </item>
        <item>
            <title><![CDATA[It's never been easier to migrate thanks to Cloudflare's new Migration Hub]]></title>
            <link>https://blog.cloudflare.com/turpentine-v2-migration-program/</link>
            <pubDate>Wed, 21 Jun 2023 13:00:21 GMT</pubDate>
            <description><![CDATA[ Today, we are thrilled to relaunch Turpentine, and introduce Cloudflare's new Migration Hub. The Migration Hub serves as a one-stop-shop for all migration needs, featuring brand-new migration guides that bring transparency and simplicity to the process ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ULoJnoexuvfEnYee3H6gB/f3792dffc4c66705f300de27c0d404d4/image1-21.png" />
            
</figure><p>We understand the pain points associated with CDN migrations. That's why in late 2021 we introduced <a href="/announcing-turpentine/">Turpentine</a>, a project to translate legacy Varnish Configuration Language (VCL) into Cloudflare Workers with the push of a button. After nearly two years of testing and user feedback, we’ve tailored the migration process for different user groups.</p><p>Today, we are thrilled to relaunch Turpentine and introduce Cloudflare's new <a href="https://www.cloudflare.com/migration-hub">Migration Hub</a>. The Migration Hub serves as a one-stop shop for all migration needs, featuring brand-new migration guides that bring transparency and simplicity to the process.</p><p>We also know that a large number of customers aren't comfortable doing migrations themselves. Years of built-up business logic make unpacking and translating CDN configurations between different vendors difficult and lock businesses into subpar products and services. To help these customers, we have established a Professional Services group to ensure smooth migrations for customers transitioning to Cloudflare’s first-class products. Going forward, we plan to continue to invest resources in Turpentine to ensure that moving to any part of Cloudflare is easy and you have the help you need.</p>
    <div>
      <h3>Why choose Cloudflare?</h3>
      <a href="#why-choose-cloudflare">
        
      </a>
    </div>
<p>Cloudflare has gained immense popularity among businesses seeking to improve website performance, security, and reliability. The demand for Cloudflare's CDN services has skyrocketed, with an ever-increasing number of companies wanting to use our services to help protect their web properties. It became evident that a more streamlined approach was needed to empower customers who wanted to self-guide through the onboarding process.</p><p>That’s why we’ve shipped guides to help bring transparency to the migration process, compare Cloudflare's Rules or Workers to VCL or XML configurations, and provide mappings of different products between vendors. This resource serves as a repository of information and step-by-step guidance for those seeking to move to Cloudflare. These guides are designed to empower customers to take control of their onboarding journey by providing them with the tools and resources they need to understand how to successfully implement Cloudflare's first-class products without needing to talk to anyone.</p><p>As new features and enhancements are introduced to Cloudflare, the landing page will be updated to reflect these changes.</p><p>However, undertaking the onboarding process independently can be daunting for some businesses. We understand that every organization is unique, with specific requirements and challenges. To address this concern, Cloudflare has established a dedicated Professional Services team. This team of experts works closely with customers, taking the time to understand their environments, assess their needs, and provide tailored guidance and support throughout the migration process. With the help of the Professional Services team, businesses can transition to Cloudflare guided by an experienced team, ensuring a timely, smooth, and successful migration. 
Using the Migration Hub, you can get in contact with the Professional Services team to help your migration journey.</p><p>Whether you prefer self-guided exploration or expert guidance, the Cloudflare Migration Hub has everything you need to make your migration journey a success.</p>
    <div>
      <h3>Self-serve guides</h3>
      <a href="#self-serve-guides">
        
      </a>
    </div>
    <p>Our commitment to transparency and empowering our customers led us to create comprehensive public-facing guides that provide valuable insights into how CDN products compare and overlap. With these guides, you can gain a clear understanding of the features and capabilities offered by Cloudflare, and how they map between CDN offerings you might be more familiar with.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/14YS46rccPU0Q32IafWrkT/d1a04090a72d131170a126778696ef87/image2-14.png" />
            
</figure><p>Example of mapping Fastly products to Cloudflare</p><p>The migration guides include product maps that show how you can <a href="https://www.cloudflare.com/cloudflare-vs-akamai/">match Cloudflare features to Akamai</a> or Fastly features and how to configure them. Using this information, migration should just be a matter of matching up rules and implementing them, instead of translating feature names between vendors or fiddling with ChatGPT prompts to correctly (or incorrectly!) translate code from one vendor to the other. There are also numerous examples of how certain configurations have been accomplished, with code samples that help customers understand their current configuration and translate it into Cloudflare products easily. Check them out <a href="https://www.cloudflare.com/migration-hub/">here</a>.</p><p>Not only that, but Cloudflare’s commitment to providing numerous free tools across our network means anyone can sign up and get access to much of our platform without needing to talk to anyone. We believe in giving you the tools and knowledge you need to navigate the migration and testing process independently, while knowing that our support is just a click away whenever you need it.</p>
    <div>
      <h3>Let us do it for you with Professional Services</h3>
      <a href="#let-us-do-it-for-you-with-professional-services">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Ru6CtXng70SQY47jptHh5/02f6eafa8695b6e33c01a06f64aefd28/image3-12.png" />
            
</figure><p>We're also incredibly excited to introduce our dedicated team of migration experts, known as Professional Services, who are here to assist you throughout the entire process. The Professional Services team will work closely with you, offering their expertise and guiding you through each step to ensure a seamless transition onto Cloudflare’s products.</p><p>Too often, we meet with customers who have been intimidated by the complexity of their current CDN vendor. They had a third party set it up for them, and they've experienced the nervousness of trying to change things without knowing what impacts it could have downstream. This is compounded by different CDNs using different terminology for essentially the same concepts.</p><p>Professional Services is here to help guide your onboarding experience and cut through that uncertainty.</p><p>From providing in-depth knowledge about the migration process and tooling to addressing any specific challenges you may encounter, our Professional Services team is committed to making your migration experience as smooth and efficient as possible. With Cloudflare's Professional Services, you can confidently embark on your migration journey, knowing that our experts will handle the complexities while empowering you to drive the migration process forward.</p>
    <div>
      <h3>Success Stories</h3>
      <a href="#success-stories">
        
      </a>
    </div>
    <p>By leveraging Cloudflare's migration solutions, numerous businesses have achieved remarkable results, including improved performance, enhanced security, and streamlined pricing. These success stories serve as a testament to the effectiveness and reliability of Cloudflare's migration offerings.</p>
    <div>
      <h3>Improve cost and performance by migrating to Cloudflare</h3>
      <a href="#improve-cost-and-performance-by-migrating-to-cloudflare">
        
      </a>
    </div>
<p><i>A mobile communications leader successfully migrated its public website, </i><b><i>after 20 years with Akamai</i></b><i>, to Cloudflare for a better digital experience plus &gt;</i><b><i>20% cost savings</i></b><i>.</i></p><p>The company’s decision to decentralize purchasing of CDN services illuminated the high cost of using Akamai for its public-facing websites.</p><p>A short proof-of-concept of Cloudflare’s Application Performance suite resulted in measurable cost savings and performance improvements. It was also determined that the flexibility to integrate additional Cloudflare tools, like Workers for serverless compute, would enable the organization to scale further when ready.</p>
    <div>
      <h3>Avoid reliability concerns by migrating to Cloudflare</h3>
      <a href="#avoid-reliability-concerns-by-migrating-to-cloudflare">
        
      </a>
    </div>
<p>A UK sporting giant with a devoted international fan community was deeply concerned about the spiky traffic associated with game days, when matches often saw 10x the normal website traffic. Unfortunately, incumbent vendors weren’t up to the challenge of providing the performance and uptime reliability fans expected during these game-day traffic spikes.</p><p>After migrating to Cloudflare, the results spoke for themselves. In one 24-hour match day, the site received over 11 million requests. Cloudflare’s cache served over 93% of them with ease while delivering 100% uptime.</p>
    <div>
      <h3>Get started today</h3>
      <a href="#get-started-today">
        
      </a>
    </div>
    <p>We invite you to visit our <a href="https://www.cloudflare.com/migration-hub/">Migration Hub</a> and explore our comprehensive offerings.</p><p>Migrating from one CDN to another can be a daunting task, but with Cloudflare's Migration Hub and Professional Services, the process becomes more straightforward and hassle-free. We are committed to empowering our customers with the resources, support, and expertise needed to transition smoothly to Cloudflare's advanced solutions.</p> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Migration Hub]]></category>
            <category><![CDATA[Project Turpentine]]></category>
            <guid isPermaLink="false">3OdOvmn2cFOndmeYcJdpDB</guid>
            <dc:creator>Sam Marsh</dc:creator>
            <dc:creator>Alex Krivit</dc:creator>
        </item>
        <item>
            <title><![CDATA[Smart Hints make code-free performance simple]]></title>
            <link>https://blog.cloudflare.com/smart-hints/</link>
            <pubDate>Mon, 19 Jun 2023 13:01:00 GMT</pubDate>
            <description><![CDATA[ We’re excited to announce we’re making Early Hints and Fetch Priorities automatic using the power of Cloudflare’s network ]]></description>
            <content:encoded><![CDATA[
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/71HWydR0EwctKPYRncEFvy/254415bf6576a561d8639180f828b4cd/image4-3.png" />
            
            </figure><p>Today, we’re excited to announce how we’re making Early Hints and <a href="https://web.dev/fetch-priority/">Fetch Priorities</a> automatic using the power of Cloudflare’s network. Almost a <a href="/early-hints-performance/">year ago</a> we launched <a href="/early-hints/">Early Hints</a>. Early Hints is a method that allows web servers to asynchronously send instructions to the browser whilst the web server is getting the full response ready. This gives proactive suggestions to the browser on how to load the webpage faster for the visitor rather than idly waiting to receive the full webpage response.</p><p>In initial lab experiments, we observed page load improvements exceeding 30%. Since then, we have sent about two trillion hints on behalf of over 150,000 websites using the product.</p><p>In order to effectively use Early Hints on a website, HTTP link headers or HTML link elements must be configured to specify which assets should be preloaded or which third-party servers should be preconnected. Making these decisions requires understanding how your website interacts with browsers, and identifying render-blocking assets to hint on without implementing prioritization strategies that <a href="/early-hints-performance/#:~:text=It%E2%80%99s%20quite%20possible,mobile%20connection%20settings.">saturate network bandwidth</a> on non-critical assets (i.e. you can’t just Early Hint everything and expect good results).</p><p>For users who possess this knowledge and can configure Early Hints at the origin (or via a Worker), it works seamlessly. However, for users who lack access to their origin server (e.g. 
SaaS platforms), or are unsure about the optimal assets to preload/prioritize, or simply prefer to focus on building their own application, the question arises: "<i>As an intermediary server, shouldn't Cloudflare know the best way to prioritize my website for performance</i>?"</p><p>The answer is <b>yes</b>, which is why we’re excited to start talking about how Smart Hints will determine the best priority for web assets without developers needing to configure anything. If you’re interested in helping us beta test this feature, you can sign up <a href="https://dash.cloudflare.com?to=/:account/:zone/speed/optimization">here</a> and we will contact you with further instructions on helping us test Smart Hints later this year.</p>
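To make the configuration step concrete, here is a minimal sketch (with hypothetical asset paths and a hypothetical fonts origin) of building the HTTP Link header values that declare which assets to preload and which third-party servers to preconnect:

```python
# A minimal sketch, not Cloudflare's implementation, of the Link header
# values an origin (or a Worker) might emit so Early Hints can be sent.
# The asset paths and the fonts origin below are hypothetical examples.

def preload(url: str, as_type: str) -> str:
    """Format one preload hint: tells the browser to fetch this asset early."""
    return f"<{url}>; rel=preload; as={as_type}"

def preconnect(origin: str) -> str:
    """Format one preconnect hint: warm up a connection to a third-party origin."""
    return f"<{origin}>; rel=preconnect"

# Hint only render-critical assets -- hinting everything can saturate
# bandwidth on non-critical resources, as noted above.
link_header = ", ".join([
    preload("/styles/main.css", "style"),
    preload("/scripts/app.js", "script"),
    preconnect("https://fonts.example.com"),
])
print(link_header)
```

The resulting comma-joined value goes into a single `Link` response header; deciding *which* assets belong in it is exactly the judgment call Smart Hints aims to automate.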
    <div>
      <h3>Background</h3>
      <a href="#background">
        
      </a>
    </div>
    <p>When you visit a webpage, your browser is actually requesting numerous individual resources from the server. These resources include everything from visible elements like images, text, and videos, to the behind-the-scenes logic (JavaScript, etc.) that powers the website analytics, functionality, and more. The order in which these resources are loaded by the browser plays a crucial role in determining how quickly users can view and interact with the page.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Bg1lqmo1GvJuO45fUqPw4/262928332c4c731f6d3226658424634c/image3-2.png" />
            
            </figure><p>When your browser receives a response from the server, it parses the HTML response sequentially, from start to finish. When the HTML response arrives in the browser, it is split into two parts: the <code>&lt;head&gt;</code> and the <code>&lt;body&gt;</code>.</p><p>The <code>&lt;head&gt;</code> section appears at the beginning of the HTML response and contains essential elements like stylesheets, scripts, and other instructions for the browser. Stylesheets define how the page should look, while scripts provide the necessary logic for interactive features and functionality.<sup>1</sup></p><p>While stylesheets are important to load quickly as browsers will wait for them to know how to display content to the visitor, scripts are interesting because they can behave differently based on instructions provided to the browser. If a script lacks specific instructions (defer/async/inline, for example), it can become a "blocking" resource. When the browser encounters a blocking script resource, it pauses processing the webpage and waits until the script is fully loaded and completely executed. This ensures that the script's functionality is available for the visitor to use. However, this blocking behavior can delay the display of content to the user, as the browser needs to wait for the script to finish before proceeding further.</p><p>Until the browser reaches the <code>&lt;body&gt;</code> section of the document, there is nothing visible to the visitor. That's why it's crucial to optimize the loading process of the <code>&lt;head&gt;</code> section as much as possible. By minimizing the time it takes for stylesheets and blocking scripts to load, the browser can start rendering the page content sooner, allowing visitors to see and interact with the webpage faster.</p><p>Achieving optimal web performance can be a complex challenge. While browsers are generally in charge of determining the order in which to load the different resources they need to build a page, a variety of tools have been released recently (<a href="/early-hints-performance/">Early Hints</a>, <a href="https://web.dev/fetch-priority/">Fetch Priority</a>, <a href="https://www.cloudflare.com/learning/performance/what-is-lazy-loading/">Lazy-Loading</a>, <a href="/better-http-2-prioritization-for-a-faster-web/">H2 Priorities</a>) to help developers specify resource priorities for browsers to improve website load performance. Although these tools and methods for specifying resource priority can be effective, they require careful implementation and testing to make sure they behave as intended.</p>
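The blocking rule described above can be captured in a toy model (this is not a real HTML parser; scripts are represented here as simple attribute dicts for illustration):

```python
# A toy model of render-blocking behavior: an external script without
# defer or async halts HTML parsing until it is downloaded and executed.
# Scripts are represented as hypothetical attribute dictionaries.

def is_render_blocking(script: dict) -> bool:
    if "src" not in script:
        # Inline scripts execute immediately and need no network fetch.
        return False
    # External scripts block unless explicitly marked defer or async.
    return not (script.get("defer") or script.get("async"))

assert is_render_blocking({"src": "/analytics.js"}) is True
assert is_render_blocking({"src": "/analytics.js", "defer": True}) is False
assert is_render_blocking({"src": "/widget.js", "async": True}) is False
```

Identifying which scripts fall into the first category is the kind of analysis a developer would otherwise have to do by hand before hinting.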
    <div>
      <h3>Prioritization Tools</h3>
      <a href="#prioritization-tools">
        
      </a>
    </div>
    <p>Two methods that have gained a lot of popularity for improving website performance are Early Hints and Fetch Priorities. These tools give browsers information about the order in which resources should be loaded to improve the performance of critical resources.</p><p><i>Early Hints</i></p><p>Early Hints allow the server to provide some information to the client before the final response is available.</p><p>When a client sends a request to a server, the server can respond with an "early hint" to provide a clue about the final response. This early hint is a separate response that includes headers related to the final response, such as important static objects that can be fetched early, and links to where to get related resources.</p>
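In sketch form, the exchange looks like this: an interim 103 response carrying Link headers arrives first, then the final 200 once the page is ready. The status codes are real (RFC 8297); the handler itself is a hypothetical stand-in for an origin server:

```python
# A minimal sketch of the Early Hints exchange: the server emits an
# interim 103 response with Link headers, then the final 200 response.
# This generator is illustrative only, not a real HTTP server.

def handle_request(path: str):
    """Yield (status, headers) pairs in the order the client receives them."""
    # 1. Interim response: sent while the origin is still building the page.
    yield 103, {"Link": "</main.css>; rel=preload; as=style"}
    # ... the origin's slow work (database queries, rendering) happens here ...
    # 2. Final response: the full page.
    yield 200, {"Content-Type": "text/html"}

statuses = [status for status, _ in handle_request("/")]
print(statuses)  # [103, 200]
```

The browser can act on the 103's Link headers (preloading, preconnecting) during the gap between the two responses, which is where the time savings come from.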
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1YUg3DWKVa8PZEB0YQ2zkU/fc6fcb5005c567f37f203326e6538885/image1-4.png" />
            
            </figure><p>The purpose of Early Hints is to allow the client to start processing the received information while waiting for the final response. The client can use the Early Hint to initiate early resource preloading and preconnect to servers that will provide resources for the final response, which can lead to faster page load times.</p><p><i>Fetch Priority</i></p><p>Another powerful tool in optimizing resource loading is Fetch Priorities, previously known as Priority Hints.</p><p>When analyzing a webpage, web browsers often prioritize the fetching of resources such as scripts and stylesheets to optimize the download sequence and efficiently use bandwidth. The priorities assigned to these resources are determined by browsers based on factors like resource type, placement within the webpage, and their location within the HTML response. For instance, images within the visible area for the visitor should be given higher priority, whereas essential scripts loaded early in the <code>&lt;head&gt;</code> section may be assigned a very high priority. Although browsers generally handle priority assignment effectively, their choices may not always be optimal for every scenario.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4PnNA8LMQijlVtiw7VsVLo/c67e993e168e5b93fcc6d53c6c3d9a26/Fetch-Priorities.png" />
            
            </figure><p>By leveraging <a href="https://web.dev/fetch-priority/">Fetch Priorities</a>, developers gain additional control over resource loading and can assign higher or lower priorities to different resources on their webpage, which can help optimize the overall performance of web pages.</p><p>While Early Hints and Fetch Priorities are both incredibly useful for optimizing web page performance, they often require access to the HTML resources in order to change their priorities, along with knowledge about how to best prioritize against other resources. While these tools, working together, allow developers to implement increasingly complex performance strategies, they also require a lot of testing, configuration, and management as web pages change over time. Smart Hints will make using these tools easier to manage by using our RUM data beacons and heuristics to better implement prioritization strategies without developers needing to lift a finger.</p>
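Fetch Priority surfaces to developers as the <code>fetchpriority</code> HTML attribute. The sketch below uses a tiny hypothetical tag builder (the image paths are illustrative) to show the kind of markup a developer would otherwise write by hand:

```python
# A sketch of emitting the fetchpriority attribute on resources, using a
# hypothetical helper; the image paths below are illustrative only.

def img_tag(src: str, priority: str = "auto") -> str:
    """Emit an <img> element carrying an explicit fetch priority."""
    return f'<img src="{src}" fetchpriority="{priority}">'

# Boost the above-the-fold hero image; demote below-the-fold thumbnails.
hero = img_tag("/hero.jpg", priority="high")
thumbnail = img_tag("/thumb-1.jpg", priority="low")
print(hero)
```

Choosing the right `high`/`low` assignments per page is exactly the per-resource judgment Smart Hints aims to infer automatically.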
    <div>
      <h3>How are we going to prioritize assets?</h3>
      <a href="#how-are-we-going-to-prioritize-assets">
        
      </a>
    </div>
    <p>Cloudflare's Smart Hints will leverage the capabilities of Early Hints and Fetch Priority to optimize resource delivery by using our vast RUM data for websites across the Internet: Smart Hints will dynamically determine the appropriate hints and priorities for a specific response on the fly.</p><p>But how?</p><p>Cloudflare collects performance data in two ways: synthetic testing and Real User Measurements (RUM). Synthetic testing collects performance data by loading a web page in an automated browser in a controlled environment. RUM also collects performance data, but from real users navigating to the web page on real browsers. RUM works by injecting a small piece of JavaScript, or beacon, into the web page. Cloudflare collects vast amounts of RUM data across thousands of sites.</p><p>From these two performance data sources, Cloudflare can intelligently analyze the loading bottlenecks of web pages. If the loading bottlenecks are caused by the download of render-blocking resources, Cloudflare can generate Smart Hints for the browser to prioritize the download of these resources.</p><p>As we roll out Smart Hints, we will explore the use of learning models to look for patterns that could be turned into heuristics. These heuristics could then be leveraged to improve performance for similar sites that do not use RUM or synthetic testing.</p><p>With Smart Hints, Cloudflare can revolutionize the way websites and applications are delivered, making the browsing experience faster, smoother, and more delightful. By inferring the right priority for a given client, Cloudflare will help customers find the best priorities for their websites’ performance while minimizing the time it takes to optimize an ever-changing webpage.</p>
    <div>
      <h3>How can I help Cloudflare do this?</h3>
      <a href="#how-can-i-help-cloudflare-do-this">
        
      </a>
    </div>
    <p>Before we roll this out more broadly, we will be performing large-scale beta tests of our systems to ensure that we’re making the best performance decisions for all kinds of content.</p><p>Over the next few months we’ll be building a beta cohort and working with them to ensure everyone has a great experience with Smart Hints. If you’d like to help us in this endeavor, please sign up to be part of the closed beta <a href="https://dash.cloudflare.com?to=/:account/:zone/speed/optimization">here</a> (located in the <b>Speed Tab</b> of the dashboard), and we will get in touch when we’re ready for you to enable it, along with instructions on how to provide feedback.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We’re looking forward to working with our community to build and optimize this no-code/configuration experience to bring massive improvements to page load to everyone.</p>
    <p><sup>1</sup>Yes, scripts and stylesheets can also be placed within the <code>&lt;body&gt;</code> section, but their primary location is in the <code>&lt;head&gt;</code>.</p> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Early Hints]]></category>
            <guid isPermaLink="false">3qE11RYK6usjrFnlN4mAEB</guid>
            <dc:creator>Alex Krivit</dc:creator>
        </item>
        <item>
            <title><![CDATA[Reduce latency and increase cache hits with Regional Tiered Cache]]></title>
            <link>https://blog.cloudflare.com/introducing-regional-tiered-cache/</link>
            <pubDate>Thu, 01 Jun 2023 13:00:27 GMT</pubDate>
            <description><![CDATA[ Regional Tiered Cache provides an additional layer of caching for Enterprise customers who have a global traffic footprint and want to serve content faster by avoiding network latency when there is a cache miss in a lower-tier, resulting in an upper-tier fetch in a data center located far away ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/A8lQLOj0zrB1oUjv2YTtX/c7982b28eb35d74286e2ca4e130ae1e7/image5-14.png" />
            
            </figure><p>Today we’re excited to announce an update to our <a href="https://developers.cloudflare.com/cache/about/tiered-cache">Tiered Cache</a> offering: Regional Tiered Cache.</p><p>Tiered Cache allows customers to organize Cloudflare data centers into tiers so that only some “<a href="https://developers.cloudflare.com/cache/about/tiered-cache/#:~:text=upper%2Dtier%20does%20not%20have%20the%20content%2C%20only%20the%20upper%2Dtier%20can%20ask%20the%20origin%20for%20content">upper-tier</a>” data centers can request content from an origin server, and then send content to “<a href="https://developers.cloudflare.com/cache/about/tiered-cache/#:~:text=lower%2Dtier%20data%20centers%20(generally%20the%20ones%20closest%20to%20a%20visitor)">lower-tiers</a>” closer to visitors. Tiered Cache helps content load faster for visitors, makes it cheaper to serve, and <a href="https://developers.cloudflare.com/cache/about/tiered-cache#:~:text=Tiered%20Cache%20concentrates%20connections%20to%20origin%20servers%20so%20they%20come%20from%20a%20small%20number%20of%20data%20centers%20rather%20than%20the%20full%20set%20of%20network%20locations.%20This%20results%20in%20fewer%20open%20connections%20using%20server%20resources.">reduces</a> origin resource consumption.</p><p>Regional Tiered Cache provides an additional layer of caching for Enterprise customers who have a global traffic footprint and want to serve content faster by avoiding <a href="https://www.cloudflare.com/learning/performance/glossary/what-is-latency/">network latency</a> when there is a cache miss in a lower-tier, resulting in an upper-tier fetch in a data center located far away. In our trials, customers who have enabled Regional Tiered Cache have seen a 50-100ms improvement in tail <a href="https://developers.cloudflare.com/cache/about/default-cache-behavior/#cloudflare-cache-responses">cache hit</a> response times from Cloudflare’s CDN.</p>
    <div>
      <h2>What problem does Tiered Cache help solve?</h2>
      <a href="#what-problem-does-tiered-cache-help-solve">
        
      </a>
    </div>
    <p>First, a quick refresher on <a href="https://www.cloudflare.com/learning/cdn/what-is-caching/">caching</a>: a request for content is initiated from a visitor on their phone or computer. This request is generally routed to the closest Cloudflare data center. When the request arrives, we look to see if we have the content cached to respond to that request with. If it’s not in cache (it’s a miss), Cloudflare data centers must contact the <a href="https://www.cloudflare.com/learning/cdn/glossary/origin-server/">origin server</a> to get a new copy of the content.</p><p>Getting content from an origin server suffers from two issues: latency and increased origin egress and load.</p>
    <div>
      <h3>Latency</h3>
      <a href="#latency">
        
      </a>
    </div>
    <p>Origin servers, where content is hosted, can be far away from visitors. This is especially true the more global of an audience a particular piece of content has relative to where the origin is located. This means that content hosted in New York can be served in dramatically different amounts of time for visitors in London, Tokyo, and Cape Town. The farther away from New York a visitor is, the longer they must wait before the content is returned. Serving content from cache helps provide a uniform experience to all of these visitors because the content is served from a data center that’s close.</p>
    <div>
      <h3>Origin load</h3>
      <a href="#origin-load">
        
      </a>
    </div>
    <p>Even when using a <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDN</a>, many different visitors can be interacting with different data centers around the world and each data center, without the content visitors are requesting, will need to reach out to the origin for a copy. This can cost customers money because of egress fees origins charge for sending traffic to Cloudflare, and it places needless load on the origin by opening multiple connections for the same content, just headed to different data centers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5E2xM6zZw1uUJld0wbCn97/b7c2e5723ad36fe15e538aaa89dec94d/download-19.png" />
            
            </figure><p>When Tiered Cache is not enabled, all data centers in Cloudflare’s network can reach out to the origin in the event of a cache miss.</p><p>Performance improvements and origin load reductions are the promise of tiered cache.</p><p>Tiered Caching means that instead of every data center reaching out to the origin when there is a cache miss, the lower-tier data center that is closest to the visitor will reach out to a larger upper-tier data center to see if it has the requested content cached before the upper-tier asks the origin for the content. Organizing Cloudflare’s data centers into tiers means that fewer requests will make it back to the origin for the same content, preserving origin resources, reducing load, and saving the customer money in egress fees.</p>
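The lookup order Tiered Cache enforces can be sketched as follows (the dict-based tiers and the `fetch_origin` stub are illustrative only, not Cloudflare's implementation):

```python
# A toy sketch of the Tiered Cache lookup order: lower tier -> upper
# tier -> origin, with each miss filling the caches on the way back.

def fetch_origin(key: str) -> str:
    # Stand-in for an expensive, egress-billed origin fetch.
    return f"content-for-{key}"

def tiered_get(key: str, lower: dict, upper: dict) -> str:
    if key in lower:                 # lower-tier hit: served closest to the visitor
        return lower[key]
    if key not in upper:             # only the upper tier may contact the origin
        upper[key] = fetch_origin(key)
    lower[key] = upper[key]          # fill the lower tier for future visitors
    return lower[key]

lower, upper = {}, {}
tiered_get("/logo.png", lower, upper)   # miss everywhere: one origin fetch
tiered_get("/logo.png", {}, upper)      # a different lower tier: no origin fetch
```

Because every lower tier funnels through the shared upper tier, repeated misses across many data centers still produce only one origin fetch per asset.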
    <div>
      <h2>What options are there to maximize the benefits of tiered caching?</h2>
      <a href="#what-options-are-there-to-maximize-the-benefits-of-tiered-caching">
        
      </a>
    </div>
    <p>Cloudflare customers are given access to different Tiered Cache topologies based on their plan level. There are currently two predefined Tiered Cache topologies to select from – Smart and Generic Global. If either of those don’t work for a particular customer’s traffic profile, Enterprise customers can also work with us to define a custom topology.</p><p>In <a href="/introducing-smarter-tiered-cache-topology-generation/">2021,</a> we announced that we’d allow all plans to access Smart Tiered Cache. Smart Tiered Cache dynamically finds the single closest data center to a customer’s origin server and chooses that as the upper-tier that all lower-tier data centers reach out to in the event of a cache miss. All other data centers go through that single upper-tier for content and that data center is the only one that can reach out to the origin. This helps to drastically boost cache hit ratios and reduces the connections to the origin. However, this topology can come at the cost of increased latency for visitors that are farther away from that single upper-tier.</p>
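The selection rule at the heart of Smart Tiered Cache can be sketched in a few lines: pick the single data center with the lowest measured latency to the origin as the sole upper tier. The IATA-style codes and latency figures below are made-up examples for an origin hosted near New York:

```python
# A sketch of Smart Tiered Cache's core idea: choose the one data center
# closest (by measured latency) to the origin as the sole upper tier.
# These latency numbers are invented for illustration.

latency_to_origin_ms = {
    "EWR": 4,     # Newark: near the hypothetical New York origin
    "LHR": 72,    # London
    "NRT": 160,   # Tokyo
    "JNB": 190,   # Johannesburg
}

upper_tier = min(latency_to_origin_ms, key=latency_to_origin_ms.get)
print(upper_tier)  # EWR
```

Every other data center becomes a lower tier that funnels through this one winner, which is what concentrates origin connections but also creates the long-haul latency problem described below.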
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4GmpStKUSWMPfYBgk2dAvb/976cee64422a296ed1e776557fe087cc/download--1--13.png" />
            
            </figure><p>When Smart Tiered Cache is enabled, a single upper-tier data center can communicate with the origin, helping to conserve origin resources.</p><p>Enterprise customers may select additional tiered cache topologies like the Generic Global topology, which allows all of Cloudflare’s large data centers on our network (about 40 data centers) to serve as upper-tiers. While this topology may help reduce the long tail latencies for far-away visitors, it does so at the cost of increased connections and load on a customer's origin.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2PMcMuw7DkwI0GxxtLSzYF/616a53624e1392129f2716d40fb2a48a/download--2--11.png" />
            
            </figure><p>When Generic Global Tiered Cache is enabled, lower-tier data centers are mapped to all upper-tier data centers in Cloudflare’s network, which can all reach out to the origin in the event of a cache miss.</p><p>To describe the latency problem with Smart Tiered Cache in more detail, let’s use an example. Suppose the upper-tier data center is selected to be in New York using Smart Tiered Cache. The traffic profile for the website with the New York upper-tier is relatively global: visitors are coming from London, Tokyo, and Cape Town. For every cache miss, a lower-tier will need to reach out to the New York upper-tier for content. This means requests from Tokyo will need to traverse the Pacific Ocean and most of the continental United States to check the New York upper-tier cache, then turn around and go all the way back to Tokyo. This is a giant performance hit for visitors outside the US for the sake of reducing origin load.</p>
    <div>
      <h2>Regional Tiered Cache brings the best of both worlds</h2>
      <a href="#regional-tiered-cache-brings-the-best-of-both-worlds">
        
      </a>
    </div>
    <p>With Regional Tiered Cache, we introduce a middle tier in each region around the world. When a lower-tier fetches on a cache miss, it tries the regional-tier first if the upper-tier is in a different region. If the regional-tier does not have the asset, it then asks the upper-tier for it. On the response, the regional-tier writes the asset to its cache so that other lower-tiers in the same region benefit.</p><p>By putting an additional tier in the same region as the lower-tier, there’s an increased chance that the content will be available in the region before heading to a far-away upper-tier. This can drastically improve the performance of assets while still reducing the number of connections that will eventually need to be made to the customer’s origin.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7MIfujQ3l4iMBIQ5ZWdQiU/b8d70abe5ee256c4d95311665b840aca/download--3--7.png" />
            
            </figure><p>When Regional Tiered Cache is enabled, all lower-tier data centers will reach out to a regional tier close to them in the event of a cache miss. If the regional tier doesn’t have the content, the regional tier will then ask an upper-tier out of region for the content. This can help improve latency for Smart and Custom Tiered Cache topologies.</p>
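The extended lookup chain can be sketched as a toy extension of the two-tier version (again, the dict-based tiers and `fetch_origin` stub are illustrative only):

```python
# A toy sketch of the Regional Tiered Cache lookup chain: lower tier ->
# in-region regional tier -> upper tier -> origin. Each tier fills its
# cache on the way back, so later same-region misses stop at the
# regional tier instead of crossing an ocean.

def fetch_origin(key: str) -> str:
    return f"content-for-{key}"

def regional_get(key: str, lower: dict, regional: dict, upper: dict) -> str:
    if key in lower:
        return lower[key]
    if key not in regional:          # cross-region hop only if the region misses
        if key not in upper:         # only the upper tier contacts the origin
            upper[key] = fetch_origin(key)
        regional[key] = upper[key]   # the regional tier now serves its region
    lower[key] = regional[key]
    return lower[key]
```

After one lower tier in a region takes the long round trip, every other lower tier in that region is served from the nearby regional tier, which is where the 50-100ms tail improvement comes from.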
    <div>
      <h2>Who will benefit from regional tiered cache?</h2>
      <a href="#who-will-benefit-from-regional-tiered-cache">
        
      </a>
    </div>
    <p>Regional Tiered Cache helps customers with Smart Tiered Cache or a Custom Tiered Cache topology with upper-tiers in one or two regions. Regional Tiered Cache is not beneficial for customers with many upper-tiers in many regions, as with Generic Global Tiered Cache.</p>
    <div>
      <h2>How to enable Regional Tiered Cache</h2>
      <a href="#how-to-enable-regional-tiered-cache">
        
      </a>
    </div>
    <p>Enterprise customers can enable Regional Tiered Cache via the Cloudflare Dashboard or the API:</p>
    <div>
      <h3>UI</h3>
      <a href="#ui">
        
      </a>
    </div>
    <ul><li><p>To enable Regional Tiered Cache, simply sign in to your account and select your website</p></li><li><p>Navigate to the Cache Tab of the dashboard, and select the Tiered Cache Section</p></li><li><p>If you have Smart or Custom Tiered Cache Topology Selected, you should have the ability to choose Regional Tiered Cache</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1wj1OVOGbTWsvDy1rHavci/783d587871b8e0dc423e4f23bf18cb02/download--4--7.png" />
            
            </figure>
    <div>
      <h3>API</h3>
      <a href="#api">
        
      </a>
    </div>
    <p>Please see the <a href="https://developers.cloudflare.com/api/operations/zone-cache-settings-get-regional-tiered-cache-setting">documentation</a> for detailed information about how to configure Regional Tiered Cache from the API.</p><p><b>GET</b></p>
            <pre><code>curl --request GET \
 --url https://api.cloudflare.com/client/v4/zones/zone_identifier/cache/regional_tiered_cache \
 --header 'Content-Type: application/json' \
 --header 'X-Auth-Email: '</code></pre>
            <p><b>PATCH</b></p>
            <pre><code>curl --request PATCH \
 --url https://api.cloudflare.com/client/v4/zones/zone_identifier/cache/regional_tiered_cache \
 --header 'Content-Type: application/json' \
 --header 'X-Auth-Email: ' \
 --data '{
 "value": "on"
}'</code></pre>
            
    <div>
      <h2>Try Regional Tiered Cache out today!</h2>
      <a href="#try-regional-tiered-cache-out-today">
        
      </a>
    </div>
    <p>Regional Tiered Cache is the first of many planned improvements to Cloudflare’s Tiered Cache offering which are currently in development. We look forward to hearing what you think about Regional Tiered Cache, and if you’re interested in helping us improve our CDN, <a href="https://www.cloudflare.com/careers/jobs/?department=Engineering">we’re hiring</a>.</p> ]]></content:encoded>
            <category><![CDATA[Tiered Cache]]></category>
            <category><![CDATA[Cache]]></category>
            <category><![CDATA[Latency]]></category>
            <guid isPermaLink="false">6s5rGN6B1pWPNhxDwZ3IRL</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Andrew Hauck</dc:creator>
        </item>
        <item>
            <title><![CDATA[Reduce origin load, save on cloud egress fees, and maximize cache hits with Cache Reserve]]></title>
            <link>https://blog.cloudflare.com/cache-reserve-open-beta/</link>
            <pubDate>Tue, 15 Nov 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ Today we’re extremely excited to announce that Cache Reserve is graduating to open beta – users will now be able to test it and integrate it into their content delivery strategy without any additional waiting ]]></description>
            <content:encoded><![CDATA[
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ni8e1xP2TTTgsWfQun7u9/0c3e07191798bf7ac736d9b54350c2f0/Cache-Reserve-Open-Beta.png" />
            
            </figure><p>Earlier this year, we introduced Cache Reserve. Cache Reserve helps users serve content from Cloudflare’s cache for longer by using <a href="https://www.cloudflare.com/products/r2/">R2</a>’s persistent data storage. Serving content from Cloudflare’s cache benefits website operators by reducing their bills for <a href="https://www.cloudflare.com/learning/cloud/what-are-data-egress-fees/">egress fees</a> from origins, while also benefiting website visitors by having content load faster.</p><p>Cache Reserve has been in <a href="/introducing-cache-reserve/">closed beta</a> for a few months while we’ve collected feedback from our initial users and continued to develop the product. After several rounds of iterating on this feedback, today we’re extremely excited to announce that <b>Cache Reserve is graduating to open beta</b> – users will now be able to test it and integrate it into their content delivery strategy without any additional waiting.</p><p>If you want to see the benefits of Cache Reserve for yourself and give us some feedback, you can go to the Cloudflare dashboard, navigate to the Caching section, and enable Cache Reserve by pushing <a href="https://dash.cloudflare.com/caching/cache-reserve">one button</a>.</p>
    <div>
      <h2>How does Cache Reserve fit into the larger picture?</h2>
      <a href="#how-does-cache-reserve-fit-into-the-larger-picture">
        
      </a>
    </div>
    <p>Content served from Cloudflare’s cache begins its journey at an origin server, where the content is hosted. When a request reaches the origin, the origin compiles the content needed for the response and sends it back to the visitor.</p><p>The distance between the visitor and the origin can affect the performance of the asset as it may travel a long distance for the response. This is also where the user is charged a fee to move the content from where it’s stored on the origin to the visitor requesting the content. These fees, known as “bandwidth” or “egress” fees, are familiar monthly line items on the invoices for users that host their content on cloud providers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7p6Nnmjna0afwsk0flu98t/eba76d569819cc15ca1c803590caf7e3/Response-Flow.png" />
            
            </figure><p>Cloudflare’s CDN sits between the origin and visitor and evaluates the origin’s response to see if it can be cached. If it can be added to Cloudflare’s cache, then the next time a request comes in for that content, Cloudflare can respond with the cached asset, which means there's no need to send the request to the origin– reducing egress fees for our customers. We also cache content in data centers close to the visitor to improve the performance and cut down on the transit time for a response.</p><p>To help assets remain cached for longer, a few years ago we introduced <a href="/introducing-smarter-tiered-cache-topology-generation/">Tiered Cache</a> which organizes all of our 250+ global data centers into a hierarchy of lower-tiers (generally closer to visitors) and upper-tiers (generally closer to origins). When a request for content cannot be served from a lower-tier’s cache, the upper-tier is checked before going to the origin for a fresh copy of the content. Organizing our data centers into tiers helps us cache content in the right places for longer by putting multiple caches between the visitor’s request and the origin.</p><p><b>Why do cache misses occur?</b>Misses occur when Cloudflare cannot serve the content from cache and must go back to the origin to retrieve a fresh copy. This can happen when a customer sets the <a href="https://developers.cloudflare.com/cache/about/cache-control/">cache-control</a> time to signify when the content is out of date (stale) and needs to be <a href="https://developers.cloudflare.com/cache/about/cache-control/#revalidation">revalidated</a>. The other element at play – how long the network wants content to remain cached – is a bit more complicated and can fluctuate depending on eviction criteria.</p><p><a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDNs</a> must consider whether they need to evict content early to optimize storage of other assets when cache space is full. 
At Cloudflare, we prioritize eviction based on how recently a piece of cached content was requested, using an algorithm called “least recently used”, or <a href="https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU)">LRU</a>. This means that even if cache-control signifies that a piece of content should be cached for many days, we may still need to evict it earlier (if it is the least requested in that cache) to make room for more popular content.</p><p>This works well for most customers and website visitors, but it is often a point of confusion for people wondering why content unexpectedly returns a miss. Without eviction, content would need to be cached in data centers farther away from the visitors requesting it, harming the performance of the asset and injecting inefficiencies into how Cloudflare’s network operates.</p><p>Some customers, however, have large libraries of content that may not be requested for long periods of time. Using the traditional cache, these assets would likely be evicted and, if requested again, served from the origin. Keeping assets in cache requires that they remain popular, which is hard given that what’s popular on the Internet is constantly changing. Evicting content that becomes cold means additional origin egress for the customer if that content needs to be pulled repeatedly from the origin.</p>
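<p>The eviction policy described above can be sketched in a few lines. This is a minimal, illustrative model of LRU, not Cloudflare’s implementation: the asset requested least recently is evicted first, even if its cache-control TTL has not expired.</p>

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU sketch: evicts the least recently requested asset
    when capacity is reached, regardless of its cache-control TTL."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # key -> asset, ordered oldest -> newest

    def get(self, key):
        if key not in self.store:
            return None  # cache miss: would be fetched from an upper tier or origin
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, asset):
        if key in self.store:
            self.store.move_to_end(key)
        elif len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict the least recently used asset
        self.store[key] = asset

cache = LRUCache(capacity=2)
cache.put("/logo.png", "png-bytes")
cache.put("/app.js", "js-bytes")
cache.get("/logo.png")          # touch /logo.png so it is most recent
cache.put("/video.mp4", "mp4")  # evicts /app.js, the least recently used
```

<p>After the last <code>put</code>, a request for <code>/app.js</code> misses even if its TTL was days long, which is exactly the behavior that surprises users.</p>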
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3iewRr0TSvgJQweqFGos7r/bc11bdf11f206f58767966cba190da20/Cache-Reserve-Response-Flow.png" />
            
            </figure><p><b>Enter Cache Reserve</b> This is where Cache Reserve shines. Cache Reserve serves as the ultimate upper-tier data center for content that might otherwise be evicted from cache. Once <a href="https://developers.cloudflare.com/cache/about/cache-reserve/#cache-reserve-asset-eligibility">admitted</a> to Cache Reserve, content can be stored for a much longer period of time: 30 days by <a href="https://developers.cloudflare.com/cache/about/cache-reserve/">default</a>. If another request comes in during that period, retention can be extended for another 30 days (and so on), or until cache-control signifies that we should no longer serve that content from cache. Cache Reserve serves as a safety net to backstop all cacheable content, so customers don't have to worry about unwanted cache eviction and origin egress fees.</p>
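<p>The retention behavior described above amounts to a sliding window. The sketch below is a hypothetical model (the <code>on_request</code> helper and field names are invented for illustration): each request within the window extends retention by another 30 days, bounded by the moment cache-control marks the asset stale.</p>

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)  # Cache Reserve default retention window

def on_request(entry, now):
    """Sliding-window retention sketch: a request inside the retention
    window extends expiry by another 30 days, but never past the moment
    cache-control marks the asset stale."""
    if now >= entry["expires_at"]:
        return False  # no longer retained; must refetch from the origin
    entry["expires_at"] = min(now + RETENTION, entry["stale_at"])
    return True

now = datetime(2022, 11, 1)
entry = {
    "expires_at": now + RETENTION,          # initial 30-day window
    "stale_at": now + timedelta(days=365),  # from cache-control max-age
}
on_request(entry, now + timedelta(days=20))           # extends expiry to day 50
served = on_request(entry, now + timedelta(days=45))  # still within the window
```

<p>Without the second request, the asset would have expired on day 50; with it, retention slides out to day 75.</p>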
    <div>
      <h2>How does Cache Reserve save egress?</h2>
      <a href="#how-does-cache-reserve-save-egress">
        
      </a>
    </div>
    <p>The promise of Cache Reserve is that hit ratios will increase and egress fees from origins will decrease for long tail content that is rarely requested and may be evicted from cache.</p><p>However, there are additional egress savings built into the product. For example, objects are written to Cache Reserve on misses. This means that when fetching content from the origin on a cache miss, we use that response both to answer the request and to write the asset to Cache Reserve, so customers won’t incur egress for serving that asset again for a long time.</p><p>Cache Reserve is designed to be used with Tiered Cache enabled for maximum origin shielding. When there is a cache miss in both the lower and upper tiers, Cache Reserve is checked, and if there is a hit, the response will be cached in both the lower and upper tier on its way back to the visitor without the origin needing to see the request or serve any additional data.</p><p>Cache Reserve accomplishes these origin egress savings for a low price, based on R2 costs. For more information on Cache Reserve pricing and operation, please see the documentation <a href="https://developers.cloudflare.com/cache/about/cache-reserve/#pricing">here</a>.</p>
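<p>The lookup order described above can be modeled as a simple chain of dictionaries. This is an illustrative sketch, not Cloudflare’s code: each tier is checked in turn, and on a hit the tiers closer to the visitor are filled on the way back so the origin never sees the request.</p>

```python
def fetch(key, lower, upper, reserve, origin):
    """Lookup-chain sketch: lower tier, then upper tier, then Cache
    Reserve, and only then the origin. Hits fill the closer tiers on
    the way back; origin misses are also written to Cache Reserve."""
    if key in lower:
        return lower[key], "lower-tier hit"
    if key in upper:
        lower[key] = upper[key]
        return lower[key], "upper-tier hit"
    if key in reserve:
        upper[key] = reserve[key]  # cached in the upper tier...
        lower[key] = reserve[key]  # ...and the lower tier on the way back
        return lower[key], "cache reserve hit"
    body = origin[key]             # only now does the origin see the request
    lower[key] = upper[key] = reserve[key] = body  # written to Cache Reserve on a miss
    return body, "origin fetch (egress incurred)"

lower, upper, reserve = {}, {}, {"/catalog.json": "cached-body"}
body, where = fetch("/catalog.json", lower, upper, reserve,
                    {"/catalog.json": "origin-body"})
```

<p>Here both regular tiers miss, but Cache Reserve backstops the request and the origin serves no data.</p>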
    <div>
      <h2>Scaling Cache Reserve on Cloudflare’s developer platform</h2>
      <a href="#scaling-cache-reserve-on-cloudflares-developer-platform">
        
      </a>
    </div>
    <p>When we first announced Cache Reserve, the response was overwhelming. Over 20,000 users wanted access to the beta, and we quickly made several interesting discoveries about how people wanted to use Cache Reserve.</p><p>The first big challenge we found was that users hated egress fees as much as we do and wanted to make sure that as much content as possible was in Cache Reserve. During the closed beta we saw sustained usage above 8,000 PUT operations per second, and objects served at a rate of over 3,000 GETs per second. We were also caching around 600 TB for some of our large customers. We knew that we wanted to open the product up to anyone who wanted to use it, and in order to scale to meet this demand, we needed to make several changes quickly. So we turned to Cloudflare’s developer platform.</p><p>Cache Reserve stores data on R2 using its <a href="https://developers.cloudflare.com/r2/data-access/s3-api/api/">S3-compatible API</a>. Under the hood, R2 handles all the complexity of an <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a> system using our performant and scalable developer primitives: <a href="https://developers.cloudflare.com/workers/">Workers</a> and <a href="https://developers.cloudflare.com/workers/runtime-apis/durable-objects/">Durable Objects</a>. We decided to use developer platform tools because they would allow us to implement different scaling strategies quickly. The advantage of building on the Cloudflare developer platform is that the Cache Reserve team could easily experiment with how best to distribute the high load we were seeing, all while shielding users from the complexity of how Cache Reserve works.
</p><p>With a single press of a button, Cache Reserve performs these functions:</p><ul><li><p>On a cache miss, <a href="/how-we-built-pingora-the-proxy-that-connects-cloudflare-to-the-internet/">Pingora</a> (our new L7 proxy) reaches out to the origin for the content and writes the response to R2. This happens while the content continues its trip back to the visitor (thereby avoiding needless latency).</p></li><li><p>Inside R2, a Worker writes the content to R2’s persistent data storage while also keeping track of the important metadata that Pingora sends about the object (like origin headers, freshness values, and retention information) using Durable Objects storage.</p></li><li><p>When the content is next requested, Pingora looks up where the data is stored in R2 by computing the cache key. The cache key’s hash determines both the object name in R2 and which bucket it was written to, as each zone’s assets are sharded across multiple buckets to distribute load.</p></li><li><p>Once found, Pingora attaches the relevant metadata and sends the content from R2 to the nearest upper tier to be cached, then to the lower tier, and finally back to the visitor.</p></li></ul>
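<p>The lookup step above can be sketched as follows. The hashing scheme, shard count, and bucket naming here are assumptions made for illustration only; the point is that a single hash of the cache key yields both the object name and the bucket to query.</p>

```python
import hashlib

NUM_BUCKETS = 8  # assumed shard count, for illustration only

def locate(zone, cache_key):
    """Sketch of the lookup described above: hash the cache key to
    derive both the R2 object name and which of the zone's sharded
    buckets holds it (hypothetical naming scheme)."""
    digest = hashlib.sha256(cache_key.encode()).hexdigest()
    bucket = f"{zone}-shard-{int(digest, 16) % NUM_BUCKETS}"
    return bucket, digest  # bucket to query, object name inside it

bucket, obj = locate("example.com", "https://example.com/assets/logo.png")
```

<p>Because the mapping is a pure function of the cache key, any data center can compute where an object lives without a central index, and load spreads evenly across the shards.</p>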
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7A0KCP536ALqWhmE5xLR6g/5a4cab50d1d9cdad46a83456d7b21824/Screen-Shot-2022-11-14-at-4.31.20-PM.png" />
            
            </figure><p>This is magic! None of the above needs to be managed by the user. By bringing together R2, Workers, Durable Objects, Pingora, and Tiered Cache, we were able to quickly build and make changes to Cache Reserve to scale as needed.</p>
    <div>
      <h2>What’s next for Cache Reserve</h2>
      <a href="#whats-next-for-cache-reserve">
        
      </a>
    </div>
    <p>In addition to the work we’ve done to scale Cache Reserve, opening the product up also opens the door to more features and integrations across Cloudflare. We plan on putting additional analytics and metrics in the hands of Cache Reserve users, so they know precisely what’s in Cache Reserve and how much egress it’s saving them. We also plan on building out more complex integrations with R2 so if customers want to begin managing their storage, they are able to easily make that transition. Finally, we’re going to be looking into providing more options for customers to control precisely what is eligible for Cache Reserve. These features represent just the beginning for how customers will control and customize their cache on Cloudflare.</p>
    <div>
      <h2>What’s some of the feedback been so far?</h2>
      <a href="#whats-some-of-the-feedback-been-so-far">
        
      </a>
    </div>
    <blockquote><p>As a long-time Cloudflare customer, we were eager to deploy Cache Reserve to provide cost savings and improved performance for our end users. Ensuring our application always performs optimally for our global partners and delivery riders is a primary focus of Delivery Hero. With Cache Reserve our cache hit ratio improved by 5%, enabling us to scale back our infrastructure and simplify what is needed to operate our global site and provide additional cost savings.</p><p><b>Wai Hang Tang</b>, Director of Engineering at <a href="https://www.deliveryhero.com/">Delivery Hero</a></p></blockquote><blockquote><p>Anthology uses Cloudflare's global cache to drastically improve the performance of content for our end users at schools and universities. By pushing a single button to enable Cache Reserve, we were able to provide a great experience for teachers and students and reduce two-thirds of our daily egress traffic.</p><p><b>Paul Pearcy</b>, Senior Staff Engineer at <a href="https://www.anthology.com/blackboard">Anthology</a></p></blockquote><blockquote><p>At Enjoei we’re always looking for ways to help make our end-user sites faster and more efficient. By using Cloudflare Cache Reserve, we were able to drastically improve our cache hit ratio by more than 10%, which reduced our origin egress costs. Cache Reserve also improved the performance for many of our merchants’ sites in South America, which improved their SEO and discoverability across the Internet (Google, Criteo, Facebook, Tiktok), and it took no time to set it up.</p><p><b>Elomar Correia</b>, Head of DevOps SRE | Enterprise Solutions Architect at <a href="https://www.enjoei.com.br/">Enjoei</a></p></blockquote><blockquote><p>In the live events industry, the size and demand for our cacheable content can be extremely volatile, which causes unpredictable swings in our egress fees. 
Additionally, keeping data as close to our users as possible is critical for customer experience in the high traffic and low bandwidth scenarios our products are used in, such as conventions and music festivals. Cache Reserve helps us mitigate both of these problems with minimal impact on our engineering teams, giving us more predictable costs and lower latency than existing solutions.</p><p><b>Jarrett Hawrylak</b>, VP of Engineering | Enterprise Ticketing at <a href="https://www.patrontechnology.com/">Patron Technology</a></p></blockquote>
    <div>
      <h2>How can I use it today?</h2>
      <a href="#how-can-i-use-it-today">
        
      </a>
    </div>
    <p>As of today, Cache Reserve is in open beta, meaning that it’s available to anyone who wants to use it.</p><p>To use Cache Reserve:</p><ul><li><p>Simply go to the Caching tile in the dashboard.</p></li><li><p>Navigate to the <a href="https://dash.cloudflare.com/caching/cache-reserve">Cache Reserve page</a> and push the enable data sync button (or purchase button).</p></li></ul><p>Enterprise customers can work with their Cloudflare account team to access Cache Reserve.</p><p>Customers can verify Cache Reserve is working by looking at the baseline metrics regarding how much data is cached and how many operations we’ve seen in the Cache Reserve section of the dashboard. Specific requests served by Cache Reserve are available by using <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/http_requests/">Logpush v2</a> and finding HTTP requests with the field “CacheReserveUsed.”</p><p>We will continue to quickly triage the feedback you give us and make improvements to help ensure Cache Reserve is easy to use, massively beneficial, and your choice for reducing egress fees for cached content.</p>
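<p>As a sketch of how you might tally Cache Reserve hits from a Logpush export: Logpush delivers newline-delimited JSON, and the <code>CacheReserveUsed</code> field is the one mentioned above; the sample records here are invented for illustration.</p>

```python
import json

def reserve_hits(ndjson_lines):
    """Count requests served by Cache Reserve in a Logpush export
    (newline-delimited JSON) by checking the CacheReserveUsed field."""
    hits = 0
    for line in ndjson_lines:
        record = json.loads(line)
        if record.get("CacheReserveUsed"):
            hits += 1
    return hits

# Hypothetical sample records, for illustration only
sample = [
    '{"ClientRequestURI": "/logo.png", "CacheReserveUsed": true}',
    '{"ClientRequestURI": "/app.js", "CacheReserveUsed": false}',
]
```
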
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6AtiPruR4tHDw87XV1NdfE/0ac9a73b571bc43b41cc56449fd5b6eb/Screen-Shot-2022-11-10-at-12.00.31-PM.png" />
            
            </figure>
    <div>
      <h2>Try it out</h2>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>We’ve been so excited to get Cache Reserve in more people’s hands. There will be more exciting developments to Cache Reserve as we continue to invest in giving you all the tools you need to build your perfect cache.</p><p>Try Cache Reserve today and <a href="https://discord.com/invite/aTsevRH3pG">let us know</a> what you think.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Cache Reserve]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[CDN]]></category>
            <category><![CDATA[Egress]]></category>
            <guid isPermaLink="false">6tIlsBJozEhRpkQAVk3Axo</guid>
            <dc:creator>Alex Krivit</dc:creator>
        </item>
        <item>
            <title><![CDATA[Automatic (secure) transmission: taking the pain out of origin connection security]]></title>
            <link>https://blog.cloudflare.com/securing-origin-connectivity/</link>
            <pubDate>Mon, 03 Oct 2022 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today we’re excited to announce that we will soon be offering a zero-configuration option for security on Cloudflare. If we find that we can automatically upgrade the security connection between Cloudflare and a user’s origin, we will ]]></description>
            <content:encoded><![CDATA[
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/629MTo8freJ9xGG3KoHyvO/ac03192ec0ce5eb78c1396361a95d73b/image1-4.png" />
            
            </figure><p>In 2014, Cloudflare set out to encrypt the Internet by introducing <a href="/introducing-universal-ssl/">Universal SSL</a>. It made <a href="https://www.cloudflare.com/application-services/products/ssl/">getting an SSL/TLS certificate free and easy</a> at a time when doing so was neither free nor easy. Overnight, millions of websites had a secure connection between the user’s browser and Cloudflare.</p><p>But getting the connection encrypted from Cloudflare to the customer’s origin server was more complex. Since Cloudflare and all browsers supported SSL/TLS, the connection between the browser and Cloudflare could be instantly secured. But back in 2014, configuring an origin server with an SSL/TLS certificate was complex, expensive, and sometimes not even possible.</p><p>And so we relied on users to configure the best security level for their origin server. Later we added a service that detects and <a href="/ssl-tls-recommender/">recommends the highest level of security</a> for the connection between Cloudflare and the origin server. We also introduced free <a href="/cloudflare-ca-encryption-origin/">origin server certificates</a> for customers who didn’t want to get a certificate elsewhere.</p><p>Today, we’re going even further. Cloudflare will shortly find the most secure connection possible to our customers’ origin servers and use it, automatically. Doing this correctly, at scale, while not breaking a customer’s service is very complicated. This blog post explains how we are automatically achieving the highest level of security possible for those customers who don’t want to spend time configuring their SSL/TLS setup manually.</p>
    <div>
      <h3>Why configuring origin SSL automatically is so hard</h3>
      <a href="#why-configuring-origin-ssl-automatically-is-so-hard">
        
      </a>
    </div>
    <p>When we announced Universal SSL, we knew the <a href="/universal-ssl-encryption-all-the-way-to-the-origin-for-free/">backend security of the connection</a> between Cloudflare and the origin was a different and harder problem to solve.</p><p>In order to <a href="https://www.cloudflare.com/learning/security/glossary/website-security-checklist/">configure the tightest security</a>, customers had to procure a certificate from a third party and upload it to their origin. Then they had to indicate to Cloudflare that we should use this certificate to verify the identity of the server while also indicating the connection security capabilities of their origin. This could be an expensive and tedious process. To help alleviate this high setup cost, in 2015 Cloudflare <a href="/universal-ssl-encryption-all-the-way-to-the-origin-for-free/">launched a beta Origin CA service</a> in which we provided free limited-function certificates to customer origin servers. We also provided guidance on how to correctly configure and upload the certificates, so that secure connections between Cloudflare and a customer’s origin could be established quickly and easily.</p><p>What we discovered, though, is that while this service was useful to customers, it still required a lot of configuration. We didn’t see the same shift we saw with Universal SSL because customers still had to fight with their origins in order to upload certificates and test to make sure that they had configured everything correctly. And when you throw things like load balancers into the mix or servers mapped to different subdomains, handling server-side SSL/TLS gets even more complicated.</p><p>Around the same time as that announcement, <a href="https://letsencrypt.org/how-it-works/">Let’s Encrypt</a> and other services began offering certificates as a public CA for free, making TLS easier and paving the way for widespread adoption. 
Let’s Encrypt and Cloudflare had come to the same conclusion: by offering certificates for free, simplifying server configuration for the user, and working to streamline certificate renewal, they could make a tangible impact on the overall security of the web.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ZURjaGV7W20VQb5GDkM0T/489e9677c3fb842e40273c8ad4563c52/image3-1.png" />
            
            </figure><p>The availability of free, easy-to-configure certificates correlated with an increase in attention on origin-facing security. Cloudflare customers began requesting more documentation on configuring origin-facing certificates and SSL/TLS communication that was performant and intuitive. In response, in 2016 we <a href="/cloudflare-ca-encryption-origin/">announced the general availability of our origin certificate authority</a> to provide cheap and easy origin certificates along with <a href="https://developers.cloudflare.com/ssl/origin-configuration/origin-ca/">guidance on how to best configure backend security</a> for any website.</p><p>The increased customer demand and attention helped pave the way for additional features focused on backend security at Cloudflare. For example, <a href="https://developers.cloudflare.com/ssl/origin-configuration/authenticated-origin-pull">authenticated origin pull</a> ensures that your origin only responds to HTTPS requests that come from Cloudflare, preventing responses to requests from outside of Cloudflare. Another option, <a href="/tunnel-for-everyone/">Cloudflare Tunnel</a>, can be set up to run on the origin servers, proactively establishing secure and private tunnels to the nearest Cloudflare data center. This configuration allows customers to completely lock down their origin servers to only receive requests routed through our network. 
For customers unable to lock down their origins using this method, we still encourage adopting the strongest possible security when configuring how Cloudflare should connect to an origin server.</p><p>Cloudflare currently offers five options for SSL/TLS configurability that we use when communicating with origins:</p><ul><li><p>In <b>Off</b> mode, as you might expect, traffic from browsers to Cloudflare and from Cloudflare to origins is not encrypted and will use plain-text HTTP.</p></li><li><p>In <b>Flexible</b> mode, traffic from browsers to Cloudflare can be encrypted via HTTPS, but traffic from Cloudflare to the site's origin server is not. This is a common selection for origins that cannot support TLS, even though we recommend upgrading this origin configuration wherever possible. A guide for upgrading can be found <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full/#required-setup">here</a>.</p></li><li><p>In <b>Full</b> mode, Cloudflare mirrors the scheme of the browser request when connecting to the origin. For example, if the browser uses HTTP to connect to Cloudflare, we’ll establish a connection with the origin over HTTP. If the browser uses HTTPS, we’ll use HTTPS to communicate with the origin; however, we will not validate the certificate on the origin to prove the identity and trustworthiness of the server.</p></li><li><p>In <b>Full (strict)</b> mode, traffic follows the same pattern as in Full mode; however, Full (strict) mode adds validation of the origin server’s certificate. 
The origin certificate can either be issued by a public CA like Let’s Encrypt or by <a href="https://developers.cloudflare.com/ssl/origin-configuration/origin-ca">Cloudflare Origin CA</a>.</p></li><li><p>In <b>Strict</b> mode, whether traffic from the browser to Cloudflare is HTTP or HTTPS, Cloudflare will always connect to the origin over HTTPS and validate the origin server’s certificate.</p></li></ul>
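<p>The five modes above boil down to two decisions: which scheme Cloudflare uses on the origin leg, and whether the origin’s certificate is validated. This sketch is illustrative, not Cloudflare’s implementation; the mode names are shortened for code:</p>

```python
def origin_connection(mode, browser_scheme):
    """Map the SSL/TLS modes described above to how Cloudflare connects
    to the origin: (scheme used on the origin leg, cert validated?)."""
    if mode == "off":
        return ("http", False)
    if mode == "flexible":
        return ("http", False)          # browser leg may be HTTPS; origin leg is plain HTTP
    if mode == "full":
        return (browser_scheme, False)  # mirror the browser, no cert validation
    if mode == "full_strict":
        return (browser_scheme, True)   # mirror the browser, validate origin cert
    if mode == "strict":
        return ("https", True)          # always HTTPS to origin, always validated
    raise ValueError(f"unknown mode: {mode}")
```

<p>Laid out this way, it is easy to see why Flexible leaves the origin leg unencrypted even for HTTPS visitors, and why only Full (strict) and Strict actually prove the origin’s identity.</p>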
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4WSh4V7JAEnrGqsw6z6y89/ddabb274f5ceb7c55930b8429d17f5dd/image2-3.png" />
            
            </figure><p>What we have found in a lot of cases is that when customers initially signed up for Cloudflare, the origin they were using could not support the most advanced versions of encryption, resulting in origin-facing communication using unencrypted HTTP. These default values persisted over time, even though the origin has become more capable. We think the time is ripe to re-evaluate the entire concept of default SSL/TLS levels.</p><p>That’s why we will reduce the configuration burden for origin-facing security by <b>automatically</b> managing this on behalf of our customers. Cloudflare will provide a zero configuration option for how we will communicate with origins: we will simply look at an origin and use the most-secure option available to communicate with it.</p><p>Re-evaluating default SSL/TLS modes is only the beginning. Not only will we automatically upgrade sites to their best security setting, <b>we will also open up all SSL/TLS modes to all plan levels</b>. Historically, Strict mode was reserved for enterprise customers only. This was because we released this mode in 2014 when few people had origins that were able to communicate over SSL/TLS, and we were nervous about customers breaking their configurations. But this is 2022, and we think that Strict mode should be available to anyone who wants to use it. So we will be opening it up to everyone with the launch of the automatic upgrades.</p>
    <div>
      <h3>How will automatic upgrading work?</h3>
      <a href="#how-will-automatic-upgrading-work">
        
      </a>
    </div>
    <p>To upgrade the origin-facing security of websites, we first need to determine the highest security level the origin can use. To make this determination, we will use the <a href="/ssl-tls-recommender/">SSL/TLS Recommender</a> tool that we released a year ago.</p><p>The recommender performs a series of requests from Cloudflare to the customer’s origin(s) to determine if the backend communication can be upgraded beyond what is currently configured. The recommender accomplishes this by:</p><ul><li><p>Crawling the website to collect links on different pages of the site. For websites with large numbers of links, the recommender will only examine a subset. Similarly, for sites where the crawl turns up an insufficient number of links, we augment our results with a sample of links from recent visitor requests to the zone. All of this is done to get a representative sample of where requests are going, so we know how responses are served from the origin.</p></li><li><p>The crawler uses the user agent <code>Cloudflare-SSLDetector</code> and has been added to Cloudflare’s list of <a href="https://developers.cloudflare.com/firewall/known-issues-and-faq#bots-currently-detected">known “good bots</a>”.</p></li><li><p>Next, the recommender downloads the content of each link over both HTTP and HTTPS. 
The recommender makes only idempotent GET requests when scanning origin servers to avoid modifying server resource state.</p></li><li><p>Following this, the recommender runs a content similarity algorithm to determine if the content collected over HTTP and HTTPS matches.</p></li><li><p>If the content downloaded over HTTP matches the content downloaded over HTTPS, then we know we can upgrade the security of the website without negative consequences.</p></li><li><p>If the website is already configured to Full mode, we will perform a certificate validation (without the additional need for crawling the site) to determine whether it can be updated to Full (strict) mode or higher.</p></li></ul><p>If we determine that the customer’s origin can be upgraded without breaking, we will upgrade the origin-facing security automatically.</p><p>But that’s not all. Not only are we removing the configuration burden for services on Cloudflare, but we’re also <b>providing more precise security settings by moving from per-zone SSL/TLS settings to per-origin SSL/TLS settings</b>.</p><p>The current implementation of the backend SSL/TLS service applies to an entire website, which works well for those with a single origin. For those that have more complex setups, however, this can mean that origin-facing security is defined by the least capable origin serving a part of the traffic for that service. For example, if a website uses img.example.com and api.example.com, and these subdomains are served by different origins that have different security capabilities, we would not want to limit the SSL/TLS capabilities of both subdomains to the least secure origin. By using our new service, we will be able to set per-origin security more precisely, allowing us to maximize the security posture of each origin.</p><p>The goal of this is to maximize the origin-facing security of everything on Cloudflare. 
However, if an origin that we attempt to scan blocks the SSL/TLS Recommender, is non-functional, or opts out of this service, we will not complete the scans and will not be able to upgrade its security. Details on how to opt out will be provided via email announcements soon.</p>
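<p>The similarity step above can be approximated with a standard diff ratio. This is a stand-in sketch under stated assumptions: a character-level comparison and an invented threshold; the real recommender’s algorithm and threshold are not specified in this post.</p>

```python
import difflib

def safe_to_upgrade(http_body, https_body, threshold=0.9):
    """Stand-in for the recommender's similarity check: if content
    fetched over HTTP closely matches content fetched over HTTPS,
    upgrading the origin-facing connection should not change what
    visitors see. Ratio metric and threshold are illustrative only."""
    ratio = difflib.SequenceMatcher(None, http_body, https_body).ratio()
    return ratio >= threshold

# Identical responses over both schemes: the upgrade is judged safe
ok = safe_to_upgrade("<html>hello</html>", "<html>hello</html>")
```

<p>A fuzzy match (rather than a strict byte-for-byte comparison) tolerates harmless differences such as timestamps or rotated ad markup between the two fetches.</p>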
    <div>
      <h3>Opting out</h3>
      <a href="#opting-out">
        
      </a>
    </div>
    <p>There are a number of reasons why someone might want to configure a lower-than-optimal security setting for their website. One common reason customers provide is a fear that having higher security settings will negatively impact the performance of their site. Others may want to set a suboptimal security setting for testing purposes or to debug some behavior. Whatever the reason, we will provide the tools needed to continue to configure the SSL/TLS mode you want, even if that’s different from what we think is the best.</p>
    <div>
      <h3>When is this going to happen?</h3>
      <a href="#when-is-this-going-to-happen">
        
      </a>
    </div>
    <p>We will begin to roll this change out before the end of the year. If you read this and want to make sure you’re at the highest level of backend security already, we recommend <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/">Full (strict)</a> or <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/ssl-only-origin-pull/">Strict mode</a>. If you prefer to wait for us to automatically upgrade your origin security for you, please keep an eye on your inbox for the date we will begin rolling out this change for your group.</p><p>At Cloudflare, we believe that the Internet needs to be secure and private. If you’d like to help us achieve that, we’re hiring across the <a href="https://www.cloudflare.com/careers/jobs/?department=Engineering">engineering organization</a>.</p>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[CDN]]></category>
            <category><![CDATA[Universal SSL]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">74aHWDdU28HyUkx2We1M02</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Mikey Sleevi</dc:creator>
            <dc:creator>Suleman Ahmad</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Cache Rules: precision caching at your fingertips]]></title>
            <link>https://blog.cloudflare.com/introducing-cache-rules/</link>
            <pubDate>Tue, 27 Sep 2022 13:00:00 GMT</pubDate>
            <description><![CDATA[ We have spent the last ten years learning how customers use Page Rules to customize their cached content, and it’s clear the time is ripe for evolving rules-based caching on Cloudflare ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Ten years ago, in 2012, we released a product that put “a powerful new set of tools” in the hands of Cloudflare customers, allowing website owners to control how Cloudflare would cache, apply security controls, manipulate headers, implement redirects, and more on any page of their website. This product is called <a href="/introducing-pagerules-fine-grained-feature-co/">Page Rules</a> and since its introduction, it has grown substantially in terms of popularity and functionality.</p><p>Page Rules are a common choice for customers that want to have fine-grained control over how Cloudflare should cache their content. There are more than 3.5 million caching Page Rules currently deployed that help websites customize their content. We have spent the last ten years learning how customers use those rules to cache content, and it’s clear the time is ripe for evolving rules-based caching on Cloudflare. This evolution will allow for greater flexibility in caching different types of content through additional rule configurability, while providing more visibility into when and how different rules interact across Cloudflare’s ecosystem.</p><p>Today, we’ve <a href="/future-of-page-rules">announced</a> that Page Rules will be re-imagined into four product-specific rule sets: Origin Rules, Cache Rules, Configuration Rules, and Redirect Rules.</p><p>In this blog we’re going to discuss <b>Cache Rules</b>, and how we’re applying ten years of product iteration and learning from Page Rules to give you the tools and options to best optimize your cache.</p>
    <div>
      <h3>Activating Page Rules, then and now</h3>
      <a href="#activating-page-rules-then-and-now">
        
      </a>
    </div>
    <p>Adding a Page Rule is very simple: users either make an API call or navigate to the dashboard, enter a full or wildcard URL pattern (e.g. <code>example.com/images/scr1.png</code> or <code>example.com/images/scr*</code>), and tell us which actions to perform when we see that pattern. For example, a Page Rule could tell browsers to keep a copy of the response longer via “<a href="https://developers.cloudflare.com/cache/about/edge-browser-cache-ttl/">Browser Cache TTL</a>”, or tell our cache to do the same via “<a href="https://developers.cloudflare.com/cache/about/edge-browser-cache-ttl/">Edge Cache TTL</a>”. Low effort, high impact. All this is accomplished without fighting origin configuration or writing a single line of code.</p><p>Under the hood, a lot is happening to make that rule scale: we turn every rule condition into regexes, matching them against the tens of millions of requests per second across 275+ data centers globally. The compute necessary to process and apply new values on the fly across the globe is immense and corresponds directly to the number of rules we are able to offer to users. By moving cache actions from Page Rules to Cache Rules, we can allow users to not only set more rules, but also to trigger those rules more precisely.</p>
    <div>
      <h3>More than a URL</h3>
      <a href="#more-than-a-url">
        
      </a>
    </div>
    <p>Users of Page Rules are limited to specific URLs or URL patterns to define how browsers or Cloudflare cache their websites’ files. Cache Rules allow users to set caching behavior on additional criteria, such as the HTTP request headers or the requested file type. Users can also continue to match on the requested URL, as in our Page Rules example earlier. With Cache Rules, users can now define this behavior using one or more of the available <a href="https://developers.cloudflare.com/cache/about/cache-rules/">fields</a>.</p><p>For example, if a user wanted to specify cache behavior for all <code>image/png</code> content-types, it’s now as simple as pushing a few buttons in the UI or writing a small expression in the API. Cache Rules give users precise control over when and how Cloudflare and browsers cache their content. A Cache Rule can be triggered on request header values with an expression as simple as</p><p><code>any(http.request.headers["content-type"][*] == "image/png")</code></p><p>which applies the Cache Rule to all <code>image/png</code> media types. Additionally, users may also leverage other request headers like cookie values, user-agents, or hostnames.</p><p>As a plus, these matching criteria can be stacked and combined with operators like <code>AND</code> and <code>OR</code>, making it simple to build complex rules from many discrete blocks, e.g. if you would like to target both <code>image/png</code> AND <code>image/jpeg</code>.</p><p>For the full list of fields available for Cache Rules conditionals, please refer to the <a href="https://developers.cloudflare.com/cache/about/cache-rules/">Cache Rules documentation</a>.</p>
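    <p>For instance, an expression targeting both of those media types might look like the following sketch, in the same expression syntax as above. Note that because any single request carries only one content-type value, the two conditions are joined with <code>or</code> (joining them with <code>and</code> would never match):</p>

```
any(http.request.headers["content-type"][*] == "image/png")
or any(http.request.headers["content-type"][*] == "image/jpeg")
```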
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6LQO30zIpO4qewcrGtY9Gu/0d472c4b0a4f76513613c1e8b1c9e47e/image1-59.png" />
            
            </figure>
    <div>
      <h3>Visibility into how and when Rules are applied</h3>
      <a href="#visibility-into-how-and-when-rules-are-applied">
        
      </a>
    </div>
    <p>Our current offerings of Page Rules, Workers, and Transform Rules can all manipulate caching functionality for our users’ content. Often, some trial and error is required to make sure that the combination of several rules and/or Workers behaves in the expected manner.</p><p>As part of upgrading Page Rules we have separated it into four new products:</p><ol><li><p>Origin Rules</p></li><li><p>Cache Rules</p></li><li><p>Configuration Rules</p></li><li><p>Redirect Rules</p></li></ol><p>This gives users a better understanding of how and when different parts of the Cloudflare stack are activated, reducing spin-up and debug time. We will also be providing additional visibility in the dashboard for when rules are activated as requests go through Cloudflare. As a sneak peek, please see:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6f1aHsgVHcPsfCsLvqyzVH/91f25d4d2e9e3881b736bfac12855bdc/Screenshot-2022-09-27-at-13.03.15.png" />
            
            </figure><p>Our users may take advantage of this strict precedence by chaining the results of one product into another. For example, the output of URL rewrites in Transform Rules will feed into the actions of Cache Rules, and the output of Cache Rules will feed into IP Access Rules, and so on.</p><p>In the future, we plan to increase this visibility further to allow for inputs and outputs across the rules products to be observed so that the modifications made on our network can be observed before the rule is even deployed.</p>
    <div>
      <h3>Cache Rules. What are they? Are they improved? Let’s find out!</h3>
      <a href="#cache-rules-what-are-they-are-they-improved-lets-find-out">
        
      </a>
    </div>
    <p>To start, Cache Rules will have all the caching functionality currently available in Page Rules. Users will be able to:</p><ul><li><p>Tell Cloudflare to cache an asset or not,</p></li><li><p>Alter how long Cloudflare should cache an asset,</p></li><li><p>Alter how long a browser should cache an asset,</p></li><li><p>Define a custom cache key for an asset,</p></li><li><p>Configure how Cloudflare serves stale content, revalidates, or otherwise uses header values to direct cache freshness and content continuity,</p></li></ul><p>And so much more.</p><p>Cache Rules are intuitive and work similarly to our other <a href="https://developers.cloudflare.com/ruleset-engine/">ruleset engine</a>-based products announced today: API or UI conditionals for URL or request headers are evaluated, and if they match, Cloudflare and browser caching options are configured on behalf of the user. For all the different options available, see our Cache Rules <a href="https://developers.cloudflare.com/cache/about/cache-rules/">documentation</a>.</p><p>Under the hood, Cache Rules use targeted rule evaluation so that additional rules can be supported per user and across the whole engine. What this means for our users is that by consuming less CPU for rule evaluations, we’re able to support more rules per user. For specifics on how many additional Cache Rules you’ll be able to use, please see the <a href="/future-of-page-rules">Future of Rules Blog</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5eNSfTxWDVu2FsRc8JKyIZ/f2a109d9a6b63eebfaa9dcd4f9160b1d/image2-49.png" />
            
            </figure>
    <div>
      <h3>How can you use Cache Rules today?</h3>
      <a href="#how-can-you-use-cache-rules-today">
        
      </a>
    </div>
    <p><b>Cache Rules</b> are available today in beta and can be configured via the <a href="https://developers.cloudflare.com/cache/about/cache-rules/#create-cache-rules-via-api">API</a>, Terraform, or UI in the Caching tab of the dashboard. We welcome you to try the functionality and provide feedback on how it is working or what additional features you’d like to see, via community posts or however else you generally get our attention.</p><p>If you have Page Rules implemented for caching on the same path, Cache Rules will take precedence by design. For our more patient users, we plan on releasing a one-click migration tool for Page Rules in the near future.</p>
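    <p>As a point of comparison with the Page Rule flow, here is a sketch (in Python) of what a Cache Rule payload might look like when deployed through the ruleset engine. The zone ID is a placeholder, and the exact schema — phase name, action, and parameter fields — should be confirmed against the Cache Rules API documentation linked above:</p>

```python
import json

# Sketch of a Cache Rule deployed through the ruleset engine's
# "http_request_cache_settings" phase. ZONE_ID is a placeholder; confirm
# the field names against the Cache Rules API documentation.
ZONE_ID = "YOUR_ZONE_ID"
endpoint = (
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}"
    "/rulesets/phases/http_request_cache_settings/entrypoint"
)

ruleset = {
    "rules": [{
        # Cache all image/png responses, overriding the origin's edge TTL.
        "expression": 'any(http.request.headers["content-type"][*] == "image/png")',
        "action": "set_cache_settings",
        "action_parameters": {
            "cache": True,
            "edge_ttl": {"mode": "override_origin", "default": 3600},
            "browser_ttl": {"mode": "respect_origin"},
        },
    }],
}

body = json.dumps(ruleset)
```

    <p>Unlike a Page Rule, the rule above is driven by a full filter expression rather than a URL pattern, which is what makes the extra matching criteria possible.</p>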
    <div>
      <h3>What’s in store for the future of Cache Rules?</h3>
      <a href="#whats-in-store-for-the-future-of-cache-rules">
        
      </a>
    </div>
    <p>In addition to granular control and increased visibility, the new rules products also open the door to more complex features: recommending rules to help customers achieve better cache hit ratios and reduce their egress costs; adding additional caching actions and visibility, so you can see precisely how Cache Rules will alter the headers that Cloudflare uses to cache content; and allowing customers to run experiments with different rule configurations and see the outcome firsthand. These possibilities represent the tip of the iceberg for the next iteration of how customers will use rules on Cloudflare.</p>
    <div>
      <h3>Try it out!</h3>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>We look forward to you trying Cache Rules and providing feedback on what you’d like to see us build next.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cache]]></category>
            <category><![CDATA[CDN]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cache Rules]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">5F0VFOCMwG7EXVL6kqOXuK</guid>
            <dc:creator>Alex Krivit</dc:creator>
        </item>
        <item>
            <title><![CDATA[Crawler Hints supports Microsoft’s IndexNow in helping users find new content]]></title>
            <link>https://blog.cloudflare.com/crawler-hints-supports-microsofts-indexnow-in-helping-users-find-new-content/</link>
            <pubDate>Fri, 12 Aug 2022 16:30:20 GMT</pubDate>
            <description><![CDATA[ Cloudflare is uniquely positioned to help give crawlers hints about when they should recrawl, if new content has been added, or if content on a site has recently changed ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/h93ft6NJSg9cczGxRWGu9/eba65d160f1d0701c350a6bc92a21acb/image2-9.png" />
            
            </figure><p>The web is constantly changing. Whether it’s news or updates to your social feed, it’s a constant flow of information. As a user, that’s great. But have you ever stopped to think how search engines deal with all the change?</p><p>It turns out, they “index” the web on a regular basis — sending bots out to constantly crawl webpages, looking for changes. Today, bot traffic accounts for about <a href="https://radar.cloudflare.com/?date_filter=last_30_days">30% of total traffic</a> on the Internet, and given how foundational search is to using the Internet, it should come as no surprise that search engine bots make up a large proportion of that. What might come as a surprise, though, is how inefficient the model is: we estimate that over <a href="/crawler-hints-how-cloudflare-is-reducing-the-environmental-impact-of-web-searches/">50% of crawler traffic is wasted effort</a>.</p><p>This has a huge impact. There’s all the additional capacity that owners of websites need to bake into their site to absorb the bots crawling all over it. There’s the transmission of the data. There’s the CPU cost of running the bots. And when you’re running at the scale of the Internet, all of this has a pretty big environmental footprint.</p><p>Part of the problem, though, is nobody had really stopped to ask: maybe there’s a better way?</p><p>Right now, the model for indexing websites is the same as it has been since the 1990s: a “pull” model, where the search engine sends a crawler out to a website after a predetermined amount of time. During Impact Week last year, we asked: what about flipping the model on its head? What about moving to a push model, where a website could simply ping a search engine to let it know an update had been made?</p><p>There are a heap of advantages to such a model. The website wins: it’s not dealing with unnecessary crawls. 
It also makes sure that as soon as there’s an update to its content, it’s reflected in the search engine — it doesn’t need to wait for the next crawl. The website owner wins because they don't need to manage distinct search engine crawl submissions. The search engine wins, too: it saves money on crawl costs, and it can make sure it gets the latest content.</p><p>Of course, this requires work on both sides of the equation. The websites need a mechanism to alert the search engines, and the search engines need a mechanism to receive the alert, so they know when to do the crawl.</p>
    <div>
      <h3>Crawler Hints — Cloudflare’s Solution for Websites</h3>
      <a href="#crawler-hints-cloudflares-solution-for-websites">
        
      </a>
    </div>
    <p>Solving this problem is why we <a href="/crawler-hints-how-cloudflare-is-reducing-the-environmental-impact-of-web-searches/">launched Crawler Hints</a>. Cloudflare sits in a unique position on the Internet — we’re serving on average 36 million HTTP requests per second. That represents <i>a lot of websites</i>. It also means we’re uniquely positioned to help solve this problem: to give crawlers hints about when they should recrawl, if new content has been added, or if content on a site has recently changed.</p><p>With Crawler Hints, we send signals to web indexers based on cache data and origin status codes to help them understand when content has likely changed or been added to a site. The aim is to increase the number of relevant crawls as well as drastically reduce the number of crawls that don’t find fresh content, saving bandwidth and compute for both indexers and sites alike, and improving the experience of using the search engines.</p><p>But, of course, that’s just half the equation.</p>
    <div>
      <h3>IndexNow Protocol — the Search Engine Moves from Pull to Push</h3>
      <a href="#indexnow-protocol-the-search-engine-moves-from-pull-to-push">
        
      </a>
    </div>
    <p>Websites alerting the search engine about changes is useless if the search engines aren’t listening — if they simply continue to crawl the way they always have. Of course, search engines are incredibly complicated, and changing the way they operate is no easy task.</p><p>The IndexNow Protocol is a standard developed by Microsoft, Seznam.cz and Yandex, and it represents a major shift in the way search engines operate. Using IndexNow, search engines have a mechanism by which they can receive signals from Crawler Hints. Once they have that signal, they can shift their crawlers from a pull model to a push model.</p><p>In a recent update, <a href="https://blogs.bing.com/webmaster/august-2022/IndexNow-adoption-gains-momentum">Microsoft announced</a> that millions of websites are now using IndexNow to signal to search engine crawlers when their content needs to be crawled, and that IndexNow was used to <b>index/crawl about 7% of all new URLs clicked</b> when someone is selecting from web search results.</p><p>On the Cloudflare side, since the release of Crawler Hints in October 2021, Crawler Hints has processed about <b>six hundred billion</b> signals to IndexNow.</p><p>That’s a lot of saved crawls.</p>
    <div>
      <h3>How to enable Crawler Hints</h3>
      <a href="#how-to-enable-crawler-hints">
        
      </a>
    </div>
    <p>By enabling Crawler Hints on your website, with the simple click of a button, Cloudflare will take care of signaling to these search engines when your content has changed via the <a href="https://www.indexnow.org/">IndexNow</a> API. You don’t need to do anything else!</p><p>Crawler Hints is free to use and available to all Cloudflare customers. If you’d like to see how Crawler Hints can benefit how your website is indexed by the world's biggest search engines, please feel free to opt into the service by:</p><ol><li><p>Signing in to your Cloudflare account.</p></li><li><p>Navigating to the Cache tab in the dashboard.</p></li><li><p>Clicking on the Configuration section.</p></li><li><p>Locating the Crawler Hints setting and enabling it.</p></li></ol>
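    <p>For the curious, the IndexNow submission that Cloudflare makes on your behalf is conceptually just an HTTP GET. The Python sketch below constructs such a ping per the public protocol documented at indexnow.org; the page URL and key are illustrative placeholders (a real key is one the site owner has published at a well-known location on the site):</p>

```python
from urllib.parse import urlencode

# Build an IndexNow ping per the public protocol (indexnow.org).
# The page URL and key below are illustrative placeholders.
endpoint = "https://api.indexnow.org/indexnow"
params = {
    "url": "https://example.com/updated-page",  # page whose content changed
    "key": "0123456789abcdef",                  # key published by the site owner
}
ping_url = f"{endpoint}?{urlencode(params)}"
# Issuing a GET to ping_url tells participating search engines
# that the page should be recrawled.
```

    <p>With Crawler Hints enabled, Cloudflare assembles and sends these signals automatically, so none of this code is something you need to run yourself.</p>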
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6gbqdPUtYZlLMw8aVmoygy/3b782370e5d898dd5606088c345f7d33/image1-15.png" />
            
            </figure><p>Upon enabling Crawler Hints, Cloudflare will share when content on your site has changed and needs to be re-crawled with search engines using the IndexNow protocol (<a href="/from-0-to-20-billion-how-we-built-crawler-hints/">this blog</a> can help if you’re interested in finding out more about how the mechanism works).</p>
    <div>
      <h3>What’s Next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Going forward, because the benefits are so substantial for site owners, search operators, and the environment, we plan to start defaulting Crawler Hints on for all our customers. We’re also hopeful that Google, the world’s largest search engine, will adopt IndexNow or a similar standard and lower the burden of search crawling on the planet.</p><p>When we think of helping to build a better Internet, this is exactly what comes to mind: creating and supporting standards that make it operate better, greener, faster. We’re really excited about the work to date, and will continue to work to improve the signaling to ensure the most valuable information is being sent to the search engines in a timely manner. This includes incorporating additional signals such as etags, last-modified headers, and content hash differences. Adding these signals will help further inform crawlers when they should reindex sites, and how often they need to return to a particular site to check if it’s been changed. This is only the beginning. We will continue testing more signals and working with industry partners so that we can help crawlers run efficiently with these hints.</p><p>And finally: if you’re on Cloudflare, and you’d like to be part of this revolution in how search engines operate on the web (it’s free!), simply follow the instructions in the section above.</p>
            <category><![CDATA[Crawler Hints]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Cache]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[SEO]]></category>
            <guid isPermaLink="false">5LL6jyHzqmppNptbCOrWuQ</guid>
            <dc:creator>Alex Krivit</dc:creator>
        </item>
        <item>
            <title><![CDATA[Early Hints update: How Cloudflare, Google, and Shopify are working together to build a faster Internet for everyone]]></title>
            <link>https://blog.cloudflare.com/early-hints-performance/</link>
            <pubDate>Thu, 23 Jun 2022 15:59:08 GMT</pubDate>
            <description><![CDATA[ During a time of uncertainty due to the global pandemic, a time when everyone was more online than ever before, Cloudflare, Google, and Shopify all came together to build and test Early Hints ]]></description>
            <content:encoded><![CDATA[ <p></p><p>A few months ago, we wrote a <a href="/early-hints/">post</a> focused on a product we were building that could vastly improve page load performance. That product, known as Early Hints, has seen wide adoption since that original post. In early benchmarking experiments with Early Hints, we saw performance improvements that were as high as 30%.</p><p>Now, with over 100,000 customers using Early Hints on Cloudflare, we are excited to talk about how much Early Hints have improved page loads for our customers in production, how customers can get the most out of Early Hints, and provide an update on the next iteration of Early Hints we’re building.</p>
    <div>
      <h3>What Are Early Hints again?</h3>
      <a href="#what-are-early-hints-again">
        
      </a>
    </div>
    <p>As a reminder, the browser you’re using right now to read this page needed instructions for what to render and what resources (like images, fonts, and scripts) need to be fetched from somewhere else in order to complete the loading of this (or any given) web page. When you decide you want to see a page, your browser sends a request to a server and the instructions for what to load come from the server’s response. These responses are generally composed of a multitude of <a href="https://developer.chrome.com/docs/devtools/resources/#browse">resources</a> that tell the browser what content to load and how to display it to the user. The servers sending these instructions to your browser often need time to gather up all of the resources in order to compile the whole webpage. This period is known as “server think time.” Traditionally, during the “server think time” the browser would sit waiting until the server has finished gathering all the required resources and is able to return the full response.</p><p>Early Hints was designed to take advantage of this “server think time” to send instructions to the browser to begin loading readily-available resources <i>while</i> the server finishes compiling the full response. Concretely, the server sends two responses: the first to instruct the browser on what it can begin loading right away, and the second is the full response with the remaining information. By sending these hints to a browser before the full response is prepared, the browser can figure out what it needs to do to load the webpage faster for the end user.</p><p>Early Hints uses the <a href="https://datatracker.ietf.org/doc/html/rfc8297">HTTP status code 103</a> as the first response to the client. 
The “hints” are HTTP headers attached to the 103 response that are likely to appear in the final response, indicating (with the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Link">Link</a> header) resources the browser should begin loading while the server prepares the final response. Sending hints on which assets to expect before the entire response is compiled allows the browser to use this “think time” (when it would otherwise have been sitting idle) to fetch needed assets, prepare parts of the displayed page, and otherwise get ready for the full response to be returned.</p><p>Early Hints on Cloudflare accomplishes performance improvements in three ways:</p><ul><li><p>By sending a response where resources are directed to be <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Link_types/preload">preloaded</a> by the browser. Preloaded resources direct the browser to begin loading the specified resources as they will be needed soon to load the full page. For example, if the browser needs to fetch a font resource from a third party, that fetch can happen before the full response is returned, so the font is already waiting to be used on the page when the full response returns from the server.</p></li><li><p>By using <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Link_types/preconnect">preconnect</a> to initiate a connection to places where content will be returned from an origin server. For example, if a Shopify storefront needs content from a Shopify origin to finish loading the page, preconnect will warm up the connection, which improves performance when the origin returns the content.</p></li><li><p>By caching and emitting Early Hints on Cloudflare, we make efficient use of the full waiting period - not just server think time - which includes transit latency to the origin. Cloudflare sits within 50 milliseconds of 95% of the Internet-connected population globally. 
So while a request is routed to an origin and the final response is being compiled, Cloudflare can send an Early Hint from much closer and the browser can begin loading.</p></li></ul>
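    <p>On the wire, the exchange looks roughly like this (adapted from the example in RFC 8297; the asset path and preconnect origin are illustrative):</p>

```
GET / HTTP/1.1
Host: example.com

HTTP/1.1 103 Early Hints
Link: </main.css>; rel=preload; as=style
Link: <https://cdn.example.com>; rel=preconnect

HTTP/1.1 200 OK
Content-Type: text/html
Link: </main.css>; rel=preload; as=style

<!doctype html>
...
```

    <p>The interim 103 arrives first, carrying the Link headers the browser can act on immediately; the final 200 follows with the full page once the server is done thinking.</p>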
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5X6KUywKFKIT4njcqG6oDx/6f2dba740fc457b264f93e5a0f84a0cb/image1-26.png" />
            
            </figure><p>Early Hints is like multitasking across the Internet - at the same time the origin is compiling resources for the final response and making calls to databases or other servers, the browser is already beginning to load assets for the end user.</p>
    <div>
      <h3>What’s new with Early Hints?</h3>
      <a href="#whats-new-with-early-hints">
        
      </a>
    </div>
    <p>While developing Early Hints, we’ve been fortunate to work with Google and Shopify to collect data on the performance impact. Chrome provided web developers with <a href="https://developer.chrome.com/en/docs/web-platform/origin-trials/">experimental access</a> to both <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Link_types/preload">preload</a> and <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Link_types/preconnect">preconnect</a> support for Link headers in Early Hints. Shopify worked with us to guide the development by providing test frameworks which were invaluable to getting real performance data.</p><p>Today is a big day for Early Hints. Google <a href="https://developer.chrome.com/blog/early-hints/">announced</a> that Early Hints is available in Chrome version 103 with support for preload and preconnect to start. Previously, Early Hints was available via an <a href="https://developer.chrome.com/en/docs/web-platform/origin-trials">origin trial</a> so that Chrome could measure the full performance benefit (A/B test). Now that the data has been collected and analyzed, and we’ve been able to prove a substantial improvement to page load, we’re excited that Chrome’s full support of Early Hints will mean that many more requests will see the performance benefits.</p><p>That's not the only big news coming out about Early Hints. Shopify battle-tested Cloudflare’s implementation of Early Hints during <a href="https://blog.cloudflare.com/the-truth-about-black-friday-and-cyber-monday/">Black Friday/Cyber Monday</a> 2021 and is sharing the performance benefits they saw during the busiest shopping time of the year:</p><blockquote><p>Today, HTTP 103 Early Hints ships with Chrome 103!</p><p>Why is this important for <a href="https://twitter.com/hashtag/webperf?src=hash&amp;ref_src=twsrc%5Etfw">#webperf</a>? How did <a href="https://twitter.com/Shopify?ref_src=twsrc%5Etfw">@Shopify</a> help make all merchant sites faster? 
(LCP over 500ms faster at p50!)</p><p>Hint: A little collaboration w/ <a href="https://twitter.com/Cloudflare?ref_src=twsrc%5Etfw">@cloudflare</a> &amp; <a href="https://twitter.com/googlechrome?ref_src=twsrc%5Etfw">@googlechrome</a> <a href="https://t.co/Dz7BD4Jplp">pic.twitter.com/Dz7BD4Jplp</a></p><p>— Colin Bendell (@colinbendell) <a href="https://twitter.com/colinbendell/status/1539322190541295616?ref_src=twsrc%5Etfw">June 21, 2022</a></p></blockquote><p>While talking to the audience at <a href="https://www.cloudflare.com/connect2022/">Cloudflare Connect London</a> last week, Colin Bendell, Director, Performance Engineering at Shopify, summarized it best: "<i>when a buyer visits a website, if that first page that (they) experience is just 10% faster, on average there is a 7% increase in conversion</i>". The beauty of Early Hints is that this sort of speedup can be just one click away.</p><p>You can see a portion of his talk here:</p><div></div>
<p></p><p>The headline here is that during a time of vast uncertainty due to the global pandemic, a time when everyone was more online than ever before, when people needed their Internet to be reliably fast — Cloudflare, Google, and Shopify all came together to build and test Early Hints so that the whole Internet would be a faster, better, and more efficient place.</p><p>So how much did Early Hints improve performance of customers’ websites?</p>
    <div>
      <h3>Performance Improvement with Early Hints</h3>
      <a href="#performance-improvement-with-early-hints">
        
      </a>
    </div>
    <p>In our simple tests back in September, we were able to accelerate the <a href="https://web.dev/lcp/#what-is-lcp">Largest Contentful Paint (LCP)</a> by 20-30%. Granted, this result was on an artificial page with mostly large images where Early Hints impact could be maximized. As for Shopify, we also knew their storefronts were <a href="/early-hints/#how-can-we-speed-up-slow-dynamic-page-loads">particularly good candidates</a> for Early Hints. Each mom-and-pop.shop page depends on many assets served from cdn.shopify.com - speeding up a preconnect to that host should meaningfully accelerate loading those assets.</p><p>But what about other zones? We expected most origins already using Link preload and preconnect headers to see at least modest improvements if they turned on Early Hints. We wanted to assess performance impact for other uses of Early Hints beyond Shopify’s.</p><p>However, <a href="https://www.cloudflare.com/application-services/solutions/app-performance-monitoring/">getting good data on web page performance impact</a> can be tricky. Not every 103 response from Cloudflare will result in a subsequent request through our network. Some hints tell the browser to preload assets on important third-party origins, for example. And not every Cloudflare zone may have <a href="/introducing-browser-insights/">Browser Insights</a> enabled to gather Real User Monitoring data.</p><p>Ultimately, we decided to do some lab testing with <a href="https://www.webpagetest.org/">WebPageTest</a> of a sample of the most popular websites (top 1,000 by request volume) using Early Hints on their URLs with preload and preconnect Link headers. WebPageTest (which we’ve <a href="/workers-and-webpagetest/">written about in the past</a>) is an excellent tool to visualize and collect metrics on web page performance across a variety of device and connectivity settings.</p>
    <div>
      <h3>Lab Testing</h3>
      <a href="#lab-testing">
        
      </a>
    </div>
    <p>In our earlier blog post, we were mainly focused on Largest Contentful Paint (LCP), which is the time at which the browser renders the largest visible image or text block, relative to the start of the page load. Here we’ll focus on improvements not only to LCP, but also <a href="https://web.dev/fcp">FCP (First Contentful Paint)</a>, which is the time at which the browser first renders visible content relative to the start of the page load.</p><p>We compared test runs with Early Hints support off and on (in Chrome), across four different simulated environments: desktop with a cable connection (5Mbps download / 28ms <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">RTT</a>), mobile with 3G (1.6Mbps / 300ms RTT), mobile with low-latency 3G (1.6Mbps / 150ms RTT) and mobile with 4G (9Mbps / 170ms RTT). After running the tests, we cleaned the data to remove URLs with no visual completeness metrics or less than five DOM elements. (These usually indicated document fragments vs. a page a user might actually navigate to.) This gave us a final sample population of a little more than 750 URLs, each from distinct zones.</p><p>In the box plots below, we’re comparing FCP and LCP percentiles between the timing data control runs (no Early Hints) and the runs with Early Hints enabled. Our sample population represents a variety of zones, some of which load relatively quickly and some far slower, thus the long whiskers and string of outlier points climbing the y-axis. The y-axis is constrained to the max p99 of the dataset, to ensure 99% of the data are reflected in the graph while still letting us focus on the p25 / p50 / p75 differences.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1j17ZcUiVpUfYsuGE2INtM/42a00c9d5f429f2dbc712bf749ab9069/image2-26.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7h5iGHo8JCg7a6tYeBxaiV/80c501e9d52cb7deacaa43034b277fe9/image6-10.png" />
            
            </figure><p>The relative shift in the box plot quantiles suggest we should expect modest benefits for Early Hints for the majority of web pages. By comparing FCP / LCP percentage improvement of the web pages from their respective baselines, we can quantify what those median and p75 improvements would look like:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1MNoOPalsR0oildpZutzx6/fd9fbd6b10f9521a6466045a17c41012/image8-7.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5BlauSorivIGRF5fZiu5Oy/45a0544dc24dab19fd23f2f238ce9f74/image4-14.png" />
            
            </figure><p>A couple observations:</p><ul><li><p>From the p50 values, we see that for 50% of web pages on desktop, Early Hints improved FCP by more than 9.47% and LCP by more than 6.03%. For the p75, or the upper 25%, FCP improved by more than 20.4% and LCP by more than 15.97%.</p></li><li><p>The sizable improvements in First Contentful Paint suggest many hints are for <a href="https://web.dev/render-blocking-resources/">render-blocking assets</a> (such as critical but dynamic stylesheets and scripts that can’t be embedded in the HTML document itself).</p></li><li><p>We see a greater percentage impact on desktop over cable and on mobile over 4G. In theory, the impact of Early Hints is bounded by the load time of the linked asset (i.e. ideally we could preload the entire asset before the browser requires it), so we might expect the FCP / LCP reduction to increase in step with latency. Instead, it appears to be the other way around. There could be many variables at play here - for example, the extra bandwidth the 4G connection provides seems to be more influential than the decreased latency between the two 3G connection settings. Likely that wider bandwidth pipe is especially helpful for URLs we observed that preloaded larger assets such as JS bundles or font files. We also found examples of pages that performed consistently worse on lower-grade connections (see our note on “over-hinting” below).</p></li><li><p>Quite a few sample zones cached their HTML pages on Cloudflare (~15% of the sample). For CDN cache hits, we’d expect Early Hints to be less influential on the final result (because the “server think time” is drastically shorter). Filtering them out from the sample, however, yielded almost identical relative improvement metrics.</p></li></ul><p>The relative distributions between control and Early Hints runs, as well as the per-site baseline improvements, show us Early Hints can be broadly beneficial for use cases beyond Shopify’s. 
As suggested by the p75+ values, we also still find plenty of case studies showing a more substantial potential impact to LCP (and FCP) like the one we observed from our artificial test case, as indicated from these WebPageTest waterfall diagrams:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4SXT10k2QdHrLaBfVW2ygc/c40ba4f5f31aa48aa5c59658382dc699/image3-18.png" />
            
            </figure><p>These diagrams show the network and rendering activity on the same web page (which, bucking the trend, had some of its best results over mobile – 3G settings, shown here) for its first ten resources. Compare the WebPageTest waterfall view above (with Early Hints disabled) with the waterfall below (Early Hints enabled). The first green vertical line in each indicates First Contentful Paint. The page configures Link preload headers for a few JS / CSS assets, as well as a handful of key images. When Early Hints is on, those assets (numbered 2 through 9 below) get a significant head start from the preload hints. In this case, FCP and LCP improved by 33%!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2zORzzS6Qvh3eX4KEvUW0P/854ed6d6fca1e26e5389e7736cf0b1e4/image5-9.png" />
            
            </figure>
    <div>
      <h3>Early Hints Best Practices and Strategies for Better Performance</h3>
      <a href="#early-hints-best-practices-and-strategies-for-better-performance">
        
      </a>
    </div>
    <p>The effect of Early Hints can vary widely on a case-by-case basis. We noticed particularly successful zones had one or more of the following:</p><ul><li><p>Preconnect Link headers to important third-party origins (e.g. an origin hosting the pages’ assets, or Google Fonts).</p></li><li><p>Preload Link headers for a handful of critical render-blocking resources.</p></li><li><p>Scripts and stylesheets split into chunks, enumerated in preload Links.</p></li><li><p>A preload Link for the LCP asset, e.g. the featured image on a blog post.</p></li></ul><p>It’s quite possible these strategies are already familiar to you if you work on web performance! Essentially the <a href="https://web.dev/uses-rel-preload/">best</a> <a href="https://web.dev/uses-rel-preconnect/">practices</a> that apply to using Link headers or &lt;link&gt; elements in the HTML also apply to Early Hints. That is to say: if your web page is already using preload or preconnect Link headers, using Early Hints should amplify those benefits.</p><p>A cautionary note here: while it may be safer to aggressively send assets in Early Hints versus <a href="https://web.dev/performance-http2/">Server Push</a> (as the hints won’t arbitrarily send browser-cached content the way <a href="/early-hints/#didn-t-server-push-try-to-solve-this-problem">Server Push might</a>), it is still possible to <i>over</i>-hint non-critical assets and saturate network bandwidth in a similar manner to <a href="https://docs.google.com/document/d/1K0NykTXBbbbTlv60t5MyJvXjqKGsCVNYHyLEXIxYMv0/edit">overpushing</a>. For example, one page in our sample listed well over 50 images in its 103 response (but not one of its render-blocking JS scripts). It saw improvements over cable, but was consistently worse off in the higher-latency, lower-bandwidth mobile connection settings.</p><p>Google has great guidelines for configuring Link headers at your origin in their <a href="https://developer.chrome.com/blog/early-hints/">blog post</a>. 
As for emitting these Links as Early Hints, Cloudflare can take care of that for you!</p>
    <div>
      <h3>How to enable on Cloudflare</h3>
      <a href="#how-to-enable-on-cloudflare">
        
      </a>
    </div>
    <ul><li><p>To enable Early Hints on Cloudflare, simply sign in to your account and select the domain you’d like to enable it on.</p></li><li><p>Navigate to the <b>Speed Tab</b> of the dashboard.</p></li><li><p>Enable Early Hints.</p></li></ul><p>Enabling Early Hints means that we will harvest the preload and preconnect Link headers from your origin responses, cache them, and send them as 103 Early Hints for subsequent requests so that future visitors will be able to gain an even greater performance benefit.</p><p>For more information about our Early Hints feature, please refer to our <a href="/early-hints/">announcement post</a> or our <a href="https://developers.cloudflare.com/cache/about/early-hints/">documentation</a>.</p>
    <div>
      <h3>Smart Early Hints update</h3>
      <a href="#smart-early-hints-update">
        
      </a>
    </div>
    <p>In our <a href="/early-hints/">original blog post</a>, we also mentioned our intention to ship a product improvement to Early Hints that would generate the 103 on your behalf.</p><p>Smart Early Hints will generate Early Hints even when there isn’t a Link header present in the origin response from which we can harvest a 103. The goal is to be a no-code/configuration experience with massive improvements to page load. Smart Early Hints will infer what assets can be preloaded or <a href="https://web.dev/priority-hints/">prioritized</a> in different ways by analyzing responses coming from our customer’s origins. It will be your one-button web performance guru completely dedicated to making sure your site is loading as fast as possible.</p><p>This work is still under development, but we look forward to getting it built before the end of the year.</p>
    <div>
      <h3>Try it out!</h3>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>The promise Early Hints holds has only started to be explored, and we’re excited to continue to build products and features and make the web performance reliably fast.</p><p>We’ll continue to update you along our journey as we develop Early Hints and look forward to your <a href="https://community.cloudflare.com/">feedback</a> (special thanks to the Cloudflare Community members who have already been invaluable) as we move to bring Early Hints to everyone.</p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Early Hints]]></category>
            <category><![CDATA[Partners]]></category>
            <guid isPermaLink="false">5T0MhC5gTkcWdAymdWAovY</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Edward Wang</dc:creator>
        </item>
        <item>
            <title><![CDATA[Part 1: Rethinking Cache Purge, Fast and Scalable Global Cache Invalidation]]></title>
            <link>https://blog.cloudflare.com/part1-coreless-purge/</link>
            <pubDate>Sat, 14 May 2022 14:50:50 GMT</pubDate>
            <description><![CDATA[ This is Part 1 of what will be a year-long series documenting our journey to re-architect our systems to be the best, fastest, most-scalable purge in the industry  ]]></description>
            <content:encoded><![CDATA[ <p>There is a famous <a href="https://skeptics.stackexchange.com/questions/19836/has-phil-karlton-ever-said-there-are-only-two-hard-things-in-computer-science">quote</a> attributed to a Netscape engineer: “There are only two difficult problems in computer science: cache invalidation and naming things.” While naming things does oddly take up an inordinate amount of time, cache invalidation shouldn’t.</p><p>In the past we’ve written about Cloudflare’s <a href="/benchmarking-edge-network-performance/">incredibly fast</a> response times, whether content is cached on our global network or not. If content is cached, it can be served from one of Cloudflare’s cache servers, which are distributed across the globe and generally much closer to the visitor. This saves the visitor’s request from needing to go all the way back to an origin server for a response. But what happens when a webmaster updates something on their origin and would like these caches to be updated as well? This is where cache “purging” (also known as “invalidation”) comes in.</p><p>Customers thinking about setting up a CDN and caching infrastructure consider questions like:</p><ul><li><p>How do different caching invalidation/purge mechanisms compare?</p></li><li><p>How many times a day/hour/minute do I expect to purge content?</p></li><li><p>How quickly can the cache be purged when needed?</p></li></ul><p>This blog will discuss why invalidating cached assets is hard, what Cloudflare has done to make it easy (because we care about your experience as a developer), and the engineering work we’re putting in this year to make the performance and scalability of our purge services <b>the best in the industry</b>.</p>
    <div>
      <h3>What makes purging difficult also makes it useful</h3>
      <a href="#what-makes-purging-difficult-also-makes-it-useful">
        
      </a>
    </div>
    <p><b>(i) Scale</b></p><p>The first thing that complicates cache invalidation is doing it at scale. With data centers in over 270 cities around the globe, our most popular users’ assets can be replicated at every corner of our network. This also means that a purge request needs to be distributed to all data centers where that content is cached. When a data center receives a purge request, it needs to locate the cached content to ensure that subsequent visitor requests for that content are not served stale/outdated data. Requests for the purged content should be forwarded to the origin for a fresh copy, which is then re-cached on its way back to the user.</p><p>This process repeats for every data center in Cloudflare’s fleet. And due to Cloudflare’s massive network, maintaining this consistency when certain data centers may be unreachable or offline is what makes purging at scale difficult.</p><p>Making sure that every data center gets the purge command and remains up-to-date with its content logs is only part of the problem. Getting the purge request to data centers quickly so that content is updated uniformly is the next reason why cache invalidation is hard.</p><p><b>(ii) Speed</b></p><p>When purging an asset, race conditions abound. Requests for an asset can happen at any time, and may not follow a pattern of predictability. Content can also change unpredictably. Therefore, when content changes and a purge request is sent, it must be distributed across the globe quickly. If purging an individual asset, say an image, takes too long, some visitors will be served the new version, while others are served outdated content. This data inconsistency degrades user experience, and can lead to confusion as to which version is the “right” version. Websites can sometimes even break in their entirety due to this purge latency (e.g. 
by upgrading versions of a non-backwards compatible JavaScript library).</p><p>Purging at speed is also difficult when combined with Cloudflare’s massive global footprint. For example, if a purge request is traveling at the speed of light between Tokyo and Cape Town (both cities where Cloudflare has data centers), just the transit alone (no authorization of the purge request or execution) would take <a href="https://wondernetwork.com/pings/Cape%20Town/Tokyo">over 180ms on average</a> based on submarine cable placement. Purging a smaller network footprint may reduce these speed concerns while making purge times appear faster, but does so at the expense of worse performance for customers who want to make sure that their cached content is fast for everyone.</p><p><b>(iii) Scope</b></p><p>The final thing that makes purge difficult is making sure that only the unneeded web assets are invalidated. Maintaining a cache is important for egress cost savings and response speed. Webmasters’ origins could be knocked over by a thundering herd of requests if they purge all content needlessly. It’s a delicate balance of purging just enough: too much can result in both monetary and downtime costs, and too little will result in visitors receiving outdated content.</p><p>At Cloudflare, what to invalidate in a data center is often dictated by the type of purge. <a href="https://developers.cloudflare.com/cache/how-to/purge-cache#purge-everything"><b>Purge everything</b></a>, as you could probably guess, purges all cached content associated with a website. <a href="https://developers.cloudflare.com/cache/how-to/purge-cache#purge-cache-by-prefix-enterprise-only"><b>Purge by prefix</b></a> purges content based on a URL prefix. <a href="https://developers.cloudflare.com/cache/how-to/purge-cache"><b>Purge by hostname</b></a> can invalidate content based on a hostname. 
<a href="https://developers.cloudflare.com/cache/how-to/purge-cache#purge-by-single-file-by-url"><b>Purge by URL</b></a> or single file purge focuses on purging specified URLs. Finally, <a href="https://developers.cloudflare.com/cache/how-to/purge-cache#purge-using-cache-tags"><b>Purge by tag</b></a> purges assets that are marked with <a href="https://developers.cloudflare.com/cache/how-to/purge-cache#add-cache-tag-http-response-headers">Cache-Tag headers</a>. These markers offer webmasters flexibility in grouping assets together. When a purge request for a tag comes into a data center, all assets marked with that tag will be invalidated.</p><p>With that overview in mind, the remainder of this blog will focus on putting each element of invalidation together to benchmark the performance of Cloudflare’s purge pipeline and provide context for what performance means in the real-world. We’ll be reviewing how fast Cloudflare can invalidate cached content across the world. This will provide a baseline analysis for how quick our purge systems are presently, which we will use to show how much we will improve by the time we launch our new purge system later this year.</p>
    <div>
      <h3>How does purge work currently?</h3>
      <a href="#how-does-purge-work-currently">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/28UAdSGwGwCAnskZpb2wfm/511150372628a278ee2910105ebcc2e2/Cored-Purge.png" />
            
            </figure><p>In general, purge takes the following route through Cloudflare’s data centers.</p><ul><li><p>A purge request is initiated via the API or UI. This request specifies how our data centers should identify the assets to be purged. This can be accomplished via cache-tag header(s), URL(s), entire hostnames, and much more.</p></li><li><p>The request is received by any Cloudflare data center and is identified to be a purge request. It is then routed to a Cloudflare core data center (a set of a few data centers responsible for network management activities).</p></li><li><p>When a core data center receives it, the request is processed by a number of internal services that (for example) make sure the request is being sent from an account with the appropriate authorization to purge the asset. Following this, the request gets fanned out globally to all Cloudflare data centers using our distribution service.</p></li><li><p>When received by a data center, the purge request is processed and all assets with the matching identification criteria are either located and removed, or marked as stale. These stale assets are not served in response to requests and are instead re-pulled from the origin.</p></li><li><p>After being pulled from the origin, the response is written to cache again, replacing the purged version.</p></li></ul><p>Now let’s look at this process in practice. Below we describe Cloudflare’s purge benchmarking that uses real-world performance data from our purge pipeline.</p>
    <div>
      <h3>Benchmarking purge performance design</h3>
      <a href="#benchmarking-purge-performance-design">
        
      </a>
    </div>
    <p>In order to understand how performant Cloudflare’s purge system is, we measured the time it took from sending the purge request to the moment the purge completed and the asset was no longer served from cache.</p><p>In general, the process of measuring purge speeds involves: (i) ensuring that a particular piece of content is cached, (ii) sending the command to invalidate the cache, (iii) simultaneously checking our internal system logs for how the purge request is routed through our infrastructure, and (iv) measuring when the asset is removed from cache (first miss).</p><p>This process measures how quickly cache is invalidated from the perspective of an average user.</p><ul><li><p><b>Clock starts</b></p><p>As noted above, in this experiment we’re using sampled RUM data from our purge systems. The goal of this experiment is to benchmark current data for how long it can take to purge an asset on Cloudflare across different regions. Once the asset was cached in a region on Cloudflare, we identified when a purge request was received for that asset. At that same instant, the clock started for this experiment. We include in this time any retries that we needed to make (due to data centers missing the initial purge request) to ensure that the purge was done consistently across our network. The clock continues as the request transits our purge pipeline (data center &gt; core &gt; fanout &gt; purge from all data centers).</p></li><li><p><b>Clock stops</b></p><p>The clock stopped when the purged asset was removed from cache, meaning that the data center was no longer serving the asset from cache in response to visitors’ requests. Our internal logging measures the precise moment that the cache content has been removed or expired, and from that data we were able to determine the following benchmarks for our purge types in various regions.</p></li></ul>
    <div>
      <h4>Results</h4>
      <a href="#results">
        
      </a>
    </div>
    <p>We’ve divided our benchmarks in two ways: by purge type and by region.</p><p>We singled out Purge by URL because it identifies a single target asset to be purged. While that asset can be stored in multiple locations, the amount of data to be purged is strictly defined.</p><p>We’ve combined all other types of purge (everything, tag, prefix, hostname) together because the amount of data to be removed is highly variable. Purging a whole website or by assets identified with cache tags could mean we need to find and remove a multitude of content from many different data centers in our network.</p><p>Second, we segmented our benchmark measurements by region, and we confined them to specific data center servers within each region because we were concerned about clock skew between different data centers. Limiting the test to the same cache servers means that even if there was skew, they’d all be skewed in the same way.</p><p>We took the latency from the representative data centers in each of the following regions and the global latency. Data centers were not evenly distributed in each region, but in total represent about 90 different cities around the world:  </p><ul><li><p>Africa</p></li><li><p>Asia Pacific Region (APAC)</p></li><li><p>Eastern Europe (EEUR)</p></li><li><p>Eastern North America (ENAM)</p></li><li><p>Oceania</p></li><li><p>South America (SA)</p></li><li><p>Western Europe (WEUR)</p></li><li><p>Western North America (WNAM)</p></li></ul><p>The <b>global</b> latency numbers represent the purge data from all Cloudflare data centers in over 270 cities globally. In the results below, global latency numbers may be larger than the regional numbers because they represent all of our data centers instead of only a regional portion, so outliers and retries might have an outsized effect.</p><p>Below are the results for how quickly our current purge pipeline was able to invalidate content by purge type and region. 
All times are represented in seconds and divided into P50, P75, and P99 <a href="https://en.wikipedia.org/wiki/Quantile">quantiles</a>: “P50”, for example, means that 50% of the purges completed at the indicated latency or faster.</p><p><b>Purge By URL</b></p><table>
<thead>
<tr>
<th></th>
<th>P50</th>
<th>P75</th>
<th>P99</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>AFRICA</strong></td>
<td>0.95s</td>
<td>1.94s</td>
<td>6.42s</td>
</tr>
<tr>
<td><strong>APAC</strong></td>
<td>0.91s</td>
<td>1.87s</td>
<td>6.34s</td>
</tr>
<tr>
<td><strong>EEUR</strong></td>
<td>0.84s</td>
<td>1.66s</td>
<td>6.30s</td>
</tr>
<tr>
<td><strong>ENAM</strong></td>
<td>0.85s</td>
<td>1.71s</td>
<td>6.27s</td>
</tr>
<tr>
<td><strong>OCEANIA</strong></td>
<td>0.95s</td>
<td>1.96s</td>
<td>6.40s</td>
</tr>
<tr>
<td><strong>SA</strong></td>
<td>0.91s</td>
<td>1.86s</td>
<td>6.33s</td>
</tr>
<tr>
<td><strong>WEUR</strong></td>
<td>0.84s</td>
<td>1.68s</td>
<td>6.30s</td>
</tr>
<tr>
<td><strong>WNAM</strong></td>
<td>0.87s</td>
<td>1.74s</td>
<td>6.25s</td>
</tr>
<tr>
<td><strong>GLOBAL</strong></td>
<td><strong>1.31s</strong></td>
<td><strong>1.80s</strong></td>
<td><strong>6.35s</strong></td>
</tr>
</tbody>
</table><p><b>Purge Everything, by Tag, by Prefix, by Hostname</b></p><table>
<thead>
<tr>
<th></th>
<th>P50</th>
<th>P75</th>
<th>P99</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>AFRICA</strong></td>
<td>1.42s</td>
<td>1.93s</td>
<td>4.24s</td>
</tr>
<tr>
<td><strong>APAC</strong></td>
<td>1.30s</td>
<td>2.00s</td>
<td>5.11s</td>
</tr>
<tr>
<td><strong>EEUR</strong></td>
<td>1.24s</td>
<td>1.77s</td>
<td>4.07s</td>
</tr>
<tr>
<td><strong>ENAM</strong></td>
<td>1.08s</td>
<td>1.62s</td>
<td>3.92s</td>
</tr>
<tr>
<td><strong>OCEANIA</strong></td>
<td>1.16s</td>
<td>1.70s</td>
<td>4.01s</td>
</tr>
<tr>
<td><strong>SA</strong></td>
<td>1.25s</td>
<td>1.79s</td>
<td>4.11s</td>
</tr>
<tr>
<td><strong>WEUR</strong></td>
<td>1.19s</td>
<td>1.73s</td>
<td>4.04s</td>
</tr>
<tr>
<td><strong>WNAM</strong></td>
<td>1.00s</td>
<td>1.53s</td>
<td>3.83s</td>
</tr>
<tr>
<td><strong>GLOBAL</strong></td>
<td><strong>1.57s</strong></td>
<td><strong>2.32s</strong></td>
<td><strong>5.97s</strong></td>
</tr>
</tbody>
</table><p>A general note about these benchmarks — the data represented here was taken from over 48 hours (two days) of RUM purge latency data in May 2022. If you are interested in how quickly your content can be invalidated on Cloudflare, we suggest you <a href="https://dash.cloudflare.com/sign-up">test our platform</a> with your website.</p><p>Those numbers are good and much faster than most of our competitors. Even in the worst case, we see the time from when you tell us to purge an item to when it is removed globally is less than seven seconds. In most cases, it’s less than a second. That’s great for most applications, but we want to be even faster. Our goal is to get cache purge to as close as theoretically possible to the speed of light limit for a network our size, which is 200ms.</p><p>Intriguingly, LEO satellite networks may be able to provide even lower global latency than fiber optics because of the straightness of the paths between satellites that use laser links. We've done calculations of latency between LEO satellites that suggest that there are situations in which going to space will be the fastest path between two points on Earth. We'll let you know if we end up using laser-space-purge.</p><p>Just as we have with network performance, we are going to relentlessly measure our cache performance as well as the cache performance of our competitors. We won’t be satisfied until we verifiably are the fastest everywhere. To do that, we’ve built a new cache purge architecture which we’re confident will make us the fastest cache purge in the industry.</p>
    <div>
      <h3>Our new architecture</h3>
      <a href="#our-new-architecture">
        
      </a>
    </div>
    <p>Through the end of 2022, we will continue this blog series, incrementally showing how we will become the fastest, most-scalable purge system in the industry. We will continue to update you on how our purge system is developing and benchmark our data along the way.</p><p>Getting there will involve rearchitecting and optimizing our purge service, which hasn’t received a systematic redesign in over a decade. We’re excited to do our development in the open, and bring you along on our journey.</p><p>So what do we plan on updating?</p>
    <div>
      <h3>Introducing Coreless Purge</h3>
      <a href="#introducing-coreless-purge">
        
      </a>
    </div>
    <p>The first version of our cache purge system was designed on top of a set of central core services including authorization, authentication, request distribution, and filtering among other features that made it a high-reliability service. These core components have ultimately become a bottleneck in terms of scale and performance as our network continues to expand globally. While most of our purge dependencies have been containerized, the message queue used was still running on bare metal, which led to increased operational overhead when our system needed to scale.</p><p>Last summer, we built a proof of concept for a completely decentralized cache invalidation system using in-house tech – Cloudflare <a href="https://developers.cloudflare.com/workers/">Workers</a> and <a href="https://developers.cloudflare.com/workers/learning/using-durable-objects/">Durable Objects</a>. Using Durable Objects as a queuing mechanism gives us the flexibility to scale horizontally by adding more Durable Objects as needed and can reduce time to purge with quick regional fanouts of purge requests.</p><p>In the new purge system we’re ripping out the reliance on core data centers and moving all that functionality to every data center. We’re calling it <b>coreless purge</b>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4lwTbQG2Tj9eivYVv2KWBi/8a9d3abc8e17bfb3ae2eebb3f262dd6b/Coreless-Purge.png" />
            
            </figure><p>Here’s a general overview of how coreless purge will work:</p><ul><li><p>A purge request will be initiated via the API or UI. This request will specify how we should identify the assets to be purged.</p></li><li><p>The request will be routed to the nearest Cloudflare data center, where it is identified as a purge request and passed to a Worker that will perform several of the key functions that currently occur in the core (like authorization, filtering, etc).</p></li><li><p>From there, the Worker will pass the purge request to a Durable Object in the data center. The Durable Object will queue all the requests and broadcast them to every data center when they are ready to be processed.</p></li><li><p>When the Durable Object broadcasts the purge request to every data center, another Worker will pass the request to the service in the data center that will invalidate the content in cache (executes the purge).</p></li></ul><p>We believe this re-architecture of our system, built by stringing together multiple services from the Workers platform, will improve both the speed and scalability of the purge requests we can handle.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We’re going to spend a lot of time building and optimizing purge because, if there’s one thing we learned here today, it's that cache invalidation is a difficult problem but those are exactly the types of problems that get us out of bed in the morning.</p><p>If you want to help us optimize our purge pipeline, we’re <a href="https://www.cloudflare.com/careers/jobs/?department=default&amp;location=default">hiring</a>.</p> ]]></content:encoded>
            <category><![CDATA[Platform Week]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">60eg6IRY58lTNwkSmE1vFX</guid>
            <dc:creator>Alex Krivit</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Cache Reserve: massively extending Cloudflare’s cache]]></title>
            <link>https://blog.cloudflare.com/introducing-cache-reserve/</link>
            <pubDate>Wed, 11 May 2022 13:00:00 GMT</pubDate>
            <description><![CDATA[ I’m delighted to announce an extension of the benefits of caching with Cache Reserve: a new way to persistently serve all static content from Cloudflare’s global cache ]]></description>
            <content:encoded><![CDATA[ <p></p><p>One hundred percent. 100%. One-zero-zero. That’s the cache ratio we’re all chasing. Having a high cache ratio means that more of a website’s content is served from a Cloudflare data center close to where a visitor is requesting the website. Serving content from Cloudflare’s cache means it loads faster for visitors, saves website operators money on <a href="https://www.cloudflare.com/learning/cloud/what-are-data-egress-fees/">egress fees</a> from origins, and provides multiple layers of resiliency and protection to make sure that content is reliably available to be served.</p><p>Today, I’m delighted to announce a massive extension of the benefits of caching with Cache Reserve: a new way to <i>persistently</i> serve all static content from Cloudflare’s global cache. By using Cache Reserve, customers can see higher cache hit ratios and lower egress bills.</p>
    <div>
      <h3>Why is getting a 100% cache ratio difficult?</h3>
      <a href="#why-is-getting-a-100-cache-ratio-difficult">
        
      </a>
    </div>
    <p>Every second, Cloudflare serves tens of millions of requests from our cache, which equates to multiple terabytes per second of cached data being delivered to website visitors around the world. With this massive scale, we must ensure that the most requested content is cached in the areas where it is most popular. Otherwise, visitors might wait too long for content to be delivered from farther away and our network would be running inefficiently. If cache storage in a certain region is full, our network avoids imposing these inefficiencies on our customers by evicting less-popular content from the data center and replacing it with more-requested content.</p><p>This works well for the majority of use cases, but all customers have long-tail content that is rarely requested and may be evicted from cache. This can be a cause of concern for customers, as this unpopular content can be a major cost driver if it is evicted repeatedly and needs to be served from an origin. This concern can be especially significant for customers with massive content libraries. So how can we make sure to keep this less popular content in cache to shield the customer from origin egress?</p><p>Cache Reserve removes customer content from this popularity contest and ensures that even if the specific content hasn’t been requested in months, it can still be served from Cloudflare’s cache, avoiding the need to pull it from the origin and saving the customer money on egress. <b>Cache Reserve helps get customers closer to that 100% cache ratio and helps serve all of their content from our global CDN, forever.</b>  </p>
    <div>
      <h3>Why is cache eviction needed?</h3>
      <a href="#why-is-cache-eviction-needed">
        
      </a>
    </div>
    <p>Most content served from our cache starts its journey from an origin server - where content is hosted. In order to be <a href="https://developers.cloudflare.com/cache/">admitted</a> to Cloudflare’s cache the content sent from the origin must meet certain eligibility criteria that ensures it can be reused to respond to other requests for a website (content that doesn’t change based on who is visiting the site).</p><p>After content is admitted to cache, the next question to consider is how long it should remain in cache. Since cache ratios are calculated by taking the number of requests for content and identifying the portion that are answered from a cache server instead of an origin server, ensuring content remains cached in an area it is highly requested is paramount to achieving a high cache ratio.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6r0Ji3hfY9z82e5WG4JgUV/42f00724a198542726e18f587c642942/Why-is-cache-eviction-needed.png" />
            
            </figure><p>Some CDNs use a pay-to-play model that allows customers to pay more money to ensure content is cached in certain areas for some length of time. At Cloudflare, we don’t charge customers based on where or for how long something is cached. This means that we have to use signals other than a customer’s willingness to pay to make sure that the right content is cached for the right amount of time and in the right areas.</p><p>Where to cache a piece of content is pretty straightforward (where it’s being requested); how long content should remain in cache is far more variable.</p><p>Beyond headers like <a href="https://developers.cloudflare.com/cache/about/cache-control/">cache-control</a> or <a href="https://developers.cloudflare.com/cache/about/cdn-cache-control/">cdn-cache-control</a>, which help determine how long a customer wants something to be served from cache, the other element that <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDNs</a> must consider is whether they need to evict content early to optimize storage of more popular assets. We do eviction based on an algorithm called “least recently used”, or <a href="https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU)">LRU</a>. This means that when storage space is full, the least-requested content is evicted from cache first to make space for more popular content.</p><p>This caching strategy requires keeping track of a lot of information about when requests come in and constantly updating the cache to make sure that the hottest content is kept in cache and the least popular content is evicted. This works well and is fair for the wide array of customers our CDN supports.</p><p>However, if a customer has a large library of content that might go through cycles of popularity and which they’d like to serve from cache regardless, then LRU might mean additional origin egress, as assets that are requested sparingly over a long time frame are pulled from the origin more often.</p><p>That’s where Cache Reserve comes in. Cache Reserve is not an alternative to our popularity-based cache but a complement to it. By backstopping all cacheable content in Cache Reserve, customers don’t have to worry about unwanted cache eviction any longer.</p>
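<p>To make the eviction mechanics concrete, here is a minimal LRU sketch (illustrative only; Cloudflare’s production cache is far more involved than this):</p>

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least recently used entry once capacity is reached."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store: OrderedDict[str, bytes] = OrderedDict()

    def get(self, key: str):
        if key not in self.store:
            return None  # cache miss: content must come from the origin
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key: str, value: bytes):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("/a", b"...")
cache.put("/b", b"...")
cache.get("/a")          # /a is now the most recently used
cache.put("/c", b"...")  # capacity exceeded: /b is evicted
print(cache.get("/b"))   # None: the least-requested asset fell out of cache
```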
    <div>
      <h3>Cache Reserve</h3>
      <a href="#cache-reserve">
        
      </a>
    </div>
    <p>Cache Reserve is a large, persistent data store that is <a href="/r2-open-beta/">implemented on top of R2</a>. By pushing a single button in the dashboard, all of your website’s cacheable content will be written to Cache Reserve. In the same way that <a href="/introducing-smarter-tiered-cache-topology-generation/">Tiered Cache</a> builds a hierarchy of caches between your visitors and your origin, Cache Reserve serves as the ultimate <a href="https://developers.cloudflare.com/cache/about/tiered-cache/">upper-tier cache</a> that will reserve storage space for your assets for as long as you want. This ensures that your content is always served from cache, shielding your origin from unneeded egress fees, and improving response performance.</p>
    <div>
      <h3>How Does Cache Reserve Work?</h3>
      <a href="#how-does-cache-reserve-work">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2tDsXZxGWLk21f6LBKlE1J/ca8d6b51a2851604e5edec1757369456/How-does-Cache-Reserve-work.png" />
            
            </figure><p>Cache Reserve sits between our edge data centers and your origin and provides guaranteed SLAs for how long your content can remain in cache.</p><p>As content is pulled from the origin, it will be written to Cache Reserve, followed by upper-tier data centers, and lower-tier data centers until it reaches the client to fulfill the request. Subsequent requests for the same content will not need to go all the way back to the origin for the response and can, instead, be served from a cache closer to the visitor, improving both the performance and the cost of serving the asset. As content gets evicted from lower tiers and upper tiers, it will be backstopped by Cache Reserve.</p><p>Cache Reserve sidesteps the request-based eviction implemented by LRU and ensures that assets will remain in cache as long as they are needed. Cache Reserve extends the benefits of Tiered Cache by reducing the number of times Cloudflare’s network needs to ask an origin for content we should have in cache, while simultaneously limiting the number of connections and requests that our data centers need to open to your origin to ask for missing content. Using Cache Reserve with Tiered Cache helps collapse the number of requests that result from multiple concurrent cache misses from lower tiers for the same content.</p><p>As an example, let’s assume a cold request for example.com, something our network has never seen before. If a client request comes into the closest lower-tier data center and it is a miss, that lower tier is mapped to an upper-tier data center. When the lower tier asks the upper tier for the content and it is also a miss, the upper tier will ask Cache Reserve for the content. Cache Reserve, being the ultimate upper tier, is then the only data center that can ask the origin for content that is not stored on our network. This helps limit the origin resources you need to devote to serving this content: once it’s written to Cache Reserve, your origin doesn’t need to fan the content out to any other part of Cloudflare’s network.</p><p>When your content does need updating, Cache Reserve will respect cache-control headers and purge requests. This means that if you want to control how long something remains fresh in Cache Reserve before Cloudflare goes back to your origin to revalidate the content, set it as a cache-control header and it will be respected without risk of early eviction. Or, if you want to update content on the fly, you can send a purge request, which will be respected in both Cloudflare’s cache and in Cache Reserve.</p>
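<p>The lookup chain described above can be sketched as a fallback walk from the lower tier, to the upper tier, to Cache Reserve, with only the final tier permitted to contact the origin (a toy model with made-up names, not Cloudflare’s implementation):</p>

```python
def fetch(path: str, tiers: list, origin: dict):
    """Walk the cache hierarchy; on a miss at every tier, go to the origin
    once and write the response back into each tier on the way out."""
    for i, tier in enumerate(tiers):
        if path in tier:
            # hit: fill the tiers below this one so later requests stay closer
            for lower in tiers[:i]:
                lower[path] = tier[path]
            return f"tier {i}", tier[path]
    body = origin[path]  # only the ultimate upper tier reaches the origin
    for tier in tiers:
        tier[path] = body
    return "origin", body

lower, upper, reserve = {}, {}, {}
origin = {"/logo.png": b"\x89PNG..."}
tiers = [lower, upper, reserve]  # ordered: closest to the visitor first

print(fetch("/logo.png", tiers, origin)[0])  # origin (cold request)
print(fetch("/logo.png", tiers, origin)[0])  # tier 0 (now cached nearby)

lower.clear(); upper.clear()                 # lower tiers evict under LRU pressure
print(fetch("/logo.png", tiers, origin)[0])  # tier 2: Cache Reserve backstops it
```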
    <div>
      <h3>How do you use Cache Reserve?</h3>
      <a href="#how-do-you-use-cache-reserve">
        
      </a>
    </div>
    <p>Currently, Cache Reserve is in <b>closed beta</b>: anyone can sign up, but we will be rolling it out to customers gradually over the coming weeks so that we can quickly triage edge cases and make fundamental improvements before we make it generally available to everyone.</p><p>To sign up for the Cache Reserve beta:</p><ul><li><p>Simply go to the <b>Caching tile</b> in the dashboard.</p></li><li><p>Navigate to the <a href="https://dash.cloudflare.com/caching/cache-reserve"><b>Cache Reserve</b> page</a> and push the sign-up button.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6sN44XnzUrfZWqCZYKMcct/6d0ce3703f5b0d3596077d3afdb765c9/Screen-Shot-2022-05-09-at-9.52.58-AM.png" />
            
            </figure><p>The Cache Reserve Plan will mimic the low cost of R2. Storage will be $0.015 per GB per month, and operations will be $0.36 per million reads and $4.50 per million writes. For more information about pricing, please refer to the R2 <a href="https://developers.cloudflare.com/r2/platform/pricing/">pricing page</a> to get a general idea (a dedicated Cache Reserve pricing page will be out soon).</p>
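<p>Plugging those rates into a quick back-of-the-envelope calculation (assuming final Cache Reserve pricing matches these R2-style rates):</p>

```python
STORAGE_PER_GB_MONTH = 0.015   # $ per GB stored per month
READS_PER_MILLION = 0.36       # $ per million read operations
WRITES_PER_MILLION = 4.50      # $ per million write operations

def monthly_cost(gb_stored: float, reads: int, writes: int) -> float:
    """Estimated monthly bill for a given storage footprint and operation mix."""
    return (gb_stored * STORAGE_PER_GB_MONTH
            + reads / 1_000_000 * READS_PER_MILLION
            + writes / 1_000_000 * WRITES_PER_MILLION)

# e.g. a 1 TB library read 10M times and written 1M times in a month
print(round(monthly_cost(1_000, 10_000_000, 1_000_000), 2))  # about $23.10/month
```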
    <div>
      <h3>Try it out!</h3>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>Cache Reserve holds tremendous promise to increase cache hit ratios — which will improve the economics of running any website while speeding up visitors' experiences. We’re excited to begin letting people use Cache Reserve soon. Be sure to <a href="https://dash.cloudflare.com/caching/cache-reserve">check out the beta</a> and let us know what you think.</p> ]]></content:encoded>
            <category><![CDATA[Platform Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cache Reserve]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">1sdZvm2Siy8x7D6ELqjXC8</guid>
            <dc:creator>Alex Krivit</dc:creator>
        </item>
        <item>
            <title><![CDATA[Crawler Hints Update: Cloudflare Supports IndexNow and Announces General Availability]]></title>
            <link>https://blog.cloudflare.com/cloudflare-now-supports-indexnow/</link>
            <pubDate>Mon, 18 Oct 2021 16:30:53 GMT</pubDate>
            <description><![CDATA[ Crawler Hints now supports IndexNow, a new protocol that allows websites to notify search engines whenever content on their website is created, updated, or deleted.  ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2iltb3QjEq4Bh3H1Cl9wLe/ff96685b7792d346d4810ecbf1be0787/web-crawling.png" />
            
            </figure><p>In the midst of the <a href="https://www.nbcnews.com/science/environment/us-just-hottest-summer-record-rcna1957">hottest summer on record</a>, Cloudflare held its first ever <a href="https://www.cloudflare.com/impact-week/">Impact Week</a>. We announced a variety of products and initiatives that aim to make the Internet and our planet a better place, with a focus on environmental, social, and governance projects. Today, we’re excited to share an update on Crawler Hints, an initiative announced during Impact Week. <a href="/crawler-hints-how-cloudflare-is-reducing-the-environmental-impact-of-web-searches/">Crawler Hints</a> is a service that improves the operating efficiency of web crawlers and bots, which account for approximately <a href="https://radar.cloudflare.com/">45%</a> of Internet traffic.</p><p>Crawler Hints achieves this efficiency improvement by ensuring that crawlers get information about what they’ve crawled previously and whether it makes sense to crawl a website again.</p><p>Today we are excited to announce two updates for Crawler Hints:</p><ol><li><p><b>The first</b>: Crawler Hints now supports <a href="https://www.indexnow.org/">IndexNow</a>, a new protocol that allows websites to notify search engines whenever content on their website is created, updated, or deleted. By <a href="https://blogs.bing.com/webmaster/october-2021/IndexNow-Instantly-Index-your-web-content-in-Search-Engines">collaborating with Microsoft</a> and Yandex, Cloudflare can help improve the efficiency of their search engine infrastructure, customer origin servers, and the Internet at large.</p></li><li><p><b>The second</b>: Crawler Hints is now generally available to all Cloudflare customers for free. Customers can benefit from these more efficient crawls with a single button click. If you want to enable Crawler Hints, you can do so in the <b>Cache Tab</b> of the Dashboard.</p></li></ol>
    <div>
      <h3>What problem does Crawler Hints solve?</h3>
      <a href="#what-problem-does-crawler-hints-solve">
        
      </a>
    </div>
    <p>Crawlers help make the Internet work. Crawlers are automated services that travel the Internet looking for… well, whatever they are programmed to look for. To power experiences that rely on indexing content from across the web, search engines and similar services operate massive networks of bots that crawl the Internet to identify the content most relevant to a user query. But because content on the web is always changing, and there is no central clearinghouse for <i>when</i> these changes happen on websites, search engine crawlers have a Sisyphean task. They must continuously wander the Internet, making guesses on how frequently they should check a given site for updates to its content.</p><p>Companies that run search engines have worked hard to make the process as efficient as possible, pushing the state-of-the-art for crawl cadence and infrastructure efficiency. But there remains one clear area of waste: excessive crawl.</p><p>At Cloudflare, we see traffic from all the major search crawlers, and have spent the last year studying how often these bots revisit a page that hasn't changed since they last saw it. Every one of these visits is a waste. And, unfortunately, our observation suggests that 53% of this crawler traffic is wasted.</p><p>With Crawler Hints, we expect to make this task a bit more tractable by providing an additional heuristic to the people who run these crawlers. This will allow them to know when content has been changed or added to a site instead of relying on preferences or previous changes that might not reflect the true change cadence for a site. <b>Crawler Hints aims to increase the proportion of relevant crawls and limit crawls that don’t find fresh content, improving customer experience and reducing the need for repeated crawls.</b></p><p>Cloudflare sits in a unique position on the Internet to help give crawlers hints about when they should recrawl a site. 
Don’t knock on a website’s door every 30 seconds to see if anything is new when Cloudflare can proactively tell your crawler when it’s the best time to index new or changed content. That’s Crawler Hints in a nutshell!</p><p>If you want to learn more about Crawler Hints, see the <a href="/crawler-hints-how-cloudflare-is-reducing-the-environmental-impact-of-web-searches/">original blog</a>.</p>
    <div>
      <h3>What is IndexNow?</h3>
      <a href="#what-is-indexnow">
        
      </a>
    </div>
    <p><a href="https://www.indexnow.org/">IndexNow</a> is a standard that was <a href="https://blogs.bing.com/webmaster/october-2021/IndexNow-Instantly-Index-your-web-content-in-Search-Engines">written by Microsoft</a> and Yandex. The standard aims to provide an efficient manner of signaling to search engines and other crawlers when they should crawl content. Cloudflare’s Crawler Hints now supports IndexNow.</p><blockquote><p><i>In its simplest form, IndexNow is a simple ping so that search engines know that a URL and its content has been added, updated, or deleted, allowing search engines to quickly reflect this change in their search results.</i> - <a href="https://www.indexnow.org/">www.indexnow.org</a></p></blockquote><p>By enabling Crawler Hints on your website, with the simple click of a button, Cloudflare will take care of signaling to these search engines when your content has changed via the IndexNow protocol. You don’t need to do anything else!</p><p>What does this mean for search engine operators? With Crawler Hints you’ll receive a near real-time, pushed feed of change events for Cloudflare websites (that have opted in). This, in turn, will dramatically improve not just the quality of your results, but also the energy efficiency of running your bots.</p>
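<p>Concretely, an IndexNow notification is just an HTTP GET carrying the changed URL and the site’s key, per the protocol described at indexnow.org. A sketch of building such a ping (the endpoint and key below are placeholders; with Crawler Hints enabled, Cloudflare sends these on your behalf):</p>

```python
from urllib.parse import urlencode

def indexnow_ping_url(endpoint: str, changed_url: str, key: str) -> str:
    """Build the GET request a site (or Cloudflare, on its behalf) would
    send to tell a search engine that a URL's content changed."""
    query = urlencode({"url": changed_url, "key": key})
    return f"https://{endpoint}/indexnow?{query}"

# hypothetical key; real keys are provisioned per site
print(indexnow_ping_url("www.bing.com",
                        "https://example.com/new-post", "abc123"))
# https://www.bing.com/indexnow?url=https%3A%2F%2Fexample.com%2Fnew-post&key=abc123
```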
    <div>
      <h3>Collaborating with Industry leaders</h3>
      <a href="#collaborating-with-industry-leaders">
        
      </a>
    </div>
    <p>Cloudflare is in a unique position in that a <a href="https://w3techs.com/technologies/overview/proxy">sizable portion of the Internet</a> is proxied behind us. As a result, we are able to see trends in the way bots access web resources. That visibility allows us to be proactive about signaling which crawls are required vs. not. We are excited to work with partners to make these insights useful to our customers. Search engines are key constituents in this equation. We are happy to collaborate and share this vision of a more efficient Internet with Microsoft Bing and Yandex. We have been testing our interaction via IndexNow with Bing and Yandex for months, with some early successes.</p><p>This is just the beginning. Crawler Hints is an ongoing effort that will require working with more and more partners to improve Internet efficiency more generally. While this may take time and participation from other key parts of the industry, we are open to collaborating with any interested participant who relies on crawling to power user experiences.</p><blockquote><p><i>“The cache data from CDNs is a really valuable signal for content freshness. Cloudflare, as one of the top CDNs, is key in the adoption of IndexNow to become an industry-wide standard with a large portion of the internet actually using it. Cloudflare has built a really easy 1-click button for their users to start using it right away. Cloudflare’s mission of helping build a better Internet resonates well with why I started IndexNow i.e. to build a more efficient and effective Search.”</i> <b>- <i>Fabrice Canel, Principal Program Manager</i></b></p></blockquote>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6KNZYN7jyh3Tq4IzxLloof/b333dd352855c57f61a50a5e956c3fe7/Screen-Shot-2021-10-18-at-8.25.56-AM.png" />
            
            </figure><blockquote><p><i>“Yandex is excited to join IndexNow as part of our long-term focus on sustainability. We have been working with the Cloudflare team in early testing to incorporate their caching signals in our crawling mechanism via the IndexNow API. The results are great so far.”</i> <b>- <i>Maxim Zagrebin, Head of Yandex Search</i></b></p></blockquote>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Wfd1Ma47SAZCEzzRrftnX/d86515ecd41770e133e650892a013436/Screen-Shot-2021-10-18-at-8.25.50-AM.png" />
            
            </figure><blockquote><p><i>"DuckDuckGo is supportive of anything that makes search more environmentally friendly and better for end users without harming privacy. We're looking forward to working with Cloudflare on this proposal."</i> <b>- <i>Gabriel Weinberg, CEO and Founder</i></b></p></blockquote>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/35W6NfS2GN9ta5qN68BhUg/5020126c5a739758e4b54034f96eb15c/Horizontal_Default--1-.jpg" />
            
            </figure>
    <div>
      <h3>How do Cloudflare customers benefit?</h3>
      <a href="#how-do-cloudflare-customers-benefit">
        
      </a>
    </div>
    <p>Crawler Hints doesn’t just benefit search engines. For our customers and origin owners, Crawler Hints will ensure that search engines and other bot-powered experiences always have the freshest version of your content, translating into happier users and ultimately influencing search rankings. Crawler Hints will also mean less traffic hitting your origin, reducing resource consumption. Moreover, your site performance will improve as well: your human customers will not be competing with bots!</p><p>And for Internet users? When you interact with bot-fed experiences — which we all do every day, whether we realize it or not, like search engines or pricing tools — these will now deliver more useful results from crawled data, because Cloudflare has signaled to the owners of the bots the moment they need to update their results.</p>
    <div>
      <h3>How can I enable Crawler Hints for my website?</h3>
      <a href="#how-can-i-enable-crawler-hints-for-my-website">
        
      </a>
    </div>
    <p>Crawler Hints is free to use for all Cloudflare customers and promises to revolutionize web efficiency. If you’d like to see how Crawler Hints can benefit how your website is indexed by the world’s biggest search engines, please feel free to opt into the service:</p><ol><li><p>Sign in to your Cloudflare Account.</p></li><li><p>In the dashboard, navigate to the <b>Cache tab</b>.</p></li><li><p>Click on the <b>Configuration</b> section.</p></li><li><p>Locate the Crawler Hints sign-up card and enable it. It's that easy.</p></li></ol>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6csTJTB4b54BV7Lw1Mn9wC/852526fff3dbcc949586bda216809009/Crawler.png" />
            
            </figure><p>Once you’ve enabled it, we will begin sending hints to search engines about when they should crawl particular parts of your website. Crawler Hints holds tremendous promise to improve the efficiency of the Internet.</p>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We’re thrilled to collaborate with industry leaders Microsoft Bing and Yandex to bring IndexNow to Crawler Hints, and to bring Crawler Hints to a wide audience in general availability. We look forward to working with additional companies who run crawlers to help make this process more efficient for the whole Internet.</p>
            <category><![CDATA[Crawler Hints]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[SEO]]></category>
            <guid isPermaLink="false">2HzZcqvqFaoDkhIKR2kkVF</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Abhi Das</dc:creator>
        </item>
    </channel>
</rss>