
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 12:24:40 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Policy, privacy and post-quantum: anonymous credentials for everyone]]></title>
            <link>https://blog.cloudflare.com/pq-anonymous-credentials/</link>
            <pubDate>Thu, 30 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ The world is adopting anonymous credentials for digital privacy, but these systems are vulnerable to quantum computers. This post explores the cryptographic challenges and promising research paths toward building new, quantum-resistant credentials from the ground up. ]]></description>
<content:encoded><![CDATA[ <p>The Internet is in the midst of one of the most complex transitions in its history: the migration to <a href="https://www.cloudflare.com/en-gb/pqc/"><u>post-quantum (PQ) cryptography</u></a>. Making a system safe against quantum attackers isn't just a matter of replacing elliptic curves and RSA with PQ alternatives, such as <a href="https://csrc.nist.gov/pubs/fips/203/final"><u>ML-KEM</u></a> and <a href="https://csrc.nist.gov/pubs/fips/204/final"><u>ML-DSA</u></a>. These algorithms have higher costs than their classical counterparts, making them unsuitable as drop-in replacements in many situations.</p><p>Nevertheless, we're <a href="https://blog.cloudflare.com/pq-2025/"><u>making steady progress</u></a> on the most important systems. As of this writing, <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption"><u>about 50%</u></a> of <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>TLS connections</u></a> to Cloudflare's edge are safe against <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest-now/decrypt-later attacks</u></a>. Quantum-safe authentication is further out, as it will require more significant changes to how certificates work. Still, this year we've <a href="https://blog.cloudflare.com/bootstrap-mtc/"><u>taken a major step</u></a> towards making TLS deployable at scale with PQ certificates.</p><p>That said, TLS is only the lowest-hanging fruit. We have come to rely on cryptography in <a href="https://github.com/fancy-cryptography/fancy-cryptography"><u>many more ways</u></a> than key exchange and authentication, and many of them aren't as easy to migrate.
In this blog post, we'll take a look at <b>Anonymous Credentials (ACs)</b>.</p><p>ACs solve a common privacy dilemma: how to prove a specific fact (for example, that one has had a valid driver’s license for more than three years) without over-sharing personal information (like one's place of birth)? Such problems are fundamental to a number of use cases, and ACs may provide the foundation we need to make these applications as private as possible.</p><p>Just as for TLS, the central question for ACs is whether there are drop-in PQ replacements for their classical primitives that will work at the scale required, or whether it will be necessary to re-engineer the application to mitigate the cost of PQ.</p><p>We'll take a stab at answering this question in this post. We'll focus primarily on an emerging use case for ACs described in a <a href="https://blog.cloudflare.com/private-rate-limiting/"><u>concurrent post</u></a>: rate-limiting requests from agentic AI platforms and users. This demanding, high-scale use case is the perfect lens through which to evaluate the practical readiness of today's post-quantum research. We'll use it as our guiding problem to measure each cryptographic approach.</p><p>We'll first explore the current landscape of classical AC adoption across the tech industry and the public sector. Then, we’ll discuss what cryptographic researchers are currently looking into on the post-quantum side. Finally, we’ll take a look at what it'll take to bridge the gap between theory and real-world applications.</p><p>While anonymous credentials are only seeing their first real-world deployments in recent years, it is critical to start thinking about the post-quantum challenge in parallel. This isn’t a theoretical, too-soon problem, given the harvest-now/decrypt-later threat. If we wait for mass adoption before solving post-quantum anonymous credentials, ACs risk being dead on arrival.
Fortunately, our survey of the state of the art shows the field is close to a practical solution. Let’s start by reviewing real-world use cases of ACs.</p>
    <div>
      <h2>Real world (classical) anonymous credentials</h2>
      <a href="#real-world-classical-anonymous-credentials">
        
      </a>
    </div>
    <p>In 2026, the European Union is <a href="https://eur-lex.europa.eu/eli/reg/2024/1183/oj"><u>set to launch its digital identity wallet</u></a>, a system that will allow EU citizens, residents and businesses to digitally attest to their personal attributes. This will enable them, for example, to display their driver’s license on their phone or <a href="https://educatedguesswork.org/posts/age-verification-id/"><u>perform age</u></a> <a href="https://soatok.blog/2025/07/31/age-verification-doesnt-need-to-be-a-privacy-footgun/"><u>verification</u></a>. Cloudflare's use cases for ACs are a bit different and revolve around keeping our customers secure by, for example, rate limiting bots and humans as we <a href="https://blog.cloudflare.com/privacy-pass-standard/"><u>currently do with Privacy Pass</u></a>. The EU wallet is a massive undertaking in identity provisioning, while our work operates at an enormous scale of traffic processing. Both initiatives are working to solve a shared fundamental problem: allowing an entity to prove a specific attribute about themselves without compromising their privacy by revealing more than they have to.</p><p>The EU's goal is a fully mobile, secure, and user-friendly digital ID. The current technical plan is ambitious, as laid out in the <a href="https://ec.europa.eu/digital-building-blocks/sites/spaces/EUDIGITALIDENTITYWALLET/pages/900014854/Version+2.0+of+the+Architecture+and+Reference+Framework+now+available"><u>Architecture Reference Framework (ARF)</u></a>. It defines unlinkability as a key privacy goal: if a user presents attributes multiple times, the recipients cannot link these separate presentations to conclude that they concern the same user. However, currently proposed solutions fail to achieve this.
The framework correctly identifies the core problem: attestations contain <i>unique, fixed elements such as hash values, […], public keys, and signatures</i> that colluding entities could store and compare to track individuals.</p><p>In its present form, the ARF's recommendation to mitigate cross-session linkability is <i>limited-time attestations</i>. The framework acknowledges in the text that this would <i>only partially mitigate Relying Party linkability</i>. An alternative proposal that would mitigate linkability risks is single-use credentials. They are not considered at the moment due to <i>complexity and management overhead</i>. The framework therefore leans on <i>organisational and enforcement measures</i> to deter collusion instead of providing a stronger guarantee backed by cryptography.</p><p>This reliance on trust assumptions could become problematic, especially in the sensitive context of digital identity. When asked for feedback, <a href="https://github.com/eu-digital-identity-wallet/eudi-doc-architecture-and-reference-framework/issues/200"><u>cryptographic researchers agree</u></a> that the proper solution would be to adopt anonymous credentials. However, this solution presents a long-term challenge. Well-studied methods for anonymous credentials, such as those based on <a href="https://datatracker.ietf.org/doc/draft-irtf-cfrg-bbs-signatures/"><u>BBS signatures</u></a>, are vulnerable to quantum computers. While some <a href="https://datatracker.ietf.org/doc/rfc9474/"><u>anonymous</u></a> <a href="https://datatracker.ietf.org/doc/draft-schlesinger-cfrg-act/"><u>schemes</u></a> are PQ-unlinkable, meaning that user privacy is preserved even once cryptographically relevant quantum computers exist, a quantum attacker could still forge new credentials.
This may be an attractive target for, say, a nation-state actor.</p><p>New cryptography also faces deployment challenges: in the EU, only approved cryptographic primitives, as listed in the <a href="https://www.sogis.eu/documents/cc/crypto/SOGIS-Agreed-Cryptographic-Mechanisms-1.3.pdf"><u>SOG-IS catalogue</u></a>, can be used. At the time of writing, this catalogue is limited to established algorithms such as RSA or ECDSA. But when it comes to post-quantum cryptography, SOG-IS is <a href="https://www.sogis.eu/documents/cc/crypto/SOGIS-Agreed-Cryptographic-Mechanisms-1.3.pdf"><u>leaving the problem wide open</u></a>.</p><p>The wallet's first deployment will not be quantum-secure. However, with the transition to post-quantum algorithms ahead of us, as soon as 2030 for high-risk use cases per <a href="https://digital-strategy.ec.europa.eu/en/library/coordinated-implementation-roadmap-transition-post-quantum-cryptography"><u>the EU roadmap</u></a>, research into a post-quantum-compatible alternative for anonymous credentials is critical. This will encompass <i>standardizing more cryptography</i>.</p><p>Regarding existing large-scale deployments, the US has allowed digital ID on smartphones since 2024. They <a href="https://www.tsa.gov/digital-id/participating-states"><u>can be used at TSA checkpoints</u></a>, for instance. The <a href="https://www.dhs.gov/science-and-technology/privacy-preserving-digital-credential-wallets-verifiers"><u>Department of Homeland Security lists funding for six privacy-preserving digital credential wallets and verifiers on their website</u></a>. This early exploration and engagement is a positive sign, and highlights the need to plan for privacy-preserving presentations.</p><p>Finally, ongoing efforts at the Internet Engineering Task Force (IETF) aim to build a more private Internet by standardizing advanced cryptographic techniques.
Active individual drafts (i.e., not yet adopted by a working group), such as <a href="https://datatracker.ietf.org/doc/draft-google-cfrg-libzk/"><u>Longfellow</u></a> and Anonymous Credit Tokens (<a href="https://datatracker.ietf.org/doc/draft-schlesinger-cfrg-act/"><u>ACT</u></a>), and adopted drafts like Anonymous Rate-limited Credentials (<a href="https://datatracker.ietf.org/doc/draft-yun-privacypass-crypto-arc/"><u>ARC</u></a>), propose more flexible multi-show anonymous credentials that incorporate developments from the last several years. At IETF 117 in 2023, <a href="https://www.irtf.org/anrw/2023/slides-117-anrw-sessc-not-so-low-hanging-fruit-security-and-privacy-research-opportunities-for-ietf-protocols-00.pdf"><u>post-quantum anonymous credentials and deployable generic anonymous credentials were presented as a research opportunity</u></a>. Check out our <a href="https://blog.cloudflare.com/private-rate-limiting/"><u>post on rate limiting agents</u></a> for details.</p><p>Before we get into the state of the art for PQ, let us try to crystallize a set of requirements for real-world applications.</p>
    <div>
      <h3>Requirements</h3>
      <a href="#requirements">
        
      </a>
    </div>
    <p>Given the diversity of use cases, adoption of ACs will be made easier by the fact that they can be built from a handful of powerful primitives. (More on this in our <a href="https://blog.cloudflare.com/private-rate-limiting/"><u>concurrent post</u></a>.) As we'll see in the next section, we don't yet have drop-in PQ alternatives for these kinds of primitives. The "building blocks" of PQ ACs are likely to look quite different, so we need to know something about what we're building towards.</p><p>For our purposes, we can think of an anonymous credential as a kind of fancy <a href="https://en.wikipedia.org/wiki/Blind_signature"><b><u>blind signature</u></b></a>. What's that, you ask? A blind signature scheme has two phases: <b>issuance</b>, in which the server signs a message chosen by the client; and <b>presentation</b>, in which the client reveals the message and the signature to the server. The scheme should be <b>unlinkable</b> in the sense that the server can't link any message and signature to the run of the issuance protocol in which it was produced. It should also be <b>unforgeable</b> in the sense that no client can produce a valid signature without interacting with the server.</p><p>The key difference between ACs and blind signatures is that, during presentation of an AC, the client only presents <i>part of the message</i> in plaintext; the rest of the message is kept secret. Typically, the message has three components:</p><ol><li><p>Private <b>state</b>, such as a counter that, for example, keeps track of the number of times the credential was presented. The client would prove to the server that the state is "valid", for example, a counter with value $0 \leq C \leq N$, without revealing $C$. In many situations, it's desirable to allow the server to update this state upon successful presentation, for example, by decrementing the counter.
In the context of rate limiting, this is the number of requests left for a credential.</p></li><li><p>A random value called the <b>nullifier</b> that is revealed to the server during presentation. In rate limiting, the nullifier prevents a user from spending a credential with a given state more than once.</p></li><li><p>Public <b>attributes</b> known to both the client and server that bind the AC to some application context. For example, this might represent the window of time in which the credential is valid (without revealing the exact time it was issued).</p></li></ol><p>Such ACs are well-suited for rate limiting requests made by the client. Here the idea is to prevent the client from making more than some maximum number of requests during the credential's lifetime. For example, if the presentation limit is 1,000 and the validity window is one hour, then the client can make up to 0.27 requests per second on average before getting throttled.</p><p>It's usually desirable to enforce rate limits on a <b>per-origin</b> basis. This means that if the presentation limit is 1,000, then the client can make at most 1,000 requests to any website that can verify the credential. Moreover, it can do so safely, i.e., without breaking unlinkability across these sites.</p><p>The current generation of ACs being considered for standardization at the IETF is only <b>privately verifiable</b>, meaning the server issuing the credential (the <b>issuer</b>) must share a private key with the server verifying the credential (the <b>origin</b>). This will be sufficient for some deployment scenarios, but many will require <b>public verifiability</b>, where the origin only needs the issuer's public key. This is possible with BBS-based credentials, for example.</p><p>Finally, let us say a few words about round complexity. An AC is <b>round optimal</b> if issuance and presentation both complete in a single HTTP request and response.
In our survey of PQ ACs, we found a number of papers that discovered neat tricks that reduce bandwidth (the total number of bits transferred between the client and server) at the cost of additional rounds. However, for use cases like ours, <b>round optimality</b> is an absolute necessity, especially for presentation. Not only do multiple rounds have a high impact on latency, they also make the implementation far more complex.</p><p>Within these constraints, our goal is to develop PQ ACs that have as low communication cost (i.e., bandwidth consumption) and runtime as possible in the context of rate-limiting.</p>
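To make the three message components concrete, here is a minimal Python sketch of how an origin might consume a presentation. This is illustrative only and entirely our own: all cryptography is stubbed out, and the names (`CredentialMessage`, `Origin`, the boolean standing in for the range proof on the counter) are hypothetical.

```python
from dataclasses import dataclass, field
import secrets

@dataclass
class CredentialMessage:
    counter: int      # private state: presentations remaining (never sent in the clear)
    nullifier: bytes  # random value, revealed to the origin at presentation time
    attributes: dict  # public attributes, e.g. the credential's validity window

@dataclass
class Origin:
    limit: int
    seen_nullifiers: set = field(default_factory=set)

    def accept_presentation(self, nullifier: bytes, counter_proof_ok: bool) -> bool:
        # In a real AC the origin verifies a ZKP that 0 <= counter <= limit;
        # here that check is stubbed out as a boolean.
        if not counter_proof_ok or nullifier in self.seen_nullifiers:
            return False
        self.seen_nullifiers.add(nullifier)  # a repeated nullifier means double-spending
        return True

cred = CredentialMessage(counter=1000,
                         nullifier=secrets.token_bytes(16),
                         attributes={"valid_window": "2025-10-30T13:00/14:00"})
origin = Origin(limit=1000)
assert origin.accept_presentation(cred.nullifier, counter_proof_ok=True)
assert not origin.accept_presentation(cred.nullifier, counter_proof_ok=True)  # replay rejected
```

The nullifier set is what lets the origin reject replays without ever linking the presentation back to issuance.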
    <div>
      <h2>"Ideal world" (PQ) anonymous credentials</h2>
      <a href="#ideal-world-pq-anonymous-credentials">
        
      </a>
    </div>
    <p>The academic community has produced a number of promising post-quantum ACs. In our survey of the state of the art, we evaluated several leading schemes, scoring them on their underlying primitives and performance to determine which are truly ready for the Internet. To understand the challenges, it is essential to first grasp the cryptographic building blocks used in ACs today. We’ll now discuss some of the core concepts that frequently appear in the field.</p>
    <div>
      <h3>Relevant cryptographic paradigms</h3>
      <a href="#relevant-cryptographic-paradigms">
        
      </a>
    </div>
    
    <div>
      <h4>Zero-knowledge proofs</h4>
      <a href="#zero-knowledge-proofs">
        
      </a>
    </div>
    <p>A zero-knowledge proof (ZKP) is a cryptographic protocol that allows a <i>prover</i> to convince a <i>verifier</i> that a statement is true without revealing the secret information, or <i>witness</i>. ZKPs play a central role in ACs: they allow proving statements about the secret part of the credential's state without revealing the state itself. This is achieved by transforming the statement into a mathematical representation, such as a set of polynomial equations over a finite field. The prover then generates a proof by performing complex operations on this representation, which can only be completed correctly if they possess a valid witness.</p><p>General-purpose ZKP systems, like <a href="https://eprint.iacr.org/2018/046"><u>Scalable Transparent Arguments of Knowledge (STARKs)</u></a>, can prove the integrity of <i>any</i> computation up to a certain size. In a STARK-based system, the computational trace is represented as a <i>set of polynomials</i>. The prover then constructs a proof by evaluating these polynomials and committing to them using cryptographic hash functions. The verifier can then perform a quick probabilistic check on this proof to confirm that the original computation was executed correctly. Since the proof itself is just a collection of hashes and sampled polynomial values, it is secure against quantum computers, providing a statistically sound guarantee that the claimed result is valid.</p>
    <div>
      <h4>Cut-and-Choose</h4>
      <a href="#cut-and-choose">
        
      </a>
    </div>
    <p>Cut-and-choose is a cryptographic technique designed to ensure a prover’s honest behaviour by having a verifier check a random subset of their work. The prover first commits to multiple instances of a computation, after which the verifier randomly chooses a portion to be <i>cut open</i> by revealing the underlying secrets for inspection. If this revealed subset is correct, the verifier gains high statistical confidence that the remaining, un-opened instances are also correct.</p><p>This technique is important because while it is a generic tool used to build protocols secure against malicious adversaries, it also serves as a crucial case study. Its security is not trivial; for example, practical attacks on cut-and-choose schemes built with (post-quantum) homomorphic encryption have succeeded by <a href="https://eprint.iacr.org/2025/1890.pdf"><u>attacking the algebraic structure of the encoding</u></a>, not the encryption itself. This highlights that even generic constructions must be carefully analyzed in their specific implementation to prevent subtle vulnerabilities and information leaks.</p>
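As a toy illustration of the idea (our own sketch, not any particular protocol), the prover below commits to 32 instances of a simple computation, and the verifier opens a random half of them. A prover who cheated on $k$ instances survives the check with probability roughly $2^{-k}$.

```python
import hashlib
import random
import secrets

def commit(value: int, r: bytes) -> bytes:
    # Hash-based commitment; the random r keeps the value hidden until opening.
    return hashlib.sha256(r + value.to_bytes(8, "big")).digest()

# Prover: claims every instance is the square of its index (the "correct" work).
N = 32
instances = [(i, i * i, secrets.token_bytes(16)) for i in range(N)]
commitments = [commit(sq, r) for _, sq, r in instances]

# Verifier: randomly chooses half of the instances to be cut open.
opened = random.sample(range(N), N // 2)
for idx in opened:
    i, sq, r = instances[idx]
    assert commit(sq, r) == commitments[idx]  # opening matches the commitment
    assert sq == i * i                        # the revealed work is correct
# The un-opened instances are now trusted with high statistical confidence.
```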
    <div>
      <h4>Sigma Protocols</h4>
      <a href="#sigma-protocols">
        
      </a>
    </div>
    <p><a href="https://datatracker.ietf.org/doc/draft-irtf-cfrg-sigma-protocols/01/"><u>Sigma protocols</u></a> follow a more structured approach that does not require us to throw away any computations. The <a href="https://pages.cs.wisc.edu/~mkowalcz/628.pdf"><u>three-move protocol</u></a> starts with a <i>commitment</i> phase, where the prover generates some randomness, combines it with the input to generate the commitment, and sends the commitment to the verifier. Then, the verifier <i>challenges</i> the prover with an unpredictable challenge. To finish the proof, the prover provides a <i>response</i> in which they combine the initial randomness with the verifier’s challenge in a way that is only possible if the secret value, such as the solution to a discrete logarithm problem, is known.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ihEZ5KhWBQ0PZF5pTc0Bi/e35de03a89af0c2254bcc114041f6904/image4.png" />
          </figure><p><sup>Depiction of a Sigma protocol flow, where the prover commits to their witness $w$, the verifier challenges the prover to prove knowledge about $w$, and the prover responds with a mathematical statement that the verifier can either accept or reject.</sup></p><p>In practice, the prover and verifier don't run this interactive protocol. Instead, they make it non-interactive using a technique known as the <a href="https://link.springer.com/content/pdf/10.1007/3-540-47721-7_12.pdf"><u>Fiat-Shamir transformation</u></a>. The idea is that the prover generates the challenge <i>itself</i>, by deriving it from its own commitment. It may sound a bit odd, but it works quite well. In fact, it's the basis of Schnorr-style signatures like EdDSA and even PQ signatures like ML-DSA.</p>
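The three moves above can be sketched as a minimal Schnorr-style Sigma protocol for knowledge of a discrete logarithm, made non-interactive with Fiat-Shamir. The toy group parameters below are ours and far too small to be secure; real deployments use elliptic curves.

```python
import hashlib
import secrets

# Toy parameters (NOT secure): p = 2q + 1, and g = 4 generates the
# order-q subgroup of squares mod p.
p, q, g = 2039, 1019, 4

def fiat_shamir_challenge(commitment: int, public: int) -> int:
    # The prover derives the challenge by hashing its own commitment.
    data = f"{p}.{g}.{public}.{commitment}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(witness: int, public: int):
    r = secrets.randbelow(q)                        # commitment randomness
    commitment = pow(g, r, p)                       # move 1: A = g^r
    c = fiat_shamir_challenge(commitment, public)   # move 2, simulated via hashing
    response = (r + c * witness) % q                # move 3: z = r + c*w mod q
    return commitment, response

def verify(public: int, commitment: int, response: int) -> bool:
    c = fiat_shamir_challenge(commitment, public)
    # g^z == A * X^c holds exactly when the prover knew w with X = g^w.
    return pow(g, response, p) == (commitment * pow(public, c, p)) % p

w = secrets.randbelow(q)   # secret witness (a discrete logarithm)
X = pow(g, w, p)           # public statement X = g^w
A, z = prove(w, X)
assert verify(X, A, z)
```

The same pattern, with different algebra, underlies both EdDSA and ML-DSA.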
    <div>
      <h4>MPC in the head</h4>
      <a href="#mpc-in-the-head">
        
      </a>
    </div>
    <p>Multi-party computation (MPC) is a cryptographic tool that allows multiple parties to jointly compute a function over their inputs without revealing their individual inputs to the other parties. <a href="https://web.cs.ucla.edu/~rafail/PUBLIC/77.pdf"><u>MPC in the Head</u></a> (MPCitH) is a technique to generate zero-knowledge proofs by simulating a multi-party protocol <i>in the head</i> of the prover.</p><p>The prover simulates the state and communication for each virtual party, commits to these simulations, and shows the commitments to the verifier. The verifier then challenges the prover to open a subset of these virtual parties. Since MPC protocols are secure even if a minority of parties are dishonest, revealing this subset doesn't leak the secret, yet it convinces the verifier that the overall computation was correct. </p><p>This paradigm is particularly useful to us because it's a flexible way to build post-quantum secure ZKPs. MPCitH constructions build their security from symmetric-key primitives (like hash functions). This approach is also transparent, requiring no trusted setup. While STARKs share these post-quantum and transparent properties, MPCitH often offers faster prover times for many computations. Its primary trade-off, however, is that its proofs scale linearly with the size of the circuit being proved, while STARKs are succinct, meaning their proof size grows much more slowly.</p>
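The following sketch (ours, and far from a full MPCitH proof system) illustrates just the core mechanic: the prover secret-shares a witness among three virtual parties, commits to each share, and the verifier opens two of them without learning the witness.

```python
import hashlib
import secrets

q = 2**61 - 1  # a Mersenne prime modulus for the toy arithmetic

def commit(share: int, r: bytes) -> bytes:
    return hashlib.sha256(r + share.to_bytes(16, "big")).digest()

# Prover: additively share the witness among three virtual parties "in the head".
witness = secrets.randbelow(q)
s1, s2 = secrets.randbelow(q), secrets.randbelow(q)
s3 = (witness - s1 - s2) % q          # the three shares sum to the witness mod q
shares = [s1, s2, s3]
rands = [secrets.token_bytes(16) for _ in shares]
commitments = [commit(s, r) for s, r in zip(shares, rands)]

# Verifier: challenges the prover to open two of the three virtual parties.
open_idx = [0, 2]
for i in open_idx:
    # The opened simulations must match what the prover committed to.
    assert commit(shares[i], rands[i]) == commitments[i]
# Any two shares are uniformly random and independent of the witness, so the
# opening leaks nothing; the unopened party keeps the secret hidden.
```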
    <div>
      <h4>Rejection sampling</h4>
      <a href="#rejection-sampling">
        
      </a>
    </div>
    <p>When a randomness source is biased or outputs numbers outside the desired range, rejection sampling can correct the distribution. For example, imagine you need a random number between 1 and 10, but your computer only gives you random numbers between 0 and 255. (Indeed, this is the case!) The rejection sampling algorithm calls the RNG until it outputs a number below 11 and above 0: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ogslPSn4DJYx3R5jGZ3mi/7ab640864dc26d6e1e2eb53c25f628ea/image6.png" />
          </figure><p>Calling the generator over and over again may seem a bit wasteful. An efficient implementation can be realized with an eXtendable Output Function (XOF). An XOF takes an input, for example a seed, and computes an arbitrarily long output. An example is the SHAKE family (part of the <a href="https://csrc.nist.gov/pubs/fips/202/final"><u>SHA3 standard</u></a>), and the recently published round-reduced version of SHAKE called <a href="https://datatracker.ietf.org/doc/rfc9861/"><u>TurboSHAKE</u></a>.</p><p>Let’s imagine you want to have three numbers between 1 and 10. Instead of calling the XOF over and over, you can also ask the XOF for several bytes of output. Since each byte has a probability of 3.52% to be in range, asking the XOF for 174 bytes is enough to have a greater than 99% chance of finding at least three usable numbers. In fact, we can be even smarter than this: 10 fits in four bits, so we can split the output bytes into lower and higher <a href="https://en.wikipedia.org/wiki/Nibble"><u>nibbles</u></a>. The probability of a nibble being in the desired range is now 56.4%:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4W98tjgA7gIkaM7A5LBMyi/7b12bbfd22e53b84439a7c9e690605d9/image2.png" />
          </figure><p><sup>Rejection sampling by batching queries. </sup></p><p>Rejection sampling is a part of many cryptographic primitives, including many we'll discuss in the schemes we look at below.</p>
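Here is what batched rejection sampling from an XOF might look like in Python, using SHAKE-128 from the standard library and splitting each output byte into nibbles. The function name and parameters are ours, chosen for illustration.

```python
import hashlib

def sample_digits(seed: bytes, count: int):
    """Rejection-sample `count` numbers in 1..10 from SHAKE-128 nibbles."""
    out, n_bytes = [], 16
    while True:
        # One XOF call yields many candidate bytes at once.
        stream = hashlib.shake_128(seed).digest(n_bytes)
        out.clear()
        for byte in stream:
            for nib in (byte >> 4, byte & 0x0F):  # high nibble, then low nibble
                if 1 <= nib <= 10:                # reject out-of-range nibbles
                    out.append(nib)
                    if len(out) == count:
                        return out
        n_bytes *= 2  # rare: not enough in-range nibbles, ask the XOF for more

nums = sample_digits(b"example seed", 3)
assert len(nums) == 3 and all(1 <= n <= 10 for n in nums)
```

Because the XOF is deterministic, the same seed always yields the same samples, which is exactly the property many PQ schemes rely on when expanding seeds.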
    <div>
      <h3>Building post-quantum ACs</h3>
      <a href="#building-post-quantum-acs">
        
      </a>
    </div>
    <p>Classical anonymous credentials (ACs), such as ARC and ACT, are built from algebraic groups, specifically elliptic curves, which are very efficient. Their security relies on the assumption that certain mathematical problems over these groups are computationally hard. The premise of post-quantum cryptography, however, is that quantum computers can solve these supposedly hard problems. The most intuitive solution is to replace elliptic curves with a post-quantum alternative. In fact, cryptographers have been working on a replacement for a number of years: <a href="https://eprint.iacr.org/2018/383"><u>CSIDH</u></a>. </p><p>This raises the key question: can we simply adapt a scheme like ARC by replacing its elliptic curves with CSIDH? The short answer is <b>no</b>, due to a critical roadblock in constructing the necessary zero-knowledge proofs. While we can, in theory, <a href="https://eprint.iacr.org/2023/1614"><u>build the required Sigma protocols or MPC-in-the-Head (MPCitH) proofs from CSIDH</u></a>, they have a prerequisite that makes them unusable in practice: they require a <b>trusted setup</b> to ensure the prover cannot cheat. This requirement is a non-starter, as <a href="https://eprint.iacr.org/2022/518"><u>no algorithm for performing a trusted setup in CSIDH exists</u></a>. The trusted setup for Sigma protocols can be replaced by a combination of <a href="https://eprint.iacr.org/2016/505"><u>generic techniques from multi-party computation</u></a> and cut-and-choose protocols, but that adds significant computation cost to the already computationally expensive isogeny operations.</p><p>This specific difficulty highlights a more general principle. The high efficiency of classical credentials like ARC is deeply tied to the rich algebraic structure of elliptic curves. Swapping this component for a post-quantum alternative, or moving to generic constructions, fundamentally alters the design and its trade-offs. 
We must therefore accept that post-quantum anonymous credentials cannot be a simple "lift-and-shift" of today's schemes. They will require new designs built from different cryptographic primitives, such as lattices or hash functions.</p>
    <div>
      <h3>Prefabricated schemes from generic approaches</h3>
      <a href="#prefabricated-schemes-from-generic-approaches">
        
      </a>
    </div>
    <p>At Cloudflare, we explored a <a href="https://eprint.iacr.org/2023/414"><u>post-quantum privacy pass construction in 2023</u></a> that closely resembles the functionality needed for anonymous credentials. The main result is a generic construction that composes separate, quantum-secure building blocks: a digital signature scheme and a general-purpose ZKP system:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4dpmFzSv7HG5JHEEqu7D9o/ea1f02c37c0e36dc0972dfd1044fa9a3/image8.png" />
          </figure><p>The figure shows a cryptographic protocol divided into two main phases: (1.) Issuance: The user commits to a message (without revealing it) and sends the commitment to the server. The server signs the commitment and returns this signed commitment, which serves as a token. The user verifies the server's signature. (2.) Redemption: To use the token, the user presents it and constructs a proof. This proof demonstrates they have a valid signature on the commitment and opens the commitment to reveal the original message. If the server validates the proof, the user and server continue (e.g., to access a rate-limited origin).</p><p>The main appeal of this modular design is its flexibility. The experimental <a href="https://github.com/guruvamsi-policharla/zkdilithium"><u>implementation</u></a> uses a modified version of the ML-DSA signature scheme together with STARKs, but the components can be easily swapped out. The design provides strong, composable security guarantees derived directly from the underlying parts. A significant speedup for the construction came from replacing the hash function SHA3 in ML-DSA with the zero-knowledge friendly <a href="https://eprint.iacr.org/2019/458"><u>Poseidon</u></a>.</p><p>However, the modularity of our post-quantum Privacy Pass construction <a href="https://zkdilithium.cloudflareresearch.com/index.html"><u>incurs a significant performance overhead</u></a>, demonstrated by a clear trade-off between proof generation time and size: a fast 300 ms proof generation requires a large 173 kB signature, while a 4.8 s proof generation time cuts the size of the signature nearly in half. A balanced parameter set, which serves as a good benchmark for any dedicated solution to beat, took 660 ms to sign and resulted in a 112 kB signature. The implementation is currently a proof of concept, with perhaps some room for optimization. 
Alternatively, a different signature like <a href="https://datatracker.ietf.org/doc/draft-ietf-cose-falcon/"><u>FN-DSA</u></a> could offer speed improvements: while its issuance is more complex, its verification is far more straightforward, boiling down to a simple hash-to-lattice computation and a norm check.</p><p>However, while this construction gives a functional baseline, these figures highlight the performance limitations for a real-time rate limiting system, where every millisecond counts. The 660 ms signing time strongly motivates the development of <i>dedicated</i> cryptographic constructions that trade some of the modularity for performance.</p>
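The issuance and redemption flow described above can be sketched as follows. This is purely schematic and our own construction: HMAC stands in for the PQ signature (making it privately verifiable), and the zero-knowledge proof is replaced by simply revealing the commitment opening, which a real deployment must never do since it links redemption back to issuance.

```python
import hashlib
import hmac
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # stands in for the (PQ) signing key

def commit(msg: bytes, r: bytes) -> bytes:
    return hashlib.sha256(r + msg).digest()

def sign(com: bytes) -> bytes:
    # HMAC stands in for a PQ signature such as ML-DSA.
    return hmac.new(ISSUER_KEY, com, hashlib.sha256).digest()

# 1. Issuance: the user commits to a secret message; the issuer signs the
#    commitment without ever seeing the message itself.
msg, r = secrets.token_bytes(16), secrets.token_bytes(16)
com = commit(msg, r)
sig = sign(com)

# 2. Redemption: the user reveals msg and "proves" possession of a signature
#    on a commitment to msg. Here the proof is just (com, r, sig); the real
#    scheme replaces this with a STARK so com and sig stay hidden.
def redeem(message: bytes, proof) -> bool:
    c, rand, s = proof
    return commit(message, rand) == c and hmac.compare_digest(sign(c), s)

assert redeem(msg, (com, r, sig))
assert not redeem(b"forged message!!", (com, r, sig))
```

The expensive part in the real construction is exactly the step stubbed out here: proving the signature and commitment opening inside a zero-knowledge proof.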
    <div>
      <h3>Solid structure: Lattices</h3>
      <a href="#solid-structure-lattices">
        
      </a>
    </div>
    <p><a href="https://blog.cloudflare.com/lattice-crypto-primer/"><u>Lattices</u></a> are a natural starting point when discussing potential post-quantum AC candidates. NIST standardized ML-DSA and ML-KEM as signature and KEM algorithms, both of which are based on lattices. So, are lattices the answer to post-quantum anonymous credentials?</p><p>The answer is a bit nuanced. While explicit anonymous credential schemes from lattices exist, they have shortcomings that prevent real-world deployment: for example, a <a href="https://eprint.iacr.org/2023/560.pdf"><u>recent scheme</u></a> sacrifices round-optimality for smaller communication size, which is unacceptable for a service like Privacy Pass where every second counts. Given that our RTT is 100 ms or less for the majority of users, each extra communication round adds tangible latency, especially for those on slower Internet connections. When the final credential size is still over 100 kB, the trade-offs are hard to justify. So, our search continues. We expand our horizon by looking into <i>blind signatures</i> and whether we can adapt them for anonymous credentials.</p>
    <div>
      <h4>Two-step approach: Hash-and-sign</h4>
      <a href="#two-step-approach-hash-and-sign">
        
      </a>
    </div>
    <p>A prominent paradigm in lattice-based signatures is the <i>hash-and-sign</i> construction. Here, the message is first hashed to a point in the lattice. Then, the signer uses their secret key, a <a href="https://eprint.iacr.org/2007/432"><u>lattice trapdoor</u></a>, to generate a short vector that, when multiplied with the public key, evaluates to the hashed point in the lattice. This is the core mechanism behind signature schemes like FN-DSA.</p>
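To make the verification side of hash-and-sign concrete, here is a toy Python sketch. All parameters are made up, and since sampling a genuinely short preimage requires a real lattice trapdoor, the example is fabricated backwards: we pick a short vector s first and define the hash target as A·s mod q. Only the shape of the check (lattice equation plus norm bound) reflects the real scheme.

```python
import random

q = 97        # toy modulus
n, m = 4, 8   # toy dimensions

# Public matrix A (part of the public key in a real scheme).
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]

# A real signer hashes the message into Z_q^n and uses a trapdoor for A to
# sample a short preimage. Lacking a trapdoor, we fabricate the example:
# pick a short s (the "signature") and let the target be t = A*s mod q.
s = [random.randrange(-2, 3) for _ in range(m)]
t = [sum(A[i][j] * s[j] for j in range(m)) % q for i in range(n)]

def verify(A, t, s, bound=2):
    """Hash-and-sign verification: A*s must hit the hashed point t,
    and s must be short (the norm check)."""
    lattice_eq = all(sum(A[i][j] * s[j] for j in range(m)) % q == t[i]
                     for i in range(n))
    short = max(abs(x) for x in s) <= bound
    return lattice_eq and short

assert verify(A, t, s)
```

The norm check is what makes the trapdoor necessary: anyone can solve A·s = t mod q with a long s, but only the trapdoor holder can find a short one.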
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/66hA0KmluGoGO4I2SHAGTv/1a465c6c810e4f17df3112b96ed816da/image1.png" />
          </figure><p>Adapting hash-and-sign for blind signatures is tricky, since the signer must not learn the message. This introduces a significant security challenge: if the user can request signatures on arbitrary points, they can mount an attack to extract the trapdoor by repeatedly requesting signatures on carefully chosen points. The returned signatures can be used to reconstruct a short basis, which is equivalent to a key recovery.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1lyCHqOTL477mFGSWjH3dv/48ffe46acfbe81b692c2ba30f383634b/image9.png" />
          </figure><p>The standard defense against this attack is to require the user to prove in zero-knowledge that the point they are asking to be signed is the blinded output of the specified hash function. However, proving hash preimages leads to the same problem as in the generic post-quantum Privacy Pass paper: proving a conventional hash function (like SHA3) inside a ZKP is computationally expensive and has a large communication complexity.</p><p>This difficult trade-off is at the heart of recent academic work. The <a href="https://eprint.iacr.org/2023/077.pdf"><u>state-of-the-art paper</u></a> presents two lattice-based blind signature schemes with small signature sizes: 22 kB for a signature and 48 kB for a privately verifiable protocol that may be more useful in a setting like anonymous credentials. However, this focus on the final signature size comes at the cost of an impractical <i>issuance</i>. The user must provide ZKPs for the correct hash and lattice relations that, by the paper’s own analysis, can add up to <i>several hundred kilobytes</i> and take <i>20 seconds to generate and 10 seconds to verify</i>.</p><p>While these results are valuable for advancing the field, this trade-off is a significant barrier for any large-scale, practical system. For our use case, a protocol that increases the final signature size moderately in exchange for a more efficient and lightweight issuance process would be a more suitable and promising direction.</p>
    <div>
      <h4>Best of two signatures: Hash-and-sign with aborts</h4>
      <a href="#best-of-two-signatures-hash-and-sign-with-aborts">
        
      </a>
    </div>
    <p>A promising technique for blind signatures combines the hash-and-sign paradigm with <i>Fiat-Shamir with aborts</i>, a method that relies on rejection sampling. In this approach, the signer repeatedly attempts to generate a signature and aborts any result that may leak information about the secret key. This process ensures the final signature is statistically independent of the key and is used in modern signatures like ML-DSA. The <a href="https://eprint.iacr.org/2014/1027"><u>Phoenix signature</u></a> scheme uses <i>hash-and-sign with aborts</i>, where a message is first hashed into the lattice and signed, with rejection sampling employed to break the dependency between the signature and the private key.</p><p>Building on this foundation is an <a href="https://eprint.iacr.org/2024/131"><u>anonymous credential scheme for hash-and-sign with aborts</u></a>. The main improvement over hash-and-sign anonymous credentials is that, instead of proving the validity of a hash, the user commits to their attributes, which avoids costly zero-knowledge proofs.</p><p>The scheme is <a href="https://github.com/Chair-for-Security-Engineering/lattice-anonymous-credentials"><u>fully implemented</u></a>, with credentials with attribute proofs just under 80 kB and signatures under 7 kB. The scheme takes less than 400 ms for issuance and 500 ms for showing the credential. The protocol also has many of the features necessary for anonymous credentials, allowing users to prove relations between attributes and request pseudonyms for different instances.</p><p>This research presents a compelling step towards real-world deployability by combining state-of-the-art techniques to achieve a much healthier balance between performance and security. While the underlying mathematics are a bit more complex, with a proof of knowledge of a signature at 40 kB and a prover time under a second, the scheme stands out as a great contender. However, for practical deployment, these figures would likely need a significant speedup to be usable in real-time systems. An improvement seems plausible, given recent <a href="https://eprint.iacr.org/2024/1952"><u>advances in lattice samplers</u></a>, though the exact scale of improvement is unclear. Still, we think it would be worthwhile to nudge the underlying design paradigm a little closer to our use cases.</p>
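The rejection-sampling idea at the core of "with aborts" schemes fits in a few lines. This toy Python sketch uses plain integers and made-up bounds (real schemes sample lattice vectors): a wide random mask y is added to the secret, and the result is rejected unless it lands in a range reachable for every possible secret, so the accepted output reveals nothing about the secret.

```python
import random

def sample_with_aborts(secret: int, bound: int = 5, wide: int = 1000) -> int:
    """Return z = y + secret, restarting (aborting) whenever z could leak
    information about secret. Requires |secret| <= bound."""
    assert abs(secret) <= bound
    while True:
        y = random.randrange(-wide, wide + 1)  # wide uniform mask
        z = y + secret
        # Accept only if z is reachable for *every* valid secret, so the
        # accepted distribution is identical regardless of the secret.
        if abs(z) <= wide - bound:
            return z

# The output range is the same for very different secrets:
samples = [sample_with_aborts(s) for s in (-5, 0, 5) for _ in range(100)]
assert all(abs(z) <= 995 for z in samples)
```

The cost is the occasional restart; the benefit is that the signature distribution is statistically independent of the key, which is exactly the property ML-DSA and Phoenix rely on.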
    <div>
      <h3>Do it yourself: MPC-in-the-head </h3>
      <a href="#do-it-yourself-mpc-in-the-head">
        
      </a>
    </div>
    <p>While the lattice-based hash-and-sign with aborts scheme provides one path to post-quantum signatures, an alternative approach is emerging from the MPCitH variant VOLE-in-the-Head <a href="https://eprint.iacr.org/2023/996"><u>(VOLEitH)</u></a>. </p><p>This scheme builds on <a href="https://eprint.iacr.org/2017/617"><u>Vector Oblivious Linear Evaluation (VOLE)</u></a>, an interactive protocol where one party's input vector is processed with another's secret value <i>delta</i>, creating a <i>correlation</i>. This VOLE correlation is used as a cryptographic commitment to the prover’s input. The system provides a zero-knowledge proof because the prover is bound by this correlation and cannot forge a solution without knowing the secret delta. The verifier, in turn, just has to verify that the final equation holds when the commitment is opened. This system is <i>linearly homomorphic</i>, which means that two commitments can be combined. This property is ideal for the <i>commit-and-prove</i> paradigm, where the prover first commits to the witnesses and then proves the validity of the circuit gate by gate. The primary trade-off is that the proofs are linear in the size of the circuit, but they offer substantially better runtimes. We also use linear-sized proofs for ARC and ACT.</p>
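A VOLE correlation and its linear homomorphism can be illustrated over a toy prime field (all parameters here are invented for illustration): the prover holds (u, w), the verifier holds (delta, q), and the correlation q = w + u·delta acts as a commitment to u.

```python
import random

P = 2**61 - 1  # toy prime field modulus

def vole_commit(u: int, delta: int):
    """One VOLE correlation: prover keeps (u, w), verifier keeps (delta, q),
    with q = w + u*delta (mod P). Neither side learns the other's secrets."""
    w = random.randrange(P)      # prover's mask
    q = (w + u * delta) % P      # verifier's side of the correlation
    return w, q

delta = random.randrange(P)      # verifier's global secret

# Commit to two wire values. The correlation is linearly homomorphic:
# adding the two commitments yields a valid commitment to the sum,
# which is why linear gates in the circuit are essentially free.
w1, q1 = vole_commit(3, delta)
w2, q2 = vole_commit(4, delta)
assert (q1 + q2) % P == (w1 + w2 + (3 + 4) * delta) % P
```

A prover who wants to open a commitment to a different value would need to adjust w by a multiple of delta, which they cannot compute without knowing delta; that is the binding property the text describes.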
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6o073F0y7J7RxxHuDb4BSY/1ac0c4fc8b154dd77a8d3294016cbd32/image4.png" />
          </figure><p><sup>Example of evaluating a circuit gate by first committing to each wire and then proving the composition. This is easy for linear gates.</sup></p><p>This commit-and-prove approach allows <a href="https://link.springer.com/chapter/10.1007/978-3-031-91134-7_14"><u>VOLEitH</u></a> to efficiently prove the evaluation of symmetric ciphers, which are quantum-resistant. The transformation to a non-interactive protocol follows the standard MPCitH method: the prover commits to all secret values, a challenge is used to select a subset to reveal, and the prover proves consistency.</p><p>Efficient implementations operate over two mathematical fields (binary and prime) simultaneously, allowing these ZK circuits to handle both arithmetic and bitwise functions (like XORs) efficiently. Based on this foundation, a <a href="https://www.youtube.com/watch?v=VMeaF9xgbcw"><u>recent talk</u></a> teased the potential for blind signatures from the multivariate quadratic signature scheme <a href="https://pqmayo.org/about/"><u>MAYO</u></a> with sizes of just 7.5 kB and signing/verification times under 50 ms.</p><p>The VOLEitH approach, as a general-purpose proof system, represents a promising new direction for performant constructions. There are a <a href="https://pqc-mirath.org"><u>number</u></a> <a href="https://mqom.org"><u>of</u></a> <a href="https://pqc-perk.org"><u>competing</u></a> <a href="https://sdith.org"><u>in-the-head</u></a> schemes in the <a href="https://csrc.nist.gov/projects/pqc-dig-sig"><u>NIST competition for additional signature schemes</u></a>, including <a href="https://faest.info/authors.html"><u>one based on VOLEitH</u></a>. The current VOLEitH literature focuses on high-performance digital signatures, and an explicit construction for a full anonymous credential system has not yet been proposed. 
This means that features standard to ACs, such as multi-show unlinkability or the ability to prove relations between attributes, are not yet part of the design, whereas they are explicitly supported by the lattice construction. However, the preliminary results show great potential for performance, and it will be interesting to watch the continued cryptanalysis and feature development in this line of VOLEitH work in the area of anonymous credentials, especially since a general-purpose construction makes adding features easy.
</p><table><tr><td><p><b>Approach</b></p></td><td><p><b>Pros</b></p></td><td><p><b>Cons</b></p></td><td><p><b>Practical Viability</b></p></td></tr><tr><td><p><a href="https://eprint.iacr.org/2023/414"><u>Generic Composition</u></a></p></td><td><p>Flexible construction, strong security</p></td><td><p>Large signatures (112 kB), slow (660 ms)</p></td><td><p>Low: performance is not great</p></td></tr><tr><td><p><a href="https://eprint.iacr.org/2023/077.pdf"><u>Hash-and-sign</u></a></p></td><td><p>Potentially tiny signatures, lots of optimization potential</p></td><td><p>Current implementation large and slow</p></td><td><p>Low: performance is not great</p></td></tr><tr><td><p><a href="https://eprint.iacr.org/2024/131"><u>Hash-and-sign with aborts</u></a></p></td><td><p>Full AC system, good balance in communication</p></td><td><p>Slow runtimes (~1 s)</p></td><td><p>Medium: promising, but performance would need to improve</p></td></tr><tr><td><p><a href="https://www.youtube.com/watch?v=VMeaF9xgbcw"><u>VOLEitH</u></a></p></td><td><p>Excellent potential performance (&lt;50 ms, 7.5 kB)</p></td><td><p>Not a full AC system, not peer-reviewed</p></td><td><p>Medium: promising research direction, no full solution available so far</p></td></tr></table>
    <div>
      <h2>Closing the gap</h2>
      <a href="#closing-the-gap">
        
      </a>
    </div>
    <p>My (that is, Lena's) internship focused on a critical question: what should we look at next to build ACs for the Internet? For us, "the right direction" means developing protocols that can be integrated with real-world applications and developed collaboratively at the IETF. To make these a reality, we need researchers to look beyond blind signatures; we need a complete privacy-preserving protocol that combines blind signatures with efficient zero-knowledge proofs and properties like multi-show credentials that carry an internal state. The issuance should also be sublinear in communication size in the number of presentations.</p><p>So, with the transition to post-quantum cryptography on the horizon, what are our thoughts on the current IETF proposals? A 2022 NIST presentation on the current state of anonymous credentials states that <a href="https://csrc.nist.gov/csrc/media/Presentations/2022/stppa4-revoc-decent/images-media/20221121-stppa4--baldimtsi--anon-credentials-revoc-decentral.pdf"><u>efficient post-quantum secure solutions are basically non-existent</u></a>. We argue that the last three years have brought promising developments in lattice- and MPCitH-based anonymous credentials, but efficient post-quantum protocols still need work. Moving protocols into a post-quantum world isn't just a matter of swapping out old algorithms for new ones. A common approach to constructing post-quantum versions of classical protocols is swapping out the building blocks for their quantum-secure counterparts.</p><p>We believe this approach is essential, but not forward-looking. In addition to identifying how modern concerns can be accommodated by old cryptographic designs, we should be building new, post-quantum native protocols.</p><ul><li><p>For ARC, the conceptual path to a post-quantum construction seems relatively straightforward. 
The underlying cryptography follows a structure similar to the lattice-based anonymous credentials, or, when accepting a protocol with fewer features, the <a href="https://eprint.iacr.org/2023/414"><u>generic post-quantum Privacy Pass</u></a> construction. However, we need to support per-origin rate-limiting, which allows a token to be transformed at an origin without the redemption being linkable to redemptions at other origins, a feature that none of the post-quantum anonymous credential protocols or blind signatures support. Also, ARC is sublinear in communication size with respect to the number of tokens issued, which so far only the hash-and-sign with aborts lattice scheme achieves, although the notion of “limited shows” is not present in the current proposal. In addition, it would be great to see efficient implementations, especially for blind signatures, as well as research into efficient zero-knowledge proofs.</p></li><li><p>For ACT, we need the protocols for ARC plus an additional state. Even for the simplest counter, we need the ability to homomorphically subtract from a balance within the credential itself. This is a much more complex cryptographic requirement. It would also be interesting to see post-quantum double-spend prevention that enforces the sequential nature of ACT.</p></li></ul><p>Working on ACs and other privacy-preserving cryptography inevitably leads to a major bottleneck: efficient zero-knowledge proofs, or, to be more exact, efficiently proving hash function evaluations. In a ZK circuit, multiplications are expensive. Each multiplication gate in the circuit requires a cryptographic commitment, which adds communication overhead. In contrast, other operations like XOR can be virtually "free." This makes a huge difference in performance. For example, SHAKE (the primitive used in ML-DSA) can be orders of magnitude slower than arithmetization-friendly hash functions inside a ZKP. 
This is why researchers and implementers are already using <a href="https://eprint.iacr.org/2019/458"><u>Poseidon</u></a> or <a href="https://eprint.iacr.org/2023/323"><u>Poseidon2</u></a> to make their protocols faster.</p><p>Currently, <a href="https://www.poseidon-initiative.info/"><u>Ethereum</u></a> is <a href="https://x.com/VitalikButerin/status/1894681713613164888"><u>seriously considering migrating to the Poseidon hash</u></a> and has called for cryptanalysis, but there is no indication of standardization. This is a problem: papers increasingly use different instantiations of Poseidon to fit their use case, and there <a href="https://eprint.iacr.org/2016/492"><u>are</u></a> <a href="https://eprint.iacr.org/2023/323"><u>more</u></a> <a href="https://eprint.iacr.org/2022/840"><u>and</u></a> <a href="https://eprint.iacr.org/2025/1893"><u>more</u></a> <a href="https://eprint.iacr.org/2025/926"><u>zero</u></a>-<a href="https://eprint.iacr.org/2020/1143"><u>knowledge</u></a> <a href="https://eprint.iacr.org/2019/426"><u>friendly</u></a> <a href="https://eprint.iacr.org/2023/1025"><u>hash</u></a> <a href="https://eprint.iacr.org/2021/1038"><u>functions</u></a> <a href="https://eprint.iacr.org/2022/403"><u>coming</u></a> <a href="https://eprint.iacr.org/2025/058"><u>out</u></a>, tailored to different use cases. We would like to see at least one XOF and one hash function each for a prime field and for a binary field, ideally at several security levels. Also, is Poseidon the best or just the most well-known ZK-friendly cipher? Is it always secure against quantum computers (like we believe AES to be), and are there other attacks like the <a href="https://eprint.iacr.org/2025/950"><u>recent</u></a> <a href="https://eprint.iacr.org/2025/937"><u>attacks</u></a> on round-reduced versions?</p><p>Looking at algebra and zero-knowledge brings us to a fundamental debate in modern cryptography. 
Imagine a line representing the spectrum of research: On one end, you have protocols built on very well-analyzed standard assumptions like the <a href="https://blog.cloudflare.com/lattice-crypto-primer/#breaking-lattice-cryptography-by-finding-short-vectors"><u>SIS problem</u></a> on lattices or the collision resistance of SHA3. On the other end, you have protocols that gain massive efficiency by using more algebraic structure, which in turn relies on newer, stronger cryptographic assumptions. Breaking novel hash functions is somewhere in the middle. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2BMtbDoVnrmKeTvhCyfOjK/616438127351eedf6ff41db282a0511e/image7.png" />
          </figure><p>The answer for the Internet can’t just be to relent and stay at the left end of our graph to be safe. For the ecosystem to move forward, we need to have confidence in both. We need more research to validate the security of ZK-friendly primitives like Poseidon, and we need more scrutiny on the stronger assumptions that enable efficient algebraic methods.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>As we’ve explored, the cryptographic properties that make classical ACs efficient, particularly the rich structure of elliptic curves, do not have direct post-quantum equivalents. Our survey of the state of the art, from generic compositions using STARKs to various lattice-based schemes and promising new directions like MPC-in-the-head, reveals a field full of potential but with no clear winner. The trade-offs between communication cost, computational cost, and protocol rounds remain a significant barrier to practical, large-scale deployment, especially in comparison to elliptic curve constructions.</p><p>To bridge this gap, we must move beyond simply building post-quantum blind signatures. We challenge our colleagues in academia and industry to develop complete, post-quantum native protocols that address real-world needs. This includes supporting essential features like the per-origin rate-limiting required for ARC or the complex stateful credentials needed for ACT.</p><p>A critical bottleneck for all these approaches is the lack of efficient, standardized, and well-analyzed zero-knowledge-friendly hash functions. We need to research zero-knowledge-friendly primitives and build industry-wide confidence in them to enable efficient post-quantum privacy.</p><p>If you’re working on these problems, or you have experience in the management and deployment of classical credentials, now is the time to engage. The world is rapidly adopting credentials for everything from digital identity to bot management, and it is our collective responsibility to ensure these systems are private and secure for a post-quantum future. One thing is certain: there are more discussions to be had, and if you’re interested in helping to build this more secure and private digital world, we’re hiring 1,111 interns over the course of next year, and have open positions!</p> ]]></content:encoded>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[IETF]]></category>
            <category><![CDATA[European Union]]></category>
            <category><![CDATA[Elliptic Curves]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">JA04hlqr6TaeGhkvyutbt</guid>
            <dc:creator>Lena Heimberger</dc:creator>
            <dc:creator>Christopher Patton</dc:creator>
        </item>
        <item>
            <title><![CDATA[Keeping the Internet fast and secure: introducing Merkle Tree Certificates]]></title>
            <link>https://blog.cloudflare.com/bootstrap-mtc/</link>
            <pubDate>Tue, 28 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare is launching an experiment with Chrome to evaluate fast, scalable, and quantum-ready Merkle Tree Certificates, all without degrading performance or changing WebPKI trust relationships. ]]></description>
            <content:encoded><![CDATA[ <p>The world is in a race to build its first quantum computer capable of solving practical problems not feasible on even the largest conventional supercomputers. While the quantum computing paradigm promises many benefits, it also threatens the security of the Internet by breaking much of the cryptography we have come to rely on.</p><p>To mitigate this threat, Cloudflare is helping to migrate the Internet to Post-Quantum (PQ) cryptography. Today, <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption"><u>about 50%</u></a> of traffic to Cloudflare's edge network is protected against the most urgent threat: an attacker who can intercept and store encrypted traffic today and then decrypt it in the future with the help of a quantum computer. This is referred to as the <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest now, decrypt later</u></a><i> </i>threat.</p><p>However, this is just one of the threats we need to address. A quantum computer can also be used to crack a server's <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificate</a>, allowing an attacker to impersonate the server to unsuspecting clients. The good news is that we already have PQ algorithms we can use for quantum-safe authentication. The bad news is that adoption of these algorithms in TLS will require significant changes to one of the most complex and security-critical systems on the Internet: the Web Public-Key Infrastructure (WebPKI).</p><p>The central problem is the sheer size of these new algorithms: signatures for ML-DSA-44, one of the most performant PQ algorithms standardized by NIST, are 2,420 bytes long, compared to just 64 bytes for ECDSA-P256, the most popular non-PQ signature in use today; and its public keys are 1,312 bytes long, compared to just 64 bytes for ECDSA. That's a roughly 20-fold increase in size. 
Worse yet, the average TLS handshake includes a number of public keys and signatures, adding up to tens of kilobytes of overhead per handshake. This is enough to have a <a href="https://blog.cloudflare.com/another-look-at-pq-signatures/#how-many-added-bytes-are-too-many-for-tls"><u>noticeable impact</u></a> on the performance of TLS.</p><p>That makes drop-in PQ certificates a tough sell today: they don’t bring any security benefit before Q-day — the day a cryptographically relevant quantum computer arrives — but they do degrade performance. We could sit and wait until Q-day is a year away, but that’s playing with fire. Migrations always take longer than expected, and by waiting we risk the security and privacy of the Internet, which is <a href="https://developers.cloudflare.com/ssl/edge-certificates/universal-ssl/"><u>dear to us</u></a>.</p><p>It's clear that we must find a way to make post-quantum certificates cheap enough to deploy today by default for everyone — not just those who can afford it. In this post, we'll introduce you to the plan we’ve brought to the <a href="https://datatracker.ietf.org/group/plants/about/"><u>IETF</u></a> together with industry partners to redesign the WebPKI in order to allow a smooth transition to PQ authentication with no performance impact (and perhaps a performance improvement!). We'll provide an overview of one concrete proposal, called <a href="https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/"><u>Merkle Tree Certificates (MTCs)</u></a>, whose goal is to whittle down the number of public keys and signatures in the TLS handshake to the bare minimum required.</p><p>But talk is cheap. 
We <a href="https://blog.cloudflare.com/experiment-with-pq/"><u>know</u></a> <a href="https://blog.cloudflare.com/announcing-encrypted-client-hello/"><u>from</u></a> <a href="https://blog.cloudflare.com/why-tls-1-3-isnt-in-browsers-yet/"><u>experience</u></a> that, as with any change to the Internet, it's crucial to test early and often. <b>Today we're announcing our intent to deploy MTCs on an experimental basis in collaboration with Chrome Security.</b> In this post, we'll describe the scope of this experiment, what we hope to learn from it, and how we'll make sure it's done safely.</p>
    <div>
      <h2>The WebPKI today — an old system with many patches</h2>
      <a href="#the-webpki-today-an-old-system-with-many-patches">
        
      </a>
    </div>
    <p>Why does the TLS handshake have so many public keys and signatures?</p><p>Let's start with Cryptography 101. When your browser connects to a website, it asks the server to <b>authenticate</b> itself to make sure it's talking to the real server and not an impersonator. This is usually achieved with a cryptographic primitive known as a digital signature scheme (e.g., ECDSA or ML-DSA). In TLS, the server signs the messages exchanged between the client and server using its <b>secret key</b>, and the client verifies the signature using the server's <b>public key</b>. In this way, the server confirms to the client that they've had the same conversation, since only the server could have produced a valid signature.</p><p>If the client already knows the server's public key, then only <b>1 signature</b> is required to authenticate the server. In practice, however, this is not really an option. The web today is made up of around a billion TLS servers, so it would be unrealistic to provision every client with the public key of every server. What's more, the set of public keys will change over time as new servers come online and existing ones rotate their keys, so we would need some way of pushing these changes to clients.</p><p>This scaling problem is at the heart of the design of all PKIs.</p>
    <div>
      <h3>Trust is transitive</h3>
      <a href="#trust-is-transitive">
        
      </a>
    </div>
    <p>Instead of expecting the client to know the server's public key in advance, the server might just send its public key during the TLS handshake. But how does the client know that the public key actually belongs to the server? This is the job of a <b>certificate</b>.</p><p>A certificate binds a public key to the identity of the server — usually its DNS name, e.g., <code>cloudflareresearch.com</code>. The certificate is signed by a Certification Authority (CA) whose public key is known to the client. In addition to verifying the server's handshake signature, the client verifies the signature of this certificate. This establishes a chain of trust: by accepting the certificate, the client is trusting that the CA verified that the public key actually belongs to the server with that identity.</p><p>Clients are typically configured to trust many CAs and must be provisioned with a public key for each. This is much more manageable, however, since there are only hundreds of CAs instead of billions of servers. In addition, new certificates can be created without having to update clients.</p><p>These efficiencies come at a relatively low cost: for those counting at home, that's <b>+1</b> signature and <b>+1</b> public key, for a total of <b>2 signatures and 1 public key</b> per TLS handshake.</p><p>That's not the end of the story, however. As the WebPKI has evolved, these chains of trust have grown longer. These days it's common for a chain to consist of two or more certificates rather than just one. This is because CAs sometimes need to rotate their keys, just as servers do. But before they can start using the new key, they must distribute the corresponding public key to clients. This takes time, since it requires billions of clients to update their trust stores. 
To bridge the gap, the CA will sometimes use the old key to issue a certificate for the new one and append this certificate to the end of the chain.</p><p>That's<b> +1</b> signature and<b> +1</b> public key, which brings us to<b> 3 signatures and 2 public keys</b>. And we still have a little ways to go.</p>
    <div>
      <h3>Trust but verify</h3>
      <a href="#trust-but-verify">
        
      </a>
    </div>
    <p>The main job of a CA is to verify that a server has control over the domain for which it’s requesting a certificate. This process has evolved over the years from a high-touch, CA-specific process to a standardized, <a href="https://datatracker.ietf.org/doc/html/rfc8555/"><u>mostly automated process</u></a> used for issuing most certificates on the web. (Not all CAs fully support automation, however.) This evolution is marked by a number of security incidents in which a certificate was <b>mis-issued</b> to a party other than the server, allowing that party to impersonate the server to any client that trusts the CA.</p><p>Automation helps, but <a href="https://en.wikipedia.org/wiki/DigiNotar#Issuance_of_fraudulent_certificates"><u>attacks</u></a> are still possible, and mistakes are almost inevitable. <a href="https://blog.cloudflare.com/unauthorized-issuance-of-certificates-for-1-1-1-1/"><u>Earlier this year</u></a>, several certificates for Cloudflare's encrypted 1.1.1.1 resolver were issued without our involvement or authorization. This apparently occurred by accident, but it nonetheless put users of 1.1.1.1 at risk. (The mis-issued certificates have since been revoked.)</p><p>Ensuring mis-issuance is detectable is the job of the Certificate Transparency (CT) ecosystem. The basic idea is that each certificate issued by a CA gets added to a public <b>log</b>. Servers can audit these logs for certificates issued in their name. If ever a certificate is issued that they didn't request themselves, the server operator can prove the issuance happened, and the PKI ecosystem can take action to prevent the certificate from being trusted by clients.</p><p>Major browsers, including Chrome (and its derivatives), Safari, and Firefox, require certificates to be logged before they can be trusted: they will only accept the server's certificate if it appears in at least two logs the browser is configured to trust. 
This policy is easy to state, but tricky to implement in practice:</p><ol><li><p>Operating a CT log has historically been fairly expensive. Logs ingest billions of certificates over their lifetimes: when an incident happens, or even just under high load, it can take some time for a log to make a new entry available for auditors.</p></li><li><p>Clients can't really audit logs themselves, since this would expose their browsing history (i.e., the servers they wanted to connect to) to the log operators.</p></li></ol><p>The solution to both problems is to include a signature from the CT log along with the certificate. The signature is produced immediately in response to a request to log a certificate, and attests to the log's intent to include the certificate in the log within 24 hours.</p><p>Per browser policy, certificate transparency adds <b>+2</b> signatures to the TLS handshake, one for each log. This brings us to a total of <b>5 signatures and 2 public keys</b> in a typical handshake on the public web.</p>
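As a back-of-envelope check, here is a small Python calculation using the sizes quoted earlier in this post (64 bytes each for ECDSA-P256 signatures and public keys; 2,420-byte signatures and 1,312-byte public keys for ML-DSA-44), applied to the 5 signatures and 2 public keys of a typical handshake:

```python
# Per-handshake authentication bytes: 5 signatures + 2 public keys,
# using the sizes quoted in this post.
sizes = {
    "ECDSA-P256": {"sig": 64, "pk": 64},
    "ML-DSA-44":  {"sig": 2420, "pk": 1312},
}
for name, s in sizes.items():
    total = 5 * s["sig"] + 2 * s["pk"]
    print(f"{name}: {total} bytes")
# ECDSA-P256: 448 bytes; ML-DSA-44: 14724 bytes (roughly 14 kB of overhead)
```

Under half a kilobyte today versus roughly 14 kB post-quantum: that factor of ~30 is the gap MTCs aim to close.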
    <div>
      <h3>The future WebPKI</h3>
      <a href="#the-future-webpki">
        
      </a>
    </div>
    <p>The WebPKI is a living, breathing, and highly distributed system. We've had to patch it a number of times over the years to keep it going, but on balance it has served our needs quite well — until now.</p><p>Previously, whenever we needed to update something in the WebPKI, we would tack on another signature. This strategy has worked because conventional cryptography is so cheap. But <b>5 signatures and 2 public keys </b>on average for each TLS handshake is simply too much to cope with for the larger PQ signatures that are coming.</p><p>The good news is that by moving what we already have around in clever ways, we can drastically reduce the number of signatures we need.</p>
    <div>
      <h3>Crash course on Merkle Tree Certificates</h3>
      <a href="#crash-course-on-merkle-tree-certificates">
        
      </a>
    </div>
    <p><a href="https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/"><u>Merkle Tree Certificates (MTCs)</u></a> is a proposal for the next generation of the WebPKI that we are implementing and plan to deploy on an experimental basis. Its key features are as follows:</p><ol><li><p>All the information a client needs to validate a Merkle Tree Certificate can be disseminated out-of-band. If the client is sufficiently up-to-date, then the TLS handshake needs just <b>1 signature, 1 public key, and 1 Merkle tree inclusion proof</b>. This is quite small, even if we use post-quantum algorithms.</p></li><li><p>The MTC specification makes certificate transparency a first-class feature of the PKI by having each CA run its own log of exactly the certificates it issues.</p></li></ol><p>Let's poke our head under the hood a little. Below we have an MTC generated by one of our internal tests. This would be transmitted from the server to the client in the TLS handshake:</p>
            <pre><code>-----BEGIN CERTIFICATE-----
MIICSzCCAUGgAwIBAgICAhMwDAYKKwYBBAGC2ksvADAcMRowGAYKKwYBBAGC2ksv
AQwKNDQzNjMuNDguMzAeFw0yNTEwMjExNTMzMjZaFw0yNTEwMjgxNTMzMjZaMCEx
HzAdBgNVBAMTFmNsb3VkZmxhcmVyZXNlYXJjaC5jb20wWTATBgcqhkjOPQIBBggq
hkjOPQMBBwNCAARw7eGWh7Qi7/vcqc2cXO8enqsbbdcRdHt2yDyhX5Q3RZnYgONc
JE8oRrW/hGDY/OuCWsROM5DHszZRDJJtv4gno2wwajAOBgNVHQ8BAf8EBAMCB4Aw
EwYDVR0lBAwwCgYIKwYBBQUHAwEwQwYDVR0RBDwwOoIWY2xvdWRmbGFyZXJlc2Vh
cmNoLmNvbYIgc3RhdGljLWN0LmNsb3VkZmxhcmVyZXNlYXJjaC5jb20wDAYKKwYB
BAGC2ksvAAOB9QAAAAAAAAACAAAAAAAAAAJYAOBEvgOlvWq38p45d0wWTPgG5eFV
wJMhxnmDPN1b5leJwHWzTOx1igtToMocBwwakt3HfKIjXYMO5CNDOK9DIKhmRDSV
h+or8A8WUrvqZ2ceiTZPkNQFVYlG8be2aITTVzGuK8N5MYaFnSTtzyWkXP2P9nYU
Vd1nLt/WjCUNUkjI4/75fOalMFKltcc6iaXB9ktble9wuJH8YQ9tFt456aBZSSs0
cXwqFtrHr973AZQQxGLR9QCHveii9N87NXknDvzMQ+dgWt/fBujTfuuzv3slQw80
mibA021dDCi8h1hYFQAA
-----END CERTIFICATE-----</code></pre>
            <p>Looks like your average PEM-encoded certificate. Let's decode it and look at the parameters:</p>
            <pre><code>$ openssl x509 -in merkle-tree-cert.pem -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 531 (0x213)
        Signature Algorithm: 1.3.6.1.4.1.44363.47.0
        Issuer: 1.3.6.1.4.1.44363.47.1=44363.48.3
        Validity
            Not Before: Oct 21 15:33:26 2025 GMT
            Not After : Oct 28 15:33:26 2025 GMT
        Subject: CN=cloudflareresearch.com
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:70:ed:e1:96:87:b4:22:ef:fb:dc:a9:cd:9c:5c:
                    ef:1e:9e:ab:1b:6d:d7:11:74:7b:76:c8:3c:a1:5f:
                    94:37:45:99:d8:80:e3:5c:24:4f:28:46:b5:bf:84:
                    60:d8:fc:eb:82:5a:c4:4e:33:90:c7:b3:36:51:0c:
                    92:6d:bf:88:27
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Subject Alternative Name:
                DNS:cloudflareresearch.com, DNS:static-ct.cloudflareresearch.com
    Signature Algorithm: 1.3.6.1.4.1.44363.47.0
    Signature Value:
        00:00:00:00:00:00:02:00:00:00:00:00:00:00:02:58:00:e0:
        44:be:03:a5:bd:6a:b7:f2:9e:39:77:4c:16:4c:f8:06:e5:e1:
        55:c0:93:21:c6:79:83:3c:dd:5b:e6:57:89:c0:75:b3:4c:ec:
        75:8a:0b:53:a0:ca:1c:07:0c:1a:92:dd:c7:7c:a2:23:5d:83:
        0e:e4:23:43:38:af:43:20:a8:66:44:34:95:87:ea:2b:f0:0f:
        16:52:bb:ea:67:67:1e:89:36:4f:90:d4:05:55:89:46:f1:b7:
        b6:68:84:d3:57:31:ae:2b:c3:79:31:86:85:9d:24:ed:cf:25:
        a4:5c:fd:8f:f6:76:14:55:dd:67:2e:df:d6:8c:25:0d:52:48:
        c8:e3:fe:f9:7c:e6:a5:30:52:a5:b5:c7:3a:89:a5:c1:f6:4b:
        5b:95:ef:70:b8:91:fc:61:0f:6d:16:de:39:e9:a0:59:49:2b:
        34:71:7c:2a:16:da:c7:af:de:f7:01:94:10:c4:62:d1:f5:00:
        87:bd:e8:a2:f4:df:3b:35:79:27:0e:fc:cc:43:e7:60:5a:df:
        df:06:e8:d3:7e:eb:b3:bf:7b:25:43:0f:34:9a:26:c0:d3:6d:
        5d:0c:28:bc:87:58:58:15:00:00</code></pre>
            <p>While some of the parameters probably look familiar, others will look unusual. On the familiar side, the subject and public key are exactly what we might expect: the DNS name is <code>cloudflareresearch.com</code> and the public key is for a familiar signature algorithm, ECDSA-P256. This algorithm is not PQ, of course — in the future we would put ML-DSA-44 there instead.</p><p>On the unusual side, OpenSSL appears not to recognize the signature algorithm of the issuer and just prints the raw OID and bytes of the signature. There's a good reason for this: the MTC does not have a signature in it at all! So what exactly are we looking at?</p><p>The trick to leaving out signatures is that a Merkle Tree Certification Authority (MTCA) produces its <i>signatureless</i> certificates <i>in batches</i> rather than individually. In place of a signature, the certificate has an <b>inclusion proof</b> of the certificate in a batch of certificates signed by the MTCA.</p><p>To understand how inclusion proofs work, let's think about a slightly simplified version of the MTC specification. To issue a batch, the MTCA arranges the unsigned certificates into a data structure called a <b>Merkle tree</b> that looks like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4LGhISsS07kbpSgDkqx8p2/68e3b36deeca7f97139654d2c769df68/image3.png" />
          </figure><p>Each leaf of the tree corresponds to a certificate, and each inner node is equal to the hash of its children. To sign the batch, the MTCA uses its secret key to sign the head of the tree. The structure of the tree guarantees that each certificate in the batch was signed by the MTCA: if we tried to tweak the bits of any one of the certificates, the treehead would end up with a different value, which would cause signature verification to fail.</p><p>An inclusion proof for a certificate consists of the hash of each sibling node along the path from the certificate to the treehead:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4UZZHkRwsBLWXRYeop4rXv/8598cde48c27c112bc4992889f3d5799/image1.gif" />
          </figure><p>Given a validated treehead, this sequence of hashes is sufficient to prove inclusion of the certificate in the tree. This means that, in order to validate an MTC, the client also needs to obtain the signed treehead from the MTCA.</p><p>This is the key to MTC's efficiency:</p><ol><li><p>Signed treeheads can be disseminated to clients out-of-band and validated offline. Each validated treehead can then be used to validate any certificate in the corresponding batch, eliminating the need to obtain a signature for each server certificate.</p></li><li><p>During the TLS handshake, the client tells the server which treeheads it has. If the server has a signatureless certificate covered by one of those treeheads, then it can use that certificate to authenticate itself. That's <b>1 signature, 1 public key, and 1 inclusion proof</b> per handshake, all for the server being authenticated.</p></li></ol><p>Now, that's the simplified version. MTC proper has some more bells and whistles. To start, it doesn’t create a separate Merkle tree for each batch, but grows a single large tree, which improves transparency. As this tree grows, (sub)tree heads, which we call <b>landmarks</b>, are periodically selected to be shipped to browsers. In the common case, browsers will be able to fetch the most recent landmarks, and servers can wait for batch issuance, but we need a fallback: MTC also supports certificates that can be issued immediately and don’t require landmarks to be validated, but these are not as small. A server would provision both types of Merkle tree certificates, so that the common case is fast, and the exceptional case is slow, but at least it’ll work.</p>
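<p>The simplified scheme can be sketched in a few lines of Python. This is a toy model — SHA-256, a power-of-two batch size, and raw byte strings standing in for certificates — not the encoding used by the MTC draft:</p>

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree bottom-up. Returns a list of levels:
    level 0 is the hashed leaves, the last level is [treehead]."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels, index):
    """Sibling hashes along the path from leaf `index` to the treehead."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])  # the other child of our parent
        index //= 2
    return proof

def verify(leaf, index, proof, treehead):
    """Recompute the path to the root; accept iff it matches the treehead."""
    node = h(leaf)
    for sibling in proof:
        node = h(sibling + node) if index % 2 else h(node + sibling)
        index //= 2
    return node == treehead

# A toy batch of four unsigned certificates (placeholders for real DER).
batch = [b"cert-0", b"cert-1", b"cert-2", b"cert-3"]
levels = build_tree(batch)
treehead = levels[-1][0]              # the MTCA signs only this value
proof = inclusion_proof(levels, 2)    # proof for b"cert-2"

assert verify(b"cert-2", 2, proof, treehead)
assert not verify(b"cert-tampered", 2, proof, treehead)
```

<p>Note that the proof length grows logarithmically in the batch size: a batch of a million certificates needs only about 20 sibling hashes per certificate, while the MTCA produces a single signature for the whole batch.</p>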
    <div>
      <h2>Experimental deployment</h2>
      <a href="#experimental-deployment">
        
      </a>
    </div>
    <p>Ever since early designs for MTCs emerged, we’ve been eager to experiment with the idea. In line with the IETF principle of “<a href="https://www.ietf.org/runningcode/"><u>running code</u></a>”, it often takes implementing a protocol to work out kinks in the design. At the same time, we cannot risk the security of users. In this section, we describe our approach to experimenting with aspects of the Merkle Tree Certificates design <i>without</i> changing any trust relationships.</p><p>Let’s start with what we hope to learn. We have lots of questions whose answers can help to either validate the approach, or uncover pitfalls that require reshaping the protocol — in fact, an implementation of an early MTC draft by <a href="https://www.cs.ru.nl/masters-theses/2025/M_Pohl___Implementation_and_Analysis_of_Merkle_Tree_Certificates_for_Post-Quantum_Secure_Authentication_in_TLS.pdf"><u>Maximilian Pohl</u></a> and <a href="https://www.ietf.org/archive/id/draft-davidben-tls-merkle-tree-certs-07.html#name-acknowledgements"><u>Mia Celeste</u></a> did exactly this. We’d like to know:</p><p><b>What breaks?</b> Protocol ossification (the tendency of implementation bugs to make it harder to change a protocol) is an ever-present issue with deploying protocol changes. For TLS in particular, despite having built-in flexibility, time after time we’ve found that if that flexibility is not regularly used, there will be buggy implementations and middleboxes that break when they see things they don’t recognize. TLS 1.3 deployment <a href="https://blog.cloudflare.com/why-tls-1-3-isnt-in-browsers-yet/"><u>took years longer</u></a> than we hoped for this very reason. 
And more recently, the rollout of PQ key exchange in TLS caused the Client Hello to be split over multiple TCP packets, something that many middleboxes <a href="https://tldr.fail/"><u>weren't ready for</u></a>.</p><p><b>What is the performance impact?</b> In fact, we expect MTCs to <i>reduce</i> the size of the handshake, even compared to today's non-PQ certificates. They will also reduce CPU cost: ML-DSA signature verification is about as fast as ECDSA, and there will be far fewer signatures to verify. We therefore expect a reduction in latency, and would like to see whether the improvement is measurable.</p><p><b>What fraction of clients will stay up to date? </b>Getting the performance benefit of MTCs requires the clients and servers to be roughly in sync with one another. We expect MTCs to have fairly short lifetimes, a week or so. This means that if the client's latest landmark is older than a week, the server would have to fall back to a larger certificate. Knowing how often this fallback happens will help us tune the parameters of the protocol to make fallbacks less likely.</p><p>In order to answer these questions, we are implementing MTC support in our TLS stack and in our certificate issuance infrastructure. For their part, Chrome is implementing MTC support in their own TLS stack and will stand up infrastructure to disseminate landmarks to their users.</p><p>As we've done in past experiments, we plan to enable MTCs for a subset of our free customers with enough traffic that we will be able to get useful measurements. Chrome will control the experimental rollout: they can ramp up slowly, measuring as they go and rolling back if and when bugs are found.</p><p>Which leaves us with one last question: who will run the Merkle Tree CA?</p>
    <div>
      <h3>Bootstrapping trust from the existing WebPKI</h3>
      <a href="#bootstrapping-trust-from-the-existing-webpki">
        
      </a>
    </div>
    <p>Standing up a proper CA is no small task: it takes years to be trusted by major browsers. That’s why Cloudflare isn’t going to become a “real” CA for this experiment, and Chrome isn’t going to trust us directly.</p><p>Instead, to make progress in a reasonable timeframe without sacrificing due diligence, we plan to "mock" the role of the MTCA. We will run an MTCA (on <a href="https://github.com/cloudflare/azul/"><u>Workers</u></a>, based on our <a href="https://blog.cloudflare.com/azul-certificate-transparency-log/"><u>StaticCT logs</u></a>), but for each MTC we issue, we also publish an existing certificate from a trusted CA that agrees with it. We call this the <b>bootstrap certificate</b>. When Chrome’s infrastructure pulls updates from our MTCA log, they will also pull these bootstrap certificates and check whether they agree. Only if they do will they proceed to push the corresponding landmarks to Chrome clients. In other words, Cloudflare is effectively just “re-encoding” an existing certificate (with domain validation performed by a trusted CA) as an MTC, and Chrome is using certificate transparency to keep us honest.</p>
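<p>Conceptually, the agreement check boils down to comparing the identity bound by the MTC entry against the identity bound by the bootstrap certificate. A hypothetical sketch (the <code>CertSummary</code> and <code>agrees</code> names are ours; real infrastructure would compare the parsed X.509 fields, including SANs, public key, and validity):</p>

```python
from dataclasses import dataclass

# Hypothetical, simplified records: in reality the check would operate on
# fully parsed certificates, not hand-built summaries like these.
@dataclass(frozen=True)
class CertSummary:
    dns_names: frozenset   # subjectAltName DNS entries
    public_key: bytes      # the subject's public key bytes

def agrees(mtc_entry: CertSummary, bootstrap: CertSummary) -> bool:
    """Push a landmark only if every MTC entry is backed by a bootstrap
    certificate from a trusted CA binding the same key to the same names."""
    return (mtc_entry.dns_names == bootstrap.dns_names
            and mtc_entry.public_key == bootstrap.public_key)

mtc = CertSummary(frozenset({"cloudflareresearch.com"}), b"pubkey-bytes")
boot = CertSummary(frozenset({"cloudflareresearch.com"}), b"pubkey-bytes")
assert agrees(mtc, boot)
```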
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>With almost 50% of our traffic already protected by post-quantum encryption, we’re halfway to a fully post-quantum secure Internet. The second part of our journey, post-quantum certificates, is proving the hardest yet. A simple drop-in upgrade has a noticeable performance impact and no security benefit before Q-day, which makes it a hard sell to enable by default today. But here we are playing with fire: migrations always take longer than expected. If we want to keep a ubiquitously private and secure Internet, we need a post-quantum solution that’s performant enough to be enabled by default <b>today</b>.</p><p>Merkle Tree Certificates (MTCs) solve this problem by reducing the number of signatures and public keys to the bare minimum while maintaining the WebPKI's essential properties. We plan to roll out MTCs to a fraction of free accounts by early next year. This does not affect any visitors that are not part of the Chrome experiment. For those that are, thanks to the bootstrap certificates, there is no impact on security.</p><p>We’re excited to keep the Internet fast <i>and</i> secure, and will report back soon on the results of this experiment: watch this space! MTC is evolving as we speak; if you want to get involved, please join the IETF <a href="https://mailman3.ietf.org/mailman3/lists/plants@ietf.org/"><u>PLANTS mailing list</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Chrome]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[IETF]]></category>
            <category><![CDATA[Transparency]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">4jURWdZzyjdrcurJ4LlJ1z</guid>
            <dc:creator>Luke Valenta</dc:creator>
            <dc:creator>Christopher Patton</dc:creator>
            <dc:creator>Vânia Gonçalves</dc:creator>
            <dc:creator>Bas Westerbaan</dc:creator>
        </item>
        <item>
            <title><![CDATA[MoQ: Refactoring the Internet's real-time media stack]]></title>
            <link>https://blog.cloudflare.com/moq/</link>
            <pubDate>Fri, 22 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Media over QUIC (MoQ) is a new IETF standard that resolves the long-standing tension between latency, scale, and complexity, creating a single foundation for sub-second, interactive streaming at a global scale. ]]></description>
            <content:encoded><![CDATA[ <p>For over two decades, we've built real-time communication on the Internet using a patchwork of specialized tools. RTMP gave us ingest. <a href="https://www.cloudflare.com/learning/video/what-is-http-live-streaming/"><u>HLS</u></a> and <a href="https://www.mpeg.org/standards/MPEG-DASH/"><u>DASH</u></a> gave us scale. WebRTC gave us interactivity. Each solved a specific problem for its time, and together they power the global streaming ecosystem we rely on today.</p><p>But using them together in 2025 feels like building a modern application with tools from different eras. The seams are starting to show—in complexity, in latency, and in the flexibility needed for the next generation of applications, from sub-second live auctions to massive interactive events. We're often forced to make painful trade-offs between latency, scale, and operational complexity.</p><p>Today Cloudflare is launching the first Media over QUIC (MoQ) relay network, running on every Cloudflare server in datacenters in 330+ cities. MoQ is an open protocol being developed at the <a href="https://www.ietf.org/"><u>IETF</u></a> by engineers from across the industry—not a proprietary Cloudflare technology. MoQ combines the low-latency interactivity of WebRTC, the scalability of HLS/DASH, and the simplicity of a single architecture, all built on a modern transport layer. We're joining Meta, Google, Cisco, and others in building implementations that work seamlessly together, creating a shared foundation for the next generation of real-time applications on the Internet.</p>
    <div>
      <h3><b>An evolutionary ladder of compromise</b></h3>
      <a href="#an-evolutionary-ladder-of-compromise">
        
      </a>
    </div>
    <p>To understand the promise of MoQ, we first have to appreciate the history that led us here—a journey defined by a series of architectural compromises where solving one problem inevitably created another.</p><p><b>The RTMP era: Conquering latency, compromising on scale</b></p><p>In the early 2000s, <b>RTMP (Real-Time Messaging Protocol)</b> was a breakthrough. It solved the frustrating "download and wait" experience of early video playback on the web by creating a persistent, stateful TCP connection between a <a href="https://en.wikipedia.org/wiki/Adobe_Flash"><u>Flash</u></a> client and a server. This enabled low-latency streaming (2-5 seconds), powering the first wave of live platforms like <a href="http://justin.tv"><u>Justin.tv</u></a> (which later became Twitch).</p><p>But its strength was its weakness. That stateful connection, which had to be maintained for every viewer, was architecturally hostile to scale. It required expensive, specialized media servers and couldn't use the commodity HTTP-based <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/"><u>Content Delivery Networks (CDNs)</u></a> that were beginning to power the rest of the web. Its reliance on TCP also meant that a single lost packet could freeze the entire stream—a phenomenon known as <a href="https://blog.cloudflare.com/the-road-to-quic/#head-of-line-blocking"><u>head-of-line blocking</u></a>—creating jarring latency spikes. The industry retained RTMP for the "first mile" from the camera to servers (ingest), but a new solution was needed for the "last mile" from servers to your screen (delivery).</p><p><b>The HLS &amp; DASH era: Solving for scale, compromising on latency</b></p><p>The catalyst for the next era was the iPhone's rejection of Flash. In response, Apple created <a href="https://www.cloudflare.com/learning/video/what-is-http-live-streaming/"><b><u>HLS (HTTP Live Streaming)</u></b></a>. 
HLS, and its open-standard counterpart <b>MPEG-DASH</b>, abandoned stateful connections and treated video as a sequence of small, static files delivered over standard HTTP.</p><p>This enabled much greater scalability. By moving to the interoperable open standard of HTTP for the underlying transport, video could now be distributed by any web server and cached by global CDNs, allowing platforms to reach millions of viewers reliably and relatively inexpensively. The compromise? A <i>significant</i> trade-off in latency. To ensure smooth playback, players needed to buffer at least three video segments before starting. With segment durations of 6-10 seconds, this baked 15-30 seconds of latency directly into the architecture.</p><p>While extensions like <a href="https://developer.apple.com/documentation/http-live-streaming/enabling-low-latency-http-live-streaming-hls"><u>Low-Latency HLS (LL-HLS)</u></a> have more recently emerged to achieve latencies in the 3-second range, they remain complex patches <a href="https://blog.cloudflare.com/the-road-to-quic/#head-of-line-blocking"><u>fighting against the protocol's fundamental design</u></a>. These extensions introduce a layer of stateful, real-time communication—using clever workarounds like holding playlist requests open—that ultimately strains the stateless request-response model central to HTTP's scalability and composability.</p><p><b>The WebRTC era: Conquering conversational latency, compromising on architecture</b></p><p>In parallel, <b>WebRTC (Web Real-Time Communication)</b> emerged to solve a different problem: plugin-free, two-way conversational video with sub-500ms latency within a browser. It worked by creating direct peer-to-peer (P2P) media paths, removing central servers from the equation.</p><p>But this P2P model is fundamentally at odds with broadcast scale. 
<a href="https://blog.cloudflare.com/cloudflare-calls-anycast-webrtc/#webrtc-growing-pains"><u>In a mesh network, the number of connections grows quadratically with each new participant</u></a> (the "N-squared problem"). For more than a handful of users, the model collapses under the weight of its own complexity. To work around this, the industry developed server-based topologies like the Selective Forwarding Unit (SFU) and Multipoint Control Unit (MCU). These are effective but require building what is essentially a <a href="https://blog.cloudflare.com/cloudflare-calls-anycast-webrtc/#is-cloudflare-calls-a-real-sfu"><u>private, stateful, real-time CDN</u></a>—a complex and expensive undertaking that is not standardized across infrastructure providers.</p><p>This journey has left us with a fragmented landscape of specialized, non-interoperable silos, forcing developers to stitch together multiple protocols and accept a painful three-way tension between <b>latency, scale, and complexity</b>.</p>
    <div>
      <h3><b>Introducing MoQ</b></h3>
      <a href="#introducing-moq">
        
      </a>
    </div>
    <p>This is the context into which Media over QUIC (MoQ) emerges. It's not just another protocol; it's a new design philosophy built from the ground up to resolve this historical trilemma. Born out of an open, community-driven effort at the IETF, <u>MoQ aims to be a foundational Internet technology, not a proprietary product</u>.</p><p>Its promise is to unify the disparate worlds of streaming by delivering:</p><ol><li><p><b>Sub-second latency at broadcast scale:</b> Combining the latency of WebRTC with the scale of HLS/DASH and the simplicity of RTMP.</p></li><li><p><b>Architectural simplicity:</b> Creating a single, flexible protocol for ingest, distribution, and interactive use cases, eliminating the need to transcode between different technologies.</p></li><li><p><b>Transport efficiency:</b> Building on <a href="https://blog.cloudflare.com/the-road-to-quic/"><u>QUIC</u></a>, a <a href="https://www.cloudflare.com/learning/ddos/glossary/user-datagram-protocol-udp/"><u>UDP</u></a>-based protocol, to eliminate bottlenecks like TCP <a href="https://blog.cloudflare.com/the-road-to-quic/#head-of-line-blocking"><u>head-of-line blocking</u></a>.</p></li></ol><p>The initial focus was "Media" over QUIC, but the core concepts—named tracks of timed, ordered, but independent data—are so flexible that the working group is now simply calling the protocol "MoQ." The name reflects the power of the abstraction: it's a generic transport for any real-time data that needs to be delivered efficiently and at scale.</p><p>MoQ is now generic enough to serve as a general-purpose data fan-out, or pub/sub, system for everything from audio/video (high-bandwidth data) to sports score updates (low-bandwidth data).</p>
    <div>
      <h3><b>A deep dive into the MoQ protocol stack</b></h3>
      <a href="#a-deep-dive-into-the-moq-protocol-stack">
        
      </a>
    </div>
    <p>MoQ's elegance comes from solving the right problem at the right layer. Let's build up from the foundation to see how it achieves sub-second latency at scale.</p><p>The choice of QUIC as MoQ's foundation isn't arbitrary—it addresses issues that have plagued streaming protocols for decades.</p><p>By building on <b>QUIC</b> (the transport protocol that also powers <a href="https://www.cloudflare.com/learning/performance/what-is-http3/"><u>HTTP/3</u></a>), MoQ solves some key streaming problems:</p><ul><li><p><b>No head-of-line blocking:</b> Unlike TCP where one lost packet blocks everything behind it, QUIC streams are independent. A lost packet on one stream (e.g., an audio track) doesn't block another (e.g., the main video track). This alone eliminates the stuttering that plagued RTMP.</p></li><li><p><b>Connection migration:</b> When your device switches from Wi-Fi to cellular mid-stream, the connection seamlessly migrates without interruption—no rebuffering, no reconnection.</p></li><li><p><b>Fast connection establishment:</b> QUIC's <a href="https://blog.cloudflare.com/even-faster-connection-establishment-with-quic-0-rtt-resumption/"><u>0-RTT resumption</u></a> means returning viewers can start playing instantly.</p></li><li><p><b>Baked-in, mandatory encryption:</b> All QUIC connections are encrypted by default with <a href="https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/"><u>TLS 1.3</u></a>.</p></li></ul>
    <div>
      <h4>The core innovation: Publish/subscribe for media</h4>
      <a href="#the-core-innovation-publish-subscribe-for-media">
        
      </a>
    </div>
    <p>With QUIC solving transport issues, MoQ introduces its key innovation: treating media as subscribable tracks in a publish/subscribe system. But unlike traditional pub/sub, this is designed specifically for real-time media at CDN scale.</p><p>Instead of complex session management (WebRTC) or file-based chunking (HLS), <b>MoQ lets publishers announce named tracks of media that subscribers can request</b>. A relay network handles the distribution without needing to understand the media itself.</p>
    <div>
      <h4>How MoQ organizes media: The data model</h4>
      <a href="#how-moq-organizes-media-the-data-model">
        
      </a>
    </div>
    <p>Before we see how media flows through the network, let's understand how MoQ structures it. MoQ organizes data in a hierarchy:</p><ul><li><p><b>Tracks</b>: Named streams of media, like "video-1080p" or "audio-english". Subscribers request specific tracks by name.</p></li><li><p><b>Groups</b>: Independently decodable chunks of a track. For video, this typically means a GOP (Group of Pictures) starting with a keyframe. New subscribers can join at any Group boundary.</p></li><li><p><b>Objects</b>: The actual packets sent on the wire. Each Object belongs to a Track and has a position within a Group.</p></li></ul><p>This simple hierarchy enables two capabilities:</p><ol><li><p>Subscribers can start playback at <b>Group</b> boundaries without waiting for the next keyframe</p></li><li><p>Relays can forward <b>Objects</b> without parsing or understanding the media format</p></li></ol>
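<p>The Track/Group/Object hierarchy can be illustrated with a small sketch. This is a hypothetical, simplified model — the MoQT drafts define a richer wire encoding (priorities, subgroups, extensions) — but it shows why Group boundaries are the natural join points:</p>

```python
from dataclasses import dataclass, field

@dataclass
class MoqObject:
    group_id: int    # which Group (for video, typically a GOP) this belongs to
    object_id: int   # position within the Group
    payload: bytes

@dataclass
class Track:
    name: str                              # e.g. "video-1080p"
    objects: list = field(default_factory=list)

    def publish(self, group_id: int, object_id: int, payload: bytes):
        self.objects.append(MoqObject(group_id, object_id, payload))

    def join_at_group(self, group_id: int):
        """A new subscriber starts at a Group boundary: the first Object of
        a Group is independently decodable (for video, a keyframe)."""
        return [o for o in self.objects if o.group_id >= group_id]

track = Track("video-1080p")
track.publish(0, 0, b"keyframe-0")
track.publish(0, 1, b"delta-0-1")
track.publish(1, 0, b"keyframe-1")
track.publish(1, 1, b"delta-1-1")

# A late joiner starts at the most recent Group, not mid-GOP.
late = track.join_at_group(1)
assert [o.payload for o in late] == [b"keyframe-1", b"delta-1-1"]
```

<p>Note that nothing in this structure depends on the payload being video: a relay only ever looks at the track name, group ID, and object ID.</p>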
    <div>
      <h5>The network architecture: From publisher to subscriber</h5>
      <a href="#the-network-architecture-from-publisher-to-subscriber">
        
      </a>
    </div>
    <p>MoQ’s network components are also simple:</p><ul><li><p><b>Publishers</b>: Announce track namespaces and send Objects</p></li><li><p><b>Subscribers</b>: Request specific tracks by name</p></li><li><p><b>Relays</b>: Connect publishers to subscribers by forwarding immutable Objects without parsing or <a href="https://www.cloudflare.com/learning/video/video-encoding-formats/"><u>transcoding</u></a> the media</p></li></ul><p>A Relay acts as a subscriber to receive tracks from upstream (like the original publisher) and simultaneously acts as a publisher to forward those same tracks downstream. This model is the key to MoQ's scalability: one upstream subscription can fan out to serve thousands of downstream viewers.</p>
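<p>The fan-out property can be sketched as follows. This is a minimal illustration of the idea, not the MoQT wire protocol: the relay subscribes upstream once per track, no matter how many downstream subscribers request it:</p>

```python
from collections import defaultdict

class Relay:
    """Toy relay: subscriber to its upstream, publisher to its subscribers."""
    def __init__(self, upstream=None):
        self.upstream = upstream
        self.subscribers = defaultdict(list)  # track name -> delivery callbacks

    def subscribe(self, track: str, on_object):
        first = track not in self.subscribers
        self.subscribers[track].append(on_object)
        if first and self.upstream is not None:
            # One upstream subscription, regardless of local subscriber count.
            self.upstream.subscribe(track, lambda obj: self.on_object(track, obj))

    def on_object(self, track: str, obj: bytes):
        # Forward the immutable Object to every subscriber; no parsing needed.
        for deliver in self.subscribers[track]:
            deliver(obj)

origin = Relay()                  # stands in for the publisher's first-hop relay
edge = Relay(upstream=origin)

received = []
edge.subscribe("video-1080p", received.append)   # viewer 1
edge.subscribe("video-1080p", received.append)   # viewer 2: no new upstream sub

origin.on_object("video-1080p", b"keyframe")
assert received == [b"keyframe", b"keyframe"]
assert len(origin.subscribers["video-1080p"]) == 1   # single upstream fan-in
```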
    <div>
      <h5>The MoQ Stack</h5>
      <a href="#the-moq-stack">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4g2MroH24otkzH3LQsFWZe/84ca43ad6c1c933ac395bf4ac767c584/image1.png" />
          </figure><p>MoQ's architecture can be understood as three distinct layers, each with a clear job:</p><ol><li><p><b>The Transport Foundation (QUIC or WebTransport):</b> This is the modern foundation upon which everything is built. MoQT can run directly over raw <b>QUIC</b>, which is ideal for native applications, or over <b>WebTransport</b>, which is required for use in a web browser. Crucially, the <a href="https://www.ietf.org/archive/id/draft-ietf-webtrans-http3-02.html"><u>WebTransport protocol</u></a> and its corresponding <a href="https://w3c.github.io/webtransport/"><u>W3C browser API</u></a> make QUIC's multiplexed reliable streams and unreliable datagrams directly accessible to browser applications. This is a game-changer. Protocols like <a href="https://blog.cloudflare.com/stream-now-supports-srt-as-a-drop-in-replacement-for-rtmp/"><u>SRT</u></a> may be efficient, but their lack of native browser support relegates them to ingest-only roles. WebTransport gives MoQ first-class citizenship on the web, making it suitable for both ingest and massive-scale distribution directly to clients.</p></li><li><p><b>The MoQT Layer:</b> Sitting on top of QUIC (or WebTransport), the MoQT layer provides the signaling and structure for a publish-subscribe system. This is the primary focus of the IETF working group. It defines the core control messages—like ANNOUNCE and SUBSCRIBE—and the basic data model we just covered. MoQT itself is intentionally spartan; it doesn't know or care whether the data it's moving is <a href="https://www.cloudflare.com/learning/video/what-is-h264-avc/"><u>H.264</u></a> video, Opus audio, or game state updates.</p></li><li><p><b>The Streaming Format Layer:</b> This is where media-specific logic lives. A streaming format defines things like manifests, codec metadata, and packaging rules.
 <a href="https://datatracker.ietf.org/doc/draft-ietf-moq-warp/"><b><u>WARP</u></b></a> is one such format being developed alongside MoQT at the IETF, but it isn't the only one. Another standards body, like DASH-IF, could define a <a href="https://www.iso.org/standard/85623.html"><u>CMAF</u></a>-based streaming format over MoQT. A company that controls both original publisher and end subscriber can develop its own proprietary streaming format to experiment with new codecs or delivery mechanisms without being constrained by the transport protocol.</p></li></ol><p>This separation of layers is why different organizations can build interoperable implementations while still innovating at the streaming format layer.</p>
    <div>
      <h4>End-to-End Data Flow</h4>
      <a href="#end-to-end-data-flow">
        
      </a>
    </div>
    <p>Now that we understand the architecture and the data model, let's walk through how these pieces come together to deliver a stream. The protocol is flexible, but a typical broadcast flow relies on the <code>ANNOUNCE</code> and <code>SUBSCRIBE</code> messages to establish a data path from a publisher to a subscriber through the relay network.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2iRTJFdtCjIOcyg7ezoYgJ/e303ea8d1eb438328b60fdb28be47e84/image2.png" />
          </figure><p>Here is a step-by-step breakdown of what happens in this flow:</p><ol><li><p><b>Initiating Connections:</b> The process begins when the endpoints, acting as clients, connect to the relay network. The Original Publisher initiates a connection with its nearest relay (we'll call it Relay A). Separately, an End Subscriber initiates a connection with its own local relay (Relay B). These endpoints perform a <code>SETUP</code> handshake with their respective relays to establish a MoQ session and declare supported parameters.</p></li><li><p><b>Announcing a Namespace:</b> To make its content discoverable, the Publisher sends an <code>ANNOUNCE</code> message to Relay A. This message declares that the publisher is the authoritative source for a given <b>track namespace</b>. Relay A receives this and registers in a shared control plane (a conceptual database) that it is now a source for this namespace within the network.</p></li><li><p><b>Subscribing to a Track:</b> When the End Subscriber wants to receive media, it sends a <code>SUBSCRIBE</code> message to its relay, Relay B. This message is a request for a specific <b>track name</b> within a specific <b>track namespace</b>.</p></li><li><p><b>Connecting the Relays:</b> Relay B receives the <code>SUBSCRIBE</code> request and queries the control plane. It looks up the requested namespace and discovers that Relay A is the source. Relay B then initiates a session with Relay A (if it doesn't already have one) and forwards the <code>SUBSCRIBE</code> request upstream.</p></li><li><p><b>Completing the Path and Forwarding Objects:</b> Relay A, having received the subscription request from Relay B, forwards it to the Original Publisher. With the full path now established, the Publisher begins sending the <code>Objects</code> for the requested track. The Objects flow from the Publisher to Relay A, which forwards them to Relay B, which in turn forwards them to the End Subscriber. 
If another subscriber connects to Relay B and requests the same track, Relay B can immediately start sending them the Objects without needing to create a new upstream subscription.</p></li></ol>
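<p>The five steps above can be sketched in miniature. This is a toy model, not our relay implementation: a plain dictionary stands in for the shared control plane, and the MoQT message exchange is reduced to method calls.</p>

```python
# Toy model of the ANNOUNCE/SUBSCRIBE flow (steps 1-5 above).
class ControlPlane:
    """Conceptual shared database mapping namespaces to source relays."""
    def __init__(self):
        self.sources = {}

class Relay:
    def __init__(self, name, control_plane):
        self.name = name
        self.cp = control_plane
        self.upstream = {}     # (namespace, track) -> upstream source, set once
        self.subscribers = {}  # (namespace, track) -> downstream sinks

    def announce(self, namespace, publisher):
        # Step 2: register this relay as the source for the namespace.
        self.cp.sources[namespace] = (self, publisher)

    def subscribe(self, namespace, track, subscriber):
        key = (namespace, track)
        self.subscribers.setdefault(key, []).append(subscriber)
        if key not in self.upstream:
            # Step 4: look up the source and forward upstream only once.
            source_relay, _publisher = self.cp.sources[namespace]
            self.upstream[key] = source_relay
            if source_relay is not self:
                source_relay.subscribe(namespace, track, self)

    def on_object(self, namespace, track, payload):
        # Step 5: fan out to every downstream subscriber.
        for sink in self.subscribers.get((namespace, track), []):
            if isinstance(sink, Relay):
                sink.on_object(namespace, track, payload)
            else:
                sink.append(payload)

cp = ControlPlane()
relay_a, relay_b = Relay("A", cp), Relay("B", cp)
relay_a.announce("live/alice", publisher="alice")

viewer1, viewer2 = [], []
relay_b.subscribe("live/alice", "video", viewer1)
relay_b.subscribe("live/alice", "video", viewer2)  # reuses the upstream subscription

relay_a.on_object("live/alice", "video", b"frame-0")
```

<p>Note how the second viewer never triggers a second upstream subscription: Relay B already has the track, which is the fan-out property that makes relays efficient.</p>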
    <div>
      <h5>An Alternative Flow: The <code>PUBLISH</code> Model</h5>
      <a href="#an-alternative-flow-the-publish-model">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6KJYU1eWNyuSZEHNYonDDn/3898003d5a7f5904787c7ef009b22fe0/image3.png" />
          </figure><p>More recent drafts of the MoQ specification have introduced an alternative, push-based model using a <code>PUBLISH</code> message. In this flow, a publisher can effectively ask for permission to send a track's objects to a relay <i>without</i> waiting for a <code>SUBSCRIBE</code> request. The publisher sends a <code>PUBLISH</code> message, and the relay's <code>PUBLISH_OK</code> response indicates whether it will accept the objects. This is particularly useful for ingest scenarios, where a publisher wants to send its stream to an entry point in the network immediately, ensuring the media is available the instant the first subscriber connects.</p>
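<p>A toy sketch of that admission decision follows. The names are hypothetical and the authorization check is deliberately simplistic; the real exchange is a pair of MoQT control messages over QUIC.</p>

```python
# Toy model of the push-based PUBLISH flow: the publisher offers a track
# before any SUBSCRIBE arrives, and the relay's PUBLISH_OK admits it.
class IngestRelay:
    def __init__(self, accept_namespaces):
        self.accept = accept_namespaces
        self.cache = {}  # track -> objects buffered before the first subscriber

    def on_publish(self, namespace, track):
        # The relay decides whether to respond with PUBLISH_OK.
        ok = namespace in self.accept
        if ok:
            self.cache[(namespace, track)] = []
        return ok

    def on_object(self, namespace, track, payload):
        # Objects arrive and are buffered even with zero subscribers.
        self.cache[(namespace, track)].append(payload)

relay = IngestRelay(accept_namespaces={"live/alice"})
accepted = relay.on_publish("live/alice", "video")  # PUBLISH_OK
relay.on_object("live/alice", "video", b"frame-0")
```
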
    <div>
      <h4>Advanced capabilities: Prioritization and congestion control</h4>
      <a href="#advanced-capabilities-prioritization-and-congestion-control">
        
      </a>
    </div>
    <p>MoQ’s benefits really shine when networks get congested. MoQ includes mechanisms for handling the reality of network traffic. One such mechanism is Subgroups.</p><p><b>Subgroups</b> are subdivisions within a Group that effectively map directly to the underlying QUIC streams. All Objects within the same Subgroup are generally sent on the same QUIC stream, guaranteeing their delivery order. Subgroup numbering also presents an opportunity to encode prioritization: within a Group, lower-numbered Subgroups are considered higher priority. </p><p>This enables intelligent quality degradation, especially with layered codecs (e.g. SVC):</p><ul><li><p><b>Subgroup 0</b>: Base video layer (360p) - must deliver</p></li><li><p><b>Subgroup 1</b>: Enhancement to 720p - deliver if bandwidth allows</p></li><li><p><b>Subgroup 2</b>: Enhancement to 1080p - first to drop under congestion</p></li></ul><p>When a relay detects congestion, it can drop Objects from higher-numbered Subgroups, preserving the base layer. Viewers see reduced quality instead of buffering.</p><p>The MoQ specification defines a scheduling algorithm that determines the order for all objects that are "ready to send." When a relay has multiple objects ready, it prioritizes them first by <b>group order</b> (ascending or descending) and then, within a group, by <b>subgroup id</b>. Our implementation supports the <b>group order</b> preference, which can be useful for low-latency broadcasts. If a viewer falls behind and its subscription uses descending group order, the relay prioritizes sending Objects from the newest "live" Group, potentially canceling unsent Objects from older Groups. This can help viewers catch up to the live edge quickly, a highly desirable feature for many interactive streaming use cases. The optimal strategies for using these features to improve QoE for specific use cases are still an open research question. 
We invite developers and researchers to use our network to experiment and help find the answers.</p>
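<p>The drop-and-schedule behavior described above can be sketched as follows. The function and its priorities are illustrative, not the specification's exact algorithm: under congestion it drops higher-numbered subgroups, then orders what remains by group order (descending here, favoring the live edge) and subgroup id.</p>

```python
# Illustrative sketch of subgroup-based degradation and scheduling.
from dataclasses import dataclass

@dataclass(frozen=True)
class ReadyObject:
    group_id: int
    subgroup_id: int  # 0 = base layer; higher = enhancement layers
    object_id: int

def schedule(ready, max_subgroup, descending_groups=True):
    # Congestion response: keep only subgroups within the bandwidth budget.
    kept = [o for o in ready if o.subgroup_id <= max_subgroup]
    # Priority: group order first, then lower subgroup ids within a group.
    return sorted(
        kept,
        key=lambda o: (-o.group_id if descending_groups else o.group_id,
                       o.subgroup_id, o.object_id),
    )

# Two groups, each with base (0), 720p (1), and 1080p (2) subgroups.
ready = [ReadyObject(g, s, 0) for g in (7, 8) for s in (0, 1, 2)]
plan = schedule(ready, max_subgroup=1)  # congestion: drop the 1080p layer
```

<p>With descending group order, the newest group's base layer goes out first, which is the "catch up to the live edge" behavior described above; the 1080p enhancement objects are simply never scheduled.</p>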
    <div>
      <h3><b>Implementation: building the Cloudflare MoQ relay</b></h3>
      <a href="#implementation-building-the-cloudflare-moq-relay">
        
      </a>
    </div>
    <p>Theory is one thing; implementation is another. To validate the protocol and understand its real-world challenges, we've been building one of the first global MoQ relay networks. Cloudflare's network, which places compute and logic at the edge, is very well suited for this.</p><p>Our architecture connects the abstract concepts of MoQ to the Cloudflare stack. In our deep dive, we mentioned that when a publisher <code>ANNOUNCE</code>s a namespace, relays need to register this availability in a "shared control plane" so that <code>SUBSCRIBE</code> requests can be routed correctly. For this critical piece of state management, we use <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a>.</p><p>When a publisher announces a new namespace to a relay in, say, London, that relay uses a Durable Object—our strongly consistent, single-threaded storage solution—to record that this namespace is now available at that specific location. When a subscriber in Paris wants a track from that namespace, the network can query this distributed state to find the nearest source and route the <code>SUBSCRIBE</code> request accordingly. This architecture builds upon the technology we developed for Cloudflare's real-time services and provides a solution to the challenge of state management at a global scale.</p>
    <div>
      <h4>An Evolving Specification</h4>
      <a href="#an-evolving-specification">
        
      </a>
    </div>
    <p>Building on a new protocol in the open means implementing against a moving target. To get MoQ into the hands of the community, we made a deliberate trade-off: our current relay implementation is based on a <b>subset of the features defined in </b><a href="https://www.ietf.org/archive/id/draft-ietf-moq-transport-07.html"><b><u>draft-ietf-moq-transport-07</u></b></a>. This version became a de facto target for interoperability among several open-source projects, and pausing there allowed us to put effort towards other aspects of deploying our relay network.</p><p>This draft of the protocol makes a distinction between accessing "past" and "future" content. <code><b>SUBSCRIBE</b></code> is used to receive <b>future</b> objects for a track as they arrive—like tuning into a live broadcast to get everything from that moment forward. In contrast, <code><b>FETCH</b></code> provides a mechanism for accessing <b>past</b> content that a relay may already have in its cache—like asking for a recording of a song that just played.</p><p>Both are part of the same specification, but for the most pressing low-latency use cases, a performant implementation of <code>SUBSCRIBE</code> is what matters most. For that reason, we have focused our initial efforts there and have not yet implemented <code>FETCH</code>.</p><p>This is where our roadmap is flexible and where the community can have a direct impact. Do you need <code>FETCH</code> to build on-demand or catch-up functionality? Or is more complete support for the prioritization features within <code>SUBSCRIBE</code> more critical for your use case? The feedback we receive from early developers will help us decide what to build next.</p><p>As always, we will announce updates and changes to our implementation on our <a href="https://developers.cloudflare.com/moq"><u>developer docs pages</u></a> as development continues.</p>
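<p>The SUBSCRIBE/FETCH distinction can be illustrated with a toy relay (hypothetical names, not our implementation, which currently supports SUBSCRIBE but not yet FETCH): a subscription delivers only objects that arrive after it is made, while a fetch reads objects the relay has already cached.</p>

```python
# Toy contrast between SUBSCRIBE (future objects) and FETCH (cached past).
class TrackRelay:
    def __init__(self):
        self.cache = []        # past objects the relay has retained
        self.subscribers = []

    def subscribe(self, sink):
        # SUBSCRIBE: receive objects from this moment forward.
        self.subscribers.append(sink)

    def fetch(self, start, end):
        # FETCH: request a range of past objects from the relay's cache.
        return self.cache[start:end]

    def on_object(self, payload):
        self.cache.append(payload)
        for sink in self.subscribers:
            sink.append(payload)

relay = TrackRelay()
relay.on_object(b"group-0")   # arrives before anyone subscribes
late_viewer = []
relay.subscribe(late_viewer)  # "tunes in" after group-0 has passed
relay.on_object(b"group-1")
```

<p>The late viewer sees only <code>group-1</code>; recovering <code>group-0</code> requires a fetch from the cache, which is why on-demand and catch-up use cases depend on <code>FETCH</code>.</p>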
    <div>
      <h3>Kick the tires on the future</h3>
      <a href="#kick-the-tires-on-the-future">
        
      </a>
    </div>
    <p>We believe in building in the open and in interoperability across the community. MoQ is not a Cloudflare technology but a foundational Internet technology. To that end, the first demo client we’re presenting is an open source, community example.</p><p><b>You can access the demo here: </b><a href="https://moq.dev/publish/"><b><u>https://moq.dev/publish/</u></b></a></p><p>Even though this is a preview release, we are running MoQ relays at Cloudflare’s full scale, as we do with every production service. This means every server that is part of the Cloudflare network in more than 330 cities is now a MoQ relay.</p><p>We invite you to experience the "wow" moment of near-instant, sub-second streaming latency that MoQ enables. How would you use a protocol that offers the speed of a video call with the scale of a global broadcast?</p>
    <div>
      <h3><b>Interoperability</b></h3>
      <a href="#interoperability">
        
      </a>
    </div>
    <p>We’ve been working with others in the IETF WG community and beyond on interoperability of publishers, players and other parts of the MoQ ecosystem. So far, we’ve tested with:</p><ul><li><p>Luke Curley’s <a href="https://moq.dev"><u>moq.dev</u></a></p></li><li><p>Lorenzo Miniero’s <a href="https://github.com/meetecho/imquic"><u>imquic</u></a></p></li><li><p>Meta’s <a href="https://github.com/facebookexperimental/moxygen"><u>Moxygen</u></a> </p></li><li><p><a href="https://github.com/englishm/moq-rs"><u>moq-rs</u></a></p></li><li><p><a href="https://github.com/englishm/moq-js"><u>moq-js</u></a></p></li><li><p><a href="https://norsk.video/"><u>Norsk</u></a></p></li><li><p><a href="https://vindral.com/"><u>Vindral</u></a></p></li></ul>
    <div>
      <h3>The Road Ahead</h3>
      <a href="#the-road-ahead">
        
      </a>
    </div>
    <p>The Internet's media stack is being refactored. For two decades, we've been forced to choose between latency, scale, and complexity. The compromises we made solved some problems, but also led to a fragmented ecosystem.</p><p>MoQ represents a promising new foundation—a chance to unify the silos and build the next generation of real-time applications on a scalable protocol. We're committed to helping build this foundation in the open, and we're just getting started.</p><p>MoQ is a realistic way forward: built on QUIC for future-proofing, easier to understand than WebRTC, and, unlike RTMP, compatible with browsers.</p><p>The protocol is evolving, the implementations are maturing, and the community is growing. Whether you're building the next generation of live streaming, exploring real-time collaboration, or pushing the boundaries of interactive media, consider whether MoQ may provide the foundation you need.</p>
    <div>
      <h3>Availability and pricing</h3>
      <a href="#availability-and-pricing">
        
      </a>
    </div>
    <p>We want developers to start building with MoQ today. To make that possible, MoQ at Cloudflare is in tech preview: it's available free of charge for testing (at any scale). Visit our <a href="https://developers.cloudflare.com/moq/"><u>developer homepage</u></a> for updates and potential breaking changes.</p><p>Indie developers and large enterprises alike ask about pricing early in their adoption of new technologies. We will be transparent and clear about MoQ pricing. In general availability, self-serve customers should expect to pay 5 cents/GB outbound, with no cost for traffic sent towards Cloudflare.</p><p>Enterprise customers can expect pricing in line with regular media delivery pricing, competitive with incumbent protocols. This means if you’re already using Cloudflare for media delivery, you should not be wary of adopting new technologies because of cost. We will support you.</p><p>If you’re interested in partnering with Cloudflare in adopting the protocol early or contributing to its development, please reach out to us at <a href="mailto:moq@cloudflare.com"><u>moq@cloudflare.com</u></a>! Engineers excited about the future of the Internet are standing by.</p>
    <div>
      <h3>Get involved:</h3>
      <a href="#get-involved">
        
      </a>
    </div>
    <ul><li><p><b>Try the demo:</b> <a href="https://moq.dev/publish/"><u>https://moq.dev/publish/</u></a></p></li><li><p><b>Read the Internet draft:</b> <a href="https://datatracker.ietf.org/doc/draft-ietf-moq-transport/"><u>https://datatracker.ietf.org/doc/draft-ietf-moq-transport/</u></a></p></li><li><p><b>Contribute</b> to the protocol’s development: <a href="https://datatracker.ietf.org/group/moq/documents/"><u>https://datatracker.ietf.org/group/moq/documents/</u></a></p></li><li><p><b>Visit </b>our developer homepage: <a href="https://developers.cloudflare.com/moq/"><u>https://developers.cloudflare.com/moq/</u></a></p></li></ul><p></p> ]]></content:encoded>
            <category><![CDATA[Video]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[Live Streaming]]></category>
            <category><![CDATA[WebRTC]]></category>
            <category><![CDATA[IETF]]></category>
            <category><![CDATA[Standards]]></category>
            <guid isPermaLink="false">2XgF5NjmAy3cqybLPkpMFu</guid>
            <dc:creator>Mike English</dc:creator>
            <dc:creator>Renan Dincer</dc:creator>
        </item>
        <item>
            <title><![CDATA[Examining HTTP/3 usage one year on]]></title>
            <link>https://blog.cloudflare.com/http3-usage-one-year-on/</link>
            <pubDate>Tue, 06 Jun 2023 13:00:20 GMT</pubDate>
            <description><![CDATA[ With the HTTP/3 RFC celebrating its 1st birthday, we examined HTTP version usage trends between May 2022 - May 2023. We found that HTTP/3 usage by browsers continued to grow, but that search engine and social media bots continued to effectively ignore the latest version of the web’s core protocol ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3fGsSPUCSxABWlwpC5HfdV/ca7cf03337e600bd768b8acc7d06de36/image11-1.png" />
            
            </figure><p>In June 2022, after the publication of a set of HTTP-related Internet standards, including the <a href="https://www.rfc-editor.org/rfc/rfc9114.html">RFC that formally defined HTTP/3</a>, we published <a href="/cloudflare-view-http3-usage/"><i>HTTP RFCs have evolved: A Cloudflare view of HTTP usage trends</i></a>. One year on, as the RFC reaches its first birthday, we thought it would be interesting to look back at how these trends have evolved over the last year.</p><p>Our previous post reviewed usage trends for <a href="https://datatracker.ietf.org/doc/html/rfc9112">HTTP/1.1</a>, <a href="https://datatracker.ietf.org/doc/html/rfc9113">HTTP/2</a>, and <a href="https://datatracker.ietf.org/doc/html/rfc9114">HTTP/3</a> observed across Cloudflare’s network between May 2021 and May 2022, broken out by version and browser family, as well as for search engine indexing and social media bots. At the time, we found that browser-driven traffic was overwhelmingly using HTTP/2, although HTTP/3 usage was showing signs of growth. Search and social bots were mixed in terms of preference for <a href="https://www.cloudflare.com/learning/performance/http2-vs-http1.1/">HTTP/1.1 vs. HTTP/2</a>, with little-to-no HTTP/3 usage seen.</p><p>Between May 2022 and May 2023, we found that HTTP/3 usage in browser-retrieved content continued to grow, but that search engine indexing and social media bots continued to effectively ignore the latest version of the web’s core protocol. (Having said that, <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">the benefits of HTTP/3</a> are very user-centric, and arguably offer minimal benefits to bots designed to asynchronously crawl and index content. This may be a key reason that we see such low adoption across these automated user agents.) In addition, HTTP/3 usage across API traffic is still low, but doubled across the year. 
Support for HTTP/3 is on by default for zones using Cloudflare’s free tier of service, while paid customers have the option to activate support.</p><p>HTTP/1.1 and HTTP/2 use TCP as a transport layer and add security via <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/">TLS</a>. HTTP/3 uses QUIC to provide both the transport layer and security. Due to the difference in transport layer, user agents usually require learning that an origin is accessible using HTTP/3 before they'll try it. One method of discovery is <a href="https://httpwg.org/specs/rfc7838.html">HTTP Alternative Services</a>, where servers return an Alt-Svc response header containing a list of supported <a href="https://developer.mozilla.org/en-US/docs/Glossary/ALPN">Application-Layer Protocol Negotiation Identifiers (ALPN IDs)</a>. Another method is the <a href="/speeding-up-https-and-http-3-negotiation-with-dns/">HTTPS record type</a>, where clients query the DNS to learn the supported ALPN IDs. The ALPN ID for HTTP/3 is "h3" but while the specification was in development and iteration, we added a suffix to identify the particular draft version e.g., "h3-29" identified <a href="https://datatracker.ietf.org/doc/html/draft-ietf-quic-http-29">draft 29</a>. In order to maintain compatibility for a wide range of clients, Cloudflare advertised both "h3" and "h3-29". However, draft 29 was published close to three years ago and clients have caught up with support for the final RFC. As of late May 2023, Cloudflare no longer advertises h3-29 for zones that have HTTP/3 enabled, helping to save several bytes on each HTTP response or <a href="https://www.cloudflare.com/learning/dns/dns-records/">DNS record</a> that would have included it. 
Because a browser and web server typically automatically negotiate the highest HTTP version available, HTTP/3 takes precedence over HTTP/2.</p><p>In the sections below, “likely automated” and “automated” traffic based on <a href="https://developers.cloudflare.com/bots/concepts/bot-score/">Cloudflare bot score</a> has been filtered out for desktop and mobile browser analysis to restrict analysis to “likely human” traffic, but it is included for the search engine and social media bot analysis. In addition, references to HTTP requests or HTTP traffic below include requests made over both HTTP and HTTPS.</p>
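<p>As a concrete illustration of the Alt-Svc discovery described above, here is a rough sketch of extracting ALPN IDs from an Alt-Svc header value. This is a simplified parser for illustration, not a full RFC 7838 implementation, and the header values are examples.</p>

```python
# Simplified Alt-Svc parsing: pull out the advertised ALPN protocol IDs
# so a client can decide whether "h3" (HTTP/3 over QUIC) is available.
def alt_svc_alpn_ids(header_value):
    """Extract ALPN protocol IDs from an Alt-Svc header value."""
    ids = []
    for entry in header_value.split(","):
        entry = entry.strip()
        if "=" in entry:
            # Each comma-separated entry starts with "<alpn-id>=<authority>".
            ids.append(entry.split("=", 1)[0])
    return ids

# Before late May 2023, Cloudflare advertised both the RFC ID and draft 29:
old = 'h3=":443"; ma=86400, h3-29=":443"; ma=86400'
# Afterwards, only the final RFC's ALPN ID remains:
new = 'h3=":443"; ma=86400'
```

<p>Dropping the <code>h3-29</code> entry is what saves those bytes on every response that carries the header.</p>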
    <div>
      <h3>Overall request distribution by HTTP version</h3>
      <a href="#overall-request-distribution-by-http-version">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/BD7nhUhMqEAKHE3lw7Ucs/1230d8393a90c5a431398edccac2b6c2/download-3.png" />
            
            </figure><p>Aggregating global web traffic to Cloudflare on a daily basis, we can observe usage trends for HTTP/1.1, HTTP/2, and HTTP/3 across the surveyed one year period. The share of traffic over HTTP/1.1 declined from 8% to 7% between May and the end of September, but grew rapidly to over 11% through October. It stayed elevated into the new year and through January, dropping back down to 9% by May 2023. Interestingly, the weekday/weekend traffic pattern became more pronounced after the October increase, and remained for the subsequent six months. HTTP/2 request share saw nominal change over the year, beginning around 68% in May 2022, but then starting to decline slightly in June. After that, its share didn’t see a significant amount of change, ending the period just shy of 64%. No clear weekday/weekend pattern was visible for HTTP/2. Starting with just over 23% share in May 2022, the percentage of requests over HTTP/3 grew to just over 30% by August and into September, but dropped to around 26% by November. After some nominal loss and growth, it ended the surveyed time period at 28% share. (Note that this graph begins in late May due to data retention limitations encountered when generating the graph in early June.)</p>
    <div>
      <h3>API request distribution by HTTP version</h3>
      <a href="#api-request-distribution-by-http-version">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1iKB6A5jGKVMBcaRAYckjv/47b7777093a2cad492060224a8257601/download--1--2.png" />
            
            </figure><p>Although <a href="/application-security-2023/">API traffic</a> makes up a significant amount of Cloudflare’s request volume, only a small fraction of those requests are made over HTTP/3. Approximately half of such requests are made over HTTP/1.1, with another third over HTTP/2. However, HTTP/3 usage for APIs grew from around 6% in May 2022 to over 12% by May 2023. HTTP/3’s smaller share of traffic is likely due in part to support for HTTP/3 in key tools like <a href="https://curl.se/docs/http3.html">curl</a> still being considered as “experimental”. Should this change in the future, with HTTP/3 gaining first-class support in such tools, we expect that this will accelerate growth in HTTP/3 usage, both for <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">APIs</a> and overall as well.</p>
    <div>
      <h3>Mitigated request distribution by HTTP version</h3>
      <a href="#mitigated-request-distribution-by-http-version">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4J6wSXALudeaeZNzxvgRBo/6614895278f32528664d3f1444402350/download--2--2.png" />
            
            </figure><p>The analyses presented above consider all HTTP requests made to Cloudflare, but we also thought that it would be interesting to look at HTTP version usage by potentially malicious traffic, so we broke out just those requests that were mitigated by one of Cloudflare’s application security solutions. The graph above shows that the vast majority of mitigated requests are made over HTTP/1.1 and HTTP/2, with generally less than 5% made over HTTP/3. Mitigated requests appear to be most frequently made over HTTP/1.1, although HTTP/2 accounted for a larger share between early August and late November. These observations suggest that attackers don’t appear to be investing the effort to upgrade their tools to take advantage of the newest version of HTTP, finding the older versions of the protocol sufficient for their needs. (Note that this graph begins in late May 2022 due to data retention limitations encountered when generating the graph in early June 2023.)</p>
    <div>
      <h3>HTTP/3 use by desktop browser</h3>
      <a href="#http-3-use-by-desktop-browser">
        
      </a>
    </div>
    <p>As we noted last year, <a href="https://caniuse.com/http3">support for HTTP/3 in the stable release channels of major browsers</a> came in November 2020 for Google Chrome and Microsoft Edge, and April 2021 for Mozilla Firefox. We also noted that in Apple Safari, HTTP/3 support needed to be <a href="https://developer.apple.com/forums/thread/660516">enabled</a> in the “Experimental Features” developer menu in production releases. However, in the most recent releases of Safari, it appears that this step is no longer necessary, and that HTTP/3 is now natively supported.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/46nLEn3t41yyWfhZkznaEa/29c5b7bfa59b34f55977cc65ad67c2bf/download--3--2.png" />
            
            </figure><p>Looking at request shares by browser, Chrome started the period responsible for approximately 80% of HTTP/3 request volume, but the continued growth of Safari dropped it to around 74% by May 2023. A year ago, Safari represented less than 1% of HTTP/3 traffic on Cloudflare, but grew to nearly 7% by May 2023, likely as a result of support graduating from experimental to production.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7L26q8bA4sjcNlx7Vh0ohL/667061d148ccbaa0043cd3fd1e38a879/download--4--2.png" />
            
            </figure><p>Removing Chrome from the graph again makes trends across the other browsers more visible. As noted above, Safari experienced significant growth over the last year, while Edge saw a bump from just under 10% to just over 11% in June 2022. It stayed around that level through the new year, and then gradually dropped below 10% over the next several months. Firefox dropped slightly, from around 10% to just under 9%, while reported HTTP/3 traffic from Internet Explorer was near zero.</p><p>As we did in last year’s post, we also wanted to look at how the share of HTTP versions has changed over the last year across each of the leading browsers. The relative stability of HTTP/2 and HTTP/3 seen over the last year is in some contrast to the observations made in last year’s post, which saw some noticeable shifts during the May 2021 - May 2022 timeframe.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3bt30ch54tIJHXIcA4xzNo/ae9bf1593781da888f4c4843ac379465/download--5--1.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/CRgOGynraOznE7gu7AFjM/ec94e0c8be1b4ec2fb4f9f8f0a715cb0/download--6--1.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3AbRtdFXbJ43Xli6w0jVJN/ad597050da4a948d41a9b275c3752d96/download--7-.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/GgWSo7NP45FZffpsqScv1/e269f55f5b59402ad771a065196b3d59/download--8-.png" />
            
            </figure><p>In looking at request share by protocol version across the major desktop browser families, we see that across all of them, HTTP/1.1 share grows in late October. Further analysis indicates that this growth was due to significantly higher HTTP/1.1 request volume across several large customer zones, but it isn’t clear <b>why</b> this influx of traffic using an older version of HTTP occurred. It is clear that HTTP/2 remains the dominant protocol used for content requests by the major browsers, consistently accounting for 50-55% of request volume for Chrome and Edge, and ~60% for Firefox. However, for Safari, HTTP/2’s share dropped from nearly 95% in May 2022 to around 75% a year later, thanks to the growth in HTTP/3 usage.</p><p>HTTP/3 share on Safari grew from under 3% to nearly 18% over the course of the year, while its share on the other browsers was more consistent, with Chrome and Edge hovering around 40% and Firefox around 35%, all showing pronounced weekday/weekend traffic patterns. (That pattern is arguably the most pronounced for Edge.) A much fainter version of that pattern becomes evident for Safari in late 2022, although it varies by less than a percentage point.</p>
    <div>
      <h3>HTTP/3 usage by mobile browser</h3>
      <a href="#http-3-usage-by-mobile-browser">
        
      </a>
    </div>
    <p>Mobile devices are responsible for <a href="https://radar.cloudflare.com/traffic?range=28d">over half</a> of request volume to Cloudflare, with Chrome Mobile <a href="https://radar.cloudflare.com/adoption-and-usage?range=28d">generating</a> more than 25% of all requests, and Mobile Safari more than 10%. Given this, we decided to explore HTTP/3 usage across these two key mobile platforms.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6gEIDgm3aTtDYNZj0GD5bS/3728f1ff9a6790b44f66deec6101871c/download--9-.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6a0By0cXENGpNbqV1BQB7G/5ddbdae331642235f4395c0972e8121e/download--10-.png" />
            
            </figure><p>Looking at Chrome Mobile and Chrome Mobile Webview (an embeddable version of Chrome that applications can use to display Web content), we find HTTP/1.1 usage to be minimal, topping out at under 5% of requests. HTTP/2 usage dropped from 60% to just under 55% between May and mid-September, but then bumped back up to near 60%, remaining essentially flat to slightly lower through the rest of the period. In a complementary fashion, HTTP/3 traffic increased from 37% to 45%, before falling just below 40% in mid-September, hovering there through May. The usage patterns ultimately look very similar to those seen with desktop Chrome, albeit without the latter’s clear weekday/weekend traffic pattern.</p><p>Perhaps unsurprisingly, the usage patterns for Mobile Safari and Mobile Safari Webview closely mirror those seen with desktop Safari. HTTP/1.1 share increases in October, and HTTP/3 sees strong growth, from under 3% to nearly 18%.</p>
    <div>
      <h3>Search indexing bots</h3>
      <a href="#search-indexing-bots">
        
      </a>
    </div>
    <p>Exploring usage of the various versions of HTTP by <a href="https://www.cloudflare.com/learning/bots/what-is-a-web-crawler/">search engine crawlers/bots</a>, we find that last year’s trend continues, and that there remains little-to-no usage of HTTP/3. (As mentioned above, this is somewhat expected, as HTTP/3 is optimized for browser use cases.) Graphs for Bing &amp; Baidu here are trimmed to a period ending April 1, 2023 due to anomalous data during April that is being investigated.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2zpXgMRgxrsctpJHrIShZC/ca5830878f4a328f0ee98ea5afdd883b/download--11-.png" />
            
            </figure><p>GoogleBot continues to rely primarily on HTTP/1.1, which generally comprises 55-60% of request volume. The balance is nearly all HTTP/2, although some nominal growth in HTTP/3 usage sees it peaking at just under 2% in March.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/415sWhtv1TxuyQJgW95wq4/a4340af637f9dbc42f25b3d81f61f356/download--12-.png" />
            
            </figure><p>Through January 2023, around 85% of requests from Microsoft’s BingBot were made via HTTP/2, but that share dropped closer to 80% in late January. The balance of the requests were made via HTTP/1.1, as HTTP/3 usage was negligible.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2EP2wiznT1eIlRB7rwDQZB/6a2ffcda96dbc4a77f85d4866453aea0/download--13-.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7BxyzvnZnL8rWgIpIcEh9D/63cd123faeb6e9795b3766082b918b1a/download--14-.png" />
            
            </figure><p>Looking at indexing bots from search engines based outside of the United States, Russia’s YandexBot appears to use HTTP/1.1 almost exclusively, with HTTP/2 usage generally around 1%, although there was a period of increased usage between late August and mid-November. It isn’t clear what ultimately caused this increase. There was no meaningful request volume seen over HTTP/3. The indexing bot used by Chinese search engine Baidu also appears to strongly prefer HTTP/1.1, generally used for over 85% of requests. However, the percentage of requests over HTTP/2 saw a number of spikes, briefly reaching over 60% on days in July, November, and December 2022, as well as January 2023, with several additional spikes in the 30% range. Again, it isn’t clear what caused this spiky behavior. HTTP/3 usage by BaiduBot is effectively non-existent as well.</p>
    <div>
      <h3>Social media bots</h3>
      <a href="#social-media-bots">
        
      </a>
    </div>
    <p>As with the Bing &amp; Baidu graphs above, the graphs below are trimmed to a period ending April 1.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6JuMhjbBKR1azrJ0zgZfeM/7c765cdc40f70ddbdec16aa447585e0f/download--15-.png" />
            
            </figure><p>Facebook’s use of HTTP/3 for site crawling and indexing over the last year remained near zero, similar to what we observed over the previous year. HTTP/1.1 started the period accounting for under 60% of requests, and except for a brief peak above that level in late May, usage of HTTP/1.1 steadily declined over the course of the year, dropping to around 30% by April 2023. Correspondingly, use of HTTP/2 increased from just over 40% in May 2022 to over 70% in April 2023. Meta engineers confirmed that this shift away from HTTP/1.1 usage is an expected gradual change in their infrastructure's use of HTTP, and that they are slowly working towards removing HTTP/1.1 from their infrastructure entirely.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Gq43pPPIHZuV6L8gqwHCv/1bb85f1d4eb42551a64e41e943678e7f/download--16-.png" />
            
            </figure><p>In last year’s blog post, we noted that “TwitterBot clearly has a strong and consistent preference for HTTP/2, accounting for 75-80% of its requests, with the balance over HTTP/1.1.” This preference generally remained the case through early October, at which point HTTP/2 usage began a gradual decline to just above 60% by April 2023. It isn’t clear what drove the week-long HTTP/2 drop and HTTP/1.1 spike in late May 2022. And as we noted last year, TwitterBot’s use of HTTP/3 remains non-existent.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4fb6YfpvO1LgDfHmQ9ExXh/5c999b0cf8ced8ca0c7a64b9788c3fb5/download--17-.png" />
            
            </figure><p>In contrast to Facebook’s and Twitter’s site crawling bots, HTTP/3 actually accounts for a noticeable, and growing, volume of requests made by LinkedIn’s bot, increasing from just under 1% in May 2022 to just over 10% in April 2023. We noted last year that LinkedIn’s use of HTTP/2 began to take off in March 2022, growing to approximately 5% of requests. Usage of this version gradually increased over this year’s surveyed period to 15%, although the growth was particularly erratic and spiky, as opposed to a smooth, consistent increase. HTTP/1.1 remained the dominant protocol used by LinkedIn’s bots, although its share dropped from around 95% in May 2022 to 75% in April 2023.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>On the whole, we are excited to see that usage of HTTP/3 has generally increased for browser-based consumption of traffic, and recognize that there is opportunity for significant further growth if and when it starts to be used more actively for API interactions through production support in key tools like curl. And though disappointed to see that search engine and social media bot usage of HTTP/3 remains minimal to non-existent, we also recognize that the real-time benefits of using the newest version of the web’s foundational protocol may not be completely applicable for asynchronous automated content retrieval.</p><p>You can follow these and other trends in the “Adoption and Usage” section of Cloudflare Radar at <a href="https://radar.cloudflare.com/adoption-and-usage">https://radar.cloudflare.com/adoption-and-usage</a>, as well as by following <a href="https://twitter.com/cloudflareradar">@CloudflareRadar</a> on Twitter or <a href="https://cloudflare.social/@radar">https://cloudflare.social/@radar</a> on Mastodon.</p> ]]></content:encoded>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">7Dpg4lAaYLKzXNozcyuxdv</guid>
            <dc:creator>David Belson</dc:creator>
            <dc:creator>Lucas Pardue</dc:creator>
        </item>
        <item>
            <title><![CDATA[HTTP RFCs have evolved: A Cloudflare view of HTTP usage trends]]></title>
            <link>https://blog.cloudflare.com/cloudflare-view-http3-usage/</link>
            <pubDate>Mon, 06 Jun 2022 20:49:17 GMT</pubDate>
            <description><![CDATA[ HTTP/3 is now RFC 9114. We explore Cloudflare's view of how it is being used ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today, a cluster of Internet standards were published that rationalize and modernize the definition of HTTP - the application protocol that underpins the web. This work includes updates to, and <a href="https://www.cloudflare.com/learning/cloud/how-to-refactor-applications/">refactoring</a> of, HTTP semantics, HTTP caching, HTTP/1.1, HTTP/2, and the brand-new <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a>. Developing these specifications has been no mean feat and today marks the culmination of efforts far and wide, in the Internet Engineering Task Force (IETF) and beyond. We thought it would be interesting to celebrate the occasion by sharing some analysis of Cloudflare's view of HTTP traffic over the last 12 months.</p><p>However, before we get into the traffic data, for quick reference, here are the new RFCs that you should make a note of and start using:</p><ul><li><p>HTTP Semantics - <a href="https://www.rfc-editor.org/rfc/rfc9110.html">RFC 9110</a></p><ul><li><p>HTTP's overall architecture, common terminology and shared protocol aspects such as request and response messages, methods, status codes, header and trailer fields, message content, representation data, content codings and much more. 
Obsoletes RFCs <a href="https://www.rfc-editor.org/rfc/rfc2818.html">2818</a>, <a href="https://www.rfc-editor.org/rfc/rfc7231.html">7231</a>, <a href="https://www.rfc-editor.org/rfc/rfc7232.html">7232</a>, <a href="https://www.rfc-editor.org/rfc/rfc7233.html">7233</a>, <a href="https://www.rfc-editor.org/rfc/rfc7235.html">7235</a>, <a href="https://www.rfc-editor.org/rfc/rfc7538.html">7538</a>, <a href="https://www.rfc-editor.org/rfc/rfc7615.html">7615</a>, <a href="https://www.rfc-editor.org/rfc/rfc7694.html">7694</a>, and portions of <a href="https://www.rfc-editor.org/rfc/rfc7230.html">7230</a>.</p></li></ul></li><li><p>HTTP Caching - <a href="https://www.rfc-editor.org/rfc/rfc9111.html">RFC 9111</a></p><ul><li><p>HTTP caches and related header fields to control the behavior of response caching. Obsoletes RFC <a href="https://www.rfc-editor.org/rfc/rfc7234.html">7234</a>.</p></li></ul></li><li><p>HTTP/1.1 - <a href="https://www.rfc-editor.org/rfc/rfc9112.html">RFC 9112</a></p><ul><li><p>A syntax, aka "wire format", of HTTP that uses a text-based format. Typically used over TCP and TLS. Obsoletes portions of RFC <a href="https://www.rfc-editor.org/rfc/rfc7230.html">7230</a>.</p></li></ul></li><li><p>HTTP/2 - RFC <a href="https://www.rfc-editor.org/rfc/rfc9113.html">9113</a></p><ul><li><p>A syntax of HTTP that uses a binary framing format, which provides streams to support concurrent requests and responses. Message fields can be compressed using HPACK. Typically used over TCP and TLS. Obsoletes RFCs <a href="https://www.rfc-editor.org/rfc/rfc7540.html">7540</a> and <a href="https://www.rfc-editor.org/rfc/rfc8740.html">8740</a>.</p></li></ul></li><li><p>HTTP/3 - RFC <a href="https://www.rfc-editor.org/rfc/rfc9114.html">9114</a></p><ul><li><p>A syntax of HTTP that uses a binary framing format optimized for the QUIC transport protocol. 
Message fields can be compressed using QPACK.</p></li></ul></li><li><p>QPACK - RFC <a href="https://www.rfc-editor.org/rfc/rfc9204.html">9204</a></p><ul><li><p>A variation of HPACK field compression that is optimized for the QUIC transport protocol.</p></li></ul></li></ul><p>On May 28, 2021, we <a href="/quic-version-1-is-live-on-cloudflare/">enabled</a> QUIC version 1 and HTTP/3 for all Cloudflare customers, using the final "h3" identifier that matches RFC 9114. So although today's publication is an occasion to celebrate, for us nothing much has changed, and it's business as usual.</p><p><a href="https://caniuse.com/http3">Support for HTTP/3 in the stable release channels of major browsers</a> came in November 2020 for Google Chrome and Microsoft Edge and April 2021 for Mozilla Firefox. In Apple Safari, HTTP/3 support currently needs to be <a href="https://developer.apple.com/forums/thread/660516">enabled</a> in the “Experimental Features” developer menu in production releases.</p><p>A browser and web server typically automatically negotiate the highest HTTP version available. Thus, HTTP/3 takes precedence over HTTP/2. We looked back over the last year to understand HTTP/3 usage trends across the Cloudflare network, as well as analyzing HTTP versions used by traffic from leading browser families (Google Chrome, Mozilla Firefox, Microsoft Edge, and Apple Safari), major search engine indexing bots, and bots associated with some popular social media platforms. The graphs below are based on aggregate HTTP(S) traffic seen globally by the Cloudflare network, and include requests for website and application content across the Cloudflare customer base between May 7, 2021, and May 7, 2022. We used <a href="https://developers.cloudflare.com/bots/concepts/bot-score/">Cloudflare bot scores</a> to restrict analysis to “likely human” traffic for the browsers, and to “likely automated” and “automated” for the search and social bots.</p>
    <div>
      <h3>Traffic by HTTP version</h3>
      <a href="#traffic-by-http-version">
        
      </a>
    </div>
    <p>Overall, HTTP/2 still comprises the majority of the request traffic for Cloudflare customer content, as clearly seen in the graph below. After remaining fairly consistent through 2021, HTTP/2 request volume increased by approximately 20% heading into 2022. HTTP/1.1 request traffic remained fairly flat over the year, aside from a slight drop in early December. And while HTTP/3 traffic initially trailed HTTP/1.1, it surpassed it in early July, growing steadily and  roughly doubling in twelve months.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2UKNCgWJPAocsCrvTmqKOG/6c1d9ff45b8c4430f4663f4fe8a41964/image13-1.png" />
            
            </figure>
    <div>
      <h3>HTTP/3 traffic by browser</h3>
      <a href="#http-3-traffic-by-browser">
        
      </a>
    </div>
    <p>Digging into just HTTP/3 traffic, the graph below shows the trend in daily aggregate request volume over the last year for HTTP/3 requests made by the surveyed browser families. Google Chrome (orange line) is far and away the leading browser, with request volume far outpacing the others.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6fOBxNVQis3KRP9qMJtN0h/07df569e787dcfd3b918124a9c324b30/image6-21.png" />
            
            </figure><p>Below, we remove Chrome from the graph to allow us to more clearly see the trends across other browsers. Likely because Microsoft Edge is also based on the Chromium engine, its trend closely mirrors Chrome’s. As noted above, Mozilla Firefox first enabled production support in <a href="https://hacks.mozilla.org/2021/04/quic-and-http-3-support-now-in-firefox-nightly-and-beta/">version 88</a> in April 2021, making it available by default by the end of May. The increased adoption of that updated version during the following month is clear in the graph as well, as HTTP/3 request volume from Firefox grew rapidly. HTTP/3 traffic from Apple Safari increased gradually through April, suggesting growth in the number of users enabling the experimental feature or running a Technology Preview version of the browser. However, Safari’s HTTP/3 traffic has subsequently dropped over the last couple of months. We are not aware of any specific reasons for this decline, but our most recent observations indicate HTTP/3 traffic is recovering.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Mupv6iXQ195JfJkFLJQjX/cb0cc4153c043740e92e93fb2e041626/image2-57.png" />
            
            </figure><p>Looking at the lines in the graph for Chrome, Edge, and Firefox, a weekly cycle is clearly visible in the graph, suggesting greater usage of these browsers during the work week. This same pattern is absent from Safari usage.</p><p>Across the surveyed browsers, Chrome ultimately accounts for approximately 80% of the HTTP/3 requests seen by Cloudflare, as illustrated in the graphs below. Edge is responsible for around another 10%, with Firefox just under 10%, and Safari responsible for the balance.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Yph7V9e1W31PWkSry6pCy/6c874447bfa49392244e587dfb3d35fe/image1-64.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1zPj3TMsZuiirtoljldJYy/12f92447d2d0b3c26afd0b5754c510f1/image8-10.png" />
            
            </figure><p>We also wanted to look at how the mix of HTTP versions has changed over the last year across each of the leading browsers. Although the percentages vary between browsers, it is interesting to note that the trends are very similar across Chrome, Firefox and Edge. (After Firefox turned on default HTTP/3 support in May 2021, of course.) These trends are largely customer-driven – that is, they are likely due to changes in Cloudflare customer configurations.</p><p>Most notably, we see an increase in HTTP/3 during the last week of September, and a decrease in HTTP/1.1 at the beginning of December. For Safari, the HTTP/1.1 drop in December is also visible, but the HTTP/3 increase in September is not. We expect that, once Safari supports HTTP/3 by default, its trends will become more similar to those seen for the other browsers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4P1m2PJH7GBq9kBUL4vH0/fd19391109337e16a255967b54120392/image7-12.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Fj8pBr6Z5tpMV9XUTC1lJ/68b10faf3b1f840844d1cf97f8204b64/image9-6.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6iMw3Aj3IXWpjn4LyxBAsG/7b59629b937fb39d352a126a5bd178d3/image12-1.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2dtxJLKq2N23CBwcZlEz8s/64bdc5275320e16bd0b8780db49e0ffc/image11-2.png" />
            
            </figure>
    <div>
      <h3>Traffic by search indexing bot</h3>
      <a href="#traffic-by-search-indexing-bot">
        
      </a>
    </div>
    <p>Back in 2014, Google <a href="https://developers.google.com/search/blog/2014/08/https-as-ranking-signal">announced</a> that it would start to consider HTTPS usage as a ranking signal as it indexed websites. However, it does not appear that Google, or any of the other major search engines, currently consider support for the latest versions of HTTP as a ranking signal. (At least not directly – the performance improvements associated with newer versions of HTTP could theoretically influence rankings.) Given that, we wanted to understand which versions of HTTP the indexing bots themselves were using.</p><p>Despite leading the charge around the development of QUIC, and integrating HTTP/3 support into the Chrome browser early on, it appears that on the indexing/crawling side, Google still has quite a long way to go. The graph below shows that requests from GoogleBot are still predominantly being made over HTTP/1.1, although use of HTTP/2 has grown over the last six months, gradually approaching HTTP/1.1 request volume. (A <a href="https://developers.google.com/search/blog/2020/09/googlebot-will-soon-speak-http2">blog post</a> from Google provides some potential insights into this shift.) Unfortunately, the volume of requests from GoogleBot over HTTP/3 has remained extremely limited over the last year.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gTr2C26AB8SF6aK0CaiK/9b555a912b3428ad9e15572936bf4fb1/image4-32.png" />
            
            </figure><p>Microsoft’s BingBot also fails to use HTTP/3 when indexing sites, with near-zero HTTP/3 request volume. However, in contrast to GoogleBot, BingBot prefers to use HTTP/2, with a wide margin developing in mid-May 2021 and remaining consistent across the rest of the past year.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/444sdNtnh5h0LNsGtUWmuV/b4d3a2f76ec4a5b8fc579371c9f005a2/image10-5.png" />
            
            </figure>
    <div>
      <h3>Traffic by social media bot</h3>
      <a href="#traffic-by-social-media-bot">
        
      </a>
    </div>
    <p>Major social media platforms use custom bots to retrieve metadata for shared content, <a href="https://developers.facebook.com/docs/sharing/bot/">improve language models for speech recognition technology</a>, or otherwise index website content. We also surveyed the HTTP version preferences of the bots deployed by three of the leading social media platforms.</p><p>Although <a href="https://http3check.net/?host=www.facebook.com">Facebook supports HTTP/3</a> on their main website (and presumably their mobile applications as well), their back-end FacebookBot crawler does not appear to support it. Over the last year, on the order of 60% of the requests from FacebookBot have been over HTTP/1.1, with the balance over HTTP/2. Heading into 2022, it appeared that HTTP/1.1 preference was trending lower, with request volume over the 25-year-old protocol dropping from near 80% to just under 50% during the fourth quarter. However, that trend was abruptly reversed, with HTTP/1.1 growing back to over 70% in early February. The reason for the reversal is unclear.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6upA1FtAbR6TWhexL8CkxT/87b3f1d676e1f9189ad5b7dc1d869e4a/image3-44.png" />
            
            </figure><p>Similar to FacebookBot, it appears TwitterBot’s use of HTTP/3 is, unfortunately, pretty much non-existent. However, TwitterBot clearly has a strong and consistent preference for HTTP/2, accounting for 75-80% of its requests, with the balance over HTTP/1.1.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2c9sz97ViywLHaRd4vxCwE/9c981e7c39f8c894957447b4a3337c1a/image14-1.png" />
            
            </figure><p>In contrast, LinkedInBot has, over the last year, been firmly committed to making requests over HTTP/1.1, aside from the apparently brief anomalous usage of HTTP/2 last June. However, in mid-March, it appeared to tentatively start exploring the use of other HTTP versions, with around 5% of requests now being made over HTTP/2, and around 1% over HTTP/3, as seen in the upper right corner of the graph below.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ozCJpCXILw6ulzIAxDyXn/70f9a1c95d76f4fd1f6f9e70e4d3e270/image5-23.png" />
            
            </figure>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We're happy that HTTP/3 has, at long last, been published as <a href="https://www.rfc-editor.org/rfc/rfc9114.html">RFC 9114</a>. More than that, we're super pleased to see that regardless of the wait, browsers have steadily been enabling support for the protocol by default. This allows end users to seamlessly gain the advantages of HTTP/3 whenever it is available. On Cloudflare's global network, we've seen continued growth in the share of traffic speaking HTTP/3, demonstrating continued interest from customers in enabling it for their sites and services. In contrast, we are disappointed to see bots from the major search and social platforms continuing to rely on aging versions of HTTP. We'd like to build a better understanding of how these platforms chose particular HTTP versions and welcome collaboration in exploring the advantages that HTTP/3, in particular, could provide.</p><p>Current statistics on HTTP/3 and QUIC adoption at a country and autonomous system (ASN) level can be found on <a href="https://radar.cloudflare.com/">Cloudflare Radar</a>.</p><p>Running HTTP/3 and QUIC on the edge for everyone has allowed us to monitor a wide range of aspects related to interoperability and performance across the Internet. Stay tuned for future blog posts that explore some of the technical developments we've been making.</p><p>And this certainly isn't the end of protocol innovation, as HTTP/3 and QUIC provide many exciting new opportunities. The IETF and wider community are already underway building new capabilities on top, such as <a href="/unlocking-quic-proxying-potential/">MASQUE</a> and <a href="https://datatracker.ietf.org/wg/webtrans/documents/">WebTransport</a>. Meanwhile, in the last year, the QUIC Working Group has adopted new work such as <a href="https://datatracker.ietf.org/doc/draft-ietf-quic-v2/">QUIC version 2</a>, and the <a href="https://datatracker.ietf.org/doc/draft-ietf-quic-multipath/">Multipath Extension to QUIC</a>.</p>
]]></content:encoded>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">4Dd2QedroFWYvUMb5Ba3ha</guid>
            <dc:creator>Lucas Pardue</dc:creator>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[HPKE: Standardizing public-key encryption (finally!)]]></title>
            <link>https://blog.cloudflare.com/hybrid-public-key-encryption/</link>
            <pubDate>Thu, 24 Feb 2022 23:12:36 GMT</pubDate>
            <description><![CDATA[ HPKE (RFC 9180) was made to be simple, reusable, and future-proof by building upon knowledge from prior PKE schemes and software implementations. This article provides an overview of this new standard, going back to discuss its motivation, design goals, and development process ]]></description>
            <content:encoded><![CDATA[ <p>For the last three years, the <a href="https://irtf.org/cfrg">Crypto Forum Research Group</a> of the <a href="https://irtf.org/">Internet Research Task Force (IRTF)</a> has been working on specifying the next generation of (hybrid) public-key encryption (PKE) for Internet protocols and applications. The result is Hybrid Public Key Encryption (HPKE), published today as <a href="https://www.rfc-editor.org/rfc/rfc9180.html">RFC 9180</a>.</p><p>HPKE was made to be simple, reusable, and future-proof by building upon knowledge from prior PKE schemes and software implementations. It is already in use in a large assortment of emerging Internet standards, including TLS <a href="https://datatracker.ietf.org/doc/draft-ietf-tls-esni/">Encrypted Client Hello</a> and <a href="https://datatracker.ietf.org/doc/draft-pauly-dprive-oblivious-doh/">Oblivious DNS-over-HTTPS</a>, and has a large assortment of interoperable implementations, including one in <a href="https://github.com/cloudflare/circl/tree/master/hpke">CIRCL</a>. This article provides an overview of this new standard, going back to discuss its motivation, design goals, and development process.</p>
    <div>
      <h3>A primer on public-key encryption</h3>
      <a href="#a-primer-on-public-key-encryption">
        
      </a>
    </div>
    <p>Public-key cryptography is decades old, with its roots going back to the seminal work of Diffie and Hellman in 1976, entitled “<a href="https://ee.stanford.edu/~hellman/publications/24.pdf">New Directions in Cryptography</a>.” Their proposal – today called Diffie-Hellman key exchange – was a breakthrough. It allowed one to transform small secrets into big secrets for cryptographic applications and protocols. For example, one can bootstrap a secure channel for exchanging messages with confidentiality and integrity using a key exchange protocol.</p>
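<p>The underlying arithmetic can be demonstrated with a toy example (the tiny group below is purely illustrative; real deployments use large, carefully chosen groups or elliptic curves such as Curve25519):</p>

```python
# Toy Diffie-Hellman exchange over a small prime-order group.
# Illustrative only: real deployments use large groups or elliptic curves.
p, g = 23, 5          # tiny public parameters: prime modulus and generator

x = 6                 # sender's secret
y = 15                # receiver's secret

gx = pow(g, x, p)     # sender's public value, g^x mod p
gy = pow(g, y, p)     # receiver's public value, g^y mod p

# Each side combines its own secret with the peer's public value:
shared_sender   = pow(gy, x, p)   # (g^y)^x mod p
shared_receiver = pow(gx, y, p)   # (g^x)^y mod p

# Both arrive at the same value g^(xy) mod p without ever sending it
assert shared_sender == shared_receiver
```

Both parties end up with the same group element, which can then feed a key-derivation step to produce symmetric keys.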
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/49zWFnGoaVQPUcKjDT52uO/09610fee57bd627cb23faae14bc05ca4/Screen-Shot-2022-02-24-at-11.09.26-AM.png" />
            
            </figure><p>Unauthenticated Diffie-Hellman key exchange</p><p>In this example, Sender and Receiver exchange freshly generated public keys with each other, and then combine their own secret key with their peer’s public key. Algebraically, this yields the same value \(g^{xy} = (g^x)^y = (g^y)^x\). Both parties can then use this as a <i>shared secret</i> for performing other tasks, such as encrypting messages to and from one another.</p><p>The <a href="https://datatracker.ietf.org/doc/html/rfc8446">Transport Layer Security</a> (TLS) protocol is one such application of this concept. Shortly after Diffie-Hellman was unveiled to the world, RSA came into the fold. The RSA cryptosystem is another public key algorithm that has been used to build digital signature schemes, PKE algorithms, and key transport protocols. A key transport protocol is similar to a key exchange algorithm in that the sender, Alice, generates a random symmetric key and then encrypts it under the receiver’s public key. Upon successful decryption, both parties then share this secret key. (This fundamental technique, known as static RSA, was used pervasively in the context of TLS. See <a href="/rfc-8446-aka-tls-1-3/">this post</a> for details about this old technique in TLS 1.2 and prior versions.)</p><p>At a high level, PKE between a sender and receiver is a protocol for encrypting messages under the receiver’s public key. One way to do this is via a so-called non-interactive key exchange protocol.</p><p>To illustrate how this might work, let \(g^y\) be the receiver’s public key, and let \(m\) be a message that one wants to send to this receiver. 
The flow looks like this:</p><ol><li><p>The sender generates a fresh private and public key pair, \((x, g^x)\).</p></li><li><p>The sender computes \(g^{xy} = (g^y)^x\), which can be done without involvement from the receiver, that is, non-interactively.</p></li><li><p>The sender then uses this shared secret to derive an encryption key, and uses this key to encrypt m.</p></li><li><p>The sender packages up \(g^x\) and the encryption of \(m\), and sends both to the receiver.</p></li></ol><p>The general paradigm here is called "hybrid public-key encryption" because it combines a non-interactive key exchange based on public-key cryptography for establishing a shared secret, and a symmetric encryption scheme for the actual encryption. To decrypt \(m\), the receiver computes the same shared secret \(g^{xy} = (g^x)^y\), derives the same encryption key, and then decrypts the ciphertext.</p><p>Conceptually, PKE of this form is quite simple. General designs of this form date back for many years and include the <a href="https://www.cs.ucdavis.edu/~rogaway/papers/dhies.pdf">Diffie-Hellman Integrated Encryption System</a> (DHIES) and ElGamal encryption. However, despite this apparent simplicity, there are numerous subtle design decisions one has to make in designing this type of protocol, including:</p><ul><li><p>What type of key exchange protocol should be used for computing the shared secret? Should this protocol be based on modern elliptic curve groups like Curve25519? Should it support future post-quantum algorithms?</p></li><li><p>How should encryption keys be derived? Are there other keys that should be derived? How should additional application information be included in the encryption key derivation, if at all?</p></li><li><p>What type of encryption algorithm should be used? 
What types of messages should be encrypted?</p></li><li><p>How should sender and receiver encode and exchange public keys?</p></li></ul><p>These and other questions are important for a protocol, since they are required for interoperability. That is, senders and receivers should be able to communicate without having to use the same source code.</p><p>There have been a number of efforts in the past to standardize PKE, most of which focus on elliptic curve cryptography. Some examples of past standards include: ANSI X9.63 (ECIES), IEEE 1363a, ISO/IEC 18033-2, and SECG SEC 1.</p>
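<p>The four-step flow above can be sketched in a few lines. The sketch below is our own illustration, not HPKE itself: it assumes the third-party Python <code>cryptography</code> package and picks X25519 for the non-interactive key exchange, HKDF-SHA256 for key derivation, and ChaCha20-Poly1305 for symmetric encryption:</p>

```python
# Minimal hybrid public-key encryption sketch (NOT HPKE itself):
# X25519 + HKDF-SHA256 + ChaCha20-Poly1305, choices ours for illustration.
# Requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey,
)
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def encrypt(receiver_pk: X25519PublicKey, message: bytes):
    eph = X25519PrivateKey.generate()              # step 1: fresh (x, g^x)
    shared = eph.exchange(receiver_pk)             # step 2: g^xy, non-interactive
    key = HKDF(hashes.SHA256(), 32, salt=None,     # step 3: derive encryption key
               info=b"toy-hybrid-pke").derive(shared)
    # All-zero nonce is safe only because each key encrypts one message
    ct = ChaCha20Poly1305(key).encrypt(b"\x00" * 12, message, None)
    return eph.public_key(), ct                    # step 4: send (g^x, ciphertext)

def decrypt(receiver_sk: X25519PrivateKey, eph_pk: X25519PublicKey, ct: bytes):
    shared = receiver_sk.exchange(eph_pk)          # same g^xy = (g^x)^y
    key = HKDF(hashes.SHA256(), 32, salt=None,
               info=b"toy-hybrid-pke").derive(shared)
    return ChaCha20Poly1305(key).decrypt(b"\x00" * 12, ct, None)

receiver = X25519PrivateKey.generate()
eph_pk, ct = encrypt(receiver.public_key(), b"hello")
assert decrypt(receiver, eph_pk, ct) == b"hello"
```

<p>Note that the fixed nonce is acceptable here only because a fresh ephemeral key pair, and hence a fresh derived key, is generated for every message; any scheme that reuses keys would need unique nonces.</p>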
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/85SKzkEqGOEuyw4brVcak/fe2a769b6c8210d6851f265e46dec90a/Screen-Shot-2022-02-24-at-11.09.53-AM.png" />
            
            </figure><p>Timeline of related standards and software</p><p>A paper by <a href="https://ieeexplore.ieee.org/abstract/document/5604194/">Martinez et al.</a> provides a thorough and technical comparison of these different standards. The key points are that all these existing schemes have shortcomings. They either rely on outdated or not-commonly-used primitives such as <a href="https://en.wikipedia.org/wiki/RIPEMD">RIPEMD</a> and CMAC-AES, lack accommodations for moving to modern primitives (e.g., <a href="https://datatracker.ietf.org/doc/html/rfc5116">AEAD</a> algorithms), lack proofs of <a href="https://link.springer.com/chapter/10.1007/BFb0055718">IND-CCA2</a> security, or, importantly, fail to provide test vectors and interoperable implementations.</p><p>The lack of a single standard for public-key encryption has led to inconsistent and often non-interoperable support across libraries. In particular, hybrid PKE implementation support is fractured across the community, ranging from the hugely popular and simple-to-use <a href="https://nacl.cr.yp.to/box.html">NaCl box</a> and <a href="https://libsodium.gitbook.io/doc/public-key_cryptography/sealed_boxes">libsodium box seal</a> implementations based on modern algorithm variants like XSalsa20-Poly1305 for authenticated encryption, to <a href="https://www.bouncycastle.org/specifications.html">BouncyCastle</a> implementations based on “classical” algorithms like AES and elliptic curves.</p><p>This lack of a single standard hasn’t stopped the adoption of ECIES instantiations for widespread and critical applications. 
For example, the Apple and Google <a href="https://covid19-static.cdn-apple.com/applications/covid19/current/static/contact-tracing/pdf/ENPA_White_Paper.pdf">Exposure Notification Privacy-preserving Analytics</a> (ENPA) platform uses ECIES for public-key encryption.</p><p>When designing protocols and applications that need a simple, reusable, and agile abstraction for public-key encryption, existing standards are not fit for purpose. That’s where HPKE comes into play.</p>
    <div>
      <h3>Construction and design goals</h3>
      <a href="#construction-and-design-goals">
        
      </a>
    </div>
    <p>HPKE is a public-key encryption construction that is designed from the outset to be simple, reusable, and future-proof. It lets a sender encrypt arbitrary-length messages under a receiver’s public key, as shown below. You can try this out in the browser at <a href="https://www.franziskuskiefer.de/p/tldr-hybrid-public-key-encryption/">Franziskus Kiefer’s blog post on HPKE</a>!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2QjgXQ7cSPcN5kLUpdpYEO/ba5071877cd0b2b6168d2c4e9493ea52/image5-2.png" />
            
            </figure><p>HPKE overview</p><p>HPKE is built in stages. It starts with a Key Encapsulation Mechanism (KEM), which is similar to the key transport protocol described earlier and, in fact, can be constructed from the Diffie-Hellman key agreement protocol. A KEM has two algorithms: Encapsulation and Decapsulation, or Encap and Decap for short. The Encap algorithm creates a symmetric secret and wraps it for a public key such that only the holder of the corresponding private key can unwrap it. An attacker knowing this encapsulated key cannot recover even a single bit of the shared secret. Decap takes the encapsulated key and the private key associated with the public key, and computes the original shared secret. From this shared secret, HPKE computes a series of derived keys that are then used to encrypt and authenticate plaintext messages between sender and receiver.</p><p>This simple construction was driven by several high-level design goals and principles. We will discuss these goals and how they were met below.</p>
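<p>To make the Encap/Decap interface concrete, here is a toy Diffie-Hellman KEM sketch in Python. The group parameters are deliberately small for readability, so this is illustrative only; a real HPKE KEM would use X25519 or a NIST curve together with RFC 9180's labeled key derivation.</p>

```python
import hashlib
import secrets

# Toy Diffie-Hellman KEM illustrating the Encap/Decap interface.
# CAUTION: a small Mersenne prime is used here purely for readability.
# This sketches the *interface*, not a secure KEM.
P = 2**127 - 1  # a Mersenne prime (toy parameter)
G = 3

def keygen():
    sk = secrets.randbelow(P - 2) + 1   # private scalar
    pk = pow(G, sk, P)                  # public value g^sk mod p
    return sk, pk

def encap(pk):
    # Create a fresh shared secret and wrap ("encapsulate") it for pk.
    esk, epk = keygen()                 # ephemeral key pair
    dh = pow(pk, esk, P)                # Diffie-Hellman shared value
    shared_secret = hashlib.sha256(dh.to_bytes(16, "big")).digest()
    return shared_secret, epk           # epk is the encapsulated key

def decap(enc, sk):
    # Recover the same shared secret using the receiver's private key.
    dh = pow(enc, sk, P)
    return hashlib.sha256(dh.to_bytes(16, "big")).digest()
```

<p>Note that the encapsulated key is just an ephemeral public value: recovering the shared secret from it requires the receiver's private key.</p>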
    <div>
      <h3>Algorithm agility</h3>
      <a href="#algorithm-agility">
        
      </a>
    </div>
    <p>Different applications, protocols, and deployments have different constraints, and locking any single use case into a specific (set of) algorithm(s) would be overly restrictive. For example, some applications may wish to use post-quantum algorithms when available, whereas others may wish to use different authenticated encryption algorithms for symmetric-key encryption. To accomplish this goal, HPKE is designed as a composition of a Key Encapsulation Mechanism (KEM), Key Derivation Function (KDF), and Authenticated Encryption Algorithm (AEAD). Any combination of the three algorithms yields a valid instantiation of HPKE, subject to certain security constraints about the choice of algorithm.</p><p>One important point worth noting here is that HPKE is not a <i>protocol</i>, and therefore does nothing to ensure that sender and receiver agree on the HPKE ciphersuite or shared context information. Applications and protocols that use HPKE are responsible for choosing or negotiating a specific HPKE ciphersuite that fits their purpose. This allows applications to be opinionated about their choice of algorithms to simplify implementation and analysis, as is common with protocols like <a href="https://www.wireguard.com/">WireGuard</a>, or be flexible enough to support choice and agility, as is the approach taken with TLS.</p>
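<p>For illustration, RFC 9180 assigns each KEM, KDF, and AEAD a two-byte codepoint and binds the chosen combination into a suite identifier used for domain separation in key derivation. A minimal sketch (the helper name is ours; the codepoints and byte layout come from the RFC's registries):</p>

```python
def suite_id(kem_id: int, kdf_id: int, aead_id: int) -> bytes:
    # RFC 9180 binds the ciphersuite into its derivations via this
    # identifier: the ASCII label "HPKE" followed by the three
    # two-byte algorithm codepoints.
    return (b"HPKE"
            + kem_id.to_bytes(2, "big")
            + kdf_id.to_bytes(2, "big")
            + aead_id.to_bytes(2, "big"))

# DHKEM(X25519, HKDF-SHA256) = 0x0020, HKDF-SHA256 = 0x0001,
# AES-128-GCM = 0x0001 (values from the RFC 9180 registries)
print(suite_id(0x0020, 0x0001, 0x0001).hex())  # 48504b45002000010001
```

<p>Because this identifier feeds into every derivation, two HPKE instantiations that differ in any one algorithm produce unrelated keys even from the same inputs.</p>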
    <div>
      <h3>Authentication modes</h3>
      <a href="#authentication-modes">
        
      </a>
    </div>
    <p>At a high level, public-key encryption ensures that only the holder of the private key can decrypt messages encrypted for the corresponding public key (being able to decrypt the message is an implicit authentication of the receiver). However, there are other ways in which applications may wish to authenticate messages from sender to receiver. For example, if both parties have a pre-shared key, they may wish to ensure that both can demonstrate possession of this pre-shared key as well. It may also be desirable for senders to demonstrate knowledge of their own private key in order for recipients to decrypt the message (this is functionally similar to signing an encrypted message, but has some <a href="https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-hpke-12#section-9.1.1">subtle and important differences</a>).</p><p>To support these various use cases, HPKE admits different modes of authentication, allowing various combinations of pre-shared key and sender private key authentication. The additional private key contributes to the shared secret between the sender and receiver, and the pre-shared key contributes to the derivation of the application data encryption secrets. This process is referred to as the “key schedule”, and a simplified version of it is shown below.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3kF11CKLHDthJSduTAD10i/d47cc6df3c62d35970674fcd6e25df9e/image2-11.png" />
            
            </figure><p>Simplified HPKE key schedule</p><p>These modes come at a price, however: not all KEM algorithms will work with all authentication modes. For example, for most post-quantum KEM algorithms, no private-key authenticated variant is known.</p>
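<p>The simplified key schedule shown above can be sketched with standard HKDF extract/expand steps. This is a heavily reduced illustration, not RFC 9180's actual schedule (which uses labeled derivations bound to the suite identifier), but it shows how the mode, pre-shared key, and KEM shared secret are mixed:</p>

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

MODE_BASE, MODE_PSK = 0x00, 0x01

def key_schedule(mode: int, shared_secret: bytes, info: bytes,
                 psk: bytes = b"", psk_id: bytes = b""):
    # The mode byte and hashes of the PSK identifier and application
    # info are bound into a context; the KEM shared secret and optional
    # PSK are then mixed to derive the AEAD key and base nonce.
    context = (bytes([mode])
               + hashlib.sha256(psk_id).digest()
               + hashlib.sha256(info).digest())
    secret = hkdf_extract(shared_secret, psk)
    key = hkdf_expand(secret, b"key" + context, 16)                # AEAD key
    base_nonce = hkdf_expand(secret, b"base_nonce" + context, 12)  # AEAD nonce
    return key, base_nonce
```

<p>Changing any input, including the mode byte itself, changes the derived keys, which is what prevents cross-mode confusion between, say, base and PSK encryptions.</p>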
    <div>
      <h3>Reusability</h3>
      <a href="#reusability">
        
      </a>
    </div>
    <p>The core of HPKE’s construction is its key schedule. It allows secrets produced and shared with KEMs and pre-shared keys to be mixed together to produce additional shared secrets between sender and receiver for performing authenticated encryption and decryption. HPKE allows applications to build on this key schedule without using the corresponding AEAD functionality, for example, by <a href="https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-hpke-12#section-5.3">exporting a shared application-specific secret</a>. Using HPKE in an “export-only” fashion allows applications to use other, non-standard AEAD algorithms for encryption, should that be desired. It also allows applications to use a KEM different from those specified in the standard, as is done in the proposed <a href="https://claucece.github.io/draft-celi-wiggers-tls-authkem/draft-celi-wiggers-tls-authkem.html">TLS AuthKEM draft</a>.</p>
    <div>
      <h3>Interface simplicity</h3>
      <a href="#interface-simplicity">
        
      </a>
    </div>
    <p>HPKE hides the complexity of message encryption from callers. Encrypting a message with additional authenticated data from sender to receiver for their public key is as simple as the following two calls:</p>
            <pre><code>// Create an HPKE context to send messages to the receiver
encapsulatedKey, senderContext = SetupBaseS(receiverPublicKey, "shared application info")

// AEAD encrypt the message using the context
ciphertext = senderContext.Seal(aad, message)</code></pre>
            <p>In fact, many implementations are likely to offer a simplified <a href="https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-hpke-12#section-6">“single-shot” interface</a> that does context creation and message encryption with one function call.</p><p>Notice that this interface does not expose anything like nonce ("number used once") or sequence numbers to the callers. The HPKE context manages nonce and sequence numbers internally, which means the application is responsible for message ordering and delivery. This was an important design decision, made to hedge against key and nonce reuse, <a href="https://fahrplan.events.ccc.de/congress/2010/Fahrplan/events/4087.en.html">which</a> <a href="https://link.springer.com/chapter/10.1007/978-3-662-45611-8_14">can</a> <a href="https://eprint.iacr.org/2014/161">be</a> <a href="https://eprint.iacr.org/2019/023.pdf">catastrophic</a> for <a href="https://eprint.iacr.org/2020/615">security</a>.</p><p>Consider what would be necessary if HPKE delegated nonce management to the application. The sending application using HPKE would need to communicate the nonce along with each ciphertext value for the receiver to successfully decrypt the message. If this nonce were ever reused, then security of the <a href="https://eprint.iacr.org/2016/475">AEAD may fall apart</a>. Thus, a sending application would necessarily need some way to ensure that nonces were never reused. Moreover, by sending the nonce to the receiver, the application is effectively implementing a message sequencer. The application could just as easily implement and use this sequencer to ensure in-order message delivery and processing. Thus, at the end of the day, exposing the nonce seemed both harmful and, ultimately, redundant.</p>
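<p>As a rough sketch of that internal management: a sender context can derive each message's nonce by XORing a base nonce with an internal sequence counter, in the spirit of RFC 9180's nonce computation. The class and method names here are illustrative, and the actual AEAD call is omitted:</p>

```python
class SenderContext:
    # Sketch of HPKE's internal nonce management: each message's nonce
    # is the base nonce XORed with a monotonically increasing sequence
    # number, so callers never see or supply a nonce. A real Seal()
    # would pass this nonce to the AEAD along with key, aad, and message.
    def __init__(self, key: bytes, base_nonce: bytes):
        self.key = key
        self.base_nonce = base_nonce
        self.seq = 0

    def _next_nonce(self) -> bytes:
        n = len(self.base_nonce)
        if self.seq >= (1 << (8 * n)) - 1:
            raise OverflowError("message limit reached")  # never wrap around
        seq_bytes = self.seq.to_bytes(n, "big")
        self.seq += 1
        return bytes(a ^ b for a, b in zip(self.base_nonce, seq_bytes))
```

<p>Because the counter only ever increases, no nonce repeats under a given key, and the receiver reconstructs the same sequence by decrypting messages in order.</p>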
    <div>
      <h3>Wire format</h3>
      <a href="#wire-format">
        
      </a>
    </div>
    <p>Another hallmark of HPKE is that all messages that do not contain application data are fixed length. This means that serializing and deserializing HPKE messages is trivial and there is no room for application choice. In contrast, some implementations of hybrid PKE deferred choice of wire format details, such as whether to use elliptic curve point compression, to applications. HPKE handles this under the KEM abstraction.</p>
    <div>
      <h3>Development process</h3>
      <a href="#development-process">
        
      </a>
    </div>
    <p>HPKE is the result of a three-year development cycle between industry practitioners, protocol designers, and academic cryptographers. In particular, HPKE built upon prior art relating to public-key encryption, and its design was iterated on in a tight specification, implementation, experimentation, and analysis loop, with the ultimate goal of real-world use.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/11GwbhP4mtylFViu3VGoqP/580f50018acc54ba7d8fef892f339bf4/image3-15.png" />
            
            </figure><p>HPKE development process</p><p>This process isn’t new. TLS 1.3 and QUIC famously demonstrated this as an effective way of producing high quality technical specifications that are maximally useful for their consumers.</p><p>One particular point worth highlighting in this process is the value of interoperability and analysis. From the very first draft, interop between multiple, independent implementations was a goal. And since then, every revision was carefully checked by multiple library maintainers for soundness and correctness. This helped catch a number of mistakes and improved overall clarity of the technical specification.</p><p>From a formal analysis perspective, HPKE brought novel work to the community. Unlike protocol design efforts like those around TLS and QUIC, HPKE was simpler, but still came with plenty of sharp edges. As a new cryptographic construction, analysis was needed to ensure that it was sound and, importantly, to understand its limits. This analysis led to a number of important contributions to the community, including a <a href="https://eprint.iacr.org/2020/1499.pdf">formal analysis of HPKE</a>, new understanding of the <a href="https://dl.acm.org/doi/abs/10.1145/3460120.3484814">limits of ChaChaPoly1305 in a multi-user security setting</a>, as well as a new CFRG specification documenting <a href="https://datatracker.ietf.org/doc/draft-irtf-cfrg-aead-limits/">limits for AEAD algorithms</a>. For more information about the analysis effort that went into HPKE, check out this <a href="https://www.benjaminlipp.de/p/hpke-cryptographic-standard/">companion blog</a> by Benjamin Lipp, an HPKE co-author.</p>
    <div>
      <h3>HPKE’s future</h3>
      <a href="#hpkes-future">
        
      </a>
    </div>
    <p>While HPKE may be a new standard, it has already seen a tremendous amount of adoption in the industry. As mentioned earlier, it’s an essential part of the TLS Encrypted Client Hello and Oblivious DoH standards, both of which are deployed protocols on the Internet today. Looking ahead, it’s also been integrated as part of the emerging <a href="https://datatracker.ietf.org/doc/charter-ietf-ohai/">Oblivious HTTP</a>, <a href="https://datatracker.ietf.org/wg/mls/about/">Message Layer Security</a>, and <a href="https://www.ietf.org/id/draft-gpew-priv-ppm-00.html">Privacy Preserving Measurement</a> standards. HPKE’s hallmark is its generic construction that lets it adapt to a wide variety of application requirements. If an application needs public-key encryption with a <a href="https://eprint.iacr.org/2020/1153.pdf">key-committing AEAD</a>, one can simply instantiate HPKE using a key-committing AEAD.</p><p>Moreover, there exists a huge assortment of interoperable implementations built on popular cryptographic libraries, including <a href="https://github.com/cisco/mlspp/tree/main/lib/hpke">OpenSSL</a>, <a href="https://boringssl.googlesource.com/boringssl/+/refs/heads/master/include/openssl/hpke.h">BoringSSL</a>, <a href="https://hg.mozilla.org/projects/nss/file/tip/lib/pk11wrap">NSS</a>, and <a href="https://github.com/cloudflare/circl/tree/master/hpke">CIRCL</a>. There are also formally verified implementations in <a href="https://www.franziskuskiefer.de/p/an-executable-hpke-specification/">hacspec and F*</a>; check out this <a href="https://tech.cryspen.com/hpke-spec">blog post</a> for more details. The complete set of known implementations is tracked <a href="https://github.com/cfrg/draft-irtf-cfrg-hpke#existing-hpke-implementations">here</a>. More implementations will undoubtedly follow in their footsteps.</p><p>HPKE is ready for prime time. I look forward to seeing how it simplifies protocol design and development in the future. 
Welcome, <a href="https://www.rfc-editor.org/rfc/rfc9180.html">RFC 9180</a>.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[IETF]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Standards]]></category>
            <guid isPermaLink="false">2y7fDoXJoE5tJvjDMO7kap</guid>
            <dc:creator>Christopher Wood</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare and the IETF]]></title>
            <link>https://blog.cloudflare.com/cloudflare-and-the-ietf/</link>
            <pubDate>Wed, 13 Oct 2021 12:59:37 GMT</pubDate>
            <description><![CDATA[ Cloudflare helps build a better Internet through collaboration on open and interoperable standards. This post will describe how Cloudflare contributes to the standardization process to enable incremental innovation and drive long-term architectural change. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>The Internet, far from being just a series of tubes, is a huge, incredibly complex, decentralized system. Every action and interaction in the system is enabled by a complicated mass of protocols woven together to accomplish their task, each handing off to the next like trapeze artists high above a virtual circus ring. Stop to think about details, and it is a marvel.</p><p>Consider one of the simplest tasks enabled by the Internet: Sending a message from sender to receiver.</p><p>The location (address) of a receiver is discovered using <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a>, a connection between sender and receiver is established using a transport protocol like TCP, and (hopefully!) secured with a protocol like TLS. The sender's message is encoded in a format that the receiver can recognize and parse, like HTTP, because the two disparate parties need a common language to communicate. Then, ultimately, the message is sent and carried in an IP datagram that is forwarded from sender to receiver based on routes established with BGP.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Z79TEfHR8kGEqa8qMWBCQ/eecb98d60c7bbcbf5baae72ee10d8357/image1-35.png" />
            
            </figure><p>Even an explanation this dense is laughably oversimplified. For example, the four protocols listed are just the start, and ignore many others with acronyms of their own. The truth is that things are complicated. And because things are complicated, how these protocols and systems interact and influence the user experience on the Internet is complicated. Extra round trips to establish a secure connection increase the amount of time before useful work is done, harming user performance. The use of unauthenticated or unencrypted protocols reveals potentially sensitive information to the network or, worse, to malicious entities, which harms user security and privacy. And finally, consolidation and centralization — seemingly a prerequisite for reducing costs and protecting against attacks — makes it challenging to provide high availability even for essential services. (What happens when that one system goes down or is otherwise unavailable, or to extend our earlier metaphor, when a trapeze isn’t there to catch?)</p><p>These four properties — performance, security, privacy, and availability — are crucial to the Internet. At Cloudflare, and especially in the Cloudflare Research team, where we use all these various protocols, we're committed to improving them at every layer in the stack. 
We work on problems as diverse as <a href="https://www.cloudflare.com/network-security/">improving network security</a> and privacy with <a href="https://datatracker.ietf.org/doc/html/rfc8446">TLS 1.3</a> and <a href="https://datatracker.ietf.org/doc/html/rfc9000">QUIC</a>, improving DNS privacy via <a href="/oblivious-dns/">Oblivious DNS-over-HTTPS</a>, reducing end-user CAPTCHA annoyances with Privacy Pass and <a href="/introducing-cryptographic-attestation-of-personhood/">Cryptographic Attestation of Personhood (CAP)</a>, performing Internet-wide measurements to understand how things work in the real world, and much, much more.</p><p>Above all else, these projects are meant to do one thing: focus beyond the horizon to help build a better Internet. We do that by developing, advocating, and advancing open standards for the many protocols in use on the Internet, all backed by implementation, experimentation, and analysis.</p>
    <div>
      <h3>Standards</h3>
      <a href="#standards">
        
      </a>
    </div>
    <p>The Internet is a network of interconnected autonomous networks. Computers attached to these networks have to be able to route messages to each other. However, even if we can send messages back and forth across the Internet, much like the storied Tower of Babel, to achieve anything those computers have to use a common language, a lingua franca, so to speak. And for the Internet, standards are that common language.</p><p>Many of the parts of the Internet that Cloudflare is interested in are standardized by the IETF, which is a standards development organization responsible for producing technical specifications for the Internet's most important protocols, including IP, BGP, DNS, TCP, TLS, QUIC, HTTP, and so on. The <a href="https://www.ietf.org/about/mission/">IETF's mission</a> is:</p><blockquote><p>to make the Internet work better by producing high-quality, relevant technical documents that influence the way people design, use, and manage the Internet.</p></blockquote><p>Our individual contributions to the IETF help further this mission, especially given our role on the Internet. We can only do so much on our own to improve the end-user experience. So, through standards, we engage with those who use, manage, and operate the Internet to achieve three simple goals that lead to a better Internet:</p><ol><li><p>Incrementally improve existing and deployed protocols with innovative solutions;</p></li><li><p>Provide holistic solutions to long-standing architectural problems and enable new use cases; and</p></li><li><p>Identify key problems and help specify reusable, extensible, easy-to-implement abstractions for solving them.</p></li></ol><p>Below, we’ll give an example of how we helped achieve each goal, touching on a number of important technical specifications produced in recent years, including DNS-over-HTTPS, QUIC, and (the still work-in-progress) TLS Encrypted Client Hello.</p>
    <div>
      <h3>Incremental innovation: metadata privacy with DoH and ECH</h3>
      <a href="#incremental-innovation-metadata-privacy-with-doh-and-ech">
        
      </a>
    </div>
    <p>The Internet is not only complicated — it is leaky. Metadata seeps like toxic waste from nearly every protocol in use, from DNS to TLS, and even to HTTP at the application layer.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1t1ZVnKH9ZGQnKCgx6I8Pr/ada911c196fb971b19b8a4a3f7767362/image6-14.png" />
            
            </figure><p>One critically important piece of metadata that still leaks today is the name of the server that clients connect to. When a client opens a connection to a server, it reveals the name and identity of that server in many places, including DNS, TLS, and even sometimes at the IP layer (if the destination IP address is unique to that server). Linking client identity (IP address) to target server names enables third parties to build a profile of per-user behavior without end-user consent. The result is a set of protocols that does not respect end-user privacy.</p><p>Fortunately, it’s possible to incrementally address this problem without regressing security. For years, Cloudflare has been working with the standards community to plug all of these individual leaks through separate specialized protocols:</p><ul><li><p><a href="https://datatracker.ietf.org/doc/html/rfc8484">DNS-over-HTTPS</a> encrypts DNS queries between clients and recursive resolvers, ensuring only clients and trusted recursive resolvers see plaintext DNS traffic.</p></li><li><p><a href="https://datatracker.ietf.org/doc/html/draft-ietf-tls-esni-13">TLS Encrypted Client Hello</a> encrypts metadata in the TLS handshake, ensuring only the client and authoritative TLS server see sensitive TLS information.</p></li></ul><p>These protocols impose a barrier between the client and server and everyone else. However, neither of them prevent the server from building per-user profiles. Servers can track users via one critically important piece of information: the client IP address. Fortunately, for the overwhelming majority of cases, the IP address is not essential for providing a service. For example, DNS recursive resolvers do not need the full client IP address to provide accurate answers, as is evidenced by the <a href="https://datatracker.ietf.org/doc/html/rfc7871">EDNS(0) Client Subnet</a> extension. 
To further reduce information exposure on the web, we helped develop two more incremental improvements:</p><ul><li><p><a href="https://datatracker.ietf.org/doc/html/draft-pauly-dprive-oblivious-doh-07">Oblivious DNS-over-HTTPS</a> (ODoH) uses cryptography and network proxies to break linkability between client identity (IP address) and DNS traffic, ensuring that recursive resolvers have only the minimal amount of information needed to provide DNS answers: the queries themselves, without any per-client information.</p></li><li><p><a href="https://datatracker.ietf.org/doc/html/draft-ietf-masque-h3-datagram-04">MASQUE</a> is standardizing techniques for proxying UDP and IP protocols over QUIC connections, similar to the existing <a href="https://www.rfc-editor.org/rfc/rfc7231.html#section-4.3.6">HTTP CONNECT</a> method for TCP-based protocols. Generally, the CONNECT method allows clients to use services without revealing any client identity (IP address).</p></li></ul><p>While each of these protocols may seem only an incremental improvement over what we have today, together, they raise many possibilities for the future of the Internet. Are DoH and ECH sufficient for end-user privacy, or are technologies like ODoH and MASQUE necessary? How do proxy technologies like MASQUE complement or even subsume protocols like ODoH and ECH? These are questions the Cloudflare Research team strives to answer through experimentation, analysis, and deployment together with other stakeholders on the Internet through the IETF. And we could not ask the questions without first laying the groundwork.</p>
    <div>
      <h3>Architectural advancement: QUIC and HTTP/3</h3>
      <a href="#architectural-advancement-quic-and-http-3">
        
      </a>
    </div>
    <p><a href="https://quicwg.org">QUIC</a> and <a href="https://datatracker.ietf.org/doc/html/draft-ietf-quic-http-34">HTTP/3</a> are transformative technologies. Whilst the TLS handshake forms the heart of QUIC’s security model, QUIC improves on TLS over TCP in many respects, including more encryption (privacy), better protection against active attacks and ossification at the network layer, fewer round trips to establish a secure connection, and generally better security properties. QUIC and HTTP/3 give us a clean slate for future innovation.</p><p>Perhaps one of QUIC’s most important contributions is that it challenges and even breaks many established conventions and norms used on the Internet. For example, the antiquated socket API for networking, which treats the network connection as an in-order bit pipe, is no longer appropriate for modern applications and developers. Modern networking APIs such as Apple’s <a href="https://developer.apple.com/documentation/network">Network.framework</a> provide high-level interfaces that take advantage of the new transport features provided by QUIC. Applications using this or even higher-level HTTP abstractions can take advantage of the many security, privacy, and performance improvements of QUIC and HTTP/3 today with minimal code changes, and without being constrained by sockets and their inherent limitations.</p><p>Another salient feature of QUIC is its wire format. Nearly every bit of every QUIC packet is encrypted and authenticated between sender and receiver. And within a QUIC packet, individual frames can be rearranged, repackaged, and otherwise transformed by the sender.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4qpdpgnX8A8M6iHvf0ECWP/aae602a63abed5400ffa431b4ce3cdce/image2-22.png" />
            
            </figure><p>Together, these are powerful tools to help mitigate future network ossification and enable continued extensibility. (TLS’s wire format ultimately led to the <a href="https://datatracker.ietf.org/doc/html/rfc8446#appendix-D.4">middlebox compatibility mode</a> for TLS 1.3 due to the many middlebox ossification problems that were encountered during early deployment tests.)</p><p>Exercising these features of QUIC is important for the <a href="https://datatracker.ietf.org/doc/html/draft-iab-use-it-or-lose-it-03">long-term health</a> of the protocol and applications built on top of it. Indeed, this sort of extensibility is what enables innovation.</p><p>In fact, we've already seen a flurry of new work based on QUIC: extensions to enable multipath QUIC, different congestion control approaches, and ways to carry data unreliably in the DATAGRAM frame.</p><p>Beyond functional extensions, we’ve also seen a number of new use cases emerge as a result of QUIC. DNS-over-QUIC is an upcoming proposal that complements DNS-over-TLS for recursive to authoritative DNS query protection. As mentioned above, MASQUE is a working group focused on standardizing methods for proxying arbitrary UDP and IP protocols over QUIC connections, enabling a number of fascinating solutions and unlocking the future of proxy and VPN technologies. In the context of the web, the WebTransport working group is standardizing methods to use QUIC as a “supercharged WebSocket” for transporting data efficiently between client and server while also depending on the WebPKI for security.</p><p>By definition, these extensions are nowhere near complete. The future of the Internet with QUIC is sure to be a fascinating adventure.</p>
    <div>
      <h3>Specifying abstractions: Cryptographic algorithms and protocol design</h3>
      <a href="#specifying-abstractions-cryptographic-algorithms-and-protocol-design">
        
      </a>
    </div>
    <p>Standards allow us to build abstractions. An ideal standard is one that is usable in many contexts and contains all the information a sufficiently skilled engineer needs to build a compliant implementation that successfully interoperates with other independent implementations. Writing a new standard is sort of like creating a new Lego brick. Creating a new Lego brick allows us to build things that we couldn’t have built before. For example, one new “brick” that’s nearly finished (as of this writing) is <a href="https://www.ietf.org/archive/id/draft-irtf-cfrg-hpke-12.html">Hybrid Public Key Encryption (HPKE)</a>. HPKE allows us to efficiently encrypt arbitrary plaintexts under the recipient’s public key.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5eWRfVLYtCcnUohsI2X8SE/48ccbddb899b98e65baea220bd7c06f6/image4-21.png" />
            
            </figure><p>Mixing asymmetric and symmetric cryptography for efficiency is a common technique that has been used for many years in all sorts of protocols, from TLS to <a href="https://en.wikipedia.org/wiki/Pretty_Good_Privacy">PGP</a>. However, each of these applications has come up with its own design, each with its own security properties. HPKE is intended to be a single, standard, interoperable version of this technique that turns this complex and technical corner of protocol design into an easy-to-use black box. The standard has undergone extensive analysis by cryptographers throughout its development and has numerous implementations available. The end result is a simple abstraction that protocol designers can include without having to consider how it works under the hood. In fact, HPKE is already a dependency for a number of other draft protocols in the IETF, such as <a href="https://datatracker.ietf.org/doc/html/draft-ietf-tls-esni-13">TLS Encrypted Client Hello</a>, <a href="https://datatracker.ietf.org/doc/html/draft-pauly-dprive-oblivious-doh-07">Oblivious DNS-over-HTTPS</a>, and <a href="https://datatracker.ietf.org/doc/html/draft-ietf-mls-architecture-07.html">Message Layer Security</a>.</p>
    <div>
      <h3>Modes of Interaction</h3>
      <a href="#modes-of-interaction">
        
      </a>
    </div>
    <p>We engage with the IETF in the specification, implementation, experimentation, and analysis phases of a standard to help achieve our three goals of incremental innovation, architectural advancement, and production of simple abstractions.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/0tbHRFLSsWV7qBNiKi4WN/34d3b7742fe21500bcaa4970729bd4e6/image3-20.png" />
            
            </figure><p>Our participation in the standards process hits all four phases. Individuals in Cloudflare bring a diversity of knowledge and domain expertise to each phase, especially in the production of technical specifications. Today, we also published <a href="/exported-authenticators-the-long-road-to-rfc/">a blog post</a> about an upcoming standard that we’ve been working on for a number of years and will be sharing details about how we used formal analysis to make sure that we ruled out as many security issues in the design as possible. We work in close collaboration with people from all around the world as an investment in the future of the Internet. Open standards mean that everyone can take advantage of the latest and greatest in protocol design, whether they use Cloudflare or not.</p><p>Cloudflare’s scale and perspective on the Internet are essential to the standards process. We have experience rapidly implementing, deploying, and experimenting with emerging technologies to gain confidence in their maturity. We also have a proven track record of publishing the results of these experiments to help inform the standards process. Moreover, we open source as much of the code we use for these experiments as possible to enable reproducibility and transparency. Our unique collection of engineering expertise and wide perspective allows us to help build standards that work in a wide variety of use cases. By investing time in developing standards that everyone can benefit from, we can make a clear contribution to building a better Internet.</p><p>One final contribution we make to the IETF is more procedural and based around building consensus in the community. A challenge to any open process is gathering consensus to make forward progress and avoiding deadlock. We help build consensus through the production of running code, leadership on technical documents such as QUIC and ECH, and even logistically by chairing working groups. 
(Working groups at the IETF are chaired by volunteers, and Cloudflare numbers a few working group chairs amongst its employees, covering a broad spectrum of the IETF (and its related research-oriented group, the <a href="https://irtf.org/">IRTF</a>) from security and privacy to transport and applications.) Collaboration is a cornerstone of the standards process and a hallmark of Cloudflare Research, and we apply it most prominently in the standards process.</p><p>If you too want to help build a better Internet, check out some IETF Working Groups and mailing lists. All you need to start contributing is an Internet connection and an email address, so why not give it a go? And if you want to join us on our mission to help build a better Internet through open and interoperable standards, check out our <a href="https://www.cloudflare.com/careers/jobs/?department=Technology%20Research&amp;location=default">open</a> <a href="https://boards.greenhouse.io/cloudflare/jobs/3271134?gh_jid=3271134">positions</a>, <a href="/visiting-researcher-program/">visiting researcher program</a>, and <a href="https://www.cloudflare.com/careers/jobs/?department=University&amp;location=default">many internship opportunities</a>!</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[IETF]]></category>
            <category><![CDATA[Protocols]]></category>
            <category><![CDATA[Standards]]></category>
            <guid isPermaLink="false">72sMlOH9eqnKfiCmxSGHnU</guid>
            <dc:creator>Jonathan Hoyland</dc:creator>
            <dc:creator>Christopher Wood</dc:creator>
        </item>
        <item>
            <title><![CDATA[Exported Authenticators: The long road to RFC]]></title>
            <link>https://blog.cloudflare.com/exported-authenticators-the-long-road-to-rfc/</link>
            <pubDate>Wed, 13 Oct 2021 12:59:28 GMT</pubDate>
            <description><![CDATA[ Learn more about Exported Authenticators, a new extension to TLS, currently going through the IETF standardisation process. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Our earlier <a href="/cloudflare-and-the-ietf">blog post</a> talked in general terms about how we work with the IETF. In this post we’re going to talk about a particular IETF project we’ve been working on, Exported Authenticators (EAs). Exported Authenticators is a new extension to TLS that we think will prove really exciting. It unlocks all sorts of fancy new authentication possibilities, from TLS connections with multiple certificates attached, to logging in to a website without ever revealing your password.</p><p>Now, you might have thought that, given the innumerable hours that went into the design of TLS 1.3, it couldn’t possibly be improved, but it turns out that there are a number of places where the design falls a little short. TLS allows us to establish a secure connection between a client and a server. During the handshake, the server presents a certificate to the browser, which proves the server is authorised to use the name written on the certificate, for example <a href="/">blog.cloudflare.com</a>. One of the most common things we use that ability for is delivering webpages. In fact, if you’re reading this, your browser has already done this for you. The Cloudflare Blog is delivered over TLS, and by presenting a certificate for <a href="/">blog.cloudflare.com</a> the server proves that it’s allowed to deliver Cloudflare’s blog.</p><p>When your browser requests <a href="/">blog.cloudflare.com</a> you receive a big blob of HTML that your browser then starts to render. In the dim and distant past, this might have been the end of the story. Your browser would render the HTML, and display it. Nowadays, the web has become more complex, and the HTML your browser receives often tells it to go and load lots of other resources. 
For example, when I loaded the Cloudflare blog just now, my browser made 73 subrequests.</p><p>As we mentioned in our <a href="/connection-coalescing-experiments">connection coalescing</a> blog post, sometimes those resources are also served by Cloudflare, but on a different domain. In our connection coalescing experiment, we acquired certificates with a special extension, called a Subject Alternative Name (SAN), that tells the browser that the owner of the certificate can act as two different websites. Along with some further shenanigans that you can read about in our <a href="/connection-coalescing-experiments">blog post</a>, this lets us serve the resources for both the domains over a single TLS connection.</p><p>Cloudflare, however, services millions of domains, and we have millions of certificates. It’s possible to generate certificates that cover lots of domains, and in fact this is what Cloudflare used to do. We used to use so-called “<a href="https://dl.acm.org/doi/pdf/10.1145/2976749.2978301">cruise-liner</a>” certificates, with dozens of names on them. But for connection coalescing this quickly becomes impractical, as we would need to know what sub-resources each webpage might request, and acquire certificates to match. We switched away from this model because issues with individual domains could affect other customers.</p><p>What we’d like to be able to do is serve as much content as possible down a single connection. When a user requests a resource from a different domain they need to perform a new TLS handshake, <a href="/how-expensive-is-crypto-anyway/">costing valuable time and resources</a>. Our connection coalescing experiment showed the benefits when we know in advance what resources are likely to be requested, but most of the time we don’t know what subresources are going to be requested until the requests actually arrive. 
What we’d rather do is attach extra identities to a connection after it’s been established, once we know what extra domains the client actually wants. Because the TLS connection is just a transport mechanism and doesn’t understand the information being sent across it, it doesn’t actually know what domains might subsequently be requested. That information is only available to higher-layer protocols such as HTTP. However, we don’t want any website to be able to impersonate another, so we still need strong authentication.</p>
    <div>
      <h3>Exported Authenticators</h3>
      <a href="#exported-authenticators">
        
      </a>
    </div>
    <p>Enter Exported Authenticators. They give us even more than we asked for. They allow us to do application layer authentication that’s just as strong as the authentication you get from TLS, and then tie it to the TLS channel. Now that’s a pretty complicated idea, so let’s break it down.</p><p>To understand application layer authentication we first need to explain what the application layer is. The application layer is a reference to the <a href="https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/">OSI model</a>. The OSI model describes the various layers of abstraction we use to make things work across the Internet. When you’re developing your latest web application you don’t want to have to worry about how light is flickered down a fibre optic cable, or even how the TLS handshake is encoded (although that’s a fascinating topic in its own right, let’s leave it for another time).</p><p>All you want to care about is having your content delivered to your end-user, and using TLS gives you a guaranteed in-order, reliable, authenticated channel over which you can communicate. You just shove bits in one end of the pipe, and after lots of blinky lights, fancy routing, maybe a touch of congestion control, and a little decoding, <i>poof</i>, your data arrives at the end-user.</p><p>The application layer is the top of the OSI stack, and contains things like HTTP. Because the TLS handshake is lower in the stack, the application is oblivious to this process. So, what Exported Authenticators give us is the ability for the very top of the stack to reliably authenticate its partner.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5gnmKyKSeUeRR2kNpasByF/ce60689aede76d539b46a440ac9c87f8/osi-model-7-layers-1.png" />
            
            </figure><p>The seven-layered OSI model</p><p>Now let’s jump back a bit, and discuss what we mean when we say that EAs give us authentication that’s as strong as TLS authentication. TLS, as we know, is used to create a secure connection between two endpoints, but lots of us are hazy when we try and pin down exactly what we mean by “secure”. The TLS standard makes <a href="https://datatracker.ietf.org/doc/html/rfc8446#appendix-E.1">eight specific promises</a>, but rather than get buried in that particular ocean of weeds, let’s just pick out the one guarantee that we care about most: Peer Authentication.</p>
            <pre><code>Peer authentication: The client's view of the peer identity should reflect the server's identity. [...]</code></pre>
            <p>In other words, if the client thinks that it’s talking to <code>example.com</code> then it should, in fact, be talking to <code>example.com</code>.</p><p>What we want from EAs is that if I receive an EA then I have cryptographic proof that the person I’m talking to is the person I think I’m talking to. Now at this point you might be wondering what an EA actually looks like, and what it has to do with certificates. Well, an EA is actually a trio of messages, the first of which is a <code>Certificate</code>. The second is a <code>CertificateVerify</code>, a cryptographic proof that the sender knows the private key for the certificate. Finally there is a <code>Finished</code> message, which acts as a MAC, and proves the first two parts of the message haven’t been tampered with. If this structure sounds familiar to you, it’s because it’s the same structure as used by the server in the TLS handshake to prove it is the owner of the certificate.</p><p>The final piece of unpacking we need to do is explaining what we mean by tying the authentication to the TLS channel. Because EAs are an application layer construct they don’t provide any transport mechanism. So, whilst I know that the EA was created by the server I want to talk to, without binding the EA to a TLS connection I can’t be sure that I’m talking <i>directly</i> to the server I want.</p>
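<p>As an illustration only (not the wire format from the draft), the three-part shape can be sketched in Python. Here HMAC stands in both for the real digital signature in <code>CertificateVerify</code> and for the keyed MAC in <code>Finished</code>, and all key and field names are invented for the example:</p>

```python
import hmac
import hashlib
from dataclasses import dataclass

# Toy model of an Exported Authenticator's three-message structure.
# In the real protocol these are TLS-encoded messages, and
# CertificateVerify is a digital signature, not an HMAC.

@dataclass
class ExportedAuthenticator:
    certificate: bytes         # the Certificate message
    certificate_verify: bytes  # proof of possession of the certificate's key
    finished: bytes            # MAC covering the first two messages

def mac(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def build_authenticator(cert: bytes, signing_key: bytes,
                        finished_key: bytes) -> ExportedAuthenticator:
    # Stand-in for the CertificateVerify signature over the transcript.
    certificate_verify = mac(signing_key, cert)
    # Finished proves the first two parts haven't been tampered with.
    finished = mac(finished_key, cert + certificate_verify)
    return ExportedAuthenticator(cert, certificate_verify, finished)

def verify_finished(ea: ExportedAuthenticator, finished_key: bytes) -> bool:
    expected = mac(finished_key, ea.certificate + ea.certificate_verify)
    return hmac.compare_digest(expected, ea.finished)
```

<p>The real messages are keyed from the TLS handshake, but the dependency chain is the same: the <code>Finished</code> MAC covers both earlier messages, so tampering with either of them invalidates the whole authenticator.</p>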
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1TE1tNAHGeIgpWWXLSo6d4/73aab69ccbfdcb00ba7819d8936df1d7/image5-16.png" />
            
            </figure><p>Without protection, a malicious server can move Exported Authenticators from one connection to another.</p><p>For all I know, the TLS server I’m talking to is creating a new TLS connection to the EA Server, and relaying my request, and then returning the response. This would be very bad, because it would allow a malicious server to impersonate any server that supports EAs.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1zeVsOVQkSEccH77eRqMQb/0868ebd0c27b34da59f32fa0837e3b29/image2-23.png" />
            
            </figure><p>Because EAs are bound to a single TLS connection, if a malicious server copies an EA from one connection to another it will fail to verify.</p><p>EAs therefore have an extra security feature. They use the fact that every TLS connection is guaranteed to produce a unique set of keys. EAs take one of these keys and use it to construct the EA. This means that if some malicious third-party copies an EA from one TLS session to another, the recipient wouldn’t be able to validate it. This technique is called <a href="https://datatracker.ietf.org/doc/html/rfc5056">channel binding</a>, and is another fascinating topic, but this post is already getting a bit long, so we’ll have to revisit channel binding in a future blog post.</p>
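<p>A toy model of that binding, with a random per-connection secret standing in for the keys that the TLS 1.3 exporter interface (RFC 8446, Section 7.5) would really supply; the class, labels, and function names here are made up for illustration:</p>

```python
import hmac
import hashlib
import os

class TlsConnection:
    """Stand-in for a TLS connection: each one has a unique exporter secret."""
    def __init__(self) -> None:
        self._exporter_secret = os.urandom(32)

    def export_key(self, label: bytes) -> bytes:
        # Sketch of the TLS exporter: derive a key unique to this connection.
        return hmac.new(self._exporter_secret, label, hashlib.sha256).digest()

def make_bound_authenticator(conn: TlsConnection, message: bytes) -> bytes:
    # Tag the message with a key derived from *this* connection.
    key = conn.export_key(b"toy exported authenticator")
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_bound_authenticator(conn: TlsConnection, message: bytes,
                               tag: bytes) -> bool:
    key = conn.export_key(b"toy exported authenticator")
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

<p>Because each connection derives a different key, a tag copied from one connection fails verification on any other, which is exactly the property EAs rely on to stop a malicious server replaying them.</p>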
    <div>
      <h3>How the sausage is made</h3>
      <a href="#how-the-sausage-is-made">
        
      </a>
    </div>
    <p>OK, now we know what EAs do, let’s talk about how they were designed and built. EAs are going through the <a href="https://www.ietf.org/standards/process/informal/">IETF standardisation process</a>. Draft standards move through the IETF process starting as Internet Drafts (I-Ds), and ending up as published Requests For Comment (RFCs). RFCs are voluntary standards that underpin much of the global Internet plumbing, and not just for security protocols like TLS. RFCs define DNS, UDP, TCP, and many, many more.</p><p>The first step in producing a new IETF standard is coming up with a proposal. Designing security protocols is a very conservative business, firstly because it’s very easy to introduce really subtle bugs, and secondly, because if you do introduce a security issue, things can go very wrong, very quickly. A flaw in the design of a protocol can be especially problematic as it can be replicated across multiple independent implementations — for example the <a href="https://kryptera.se/Renegotiating%20TLS.pdf">TLS renegotiation vulnerabilities reported in 2009</a> and the <a href="https://dl.acm.org/doi/10.1145/2382196.2382206">custom EC(DH) parameters vulnerability from 2012</a>. To minimise the risks of design issues, EAs hew closely to the design of the TLS 1.3 handshake.</p>
    <div>
      <h3>Security and Assurance</h3>
      <a href="#security-and-assurance">
        
      </a>
    </div>
    <p>Before making a big change to how authentication works on the Internet, we want as much assurance as possible that we’re not going to break anything. To give us more confidence that EAs are secure, they reuse parts of the design of TLS 1.3. The TLS 1.3 design was carefully examined by dozens of experts, and underwent multiple rounds of formal analysis — more on that in a moment. Using well understood design patterns is a super important part of security protocols. Making something secure is incredibly difficult, because security issues can be introduced in thousands of ways, and an attacker only needs to find one. By starting from a well understood design we can leverage the years of expertise that went into it.</p><p>Another vital step in catching design errors early is baked into the IETF process: achieving rough consensus. Although the ins and outs of the IETF process are worthy of their own blog post, suffice it to say the IETF works to ensure that all technical objections get addressed, and even if they aren’t solved they are given due care and attention. Exported Authenticators were proposed way back in 2016, and after many rounds of comments, feedback, and analysis the TLS Working Group (WG) at the IETF has finally reached consensus on the protocol. All that’s left before the EA I-D becomes an RFC is for a final revision of the text to be submitted and sent to the RFC Editors, leading hopefully to a published standard very soon.</p><p>As we just mentioned, the WG has to come to a consensus on the design of the protocol. One thing that can hold up consensus is concern about security. 
After the Snowden revelations there was a <a href="https://www.mitls.org/downloads/tlsauth.pdf">barrage</a> <a href="https://heartbleed.com/">of</a> <a href="https://www.openssl.org/~bodo/ssl-poodle.pdf">attacks</a> <a href="https://freakattack.com/">on</a> <a href="https://www.imperva.com/docs/HII_Attacking_SSL_when_using_RC4.pdf">TLS 1.2</a>, not to mention some even earlier attacks from academia. Changing how trust works on the Internet can be pretty scary, and the TLS WG didn’t want to be caught flat-footed. Luckily this coincided with the maturation of some tools and techniques we can use to get mathematical guarantees that a protocol is secure. This class of techniques is known as <a href="https://en.wikipedia.org/wiki/Formal_methods">formal methods</a>. To help ensure that people are confident in the security of EAs, I performed a formal analysis.</p>
    <div>
      <h3>Formal Analysis</h3>
      <a href="#formal-analysis">
        
      </a>
    </div>
    <p>Formal analysis is a special technique that can be used to examine security protocols. It creates a mathematical description of the protocol, the security properties we want it to have, and a model attacker. Then, aided by some sophisticated software, we create a proof that the protocol has the properties we want even in the presence of our model attacker. This approach is able to catch incredibly subtle edge cases, which, if not addressed, could lead to attacks, as has <a href="https://cispa.saarland/group/cremers/downloads/papers/CHSV2016-TLS13.pdf">happened</a> <a href="https://hal.inria.fr/hal-01528752/document">before</a>. Trotting out a formal analysis gives us strong assurances that we haven’t missed any horrible issues. By sticking as closely as possible to the design of TLS 1.3 we were able to repurpose much of the original analysis for EAs, giving us a big leg up in our ability to prove their security. Our EA model is <a href="https://bitbucket.org/jhoyla/tamarin-exported-authenticators/src/master/">available in Bitbucket</a>, along with the proofs. You can check it out using <a href="https://tamarin-prover.github.io/">Tamarin</a>, a theorem prover for security protocols.</p><p>Formal analysis, and formal methods in general, give very strong guarantees that rule out entire classes of attack. However, they are not a panacea. TLS 1.3 was subject to a number of rounds of formal analysis, and yet <a href="https://eprint.iacr.org/2019/347.pdf">an attack</a> was still found. However, this attack in many ways confirms our faith in formal methods. The attack was found in a blind spot of the proof, showing that attackers have been pushed to the very edges of the protocol. As our formal analyses get more and more rigorous, attackers will have fewer and fewer places to search for attacks. As formal analysis has become more and more practical, more and more groups at the IETF have been asking to see proofs of security before standardising new protocols. 
This hopefully will mean that future attacks on protocol design will become rarer and rarer.</p><p>Once the EA I-D becomes an RFC, then all sorts of cool stuff gets unlocked — for example <a href="https://datatracker.ietf.org/doc/html/draft-sullivan-tls-opaque-01">OPAQUE-EA</a>s, which will allow us to do password-based login on the web without the server ever seeing the password! Watch this space.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5QK1yWnP1qWPVURzf9ZlIk/63325574b90a74a60ed147994cc197fc/image4-22.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Protocols]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">6DwixIOBiqkrJfubkZlOGa</guid>
            <dc:creator>Jonathan Hoyland</dc:creator>
        </item>
        <item>
            <title><![CDATA[QUIC Version 1 is live on Cloudflare]]></title>
            <link>https://blog.cloudflare.com/quic-version-1-is-live-on-cloudflare/</link>
            <pubDate>Fri, 28 May 2021 21:06:55 GMT</pubDate>
            <description><![CDATA[ QUIC is a new fast and secure transport protocol. Version 1 has just been published as RFC 9000 and today Cloudflare has enabled support for all customers, come try it out.    ]]></description>
            <content:encoded><![CDATA[ <p></p><p>On May 27, 2021, the Internet Engineering Task Force published RFC 9000 - the standardized version of the QUIC transport protocol. The QUIC Working Group declared themselves done by issuing a <a href="/last-call-for-quic/">Last Call</a> 7 months ago. The i's have been dotted and the t's crossed: RFC 8999 - RFC 9002 are a suite of documents that capture years of engineering design and testing of QUIC. This marks a big occasion.</p><p>And today, one day later, we’ve made the standardized version of QUIC available to Cloudflare customers.</p><p>Transport protocols have a history of being hard to deploy on the Internet. QUIC overcomes this challenge by basing itself on top of UDP. Compared to TCP, QUIC has security by default, protecting almost all bytes from prying eyes or "helpful" middleboxes that can end up making things worse. It has designed-in features that speed up connection handshakes and mitigate the performance perils that can strike on networks that suffer loss or delays. It is pluggable, providing clear, standardised extension points that will allow smooth, iterative development and deployment of new features or performance enhancements for years to come.</p><p>The killer feature of QUIC, however, is that it is deployable in reality. We are excited to announce that QUIC version 1, <a href="https://www.rfc-editor.org/rfc/rfc9000.html">RFC 9000</a>, is available to all Cloudflare customers.  We started with a <a href="/the-quicening/">limited beta in 2018</a>, we made it <a href="/http3-the-past-present-and-future/">generally available in 2019</a>, and we've been tracking new document revisions every step of the way. In that time we've seen User-Agents like browsers join us in this merry march and prove that this thing works on the Internet.</p><p>QUIC is just a transport protocol. To make it do anything you need an application protocol to be mapped onto it. 
In parallel to the QUIC specification, the Working Group has defined an HTTP mapping called <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a>. The design is all done, but we're waiting for a few more i's to be dotted before it too is published as an RFC. That doesn't prevent people from testing it though, and for the 3+ years that we've supported QUIC, we have supported HTTP on top of it.</p><p>According to <a href="https://radar.cloudflare.com/">Cloudflare Radar</a>, we're seeing around 12% of Internet traffic using QUIC with HTTP/3 already. We look forward to this increasing now that RFC 9000 is out and raising awareness of the protocol's stability.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/44hkcMdHbgfl5XZk42hRUv/328f93d4e472216c75880da4db50b60a/image3-8.png" />
            
            </figure>
    <div>
      <h2>How do I enable QUIC and HTTP/3 for my domain?</h2>
      <a href="#how-do-i-enable-quic-and-http-3-for-my-domain">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4CC9znbBgFKJGTw6QN8YOL/094c27a60a4b9c4271d4b6cc6b610dcb/image4-12.png" />
            
            </figure><p>HTTP/3 and QUIC are controlled from the "Network" tab of your dashboard. Turn it on and start testing.</p>
    <div>
      <h2>But what does that actually do?</h2>
      <a href="#but-what-does-that-actually-do">
        
      </a>
    </div>
    <p>Cloudflare servers sit and listen for QUIC traffic on UDP port 443. Clients send an Initial QUIC packet in a UDP datagram, which kicks off the handshake process. The Initial packet contains a version identifier, which the server checks against the versions it supports. The client also provides, via the TLS Application-Layer Protocol Negotiation (ALPN) extension, a list of application protocols it speaks. Today Cloudflare supports clients that directly connect to us and attempt to speak QUIC version 1 using the ALPN identifier "h3".</p><p>Over the years, as the draft wire format of the protocol has changed, new version and ALPN identifiers have been coined, helping to ensure the client and server pick something they can agree on. RFC 9000 coins the version 0x00000001. Since it's so new, we expect clients to continue sending some of the old ones while support rolls out. These look like 0xff00001d, 0xff00001c, and 0xff00001b, which mark draft 29, 28, and 27 respectively. Version identifiers are 32 bits, which is conspicuous because a lot of other fields use QUIC's <i>variable-length integer encoding</i> (<a href="https://www.rfc-editor.org/rfc/rfc9000.html#name-variable-length-integer-enc">see here</a>).</p><p>Before a client can even send an Initial QUIC packet, however, it needs to know we're sat here listening for them! In the old days, HTTP relied on the URL to determine which TCP port to speak to. By default, it picked 80 for an http scheme, 443 for an https scheme, or used the value supplied in the authority component, e.g. <a href="https://example.com:1234">https://example.com:1234</a>. Nobody wanted to change the URL schemes to support QUIC; that would have added tremendous friction to deployment.</p><p>While developing QUIC and HTTP/3, the Working Group generally relied on prior knowledge that a server would talk QUIC. And if something went wrong, we'd just ping each other directly on Slack. This kind of model obviously doesn't scale. 
For widespread real-world deployment we instead rely on HTTP Alternative Services (RFC 7838) to tell TCP-based clients that HTTP/3 is available. This is the method that web browsers will use to determine what protocols to use. When a client makes HTTP requests to a zone that has Cloudflare QUIC and HTTP/3 support enabled, we return an Alt-Svc header that tells it about all the QUIC versions we support. Here's an example:</p>
            <pre><code>$ curl -sI https://www.cloudflare.com/ | grep alt-svc
alt-svc: h3-27=":443"; ma=86400, h3-28=":443"; ma=86400, h3-29=":443"; ma=86400</code></pre>
            <p>The entry "h3-29" tells clients that we support HTTP/3 over QUIC draft version 29 on UDP port 443. If they support that, they might just send us a QUIC Initial with the single version <b>0xff00001d</b> and identifier "h3-29". Or to hedge their bets, they might send an Initial with all versions that they support. Whichever type of Initial Cloudflare receives, we'll pick the highest version. They also might choose <i>not</i> to use QUIC, for whatever reason they like, in which case they can just carry on as normal.</p><p>Previously, you needed to <a href="/how-to-test-http-3-and-quic-with-firefox-nightly/">enable experimental support</a> if you wanted to test it out in browsers. But now many of them have QUIC enabled by default and we expect them to start enabling QUIC v1 support soon. So today we've begun rolling out changes to our Alt-Svc advertisements to also include the "h3" identifier and we'll have complete world-wide support early next week. All of these protocol upgrade behaviours are done behind the scenes; hopefully your browsing experiences just appear to get magically faster. If you want to check what's happening, you can, for example, use a browser's network tools - just be sure to enable the Protocol column. Here's how it looks in Firefox:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/40t2IICcOJjpNbYclJVm3N/9b3390d0b88137917610abeb140e56da/image2-8.png" />
            
            </figure>
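<p>As an aside on the variable-length integer encoding mentioned earlier: RFC 9000 (Section 16) packs the length into the two most significant bits of the first byte, so values up to 63 fit in one byte, up to 16383 in two, and so on up to eight bytes. A minimal sketch in Python (function names are ours, not from any library):</p>

```python
def encode_varint(v: int) -> bytes:
    # QUIC varints (RFC 9000 §16): the two high bits of the first byte
    # give the total length: 00 -> 1, 01 -> 2, 10 -> 4, 11 -> 8 bytes.
    if v < 0x40:
        return v.to_bytes(1, "big")
    if v < 0x4000:
        return (v | 0x4000).to_bytes(2, "big")
    if v < 0x4000_0000:
        return (v | 0x8000_0000).to_bytes(4, "big")
    if v < 0x4000_0000_0000_0000:
        return (v | 0xC000_0000_0000_0000).to_bytes(8, "big")
    raise ValueError("value too large for a QUIC varint")

def decode_varint(buf: bytes) -> int:
    length = 1 << (buf[0] >> 6)                  # 1, 2, 4, or 8 bytes
    value = int.from_bytes(buf[:length], "big")
    return value & ~(0b11 << (8 * length - 2))   # mask off the length bits
```

<p>For instance, 37 encodes to the single byte 0x25, while the four-byte sequence 0x9d7f3e7d decodes to 494878333; both are worked examples from the RFC.</p>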
    <div>
      <h2>All powered by delicious Quiche!</h2>
      <a href="#all-powered-by-delicious-quiche">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4uiSku4MIBNz7VHzAR1H5B/e28eb3e55adb5be7f6d204168662db98/image5-5.png" />
            
            </figure><p>Cloudflare's QUIC and HTTP/3 support is powered by quiche, <a href="/enjoy-a-slice-of-quic-and-rust/">our own open-source implementation written in Rust</a>. You can find it on GitHub at <a href="https://github.com/cloudflare/quiche">github.com/cloudflare/quiche</a>.</p><p>Quiche is a Rust library that exposes a C API. We've designed it from day one to be easily integratable into many types of projects. Our <a href="/experiment-with-http-3-using-nginx-and-quiche/">edge servers</a> use it, <a href="https://developers.cloudflare.com/http3/curl-brew">curl uses it</a>, Mozilla uses our <a href="https://crates.io/crates/qlog">qlog</a> sub-crate in <a href="https://github.com/mozilla/neqo">neqo</a> (which powers Firefox's QUIC), <a href="https://github.com/netty/netty-incubator-codec-quic">netty uses it</a>, the list is quite long. We're excited to support this project and grow support for QUIC and HTTP/3 wherever we can. And it won't surprise you to hear that we have built some of our own tools to help us during development. quiche-client is a tool we use to get detailed information on all the nitty gritty details of QUIC connections. It also integrates into the <a href="https://interop.seemann.io/">interoperability testing matrix</a> that lets us continually assess interoperability and performance.</p><p>You can find <a href="https://developers.cloudflare.com/http3/quiche-http3-client">quiche-client</a> in the <a href="https://github.com/cloudflare/quiche/tree/master/tools">/tools folder</a> of the quiche repository. Here's an example of running it with all trace information turned on, I've highlighted the Initial version and selected ALPN. A corresponding client-{connection ID}.qlog file will be written out.</p>
            <pre><code>RUST_LOG=trace QLOG_DIR=$PWD cargo run --manifest-path tools/apps/Cargo.toml --bin quiche-client -- --wire-version 00000001  https://www.cloudflare.com

[2021-05-28T18:59:30.506616991Z INFO  quiche_apps::client] connecting to 104.16.123.96:443 from 192.168.0.50:41238 with scid 5875ecd13154429e5c618eee35b6bbd9ecfe8c6b
[2021-05-28T18:59:30.506842369Z TRACE quiche::tls] 5875ecd13154429e5c618eee35b6bbd9ecfe8c6b write message lvl=Initial len=310
[2021-05-28T18:59:30.506891348Z TRACE quiche] 5875ecd13154429e5c618eee35b6bbd9ecfe8c6b tx pkt Initial version=1 dcid=ed8de2d33a2e830279dfeaae8a7ad674 scid=5875ecd13154429e5c618eee35b6bbd9ecfe8c6b len=330 pn=0
[2021-05-28T18:59:30.506982912Z TRACE quiche] 5875ecd13154429e5c618eee35b6bbd9ecfe8c6b tx frm CRYPTO off=0 len=310
[2021-05-28T18:59:30.507044916Z TRACE quiche::recovery] 5875ecd13154429e5c618eee35b6bbd9ecfe8c6b timer=998.815785ms latest_rtt=0ns srtt=None min_rtt=0ns rttvar=166.5ms loss_time=[None, None, None] loss_probes=[0, 0, 0] cwnd=13500 ssthresh=18446744073709551615 bytes_in_flight=377 app_limited=true congestion_recovery_start_time=None delivered=0 delivered_time=204.765µs recent_delivered_packet_sent_time=206.283µs app_limited_at_pkt=0  pacing_rate=0 last_packet_scheduled_time=Some(Instant { tv_sec: 620877, tv_nsec: 428024928 }) hystart=window_end=None last_round_min_rtt=None current_round_min_rtt=None rtt_sample_count=0 lss_start_time=None  
[2021-05-28T18:59:30.507160963Z TRACE quiche_apps::client] written 1200
[2021-05-28T18:59:30.537123997Z TRACE quiche_apps::client] got 1200 bytes
[2021-05-28T18:59:30.537194566Z TRACE quiche] 5875ecd13154429e5c618eee35b6bbd9ecfe8c6b rx pkt Initial version=1 dcid=5875ecd13154429e5c618eee35b6bbd9ecfe8c6b scid=017022e8618952fd8c7177e863894eb0447b85f4 token= len=117 pn=0
&lt;snip&gt;
[2021-05-28T18:59:30.542581460Z TRACE quiche] 5875ecd13154429e5c618eee35b6bbd9ecfe8c6b connection established: proto=Ok("h3") cipher=Some(AES128_GCM) curve=Some("X25519") sigalg=Some("ecdsa_secp256r1_sha256") resumed=false TransportParams { original_destination_connection_id: Some(ed8de2d33a2e830279dfeaae8a7ad674), max_idle_timeout: 180000, stateless_reset_token: None, max_udp_payload_size: 65527, initial_max_data: 10485760, initial_max_stream_data_bidi_local: 0, initial_max_stream_data_bidi_remote: 1048576, initial_max_stream_data_uni: 1048576, initial_max_streams_bidi: 256, initial_max_streams_uni: 3, ack_delay_exponent: 3, max_ack_delay: 25, disable_active_migration: false, active_conn_id_limit: 2, initial_source_connection_id: Some(017022e8618952fd8c7177e863894eb0447b85f4), retry_source_connection_id: None, max_datagram_frame_size: None }</code></pre>
            
    <div>
      <h2>So if QUIC is done, what's next?</h2>
      <a href="#so-if-quic-is-done-whats-next">
        
      </a>
    </div>
    <p>The road has been long, and we should celebrate the success of the community's efforts over many years to dream big and deliver something. But as far as we're concerned, we're far from done. We've learned a lot from our early QUIC deployments - a big thank you to everyone in the wider Cloudflare team who supported the Protocols team in getting here today. We'll continue to invest that experience back into our implementation and standardisation activities. <a href="/author/alessandro-ghedini/">Alessandro</a>, <a href="/author/junho/">Junho</a>, <a href="/author/lohith/">Lohith</a> and I will continue to participate in our respective areas of expertise in the IETF. Speaking for myself, I'll be continuing to co-chair the QUIC Working Group and help guide it through a new chapter focused on maintenance, operations, extensibility and… QUIC version 2. And I'll be moonlighting in other places like the <a href="https://datatracker.ietf.org/wg/httpbis/about/">HTTP</a> WG to push Prioritization over the line, and the <a href="https://datatracker.ietf.org/wg/masque/about/">MASQUE</a> WG to help define how we can use <a href="https://datatracker.ietf.org/doc/html/draft-ietf-masque-h3-datagram-02">unreliable DATAGRAMS</a> to tunnel almost anything over QUIC and HTTP/3.</p>
    <div>
      <h2>A weekend riddle</h2>
      <a href="#a-weekend-riddle">
        
      </a>
    </div>
    <p>If you've made it this far, you are obviously very interested in QUIC. My colleague, Chris Wood, is like that too. He was very excited about the RFCs being shipped. So excited that he sent me this cryptic message:</p><p><i>"QUIC is finally here -- RFC </i><b><i>8999</i></b><i>, 9000, </i><b><i>9001</i></b><i>, and 9002. Are we and the rest of the Internet ready to turn it on? 0404d3f63f040214574904010a5735!"</i></p><p>I have no clue what this means. Can you help me out?</p> ]]></content:encoded>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">jMlsubMfOLintHjT8HGts</guid>
            <dc:creator>Lucas Pardue</dc:creator>
        </item>
        <item>
            <title><![CDATA[A Last Call for QUIC, a giant leap for the Internet]]></title>
            <link>https://blog.cloudflare.com/last-call-for-quic/</link>
            <pubDate>Thu, 22 Oct 2020 14:08:51 GMT</pubDate>
            <description><![CDATA[ QUIC and HTTP/3 are open standards that have been under development in the IETF for almost exactly 4 years. On October 21, 2020, following two rounds of Working Group Last Call, draft 32 of the family of documents that describe QUIC and HTTP/3 was put into IETF Last Call. ]]></description>
            <content:encoded><![CDATA[ <p>QUIC is a new Internet transport protocol for secure, reliable and multiplexed communications. <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> builds on top of QUIC, leveraging the new features to fix performance problems such as Head-of-Line blocking. This enables web pages to load faster, especially over troublesome networks.</p><p>QUIC and HTTP/3 are open standards that have been under development in the IETF <a href="/http-3-from-root-to-tip">for almost exactly 4 years</a>. On October 21, 2020, following two rounds of Working Group Last Call, draft 32 of the family of documents that describe QUIC and HTTP/3 was put into <a href="https://mailarchive.ietf.org/arch/msg/quic/ye1LeRl7oEz898RxjE6D3koWhn0/">IETF Last Call</a>. This is an important milestone for the group. We are now telling the entire IETF community that we think we're almost done and that we'd welcome their final review.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/78vaeSkXoriOtIbPyOU5rw/8ce2942b6542d94b5d0d42fc9e91d7b7/image2-24.png" />
            
            </figure><p>Speaking personally, I've been involved with QUIC in some shape or form for many years now. Earlier this year I was honoured to be asked to help co-chair the Working Group. I'm pleased to help shepherd the documents through this important phase, and grateful for the efforts of everyone involved in getting us there, especially the editors. I'm also excited about future opportunities to evolve on top of QUIC v1 to help build a better Internet.</p><p>There are two aspects to protocol development. One aspect involves writing and iterating upon the documents that describe the protocols themselves. Then, there's implementing, deploying and testing libraries, clients and/or servers. These aspects operate hand in hand, helping the Working Group move towards satisfying the goals listed in its charter. IETF Last Call marks the point that the group and their responsible Area Director (in this case Magnus Westerlund) believe the job is almost done. Now is the time to solicit feedback from the wider IETF community for review. At the end of the Last Call period, the stakeholders will take stock, address feedback as needed and, fingers crossed, go on to the next step of requesting the documents be published as RFCs on the Standards Track.</p><p>Although specification and implementation work hand in hand, they often progress at different rates, and that is totally fine. The QUIC specification has been mature and deployable for a long time now. HTTP/3 has been <a href="/http3-the-past-present-and-future/">generally available</a> on the Cloudflare edge since September 2019, and we've been delighted to see support roll out in user agents such as Chrome, Firefox, Safari, curl and so on. Although draft 32 is the latest specification, the community has for the time being settled on draft 29 as a solid basis for interoperability. This shouldn't be surprising: as foundational aspects crystallize, the scope of changes between iterations decreases. 
For the average person in the street, there's not really much difference between 29 and 32.</p><p>So today, if you visit a website with HTTP/3 enabled—such as <a href="https://cloudflare-quic.com">https://cloudflare-quic.com</a>—you’ll probably see response headers that contain Alt-Svc: h3-29="… . And in a while, once Last Call completes and the RFCs ship, you'll start to see websites simply offer Alt-Svc: h3="… (note, no draft version!).</p>
    <div>
      <h3>Need a deep dive?</h3>
      <a href="#need-a-deep-dive">
        
      </a>
    </div>
    <p>We've collected a bunch of resource links at <a href="https://cloudflare-quic.com">https://cloudflare-quic.com</a>. If you're more of an interactive visual learner, you might be pleased to hear that I've also been hosting a series on <a href="https://cloudflare.tv/live">Cloudflare TV</a> called "Levelling up Web Performance with HTTP/3". There are over 12 hours of content including the basics of QUIC, ways to measure and debug the protocol in action using tools like Wireshark, and several deep dives into specific topics. I've also been lucky to have some guest experts join me along the way. The table below gives an overview of the episodes that are available on demand.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/41Oavd19lBk474V1BOr1ZQ/3b6d0466c42b3940e6329c754c63863d/image1-36.png" />
            
            </figure><table>
              <thead><tr><th>Episode</th><th>Description</th></tr></thead>
              <tbody>
                <tr><td><a href="https://cloudflare.tv/event/6jJjzbBoFwvARsoaNiUt9i">1</a></td><td>Introduction to QUIC.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/5rcGVibHCKs9l9xUUMdJqg">2</a></td><td>Introduction to HTTP/3.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/3OM7upT7p3vpAdzphFdhnx">3</a></td><td>QUIC &amp; HTTP/3 logging and analysis using qlog and qvis. Featuring Robin Marx.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/45tQd4UPkZGULg59BZPl1p">4</a></td><td>QUIC &amp; HTTP/3 packet capture and analysis using Wireshark. Featuring Peter Wu.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/4YgvMrif2yma7pM6Srv6wi">5</a></td><td>The roles of Server Push and Prioritization in HTTP/2 and HTTP/3. Featuring Yoav Weiss.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/7ufIyfjZfn2aQ2K635EH3t">6</a></td><td>"After dinner chat" about curl and QUIC. Featuring Daniel Stenberg.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/6vMyFU2jyx2iKXZVp7YjHW">7</a></td><td>Qlog vs. Wireshark. Featuring Robin Marx and Peter Wu.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/3miIPtXnktpzjslJlnkD9c">8</a></td><td>Understanding protocol performance using WebPageTest. Featuring Pat Meenan and Andy Davies.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/6Qv7zmY2oi6j28M5HZNZmV">9</a></td><td>Handshake deep dive.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/3gqUUBcl40LvThxO7UQH0T">10</a></td><td>Getting to grips with quiche, Cloudflare's QUIC and HTTP/3 library.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/3Mrq6DHoA9fy4ATT3Wigrv">11</a></td><td>A review of SIGCOMM's EPIQ workshop on evolving QUIC.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/CHrSpig5nqKeFGFA3fzLq">12</a></td><td>Understanding the role of congestion control in QUIC. Featuring Junho Choi.</td></tr>
              </tbody>
            </table>
    <div>
      <h3>Whither QUIC?</h3>
      <a href="#whither-quic">
        
      </a>
    </div>
    <p>So does Last Call mean QUIC is "done"? Not by a long shot. The new protocol is a giant leap for the Internet, because it enables new opportunities and innovation. QUIC v1 is basically the set of documents that have gone into Last Call. We'll continue to see people gain experience deploying and testing this, and no doubt cool blog posts about tweaking parameters for efficiency and performance are on the radar. But QUIC and HTTP/3 are extensible, so we'll see people interested in trying new things like multipath, different congestion control approaches, or new ways to carry data unreliably such as the <a href="https://datatracker.ietf.org/doc/draft-ietf-quic-datagram/">DATAGRAM frame</a>.</p><p>We're also seeing people interested in using QUIC for other use cases. Mapping other application protocols like DNS to QUIC is a rapid way to get its improvements. We're seeing people who want to use QUIC as a substrate for carrying other transport protocols, hence the formation of the <a href="https://datatracker.ietf.org/wg/masque/about/">MASQUE Working Group</a>. There are folks who want to use QUIC and HTTP/3 as a "supercharged WebSocket", hence the formation of the <a href="https://datatracker.ietf.org/wg/webtrans/documents/">WebTransport Working Group</a>.</p><p>Whatever the future holds for QUIC, we're just getting started, and I'm excited.</p> ]]></content:encoded>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[Speed]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">4kkjRctgxi0uvF46ddKnp6</guid>
            <dc:creator>Lucas Pardue</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Access: now for SaaS apps, too]]></title>
            <link>https://blog.cloudflare.com/cloudflare-access-for-saas/</link>
            <pubDate>Tue, 13 Oct 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ Teams can now secure SaaS applications with Zero Trust rules using Cloudflare Access. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/J2gwvS7B7LcOGJLm4cyHG/71a6b6a57db1739be9b7330ccdffedd5/Teams-for-SAAS-thumb.png" />
            
            </figure><div>
      
    </div>
<p></p><p>We built Cloudflare Access™ as a tool to solve a problem we had inside of Cloudflare. We rely on a set of applications to manage and monitor our network. Some of these are popular products that we self-host, like the Atlassian suite, and others are tools we built ourselves. We deployed those applications on a private network. To reach them, you had to either connect through a secure WiFi network in a Cloudflare office, or use a VPN.</p><p>That VPN added friction to how we work. We had to dedicate part of Cloudflare’s onboarding just to teaching users how to connect. If someone received a PagerDuty alert, they had to rush to their laptop and sit and wait while the VPN connected. Team members struggled to work while mobile. New offices had to backhaul their traffic. In 2017 and early 2018, our IT team triaged hundreds of help desk tickets with titles like these:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1wxEscGECJL0EvhQW4N6Ot/17e8c65c7de912c0d90309f0b6c8824d/image4-4.png" />
            
            </figure><p>While our IT team wrestled with usability issues, our Security team decided that poking holes in our private network was too much of a risk to maintain. Once on the VPN, users almost always had too much access. We had limited visibility into what happened on the private network. We tried to <a href="https://www.cloudflare.com/learning/access-management/what-is-network-segmentation/">segment the network</a>, but that was error-prone.</p><p>Around that time, Google published its BeyondCorp paper that outlined a model of what has become known as <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust Security</a>. Instead of trusting any user on a private network, a Zero Trust perimeter evaluates every request and connection for user identity and other variables.</p><p>We decided to create our own <a href="https://www.cloudflare.com/learning/access-management/how-to-implement-zero-trust/">implementation</a> by building on top of Cloudflare. Despite BeyondCorp being a new concept, we had experience in this field. For nearly a decade, Cloudflare’s global network had been operating like a Zero Trust perimeter for applications on the Internet - we just didn’t call it that. For example, products like our <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/">WAF</a> evaluated requests to public-facing applications. We could add identity as a new layer and use the same network to protect applications teams used internally.</p><p>We began moving our self-hosted applications to this new project. Users logged in with our SSO provider from any network or location, and the experience felt like any other SaaS app. Our Security team gained the control and visibility they needed, and our IT team became more productive. 
Specifically, our IT teams have seen <a href="https://www.cloudflare.com/case-studies/how-cloudflare-uses-cloudflare-access-to-secure-our-global-team">~80% reduction in the time they spent servicing VPN-related tickets</a>, which unlocked over $100K worth of help desk efficiency annually. Later in 2018, we launched this as a product that our customers could use as well.</p><p>By shifting security to Cloudflare's network, we could also make the <a href="https://www.cloudflare.com/learning/access-management/what-is-the-network-perimeter/">perimeter</a> smarter. We could require that users <a href="/require-hard-key-auth-with-cloudflare-access/">login with a hard key</a>, something that our identity provider couldn't support. We could restrict connections to applications from <a href="/two-clicks-to-enable-regional-zero-trust-compliance/">specific countries</a>. We added <a href="/tanium-cloudflare-teams/">device posture</a> integrations. Cloudflare Access became an aggregator of identity signals in this Zero Trust model.</p><p>As a result, our internal tools suddenly became more secure than the SaaS apps we used. We could only add rules to the applications we could place on Cloudflare’s reverse proxy. When users connected to popular SaaS tools, they did not pass through Cloudflare’s network. We lacked a consistent level of visibility and security across all of our applications. So did our customers.</p><p>Starting today, our team and yours can fix that. We’re excited to announce that you can now bring the Zero Trust security features of Cloudflare Access to your SaaS applications. You can protect any SaaS application that can integrate with a SAML identity provider with Cloudflare Access.</p><p>Even though that SaaS application is not deployed on Cloudflare, we can still add security rules to every login. 
You can begin using this feature today and, in the next couple of months, you’ll be able to ensure that all traffic to these SaaS applications connects through Cloudflare Gateway.</p>
    <div>
      <h3>Standardizing and aggregating identity in Cloudflare’s network</h3>
      <a href="#standardizing-and-aggregating-identity-in-cloudflares-network">
        
      </a>
    </div>
    <p>Support for SaaS applications in Cloudflare Access starts with standardizing identity. Cloudflare Access aggregates different sources of identity: username, password, location, and device. Administrators build rules to determine what requirements a user must meet to reach an application. When users attempt to connect, Cloudflare enforces every rule in that checklist before the user ever reaches the app.</p><p>The primary rule in that checklist is user identity. Cloudflare Access is not an identity provider; instead, we source identity from SSO services like Okta, Ping Identity, OneLogin, or public apps like GitHub. When a user attempts to access a resource, we prompt them to login with the configured provider. If successful, the provider shares the user’s identity and other metadata with Cloudflare Access.</p>
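    <p>The checklist model described above can be sketched as a list of predicates that must all pass. This is a hypothetical illustration only, not Cloudflare's rule engine; the field names, rule set, and values are invented:</p>

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class LoginContext:
    email: str                  # identity reported by the SSO provider
    country: str                # where the user is connecting from
    mfa_method: Optional[str]   # e.g. "hard_key" or "totp", if available

# Hypothetical checklist: every rule must pass before the user reaches the app.
rules: List[Callable[[LoginContext], bool]] = [
    lambda ctx: ctx.email.endswith("@example.com"),  # allowed identity
    lambda ctx: ctx.country == "US",                 # country restriction
    lambda ctx: ctx.mfa_method == "hard_key",        # require a hard key
]

def allow(ctx: LoginContext) -> bool:
    """Grant access only if every rule in the checklist passes."""
    return all(rule(ctx) for rule in rules)

print(allow(LoginContext("user@example.com", "US", "hard_key")))  # True
print(allow(LoginContext("user@example.com", "US", "totp")))      # False
```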
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5aupuEmhR8lgWRZjvdfqlN/0c7e0f8b95d5c2f589e18eadeca17cb0/image5-4.png" />
            
            </figure><p>A JWT is a secure, information-dense way to share information. Most importantly, JWTs <a href="https://tools.ietf.org/html/rfc7519">follow a standard</a>, so that different systems can trust one another. When users login to Cloudflare Access, we generate and sign a JWT that contains the decision and information about the user. We store that information in the user’s browser and treat that as proof of identity for the duration of their session.</p><p>Every JWT must consist of three Base64-URL strings: the header, the payload, and the signature.</p><ul><li><p>The <b>header</b> identifies the cryptographic algorithm used to sign the JWT.</p></li><li><p>The <b>payload</b> consists of name-value pairs for at least one and typically multiple claims, encoded in JSON. For example, the payload can contain the identity of a user.</p></li><li><p>The <b>signature</b> allows the receiving party to confirm that the payload is authentic.</p></li></ul><p>We store the identity data inside of the payload and include the following details:</p><ul><li><p><b>User identity</b>: typically the email address of the user retrieved from your identity provider.</p></li><li><p><b>Authentication domain</b>: the domain that signs the token. For Access, we use “example.cloudflareaccess.com” where “example” is a subdomain you can configure.</p></li><li><p><b>amr</b>: If available, the multifactor authentication method the login used, like a hard key or a TOTP code.</p></li><li><p><b>Country</b>: The country where the user is connecting from.</p></li><li><p><b>Audience</b>: The domain of the application you are attempting to reach.</p></li><li><p><b>Expiration</b>: the time at which the token is no longer valid for use.</p></li></ul><p>Some applications support JWTs natively for SSO. We can send the token to the application and the user can login. 
In other cases, we’ve released plugins for popular providers like <a href="/cloudflare-access-sharing-our-single-sign-on-plugin-for-atlassian/">Atlassian</a> and <a href="/open-sourcing-our-sentry-sso-plugin/">Sentry</a>. However, most applications lack JWT support and rely on a different standard: SAML.</p>
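    <p>The three-part header.payload.signature structure is easy to see in code. Below is a minimal, self-contained sketch, not Cloudflare's implementation: Access signs tokens with an asymmetric key, whereas this demo uses HMAC-SHA256 for brevity, and the claim names shown are illustrative.</p>

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64-URL encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, key: bytes) -> str:
    """Build header.payload.signature (HS256 for this sketch only)."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

# Illustrative claims mirroring the fields described in the post.
claims = {
    "email": "user@example.com",                     # user identity
    "iss": "https://example.cloudflareaccess.com",   # authentication domain
    "amr": ["hwk"],                                  # MFA method, if available
    "country": "US",                                 # connecting country
    "aud": "app.example.com",                        # audience
    "exp": int(time.time()) + 600,                   # expiration
}
token = make_jwt(claims, b"shared-secret")
assert token.count(".") == 2  # header.payload.signature
```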
    <div>
      <h3>Converting JWT to SAML with Cloudflare Workers</h3>
      <a href="#converting-jwt-to-saml-with-cloudflare-workers">
        
      </a>
    </div>
    <p>You can deploy Cloudflare’s reverse proxy to protect the applications you host, which puts Cloudflare Access in a position to add identity checks when those requests hit our edge. However, the SaaS applications you use are hosted and managed by the vendors themselves as part of the value they offer. In the same way that I cannot decide who can walk into the front door of the bakery downstairs, you can’t build rules about what requests should and shouldn’t be allowed.</p><p>When those applications support integration with your SSO provider, you do have control over the login flow. Many applications rely on a popular standard, SAML, to securely exchange identity data and user attributes between two systems. The SaaS application does not need to know the details of the identity provider’s rules.</p><p>Cloudflare Access uses that relationship to force SaaS logins through Cloudflare’s network. The application itself thinks of Cloudflare Access as the SAML identity provider. When users attempt to login, the application sends the user to login with Cloudflare Access.</p><p>That said, Cloudflare Access is not an identity provider - it’s an identity aggregator. When the user reaches Access, we will redirect them to the identity provider in the same way that we do today when users request a site that uses Cloudflare’s reverse proxy. By adding that hop through Access, though, we can layer the additional contextual rules and log the event.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1yv8hYAwCeV36Hhu6of08T/b7ba08f4b379b7f8028c3168d86c561c/image6-4.png" />
            
            </figure><p>We still generate a JWT for every login, providing a standard proof of identity. Integrating with SaaS applications required us to convert that JWT into a SAML assertion that we can send to the SaaS application. Cloudflare Access runs in every one of Cloudflare’s data centers around the world to improve availability and avoid slowing down users. We did not want to lose those advantages for this flow. To solve that, we turned to Cloudflare Workers.</p><p>The core login flow of Cloudflare Access already <a href="https://workers.cloudflare.com/built-with/projects/cloudflare-access">runs on Cloudflare Workers</a>. We built support for SaaS applications by using Workers to take the JWT and convert its content into SAML assertions that are sent to the SaaS application. The application thinks that Cloudflare Access is the identity provider, even though we’re just aggregating identity signals from your SSO provider and other sources into the JWT, and sending that summary to the app via SAML.</p>
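    <p>To make the claim-mapping idea concrete, here is a rough Python sketch of turning JWT-style claims into a (heavily simplified) SAML-assertion skeleton. This is not the Worker code, and a real assertion also carries a Signature, Conditions, SubjectConfirmation, and more; the element subset and claim names here are illustrative only.</p>

```python
import xml.etree.ElementTree as ET

# SAML 2.0 assertion namespace.
NS = "urn:oasis:names:tc:SAML:2.0:assertion"
ET.register_namespace("saml", NS)

def claims_to_assertion(claims: dict) -> str:
    """Map identity claims onto a minimal (unsigned, incomplete) assertion."""
    assertion = ET.Element(f"{{{NS}}}Assertion", {"Version": "2.0"})
    subject = ET.SubElement(assertion, f"{{{NS}}}Subject")
    name_id = ET.SubElement(subject, f"{{{NS}}}NameID")
    name_id.text = claims["email"]  # user identity becomes the SAML subject
    attrs = ET.SubElement(assertion, f"{{{NS}}}AttributeStatement")
    for key in ("country", "amr"):  # forward extra signals as attributes
        if key in claims:
            attr = ET.SubElement(attrs, f"{{{NS}}}Attribute", {"Name": key})
            val = ET.SubElement(attr, f"{{{NS}}}AttributeValue")
            val.text = str(claims[key])
    return ET.tostring(assertion, encoding="unicode")

xml = claims_to_assertion({"email": "user@example.com", "country": "US"})
assert "user@example.com" in xml
```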
    <div>
      <h3>Integrate with Gateway for comprehensive logging (coming soon)</h3>
      <a href="#integrate-with-gateway-for-comprehensive-logging-coming-soon">
        
      </a>
    </div>
    <p>Cloudflare Gateway keeps your users and data safe from threats on the Internet by filtering Internet-bound connections that leave laptops and offices. Gateway gives administrators the ability to block, allow, or log every connection and request to SaaS applications.</p><p>However, users are connecting from personal devices and home WiFi networks, potentially bypassing Internet security filtering available on corporate networks. If users have their password and MFA token, they can bypass security requirements and reach into SaaS applications from their own, unprotected devices at home.</p><p>To ensure traffic to your SaaS apps only connects over Gateway-protected devices, Cloudflare Access will add a new rule type that requires Gateway when users login to your SaaS applications. Once enabled, users will only be able to connect to your SaaS applications when they use Cloudflare Gateway. Gateway will log those connections and provide visibility into every action within SaaS apps and the Internet.</p>
    <div>
      <h3>Every identity provider is now capable of SAML SSO</h3>
      <a href="#every-identity-provider-is-now-capable-of-saml-sso">
        
      </a>
    </div>
    <p>Identity providers come in two flavors, and you probably use both every day. One type is purpose-built to be an identity provider, and the other accidentally became one. With this release, Cloudflare Access can convert either into a SAML-compliant SSO option.</p><p><b>Corporate identity providers</b>, like Okta or Azure AD, manage your business identity. Your IT department creates and maintains the account. They can integrate it with SaaS applications for SSO.</p><p>The second type of login option consists of SaaS providers that began as consumer applications and evolved into <b>public identity providers</b>. LinkedIn, GitHub, and Google required users to create accounts in their applications for networking, coding, or email.</p><p>Over the last decade, other applications began to trust those public identity provider logins. You could use your Google account to log into a news reader and your GitHub account to authenticate to DigitalOcean. Services like Google and Facebook became SSO options for everyone. However, most corporate applications only supported integration with a single SAML provider, something public identity providers do not offer. To rely on SSO as a team, you still needed a corporate identity provider.</p><p>Cloudflare Access converts a user login from any identity provider into a JWT. With this release, we also generate a standard SAML assertion. Your team can now use the SAML SSO features of a corporate identity provider with public providers like LinkedIn or GitHub.</p>
    <div>
      <h3>Multi-SSO meets SaaS applications</h3>
      <a href="#multi-sso-meets-saas-applications">
        
      </a>
    </div>
    <p>We <a href="/multi-sso-and-cloudflare-access-adding-linkedin-and-github-teams/">describe Cloudflare Access as a Multi-SSO</a> service because you can integrate multiple identity providers, and their SSO flows, into Cloudflare’s Zero Trust network. That same capability now extends to integrating multiple identity providers with a single SaaS application.</p><p>Most SaaS applications will only integrate with a single identity provider, limiting your team to a single option. We know that our customers work with partners, contractors, or acquisitions which can make it difficult to standardize around a single identity option for SaaS logins.</p><p>Cloudflare Access can connect to multiple identity providers simultaneously, including multiple instances of the same provider. When users are prompted to login, they can choose the option that their particular team uses.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1fmJ0nY6O5wSBvOfQdg3NX/0b76c98cf6de4cccf76a76d40be9d0e4/image2-11.png" />
            
            </figure><p>We’ve taken that ability and extended it into the Access for SaaS feature. Access generates a consistent identity from any provider, which we can now extend for SSO purposes to a SaaS application. Even if the application only supports a single identity provider, you can still integrate Cloudflare Access and merge identities across multiple sources. Now, team members who use your Okta instance and contractors who use LinkedIn can both SSO into your Atlassian suite.</p>
    <div>
      <h3>All of your apps in one place</h3>
      <a href="#all-of-your-apps-in-one-place">
        
      </a>
    </div>
    <p>Cloudflare Access released the <a href="/announcing-the-cloudflare-access-app-launch/">Access App Launch</a> as a single destination for all of your internal applications. Your team members visit a URL that is unique to your organization and the App Launch displays all of the applications they can reach. The feature requires no additional administrative configuration; Cloudflare Access reads the user’s JWT and returns only the applications they are allowed to reach.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1tIwkxCD532J0fTugVeAmE/dd689ea011733577061226a597d926ae/image1-14.png" />
            
            </figure><p>That experience now extends to all applications in your organization. When you integrate SaaS applications with Cloudflare Access, your users will be able to discover them in the App Launch. Like the flow for internal applications, this requires no additional configuration.</p>
    <div>
      <h3>How to get started</h3>
      <a href="#how-to-get-started">
        
      </a>
    </div>
    <p>To get started, you’ll need a Cloudflare Access account and a SaaS application that supports SAML SSO. Navigate to the Cloudflare for Teams dashboard and choose the “SaaS” application option to start integrating your applications. Cloudflare Access will walk through the steps to configure the application to trust Cloudflare Access as the SSO option.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Sc3CA1krWC7UPnvrdwxQ0/0aebf47c33e0aa152757820d78feb3ab/image7-2.png" />
            
            </figure><p>Do you have an application that needs additional configuration? Please let us know.</p>
    <div>
      <h3>Protect SaaS applications with Cloudflare for Teams today</h3>
      <a href="#protect-saas-applications-with-cloudflare-for-teams-today">
        
      </a>
    </div>
    <p>Cloudflare Access for SaaS is available to all Cloudflare for Teams customers, including organizations on the <a href="https://www.cloudflare.com/plans/free/">free plan</a>. <a href="https://dash.cloudflare.com/sign-up/teams">Sign up for a Cloudflare for Teams account</a> and <a href="https://developers.cloudflare.com/learning-paths/secure-internet-traffic/">follow the steps in the documentation</a> to get started.</p><p>We will begin expanding the Gateway beta program to integrate Gateway’s logging and <a href="https://www.cloudflare.com/learning/access-management/what-is-url-filtering/">web filtering</a> with the Access for SaaS feature before the end of the year.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Zero Trust Week]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">gTiCBipqAsm9Jm3gtrChU</guid>
            <dc:creator>Sam Rhea</dc:creator>
        </item>
        <item>
            <title><![CDATA[NTS is now an RFC]]></title>
            <link>https://blog.cloudflare.com/nts-is-now-rfc/</link>
            <pubDate>Thu, 01 Oct 2020 14:53:04 GMT</pubDate>
            <description><![CDATA[ After much hard work, NTS finally becomes an official RFC.This means that Network Time Security (NTS) is officially part of the collection of protocols that makes the Internet work.  ]]></description>
            <content:encoded><![CDATA[ <p>Earlier today, the document describing Network Time Security for NTP officially became RFC 8915. This means that Network Time Security (NTS) is officially part of the collection of protocols that makes the Internet work. We’ve changed our time service to use the officially assigned port of 4460 for NTS key exchange, so you can use our service with ease. This is big progress towards securing a ubiquitous Internet protocol.</p><p>Over the past months we’ve seen many users of our time service, but very few using Network Time Security. This leaves computers vulnerable to attacks that imitate the server they use to obtain time via NTP. Part of the problem was the lack of available NTP daemons that supported NTS. That problem is now solved: <a href="https://chrony.tuxfamily.org/">chrony</a> and <a href="https://www.ntpsec.org/">ntpsec</a> both support NTS.</p><p>Time underlies the security of many of the protocols such as TLS that we rely on to secure our online lives. Without accurate time, there is no way to determine whether or not credentials have expired. The absence of an easily deployed secure time protocol has been a problem for Internet security.</p><p>Without NTS or symmetric key authentication there is no guarantee that your computer is actually talking NTP with the computer you think it is. Symmetric key authentication is difficult and painful to set up, but until recently has been the only secure and standardized mechanism for authenticating NTP. NTS uses the work that goes into the Web Public Key Infrastructure to authenticate NTP servers and ensure that when you set up your computer to talk to time.cloudflare.com, that’s the server your computer gets the time from.</p><p>Our involvement in developing and promoting NTS included making a specialized server and releasing the source code, participation in the standardization process, and much work with implementers to hunt down bugs. 
We also set up <a href="/secure-time/">our time service</a> with support for NTS from the beginning, and it was a useful resource for implementers to test interoperability.</p>
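<p>As an illustration of how little configuration NTS now requires on the client side, here is a sketch of the relevant line for chrony (versions 4.0 and later support NTS; the config file location and exact syntax may vary by distribution, so consult chrony's documentation):</p>

```
# /etc/chrony/chrony.conf — enable NTS authentication for our time service
server time.cloudflare.com iburst nts
```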
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/12xa0bYS9ER14HPccVDE3g/f4565fc6e070cce69096e665912b7db1/pasted-image-0.png" />
            
            </figure><p>NTS operation diagram</p><p>When Cloudflare deployed TLS 1.3, browsers were actively updating, and so deployment quickly took hold. However, the long tail of legacy installs and extended support releases slowed adoption. Similarly, until Let’s Encrypt made encryption easy for webservers, most web traffic was not encrypted.</p><p>By contrast, <a href="https://www.cloudflare.com/learning/access-management/what-is-ssh/">ssh</a> quickly displaced telnet as the way to access remote systems: the security benefits were substantial, and the experience was better. Adoption of protocols is slow, but when there is a real security need it can be much faster. NTS is a real security improvement that is vital to adopt. We’re proud to continue making the Internet a better place by supporting secure protocols.</p><p>We hope that operating systems will incorporate NTS support and TLS 1.3 in their supplied NTP daemons. We also urge administrators to deploy NTS as quickly as possible, and NTP server operators to adopt NTS. With certificates provided by Let’s Encrypt, this is simpler than it has been in the past.</p><p>We’re continuing our work in this area with the continued development of the Roughtime protocol for even better security, as well as engagement with the standardization process to help develop the future of Internet time.</p><p>Cloudflare allows any device to point to time.cloudflare.com, and our service supports NTS. Just as our Universal SSL made it easy for any website to get the security benefits of TLS, our time service makes it easy for any computer to get the benefits of secure time.</p> ]]></content:encoded>
            <category><![CDATA[IETF]]></category>
            <category><![CDATA[DNS Security]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">7dwrp6Zcx5kyy5iR0OwoLf</guid>
            <dc:creator>Watson Ladd</dc:creator>
        </item>
        <item>
            <title><![CDATA[CUBIC and HyStart++ Support in quiche]]></title>
            <link>https://blog.cloudflare.com/cubic-and-hystart-support-in-quiche/</link>
            <pubDate>Fri, 08 May 2020 12:46:12 GMT</pubDate>
            <description><![CDATA[ Congestion control and loss recovery play a big role in the QUIC transport protocol performance. We recently added support for CUBIC and HyStart++ to quiche, the library powering Cloudflare's QUIC, and lab-based testing shows promising results for performance in lossy network conditions. ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://github.com/cloudflare/quiche">quiche</a>, Cloudflare's IETF QUIC implementation, has been running <a href="https://tools.ietf.org/html/rfc8312">CUBIC congestion control</a> for a while in our production environment, as mentioned in <a href="/http-3-vs-http-2/">Comparing HTTP/3 vs. HTTP/2 Performance</a>. Recently we also added <a href="https://tools.ietf.org/html/draft-balasubramanian-tcpm-hystartplusplus-03">HyStart++</a> to the congestion control module for further improvements.</p><p>In this post, we will briefly cover QUIC congestion control and loss recovery, then discuss CUBIC and HyStart++ in the quiche congestion control module. We will also discuss lab test results and how to visualize those using <a href="https://tools.ietf.org/html/draft-marx-qlog-event-definitions-quic-h3-01">qlog</a>, which was also recently added to the quiche library.</p>
    <div>
      <h3>QUIC Congestion Control and Loss Recovery</h3>
      <a href="#quic-congestion-control-and-loss-recovery">
        
      </a>
    </div>
    <p>In the network transport area, congestion control decides how much data a connection can send into the network. It plays an important role: a sender must not overrun the link, and at the same time it needs to play nicely with other connections on the same network so that the overall network, the Internet, doesn’t collapse. Basically, congestion control tries to detect the current capacity of the link and tune itself in real time; it’s one of the core algorithms for running the Internet.</p><p>QUIC congestion control has been written based on many years of TCP experience, so it is little surprise that the two have mechanisms that bear resemblance. It’s based on the CWND (congestion window, the limit of how many bytes you can send into the network) and the SSTHRESH (slow start threshold, which sets the point at which slow start stops). Congestion control mechanisms can have complicated edge cases and can be hard to tune. Since QUIC is a new transport protocol that people are implementing from scratch, the current draft recommends Reno as a relatively simple mechanism to get people started. However, Reno has known limitations, so QUIC is designed to have pluggable congestion control; it’s up to implementers to adopt any more advanced mechanism of their choosing.</p><p>Since Reno became the standard for TCP congestion control, many congestion control algorithms have been proposed by academia and industry. 
Largely they fall into two categories: loss-based congestion control, such as Reno and CUBIC, where the congestion control responds to a packet loss event, and delay-based congestion control, such as <a href="https://www.cs.princeton.edu/courses/archive/fall06/cos561/papers/vegas.pdf">Vegas</a> and <a href="https://queue.acm.org/detail.cfm?id=3022184">BBR</a>, where the algorithm tries to balance bandwidth against <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">RTT</a> increase and tunes the packet send rate accordingly.</p><p>You can port TCP-based congestion control algorithms to QUIC without much change by implementing a few hooks. quiche provides a modular API for adding a new congestion control module easily.</p><p>Loss detection is how the sender detects packet loss. It’s usually separate from the congestion control algorithm, but it helps the congestion control respond quickly to congestion. Packet loss can be a result of congestion on the link, but the link layer may also drop a packet without congestion due to the characteristics of the physical layer, such as on a WiFi or mobile network.</p><p>Traditionally TCP uses 3 DUP ACKs for ACK-based detection, but delay-based loss detection such as <a href="https://tools.ietf.org/html/draft-ietf-tcpm-rack-08">RACK</a> has also been used over the years. QUIC combines the lessons from TCP into <a href="https://tools.ietf.org/html/draft-ietf-quic-recovery-27#section-5">two categories</a>. One is based on a packet threshold (similar to 3 DUP ACK detection) and the other is based on a time threshold (similar to RACK). QUIC also has <a href="https://tools.ietf.org/html/draft-ietf-quic-recovery-27#section-3.1.5">ACK Ranges</a>, similar to TCP SACK, to provide the status of received packets, but ACK Ranges can keep a longer list of received packets in the ACK frame than TCP SACK. 
This simplifies the implementation overall and helps provide quick recovery when there are multiple losses.</p>
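<p>To make the two categories concrete, here is a rough Python sketch (for illustration only; quiche's real implementation is in Rust and tracks more per-connection state) of how a sender might combine the packet threshold and the time threshold, using the constants recommended in the recovery draft:</p>

```python
# Sketch of QUIC's two loss-detection rules: a packet is declared lost if it
# trails the largest acknowledged packet by a packet threshold (similar to
# 3 DUP ACKs), or if it has been outstanding longer than a time threshold
# (similar to RACK). Constants follow the recovery draft's recommendations.

K_PACKET_THRESHOLD = 3    # reordering threshold in packets
K_TIME_THRESHOLD = 9 / 8  # time threshold, as a multiple of the RTT

def detect_lost(sent, largest_acked, latest_rtt, now):
    """sent maps packet number -> send time for packets still unacknowledged."""
    loss_delay = K_TIME_THRESHOLD * latest_rtt
    lost = []
    for pn, send_time in sent.items():
        if pn > largest_acked:
            continue  # can't judge packets newer than the largest ACK yet
        # Lost by packet threshold or by time threshold.
        if largest_acked - pn >= K_PACKET_THRESHOLD or now - send_time > loss_delay:
            lost.append(pn)
    return sorted(lost)
```

<p>For example, with packets 1, 2, 3 and 5 in flight and packet 5 newly acknowledged, packets 1 and 2 are lost by the packet threshold, and packet 3 is lost by the time threshold if it has been outstanding for more than 9/8 of an RTT.</p>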
    <div>
      <h3>Reno</h3>
      <a href="#reno">
        
      </a>
    </div>
    <p>Reno (often referred to as NewReno) is a standard <a href="https://tools.ietf.org/html/rfc5681">congestion control for TCP</a> and <a href="https://tools.ietf.org/id/draft-ietf-quic-recovery-27.html#section-6">QUIC</a>.</p><p>Reno is easy to understand and doesn't need additional memory to store state, so it can be implemented on low-spec hardware too. However, its slow start can be very aggressive because it keeps increasing the CWND quickly until it sees congestion. In other words, it doesn’t stop until it sees packet loss.</p><p>Note that Reno has multiple states; it starts in "slow start" mode, which increases the CWND very aggressively, roughly doubling it every RTT, until congestion is detected or CWND &gt; SSTHRESH. When packet loss is detected, it enters "recovery" mode until the loss is recovered.</p><p>When it exits recovery (no lost ranges) and CWND &gt; SSTHRESH, it enters "congestion avoidance" mode, where the CWND grows slowly (roughly a full packet per RTT) and tries to converge on a stable CWND. As a result you will see a “sawtooth” pattern when you graph the CWND over time.</p><p>Here is an example CWND graph for Reno congestion control. See the “Congestion Window” line.</p>
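<p>The states above boil down to a couple of update rules. The following is an illustrative byte-counting sketch in Python (not quiche's actual Rust code; the MSS value is an assumption for the example):</p>

```python
# Illustrative byte-counting Reno sketch. Real implementations also track
# recovery state and pace sends; this only shows the two growth modes and
# the loss reaction described above.

MSS = 1200  # assumed maximum segment size in bytes

def reno_on_ack(cwnd, ssthresh, acked_bytes):
    """Grow the congestion window when new data is acknowledged."""
    if cwnd < ssthresh:
        # Slow start: CWND grows by the bytes acked, roughly doubling per RTT.
        return cwnd + acked_bytes
    # Congestion avoidance: grow by roughly one full MSS per RTT.
    return cwnd + MSS * acked_bytes // cwnd

def reno_on_loss(cwnd):
    """On a congestion event, halve the window; this is also the new SSTHRESH."""
    return max(cwnd // 2, 2 * MSS)
```

<p>In slow start every acked byte grows the window, so a full round trip's worth of ACKs doubles it; in congestion avoidance a full window of ACKs adds only one MSS, which produces the sawtooth.</p>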
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7oTUuWLimS5jC3imLTcZY1/c7008deac0b62ac774735f2dda0a0324/reno-nohs.png" />
            
            </figure>
    <div>
      <h3>CUBIC</h3>
      <a href="#cubic">
        
      </a>
    </div>
    <p>CUBIC was announced in 2008 and became the default congestion control in the Linux kernel. Currently it's defined in <a href="https://tools.ietf.org/html/rfc8312">RFC8312</a> and implemented in many OSes, including Linux, BSD and Windows. quiche's CUBIC implementation follows RFC8312, with a fix made by <a href="https://github.com/torvalds/linux/commit/30927520dbae297182990bb21d08762bcc35ce1d">Google in the Linux kernel</a>.</p><p>What sets it apart from Reno is that during congestion avoidance, its CWND growth is based on a cubic function, as follows:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1KI70rYLDzS02VWarowfyi/981f066dfbf86f11c0a2cc5c697d8d84/1J22foaznVbLs-waPSxKZusUf88svZfepM2QeTeDGe3kvjnkYKN73L861cTQu2eaxLBNtQhW-WEvmlqv3YWf_tRdfTLFXMcWDmUuNzSFanrz53N-9c4insL5kXnj.png" />
            
            </figure><p>(from the CUBIC paper: <a href="https://www.cs.princeton.edu/courses/archive/fall16/cos561/papers/Cubic08.pdf">https://www.cs.princeton.edu/courses/archive/fall16/cos561/papers/Cubic08.pdf</a>)</p><p><i>Wmax</i> is the value of CWND when congestion is detected. CUBIC then reduces the CWND by 30%, and the CWND starts to grow again following a cubic function, as in the graph: it approaches <i>Wmax</i> aggressively in the first half but converges to <i>Wmax</i> slowly in the second. This makes sure that CWND growth approaches the previous congestion point carefully, and once we pass <i>Wmax</i>, it starts to grow aggressively again after some time to find a new CWND (this is called "Max Probing").</p><p>It also has a "TCP-friendly" (actually Reno-friendly) mode to make sure CWND growth is always at least as big as Reno's. On a congestion event, CUBIC reduces its CWND by 30%, whereas Reno cuts the CWND by 50%. This makes CUBIC a little more aggressive on packet loss.</p><p>Note that the original CUBIC only defines how to update the CWND during congestion avoidance. Slow start mode is exactly the same as Reno.</p>
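<p>In code, the cubic window function and the regrowth time <i>K</i> from RFC 8312 look roughly like this (a Python sketch in units of MSS, for illustration; quiche's real implementation also handles the Reno-friendly region and converts to bytes):</p>

```python
# Sketch of the CUBIC window function from RFC 8312. W_cubic(t) is the
# target CWND t seconds after the last congestion event, and k(w_max) is
# the time at which the window regrows to Wmax. Units are MSS, as in the RFC.

C = 0.4           # scaling constant from RFC 8312
BETA_CUBIC = 0.7  # multiplicative decrease factor: CWND is cut by 30%

def k(w_max):
    """Time (seconds) for the window to grow back to w_max after a loss."""
    return (w_max * (1 - BETA_CUBIC) / C) ** (1 / 3)

def w_cubic(t, w_max):
    """Target CWND t seconds after the last congestion event."""
    return C * (t - k(w_max)) ** 3 + w_max
```

<p>Right after a loss (t = 0) the target is 70% of <i>Wmax</i>; at t = K it is back at <i>Wmax</i>; beyond that, the cubic term grows steeply again, which is the Max Probing phase described above.</p>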
    <div>
      <h3>HyStart++</h3>
      <a href="#hystart">
        
      </a>
    </div>
    <p>The authors of CUBIC made a separate effort to improve slow start, because CUBIC only changed the way the CWND grows during congestion avoidance. They came up with the idea of <a href="https://www.sciencedirect.com/science/article/abs/pii/S1389128611000363">HyStart</a>.</p><p>HyStart is based on two ideas and basically changes how the CWND is updated during slow start:</p><ul><li><p>RTT delay samples: when the RTT increases during slow start beyond a threshold, exit slow start early and enter congestion avoidance.</p></li><li><p>ACK train: when the ACK inter-arrival time grows beyond a threshold, exit slow start early and enter congestion avoidance.</p></li></ul><p>However, in the real world the ACK train may not be very useful because of ACK compression (merging multiple ACKs into one). RTT delay also may not work well when the network is unstable.</p><p>To improve on this, there is a new IETF draft proposed by Microsoft engineers named <a href="https://tools.ietf.org/html/draft-balasubramanian-tcpm-hystartplusplus-03">HyStart++</a>. HyStart++ is included in the Windows 10 TCP stack along with CUBIC.</p><p>It's a little different from the original HyStart:</p><ul><li><p>No ACK train, only RTT sampling.</p></li><li><p>Adds an LSS (Limited Slow Start) phase after exiting slow start. LSS grows the CWND faster than congestion avoidance but slower than Reno slow start. Instead of going into congestion avoidance directly, slow start exits to LSS, and LSS exits to congestion avoidance when packet loss happens.</p></li><li><p>Simpler implementation.</p></li></ul><p>In quiche, HyStart++ is turned on by default for both Reno and CUBIC congestion control and can be configured via the API.</p>
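<p>The RTT-sampling exit condition at the heart of HyStart++ can be sketched as follows (Python for illustration; the clamp values follow the draft's suggestions, and the names are ours, not quiche's):</p>

```python
# Sketch of HyStart++'s delay-based slow start exit: compare the minimum RTT
# observed in the current round against the previous round's minimum, and
# leave slow start once it has grown by more than a clamped fraction of the RTT.

MIN_RTT_THRESH = 0.004  # 4 ms lower clamp
MAX_RTT_THRESH = 0.016  # 16 ms upper clamp

def should_exit_slow_start(current_round_min_rtt, last_round_min_rtt):
    """Exit slow start when this round's min RTT has grown past a threshold.

    The growth threshold is 1/8 of the previous round's min RTT, clamped to
    [4 ms, 16 ms], so that jitter on very short or very long paths is tolerated.
    """
    eta = min(MAX_RTT_THRESH, max(MIN_RTT_THRESH, last_round_min_rtt / 8))
    return current_round_min_rtt >= last_round_min_rtt + eta
```

<p>On a 100ms path, for example, the threshold works out to 12.5ms of RTT growth: a round whose minimum RTT reaches 115ms triggers the exit, while 110ms does not.</p>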
    <div>
      <h3>Lab Test</h3>
      <a href="#lab-test">
        
      </a>
    </div>
    <p>Here is a test result using <a href="/a-cost-effective-and-extensible-testbed-for-transport-protocol-development/">the test lab</a>. The test conditions are as follows:</p><ul><li><p>5Mbps bandwidth, 60ms RTT, with packet loss rates from 0% to 8%</p></li><li><p>Measure the download time of an 8MB file</p></li><li><p>NGINX 1.16.1 server with the <a href="https://github.com/cloudflare/quiche/tree/master/extras/nginx">HTTP3 patch</a></p></li><li><p>TCP: CUBIC in Linux kernel 4.14</p></li><li><p>QUIC: Cloudflare quiche</p></li><li><p>Download 20 times and take the median download time</p></li></ul><p>I ran the test with the following combinations:</p><ul><li><p>TCP CUBIC (TCP-CUBIC)</p></li><li><p>QUIC Reno (QUIC-RENO)</p></li><li><p>QUIC Reno with Hystart++ (QUIC-RENO-HS)</p></li><li><p>QUIC CUBIC (QUIC-CUBIC)</p></li><li><p>QUIC CUBIC with Hystart++ (QUIC-CUBIC-HS)</p></li></ul>
    <div>
      <h3>Overall Test Result</h3>
      <a href="#overall-test-result">
        
      </a>
    </div>
    <p>Here is a chart of overall test results:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/9p0dppTGft9jL6vBZrpI7/edf89c790e406e83b0df0c556b3c45fb/image6-2.png" />
            
            </figure><p>In these tests, TCP-CUBIC (blue bars) is the baseline against which we compare the performance of the QUIC congestion control variants. We include QUIC-RENO (red and yellow bars) because that is the default QUIC baseline. Reno is simpler, so we expect it to perform worse than TCP-CUBIC. QUIC-CUBIC (green and orange bars) should perform the same as or better than TCP-CUBIC.</p><p>You can see that with 0% packet loss, TCP and QUIC perform almost the same (though QUIC is slightly slower). As packet loss increases, QUIC CUBIC performs better than TCP CUBIC. QUIC loss recovery appears to work well, which is great news for real-world networks that do encounter loss.</p><p>With HyStart++, overall performance doesn’t change, but that is to be expected, because the main goal of HyStart++ is to prevent overshooting the network. We will see that in the next section.</p>
    <div>
      <h3>The impact of HyStart++</h3>
      <a href="#the-impact-of-hystart">
        
      </a>
    </div>
    <p>HyStart++ may not improve the download time, but it reduces packet loss while maintaining the same performance. Since slow start exits to congestion avoidance when packet loss is detected, we focus on the 0% packet loss case, where only network congestion creates packet loss.</p>
    <div>
      <h3>Packet Loss</h3>
      <a href="#packet-loss">
        
      </a>
    </div>
    <p>For each test, the number of packets detected as lost (not the retransmit count) is shown in the following chart. Each number is the average of 20 runs.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3IMpVqPBHkCvQhDQnrJE81/b23f89e3a15cb314700532b371898a46/lost_pkt_hs.png" />
            
            </figure><p>As shown above, you can see that HyStart++ reduces packet loss considerably.</p><p>Note that compared with Reno, CUBIC can create more packet loss in general. This is because the CUBIC CWND can grow faster than Reno's during congestion avoidance, and CUBIC also reduces the CWND less (by 30%) than Reno (50%) at a congestion event.</p>
    <div>
      <h3>Visualization using qlog and qvis</h3>
      <a href="#visualization-using-qlog-and-qvis">
        
      </a>
    </div>
    <p><a href="https://qvis.edm.uhasselt.be">qvis</a> is a visualization tool based on <a href="https://tools.ietf.org/html/draft-marx-qlog-event-definitions-quic-h3-01">qlog</a>. Since quiche has implemented <a href="https://github.com/cloudflare/quiche/pull/379">qlog support</a>, we can take qlogs from a QUIC connection and use the qvis tool to visualize connection stats. This is a very useful tool for protocol development. We already used qvis for the Reno graph above, but let’s see a few more examples to understand how HyStart++ works.</p>
    <div>
      <h3>CUBIC without HyStart++</h3>
      <a href="#cubic-without-hystart">
        
      </a>
    </div>
    <p>Here is a qvis congestion chart for a 16MB transfer in the same lab test conditions, with 0% packet loss. You can see a high peak of CWND in the beginning due to slow start. After some time, it starts to show the CUBIC window growth pattern (concave function).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/147wv6vAfFBfVunzmgqi4Z/037337856e3edf6983e7d88f27f21d76/cubic-nohs.png" />
            
            </figure><p>When we zoom into the slow start section (the first 0.7 seconds), we can see a linear increase of the CWND during slow start. This continues until a packet is lost around 500ms, after which the connection enters congestion avoidance once recovery completes, as you can see in the following chart:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2N7Oiud9tfiQq8JQ1ReRxV/a656abfba8f9dc62778600274861d83b/cubic-nohs-zoom-legend.png" />
            
            </figure>
    <div>
      <h3>CUBIC with HyStart++</h3>
      <a href="#cubic-with-hystart">
        
      </a>
    </div>
    <p>Let’s see the same graph when HyStart++ is enabled. You can see the slow start peak is smaller than without HyStart++, which will lead to less overshooting and packet loss:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3g0T9fYiwD4biGawOh7LTS/ed5f2dfe915751e0fbad12ffd532624c/cubic-hs.png" />
            
            </figure><p>When we zoom into the slow start part again, we can now see that slow start exits to Limited Slow Start (LSS) around 390ms and exits to congestion avoidance at the congestion event around 500ms.</p><p>As a result, you can see the slope is less steep until congestion is detected. This leads to less packet loss, because the network is overshot less, and to faster convergence to a stable CWND.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/49le5NmySGIuvRcZETrR8Z/67236043dd84b2932e23d0ea5db3d95d/cubic-hs-zoom-legend.png" />
            
            </figure>
    <div>
      <h3>Conclusions and Future Tasks</h3>
      <a href="#conclusions-and-future-tasks">
        
      </a>
    </div>
    <p>The QUIC draft spec has already integrated a lot of experience from TCP congestion control and loss recovery. It recommends the simple Reno mechanism as a means to get people started implementing the protocol, but is under no illusion that better performing mechanisms don't exist. So QUIC is designed to be pluggable, in order for it to adopt mechanisms that are being deployed in state-of-the-art TCP implementations.</p><p>CUBIC and HyStart++ are well-known mechanisms in the TCP world and give better performance (faster downloads and less packet loss) than Reno. We've made quiche pluggable and have added CUBIC and HyStart++ support. Our lab testing shows that QUIC is a clear performance winner in lossy network conditions, which is the very thing it is designed for.</p><p>In the future, we also plan to work on advanced features in quiche, such as packet pacing, advanced recovery and BBR congestion control, for better QUIC performance. Using quiche you can switch among multiple congestion control algorithms at the connection level using the config API, so you can experiment and choose the best one for your needs. qlog endpoint logging can be visualized to provide high accuracy insight into how QUIC is behaving, greatly helping understanding and development.</p><p>CUBIC and HyStart++ code is available in the <a href="https://github.com/cloudflare/quiche">quiche primary branch today</a>. Please try it!</p> ]]></content:encoded>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">525vEcQ9ys8D8JIFjgvliO</guid>
            <dc:creator>Junho Choi</dc:creator>
        </item>
        <item>
            <title><![CDATA[Releasing kubectl support in Access]]></title>
            <link>https://blog.cloudflare.com/releasing-kubectl-support-in-access/</link>
            <pubDate>Mon, 27 Apr 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ Starting today, you can use Cloudflare Access and Argo Tunnel to securely manage your Kubernetes cluster with the kubectl command-line tool. Add SSO requirements and a zero-trust model to your Kubernetes management in under 30 minutes. ]]></description>
            <content:encoded><![CDATA[ <p>Starting today, you can use Cloudflare Access and Argo Tunnel to securely manage your Kubernetes cluster with the kubectl command-line tool.</p><p>We built this to address one of the edge cases that stopped all of Cloudflare, as well as some of our customers, from disabling the VPN. With this workflow, you can add SSO requirements and a zero-trust model to your Kubernetes management in under 30 minutes.</p><p>Once deployed, you can migrate to Cloudflare Access for controlling Kubernetes clusters without disrupting your current <code>kubectl</code> workflow, a lesson we learned the hard way from dogfooding here at Cloudflare.</p>
    <div>
      <h3>What is kubectl?</h3>
      <a href="#what-is-kubectl">
        
      </a>
    </div>
    <p>A Kubernetes <a href="https://kubernetes.io/docs/concepts/overview/components/">deployment consists</a> of a cluster that contains nodes, which run the containers, as well as a control plane that can be used to manage those nodes. Central to that control plane is the Kubernetes API server, which interacts with components like the scheduler and manager.</p><p><a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/">kubectl</a> is the Kubernetes command-line tool that developers can use to interact with that API server. Users run <code>kubectl</code> commands to perform actions like starting and stopping the nodes, or modifying other elements of the control plane.</p><p>In most deployments, users connect to a VPN that allows them to run commands against that API server by addressing it over the same local network. In that architecture, user traffic to run these commands must be backhauled through a physical or virtual VPN appliance. More concerning, in most cases the user connecting to the API server will also be able to connect to other addresses and ports in the private network where the cluster runs.</p>
    <div>
      <h3>How does Cloudflare Access apply?</h3>
      <a href="#how-does-cloudflare-access-apply">
        
      </a>
    </div>
    <p>Cloudflare Access can secure web applications as well as non-HTTP connections like <a href="https://www.cloudflare.com/learning/access-management/what-is-ssh/">SSH</a>, RDP, and the commands sent over <code>kubectl</code>. Access deploys Cloudflare’s network in front of all of these resources. Every time a request is made to one of these destinations, Cloudflare’s network checks for identity like a bouncer in front of each door.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/135CckjPYvwTnJPzEwqwjq/ecafea64a120ae117e7b63e192eb778b/image1-21.png" />
            
            </figure><p>If the request lacks identity, we send the user to your team’s SSO provider, like Okta, AzureAD, and G Suite, where the user can login. Once they login, they are redirected to Cloudflare where we check their identity against a list of users who are allowed to connect. If the user is permitted, we let their request reach the destination.</p><p>In most cases, those granular checks on every request would slow down the experience. However, Cloudflare Access completes the entire check in just a few milliseconds. The authentication flow relies on Cloudflare’s serverless product, <a href="https://workers.cloudflare.com/">Workers</a>, and runs in every one of our data centers in 200 cities around the world. With that distribution, we can improve performance for your applications while also authenticating every request.</p>
    <div>
      <h3>How does it work with kubectl?</h3>
      <a href="#how-does-it-work-with-kubectl">
        
      </a>
    </div>
    <p>To replace your VPN with Cloudflare Access for <code>kubectl</code>, you need to complete two steps:</p><ul><li><p>Connect your cluster to Cloudflare with Argo Tunnel</p></li><li><p>Connect from a client machine to that cluster with Argo Tunnel</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/8oP2nLOQ22C1KHNxRH6AG/d5dbd3feecad56115db653a9250aa407/kubectl.png" />
            
            </figure>
    <div>
      <h3>Connecting the cluster to Cloudflare</h3>
      <a href="#connecting-the-cluster-to-cloudflare">
        
      </a>
    </div>
    <p>On the cluster side, Cloudflare Argo Tunnel connects those resources to our network by creating a secure tunnel with the Cloudflare daemon, <code>cloudflared</code>. As an administrator, you can run <code>cloudflared</code> in any space that can connect to the Kubernetes API server over TCP.</p><p>Once installed, an administrator authenticates the instance of <code>cloudflared</code> by logging in to a browser with their Cloudflare account and choosing a hostname to use. Once selected, Cloudflare will issue a certificate to <code>cloudflared</code> that can be used to create a subdomain for the cluster.</p><p>Next, an administrator starts the tunnel. In the example below, the <code>hostname</code> value can be any subdomain of the hostname selected in Cloudflare; the <code>url</code> value should be the API server for the cluster.</p>
            <pre><code>cloudflared tunnel --hostname cluster.site.com --url tcp://kubernetes.docker.internal:6443 --socks5=true </code></pre>
            <p>This should be run as a <code>systemd</code> process to ensure the tunnel reconnects if the resource restarts.</p>
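<p>For instance, a minimal systemd unit might look like the following sketch (the binary path is an assumption, and the hostname and url values are carried over from the example command above; adjust everything for your environment):</p>

```ini
# /etc/systemd/system/cloudflared-k8s.service — illustrative unit, not an
# official cloudflared unit file.
[Unit]
Description=cloudflared Argo Tunnel for the Kubernetes API server
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/cloudflared tunnel --hostname cluster.site.com --url tcp://kubernetes.docker.internal:6443 --socks5=true
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

<p>With <code>Restart=on-failure</code>, systemd re-establishes the tunnel if the process exits unexpectedly or the resource restarts.</p>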
    <div>
      <h3>Connecting as an end user</h3>
      <a href="#connecting-as-an-end-user">
        
      </a>
    </div>
    <p>End users do not need an agent or client application to connect to web applications secured by Cloudflare Access. They can authenticate to on-premise applications through a browser, without a VPN, like they would for SaaS tools. When we apply that same security model to non-HTTP protocols, we need to establish that secure connection from the client with an alternative to the web browser.</p><p>Unlike our SSH flow, end users cannot modify <code>kubeconfig</code> to proxy requests through <code>cloudflared</code>. <a href="https://github.com/kubernetes/kubernetes/pull/81443">Pull requests</a> have been submitted to add this functionality to <code>kubeconfig</code>, but in the meantime users can set an alias to serve a similar function.</p><p>First, users need <a href="https://developers.cloudflare.com/argo-tunnel/quickstart/">to download</a> the same <code>cloudflared</code> tool that administrators deploy on the cluster. Once downloaded, they will need to run a corresponding command to create a local SOCKS proxy. When the user runs the command, <code>cloudflared</code> will launch a browser window to prompt them to login with their SSO and check that they are allowed to reach this hostname.</p>
            <pre><code>$ cloudflared access tcp --hostname cluster.site.com --url 127.0.0.1:1234</code></pre>
            <p>The proxy allows your local kubectl tool to connect to <code>cloudflared</code> via a SOCKS5 proxy, which helps avoid issues with TLS handshakes to the cluster itself. In this model, TLS verification can still be exchanged with the Kubernetes API server without disabling or modifying that flow for end users.</p><p>Users can then create an alias to save time when connecting. The example below aliases all of the steps required to connect into a single command. This can be added to the user’s bash profile so that it persists between restarts.</p>
            <pre><code>$ alias kubeone="env HTTPS_PROXY=socks5://127.0.0.1:1234 kubectl"</code></pre>
            
    <div>
      <h3>A (hard) lesson when dogfooding</h3>
      <a href="#a-hard-lesson-when-dogfooding">
        
      </a>
    </div>
    <p>When we build products at Cloudflare, we release them to our own organization first. The entire company becomes a feature’s first customer, and we ask them to submit feedback in a candid way.</p><p>Cloudflare Access began as a product we built <a href="/dogfooding-from-home/">to solve our own challenges</a> with security and connectivity. The product impacts every user in our team, so as we’ve grown, we’ve been able to gather more expansive feedback and catch more edge cases.</p><p>The <code>kubectl</code> release was no different. At Cloudflare, we have a team that manages our own Kubernetes deployments and we went to them to discuss the prototype. However, they had more than just some casual feedback and notes for us.</p><p>They told us to stop.</p><p>We had started down an implementation path that was technically sound and solved the use case, but did so in a way that engineers who spend all day working with pods and containers would find to be a real irritant. The flow required a small change in presenting certificates, which did not feel cumbersome when we tested it, but we do not use it all day. That grain of sand would cause real blisters as a new requirement in the workflow.</p><p>With their input, we stopped the release, and changed that step significantly. We worked through ideas, iterated with them, and made sure the Kubernetes team at Cloudflare felt this was not just good enough, but better.</p>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Support for <code>kubectl</code> is available in the latest release of the <code>cloudflared</code> tool. You can begin using it today, on any plan. More <a href="https://developers.cloudflare.com/access/other-protocols/kubectl/">detailed instructions are available</a> to get started.</p><p>If you try it out, <a href="https://community.cloudflare.com/t/feedback-for-cloudflare-access-support-for-kubectl/168530">please send us your feedback</a>! We’re focused on improving the ease of use for this feature, and other non-HTTP workflows in Access, and need your input.</p><p>New to Cloudflare for Teams? You can use all of the Teams products for free through September, including Cloudflare Access and Argo Tunnel. You can learn more about the program, and request a dedicated onboarding session, <a href="https://teams.cloudflare.com/">here</a>.</p> ]]></content:encoded>
            <category><![CDATA[Kubernetes]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">Q0Qzl6N8Ct6pqsN9MMbUE</guid>
            <dc:creator>Sam Rhea</dc:creator>
        </item>
        <item>
            <title><![CDATA[Adopting a new approach to HTTP prioritization]]></title>
            <link>https://blog.cloudflare.com/adopting-a-new-approach-to-http-prioritization/</link>
            <pubDate>Tue, 31 Dec 2019 19:13:58 GMT</pubDate>
            <description><![CDATA[ HTTP prioritization is important for web performance. This is the story behind a new approach recently adopted for further work in the IETF. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Friday the 13th is a lucky day for Cloudflare for <a href="https://twitter.com/eastdakota/status/566276309433602048">many reasons</a>. On December 13, 2019, Tommy Pauly, co-chair of the IETF HTTP Working Group, <a href="https://lists.w3.org/Archives/Public/ietf-http-wg/2019OctDec/0181.html">announced</a> the adoption of the "Extensible Prioritization Scheme for HTTP" - a new approach to HTTP prioritization.</p><p>Web pages are made up of many resources that must be downloaded before they can be presented to the user. The role of HTTP prioritization is to load the right bytes at the right time in order to achieve the best performance. This is a collaborative process between client and server: a client sends priority signals that the server can use to schedule the delivery of response data. In HTTP/1.1 the signal is basic: clients order requests smartly across a pool of about 6 connections. In HTTP/2 a single connection is used and clients send a signal per request, as a frame, which describes the <i>relative dependency and weighting</i> of the response. HTTP/3 tried to use the same approach but dependencies don't work well when signals can be delivered out of order.</p><p><a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> is being standardised as part of the <a href="/the-road-to-quic/">QUIC</a> effort. As a Working Group (WG) we've been trying to fix the problems that non-deterministic ordering poses for HTTP priorities. However, in parallel some of us have been working on an alternative solution, the Extensible Prioritization Scheme, which fixes problems by dropping dependencies and using an <i>absolute weighting</i>. This is signalled in an HTTP header field, meaning it can be backported to work with HTTP/2 or carried over HTTP/1.1 hops. 
The alternative proposal is documented in the Individual Draft <a href="https://tools.ietf.org/html/draft-kazuho-httpbis-priority-04">draft-kazuho-httpbis-priority-04</a>, co-authored by Kazuho Oku (Fastly) and myself. This has now been adopted by the IETF HTTP WG as the basis of further work; its adopted name will be draft-ietf-httpbis-priority-00.</p><p>To some extent document adoption is the end of one journey and the start of the next; sometimes the authors of the original work are not the best people to oversee the next phase. However, I'm pleased to say that Kazuho and I have been selected as co-editors of this new document. In this role we will reflect the consensus of the WG and help steward the next chapter of HTTP prioritization standardisation. Before the next journey begins in earnest, I wanted to take the opportunity to share my thoughts on the story of developing the alternative prioritization scheme through 2019.</p><p>I'd love to explain all the details of this new approach to HTTP prioritization but the truth is I expect the standardization process to refine the design and for things to go stale quickly. However, it doesn't hurt to give a taste of what's in store; just be aware that it is all subject to change.</p>
    <div>
      <h2>A recap on priorities</h2>
      <a href="#a-recap-on-priorities">
        
      </a>
    </div>
    <p>The essence of HTTP prioritization comes down to trying to download many things over constrained connectivity. To borrow some text from Pat Meenan: <i>Web pages are made up of</i> <a href="https://discuss.httparchive.org/t/whats-the-distribution-of-requests-per-page/21/10?u=patmeenan"><i>dozens (sometimes hundreds)</i></a> <i>of separate resources that are loaded and assembled by a browser into the final displayed content.</i> Since it is not possible to download everything immediately, we prefer to fetch more important things before less important ones. The challenge comes in signalling the importance from client to server.</p><p>In HTTP/2, every connection has a priority tree that expresses the relative importance between requests. Servers use this to determine how to schedule sending response data. The tree starts with a single root node and as requests are made they either depend on the root or each other. Servers may use the tree to decide how to schedule sending resources but clients cannot force a server to behave in any particular way.</p><p>To illustrate, imagine a client that makes three simple GET requests that all depend on root. As the server receives each request it grows its view of the priority tree:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2eAdxmuUQxOJLbtqnmP0BS/2379ea92642242d027465218bc39d0bc/treebuilding1.png" />
            
            </figure><p>The server starts with only the root node of the priority tree. As requests arrive, the tree grows. In this case all requests depend on the root, so the requests are priority siblings.</p><p>Once all requests are received, the server determines all requests have equal priority and that it should send response data using <a href="https://en.wikipedia.org/wiki/Round-robin_scheduling">round-robin scheduling</a>: send some fraction of response 1, then a fraction of response 2, then a fraction of response 3, and repeat until all responses are complete.</p><p>A single HTTP/2 request-response exchange is made up of frames that are sent on a stream. A simple GET request would be sent using a single <a href="https://tools.ietf.org/html/rfc7540#section-6.2">HEADERS</a> frame:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/76uj7WpKKMF0lczd3FfkSY/b49c96761908edf66d482859b3cfa62f/H2_HEADERS_frame.png" />
            
            </figure><p>HTTP/2 HEADERS frame</p><p>Each region of a frame is a named field; a '?' indicates the field is optional and the value in parentheses is the length in bytes, with '*' meaning variable length. The <i>Header Block Fragment</i> field holds compressed HTTP header fields (using <a href="/hpack-the-silent-killer-feature-of-http-2/">HPACK</a>), <i>Pad Length</i> and <i>Padding</i> relate to optional padding, and <i>E</i>, <i>Stream Dependency</i> and <i>Weight</i> combined are the priority signal that controls the priority tree.</p><p>The <i>Stream Dependency</i> and <i>Weight</i> fields are optional but their absence is interpreted as a signal to use the default values: dependency on the root with a weight of 16, meaning that the default priority scheduling strategy is round-robin. However, this is often a bad choice because important resources like HTML, CSS and JavaScript are tied up with things like large images. The following animation demonstrates this in the Edge browser, causing the page to be blank for 19 seconds. Our <a href="/better-http-2-prioritization-for-a-faster-web/">deep dive blog post</a> explains the problem further.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/WHsjjksenLoE7ocl8SgRI/d2f8dab33a3688f4357b12b6646cb593/Edge_loading-1.gif" />
            
            </figure><p>The HEADERS frame <i>E</i> field is the interesting bit (pun intended). A request with the field set to 1 (true) means that the dependency is exclusive and nothing else can depend on the indicated node. To illustrate, imagine a client that sends three requests which set the <i>E</i> field to 1. As the server receives each request, it interprets this as an exclusive dependency on the root node. Because all requests have the same dependency on root, the tree has to be shuffled around to satisfy the exclusivity rules.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/i6NA77FtHzckq638a42kh/25013c743bff2c0279d4324f69ee9a64/treebuilding2.png" />
            
            </figure><p>Each request has an exclusive dependency on the root node. The tree is shuffled as each request is received by the server.</p><p>The final version of the tree looks very different from our previous example. The server would schedule all of response 3, then all of response 2, then all of response 1. This could help load all of an HTML file before an image and thus improve the visual load behaviour.</p><p>In reality, clients load a lot more than three resources and use a mix of priority signals. To understand the priority of any single request, we need to understand all requests. That presents some technological challenges, especially for servers that act like proxies such as the Cloudflare edge network. Some servers have <a href="https://github.com/andydavies/http2-prioritization-issues">problems</a> applying prioritization effectively.</p><p>Because not all clients send the most optimal priority signals we were motivated to develop <a href="/better-http-2-prioritization-for-a-faster-web/">Cloudflare's Enhanced HTTP/2 Prioritization</a>, announced last May during <a href="/tag/speed-week/">Speed Week</a>. This was a joint project between the Speed team (Andrew Galloni, Pat Meenan, Kornel Lesiński) and Protocols team (Nick Jones, Shih-Chiang Chien) and others. It replaces the complicated priority tree with a simpler scheme that is well suited to web resources. Because the feature is implemented on the server side, we avoid requiring any modification of clients or the HTTP/2 protocol itself. Be sure to check out my colleague Nick's blog post that details some of the <a href="/nginx-structural-enhancements-for-http-2-performance/">technical challenges and changes</a> needed to let our servers deliver smarter priorities.</p>
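<p>To make that reshuffling concrete, here is a toy model (deliberately simplified; not real HTTP/2 code) of exclusive insertion and the sequential schedule it produces. The <code>Node</code> class and helper names are illustrative only:</p>

```python
# Toy model of the exclusive-dependency behaviour described above:
# each request declares an exclusive dependency on the root, so it
# displaces the existing children (the E flag, RFC 7540 Section 5.3.1).

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []

def add_exclusive(parent, node):
    """Insert `node` as the sole child of `parent`; any existing
    children of `parent` become children of `node`."""
    node.children.extend(parent.children)
    parent.children = [node]

def schedule(root):
    """Depth-first walk: a parent's response is sent in full before
    its children's, giving the sequential schedule described above."""
    order = []
    stack = list(root.children)
    while stack:
        node = stack.pop(0)
        order.append(node.name)
        stack = node.children + stack
    return order

root = Node("root")
for name in ("request 1", "request 2", "request 3"):
    add_exclusive(root, Node(name))

print(schedule(root))  # ['request 3', 'request 2', 'request 1']
```

<p>Because each later request displaces its siblings, the last request ends up closest to the root and is served first, matching the tree in the figure.</p>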
    <div>
      <h2>The Extensible Prioritization Scheme proposal</h2>
      <a href="#the-extensible-prioritization-scheme-proposal">
        
      </a>
    </div>
    <p>The scheme specified in <a href="https://tools.ietf.org/html/draft-kazuho-httpbis-priority-04">draft-kazuho-httpbis-priority-04</a> defines a way for priorities to be expressed in absolute terms. It replaces HTTP/2's dependency-based relative prioritization: the priority of a request is independent of others, which makes it easier to reason about and easier to schedule.</p><p>Rather than send the priority signal in a frame, the scheme defines an HTTP header - tentatively named "Priority" - that can carry an urgency on a scale of 0 (highest) to 7 (lowest). For example, a client could express the priority of an important resource by sending a request with:</p><p><code>Priority: u=0</code></p><p>And a less important background resource could be requested with:</p><p><code>Priority: u=7</code></p><p>While Kazuho and I are the main authors of this specification, we were inspired by several ideas in the Internet community, and we have incorporated feedback or direct input from many of our peers over several drafts. The text today reflects the efforts so far of cross-industry work involving many engineers and researchers including organizations such as Adobe, Akamai, Apple, Cloudflare, Fastly, Facebook, Google, Microsoft, Mozilla and UHasselt. Adoption in the HTTP Working Group means that we can help improve the design and specification by spending some IETF time and resources for broader discussion, feedback and implementation experience.</p>
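<p>As a rough sketch of how a server might act on this signal: parse the urgency out of the header and serve lower values first. The parsing below is deliberately simplified (it only understands the <code>u=N</code> form shown above), and the default urgency of 3 for requests with no header is an assumption of this toy example, not something fixed by the draft:</p>

```python
# Simplified sketch: read the proposed "Priority" header and order
# responses by urgency (0 = highest, 7 = lowest). Not a full parser.

DEFAULT_URGENCY = 3  # assumed default for requests with no header

def parse_urgency(header_value):
    """Extract the urgency from a 'u=N' token, clamped to 0..7."""
    for part in (header_value or "").split(","):
        part = part.strip()
        if part.startswith("u="):
            try:
                return min(7, max(0, int(part[2:])))
            except ValueError:
                break
    return DEFAULT_URGENCY

# (resource, Priority header value); None means the header was absent.
requests = [
    ("background.png", "u=7"),
    ("style.css", "u=0"),
    ("image.jpg", None),
]

# Serve lower urgency values first.
ordered = sorted(requests, key=lambda r: parse_urgency(r[1]))
print([name for name, _ in ordered])
# ['style.css', 'image.jpg', 'background.png']
```

<p>Note how much simpler this is to reason about than the tree: each request's place in the schedule depends only on its own header value.</p>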
    <div>
      <h2>The backstory</h2>
      <a href="#the-backstory">
        
      </a>
    </div>
    <p>I work in Cloudflare's Protocols team, which is responsible for terminating HTTP at the edge. We deal with things like TCP, TLS, QUIC, HTTP/1.x, HTTP/2 and HTTP/3 and since joining the company I've worked with Alessandro Ghedini, Junho Choi and Lohith Bellad to make <a href="/http3-the-past-present-and-future/">QUIC and HTTP/3 generally available</a> last September.</p><p>Working on emerging standards is fun. It involves an eclectic mix of engineering, meetings, document review, specification writing, time zones, personalities, and organizational boundaries. So while working on the codebase of <a href="https://github.com/cloudflare/quiche">quiche</a>, our open source implementation of QUIC and HTTP/3, I am also mulling over design details of the protocols and discussing them in cross-industry venues like the IETF.</p><p>Because of <a href="/http-3-from-root-to-tip/">HTTP/3's lineage</a>, it carries over a lot of features from HTTP/2 including the priority signals and tree described earlier in the post.</p><p>One of the key benefits of HTTP/3 is that it is more resilient to the effect of lossy network conditions on performance; head-of-line blocking is limited because requests and responses can progress independently. This is, however, a double-edged sword because sometimes ordering is important. In HTTP/3 there is no guarantee that the requests are received in the same order that they were sent, so the priority tree can get out of sync between client and server. Imagine a client that makes two requests that include priority signals stating that request 1 depends on root and request 2 depends on request 1. If request 2 arrives before request 1, the dependency cannot be resolved and becomes dangling. In such a case what is the best thing for a server to do? Ambiguity in behaviour leads to assumptions and disappointment. We should try to avoid that.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/68BpLWmGQLImEc9Bj4BczY/d37a2841eca4bb18f80c473db2e7440f/h3tree.png" />
            
            </figure><p>Request 1 depends on root and request 2 depends on request 1. If an HTTP/3 server receives request 2 first, the dependency cannot be resolved.</p><p>This is just one example where things get tricky quickly. Unfortunately the WG kept finding edge case upon edge case with the priority tree model. We tried to find solutions but each additional fix seemed to add further complexity to the HTTP/3 design. This is a problem because it makes it hard to implement a server that handles priority correctly.</p><p>In parallel to Cloudflare's work on implementing a <a href="/better-http-2-prioritization-for-a-faster-web/">better prioritization for HTTP/2</a>, in January 2019 Pat posted his proposal for an <a href="https://github.com/pmeenan/http3-prioritization-proposal/blob/master/README.md">alternative prioritization scheme for HTTP/3</a> in a <a href="https://lists.w3.org/Archives/Public/ietf-http-wg/2019JanMar/0073.html">message to the IETF HTTP WG</a>.</p><p>Arguably HTTP/2 prioritization never lived up to its hype. However, replacing it with something else in HTTP/3 is a challenge because the QUIC WG charter required us to try and maintain parity between the protocols. Mark Nottingham, co-chair of the HTTP and QUIC WGs, <a href="https://lists.w3.org/Archives/Public/ietf-http-wg/2019JanMar/0074.html">responded with</a> a good summary of the situation. To quote part of that response:</p><blockquote><p>My sense is that people know that we need to do something about prioritisation, but we're not yet confident about any particular solution. Experimentation with new schemes as HTTP/2 extensions would be very helpful, as it would give us some data to work with. If you'd like to propose such an extension, this is the right place to do it.</p></blockquote><p>And so started a very interesting year of cross-industry discussion on the future of HTTP prioritization.</p>
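<p>The dangling-dependency example above can also be shown with a toy model (not real HTTP/3 code; the data structures here are illustrative only). The server builds its tree in arrival order, and a request whose parent has not yet arrived has nowhere to attach:</p>

```python
# Toy illustration of the reordering problem: request 2 depends on
# request 1 but arrives first, so the server cannot resolve the
# dependency when it builds its priority tree.

def build_tree(arrivals):
    """arrivals: list of (request, parent) in the order the server
    receives them. Returns (tree, dangling), where `dangling` holds
    requests whose parent was unknown at arrival time."""
    tree = {"root": []}
    dangling = []
    for request, parent in arrivals:
        if parent in tree:
            tree[parent].append(request)
            tree[request] = []
        else:
            dangling.append(request)  # what should the server do now?
    return tree, dangling

# Sent order: request 1 (depends on root), then request 2 (depends
# on request 1). Received order over QUIC streams: request 2 first.
tree, dangling = build_tree([("request 2", "request 1"),
                             ("request 1", "root")])
print(dangling)  # ['request 2'] -- the dependency is left dangling
```

<p>The server could drop the request's priority, guess a default, or buffer it and hope the parent turns up; each choice is an assumption a client cannot rely on, which is exactly the ambiguity described above.</p>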
    <div>
      <h2>A year of prioritization</h2>
      <a href="#a-year-of-prioritization">
        
      </a>
    </div>
    <p>The following is an account of my personal experiences during 2019. It's been a busy year and there may be unintentional errors or omissions; please let me know if you think that is the case. But I hope it gives you a taste of the standardization process and a look behind the scenes of how new Internet protocols that benefit everyone come to life.</p>
    <div>
      <h3>January</h3>
      <a href="#january">
        
      </a>
    </div>
    <p>Pat's email came at the same time that I was attending the QUIC WG Tokyo interim meeting hosted at Akamai (thanks to Mike Bishop for arrangements). So I was able to speak to a few people face-to-face on the topic. There was a bit of mailing list chatter but it tailed off after a few days.</p>
    <div>
      <h3>February to April</h3>
      <a href="#february-to-april">
        
      </a>
    </div>
    <p>Things remained quiet in terms of prioritization discussion. I knew the next best opportunity to get the ball rolling would be the <a href="https://httpwork.shop/">HTTP Workshop 2019</a> held in April. The workshop is a multi-day event not associated with a standards-defining organization (even if many of the attendees also go to meetings such as the IETF or W3C). It is structured in a way that allows the agenda to be more fluid than a typical standards meeting and gives plenty of time for organic conversation. This sometimes helps overcome gnarly problems, such as the community finding a path forward for <a href="https://tools.ietf.org/html/rfc8441">WebSockets over HTTP/2</a> due to a productive discussion during the 2017 workshop. HTTP prioritization is a gnarly problem, so I was inspired to pitch it as a talk idea. It was selected and you can find the <a href="https://github.com/HTTPWorkshop/workshop2019/blob/master/talks/pardue-jones-priorities.pdf">full slide deck here</a>.</p><p>During the presentation I recounted the history of HTTP prioritization. The great thing about working on open standards is that many email threads, presentation materials and meeting materials are publicly archived. It's fun digging through this history. Did you know that HTTP/2 is based on SPDY and inherited its weight-based prioritization scheme? The tree-based scheme we are familiar with today was only introduced in <a href="https://tools.ietf.org/html/draft-ietf-httpbis-http2-11">draft-ietf-httpbis-http2-11</a>. One of the reasons for the more-complicated tree was to help HTTP intermediaries (a.k.a. proxies) implement clever resource management. However, it became clear during the discussion that no intermediaries implement this, and none seem to plan to. I also explained a bit more about Pat's alternative scheme and Nick described his implementation experiences. Despite some interesting discussion around the topic, however, we didn't come to any definitive solution. 
There were a lot of other <a href="https://github.com/HTTPWorkshop/workshop2019/blob/master/talks/pardue-pcaps.pdf">interesting topics</a> to discover <a href="https://daniel.haxx.se/blog/2019/04/02/the-http-workshop-2019-begins/">that week</a>.</p>
    <div>
      <h3>May</h3>
      <a href="#may">
        
      </a>
    </div>
    <p>In early May, Ian Swett (Google) <a href="https://lists.w3.org/Archives/Public/ietf-http-wg/2019AprJun/0107.html">restarted interest</a> in Pat's mailing list thread. Unfortunately he was not present at the HTTP Workshop so had some catching up to do. A little while later Ian submitted a <a href="https://github.com/quicwg/base-drafts/pull/2700">Pull Request to the HTTP/3 specification</a> called "Strict Priorities". This incorporated Pat's proposal and attempted to fix a number of those prioritization edge cases that I mentioned earlier.</p><p>In late May, another QUIC WG interim meeting was held in London at the new Cloudflare offices; here is the view from the meeting room window. Credit to Alessandro for handling the meeting arrangements.</p><blockquote><p>Thanks to <a href="https://twitter.com/Cloudflare?ref_src=twsrc%5Etfw">@cloudflare</a> for hosting our interop and interim meetings in London this week! <a href="https://t.co/LIOA3OqEjr">pic.twitter.com/LIOA3OqEjr</a></p><p>— IETF QUIC WG (@quicwg) <a href="https://twitter.com/quicwg/status/1131467406059212801?ref_src=twsrc%5Etfw">May 23, 2019</a></p></blockquote><p>Mike, the editor of the HTTP/3 specification, <a href="https://github.com/quicwg/wg-materials/blob/master/interim-19-05/h3issues.pdf">presented some of the issues</a> with prioritization and we attempted to solve them with the conventional tree-based scheme. Ian, with contribution from Robin Marx (UHasselt), also <a href="https://github.com/quicwg/wg-materials/blob/master/interim-19-05/priorities.pdf">presented</a> an explanation of his "Strict Priorities" proposal. I recommend taking a look at Robin's priority tree visualisations, which do a great job of explaining things. From that presentation I particularly liked "The prioritization spectrum"; it's a concise snapshot of the state of things at that time:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1JveyeZmmt60QOPXKbnZym/dc50d185e0c9f3afc0e077961050b1ae/priotizationspectrum.png" />
            
            </figure><p>An overview of HTTP/3 prioritization issues, fixes and possible alternatives. Presented by Ian Swett at the QUIC Interim Meeting May 2019.</p>
    <div>
      <h3>June and July</h3>
      <a href="#june-and-july">
        
      </a>
    </div>
    <p>Following the interim meeting, the prioritization "debate" continued electronically across GitHub and email. Some time in June Kazuho started work on a proposal that would use a scheme similar to Pat and Ian's absolute priorities. The major difference was that rather than send the priority signal in an HTTP frame, it would use a header field. This isn't a new concept; <a href="https://tools.ietf.org/agenda/83/slides/slides-83-httpbis-5.pdf">Roy Fielding proposed</a> something similar at IETF 83.</p><p>In HTTP/2 and HTTP/3 requests are made up of frames that are sent on streams. Using a simple GET request as an example: a client sends a HEADERS frame that contains the scheme, method, path, and other request header fields. A server responds with a HEADERS frame that contains the status and response header fields, followed by DATA frame(s) that contain the payload.</p><p>To signal priority, a client could also send a PRIORITY frame. In the tree-based scheme the frame carries several fields that express dependencies and weights. Pat and Ian's proposals changed the contents of the PRIORITY frame. Kazuho's proposal encodes the priority as a header field that can be carried in the HEADERS frame as normal metadata, removing the need for the PRIORITY frame altogether.</p><p>I liked the simplification of Kazuho's approach and the new opportunities it might create for application developers. HTTP/2 and HTTP/3 implementations (in particular browsers) abstract away a lot of connection-level details such as streams or frames. That makes it hard to understand what is happening or to tune it.</p><p>The lingua franca of the Web is HTTP requests and responses, which are formed of header fields and payload data. In browsers, APIs such as <a href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API">Fetch</a> and <a href="https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API">Service Worker</a> allow handling of these primitives. 
In servers, there may be ways to interact with the primitives via configuration or programming languages. As part of <a href="/better-http-2-prioritization-for-a-faster-web/">Enhanced HTTP/2 Prioritization</a>, we have exposed prioritization to Cloudflare Workers to allow rich behavioural customization. If a Worker adds the "cf-priority" header to a response, Cloudflare’s edge servers use the specified priority to serve the response. This might be used to boost the priority of a resource that is important to the load time of a page. To help inform this decision making, the incoming browser priority signal is encapsulated in the request object passed to a Worker's fetch event listener (request.cf.requestPriority).</p><p>Standardising approaches to problems is part of helping to build a better Internet. Because of the resonance between Cloudflare's work and Kazuho's proposal, I asked if he would consider letting me come aboard as a co-author. He kindly accepted and on July 8th we published the <a href="https://tools.ietf.org/html/draft-kazuho-httpbis-priority-00">first version</a> as an Internet-Draft.</p><p>Meanwhile, Ian was helping to drive the overall prioritization discussion and proposed that we use time during IETF 105 in Montreal to speak to a wider group of people. We kicked off the week with a short <a href="https://github.com/httpwg/wg-materials/blob/gh-pages/ietf105/priorities.pdf">presentation to the HTTP WG</a> from Ian, and Kazuho and I <a href="https://lists.w3.org/Archives/Public/ietf-http-wg/2019JulSep/0095.html">presented</a> our draft in a side-meeting that saw a healthy discussion. There was a realization that the concepts of prioritization scheme, priority signalling and server resource scheduling (enacting prioritization) were conflated and made effective communication and progress difficult. 
HTTP/2's model was seen as one aspect, and two different I-Ds were created to deprecate it in some way (<a href="https://tools.ietf.org/html/draft-lassey-priority-setting-00">draft-lassey-priority-setting</a>, <a href="https://tools.ietf.org/html/draft-peon-httpbis-h2-priority-one-less-00">draft-peon-httpbis-h2-priority-one-less</a>). Martin Thomson (Mozilla) also created a Pull Request that simply <a href="https://github.com/quicwg/base-drafts/pull/2922">removed the PRIORITY frame from HTTP/3</a>.</p><p>To round off the week, in the second HTTP session it was decided that there was sufficient interest in resolving the prioritization debate via the creation of a design team. I joined the team led by Ian Swett along with others from Adobe, Akamai, Apple, Cloudflare, Fastly, Facebook, Google, Microsoft, and UHasselt.</p>
    <div>
      <h3>August to October</h3>
      <a href="#august-to-october">
        
      </a>
    </div>
    <p>Martin's PR generated a lot of conversation. It was merged under the proviso that <i>some</i> solution be found before the HTTP/3 specification was finalized. Between May and August we went from something very complicated (e.g. <i>Orphan placeholder, with PRIORITY only on control stream, plus exclusive priorities</i>) to a blank canvas. The pressure was now on!</p><p>The design team held several teleconference meetings across the months. Logistics are a bit difficult when you have team members distributed across West Coast America, East Coast America, Western Europe, Central Europe, and Japan. However, thanks to some late nights and early mornings we managed to all get on the call at the same time.</p><p>In October most of us travelled to Cupertino, CA to attend another QUIC interim meeting hosted at Apple's Infinite Loop (Eric Kinnear helping with arrangements). The first two days of the meeting were used for interop testing and were loosely structured, so the design team took the opportunity to hold the first face-to-face meeting. We made some progress and helped Ian to form up some <a href="https://github.com/quicwg/wg-materials/blob/master/interim-19-10/HTTP%20Priorities%20Update.pdf">new slides to present</a> later in the week. Again, there was some useful discussion and signs that we should put some time on the agenda at IETF 106.</p>
    <div>
      <h3>November</h3>
      <a href="#november">
        
      </a>
    </div>
    <p>The design team came to agreement that draft-kazuho-httpbis-priority was a good basis for a new prioritization scheme. We decided to consolidate the various I-Ds that had sprung up during IETF 105 into the document, making it a single source that was easier for people to track progress and open issues if required. This is why, even though Kazuho and I are the named authors, the document reflects a broad input from the community. We published <a href="https://tools.ietf.org/html/draft-kazuho-httpbis-priority-03">draft 03</a> in November, just ahead of the deadline for IETF 106 in Singapore.</p><p>Many of us travelled to Singapore ahead of the actual start of IETF 106. This wasn't to squeeze in some sightseeing (sadly) but rather to attend the IETF Hackathon. These are events where engineers and researchers can really put the concept of "running code" to the test. I really enjoy attending and I'm grateful to Charles Eckel and the team that organised it. If you'd like to read more about the event, Charles wrote up a nice blog post that, through some strange coincidence, features a picture of me, Kazuho and Robin talking at the QUIC table.</p><blockquote><p>Link: <a href="https://t.co/8qP78O6cPS">https://t.co/8qP78O6cPS</a></p><p>— Lucas Pardue (@SimmerVigor) <a href="https://twitter.com/SimmerVigor/status/1207049013301796864?ref_src=twsrc%5Etfw">December 17, 2019</a></p></blockquote><p>The design team held another face-to-face during a Hackathon lunch break and decided that we wanted to make some tweaks to the design written up in draft 03. Unfortunately the freeze was still in effect so we could not issue a new draft. Instead, we presented the most recent thinking to the HTTP session on Monday where Ian put forward draft-kazuho-httpbis-priority as the group's proposed design solution. Ian and Robin also shared results of <a href="https://github.com/httpwg/wg-materials/blob/gh-pages/ietf106/priorities.pdf">prioritization experiments</a>. 
We received some great feedback in the meeting and during the week pulled out all the stops to issue a new draft <a href="https://tools.ietf.org/html/draft-kazuho-httpbis-priority-04">04</a> before the next HTTP session on Thursday. The question now was: Did the WG think this was suitable to adopt as the basis of an alternative prioritization scheme? I think we addressed a lot of the feedback in this draft and there was a general feeling of support in the room. However, in the IETF, consensus is declared via mailing lists, and so Tommy Pauly, co-chair of the HTTP WG, put out a <a href="https://lists.w3.org/Archives/Public/ietf-http-wg/2019OctDec/0125.html">Call for Adoption</a> on November 21st.</p>
    <div>
      <h3>December</h3>
      <a href="#december">
        
      </a>
    </div>
    <p>In the Cloudflare London office, preparations begin for mince pie <a href="/imdb-2017/">acquisition</a> and <a href="/internet-mince-pie-database/">assessment</a>.</p><p>The HTTP priorities team played the waiting game and watched the mailing list discussion. On the whole people supported the concept but there was one topic that divided opinion. Some people loved the use of headers to express priorities; others didn't and wanted to stick to frames.</p><p>On December 13th Tommy <a href="https://lists.w3.org/Archives/Public/ietf-http-wg/2019OctDec/0181.html">announced</a> that the group had decided to adopt our document and assign Kazuho and me as editors. The header/frame divide was noted as something that needed to be resolved.</p>
    <div>
      <h2>The next step of the journey</h2>
      <a href="#the-next-step-of-the-journey">
        
      </a>
    </div>
    <p>Just because the document has been adopted does not mean we are done. In some ways we are just getting started. Perfection is often the enemy of getting things done and so sometimes adoption occurs at the first incarnation of a "good enough" proposal.</p><p>Today HTTP/3 has no prioritization signal. Without priority information there is a small danger that servers pick a scheduling strategy that is not optimal, which could cause the web performance of HTTP/3 to be worse than HTTP/2's. To avoid that happening we'll refine and complete the design of the Extensible Priority Scheme. To do so there are open issues that we have to resolve: we'll need to square the circle on headers vs. frames, and we'll no doubt hit unknown unknowns. We'll need the input of the WG to make progress and their help to document the design that fits the need, and so I look forward to continued collaboration across the Internet community.</p><p>2019 was quite a ride and I'm excited to see what 2020 brings.</p><p>If working on protocols is your interest and you like what Cloudflare is doing, please visit our <a href="https://www.cloudflare.com/careers/">careers page</a>. Our journey isn’t finished; in fact, it's far from it.</p> ]]></content:encoded>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">7FCWckuesFb3p3M4fjhJy</guid>
            <dc:creator>Lucas Pardue</dc:creator>
        </item>
        <item>
            <title><![CDATA[HTTP/3: From root to tip]]></title>
            <link>https://blog.cloudflare.com/http-3-from-root-to-tip/</link>
            <pubDate>Thu, 24 Jan 2019 17:57:09 GMT</pubDate>
            <description><![CDATA[ Explore HTTP/3 from root to tip and discover the backstory of this new HTTP syntax that works on top of the IETF QUIC transport. ]]></description>
            <content:encoded><![CDATA[ <p>HTTP is the application protocol that powers the Web. It began life as the so-called HTTP/0.9 protocol in 1991, and by 1999 had evolved to HTTP/1.1, which was standardised within the IETF (Internet Engineering Task Force). HTTP/1.1 was good enough for a long time, but the ever-changing needs of the Web called for a better-suited protocol, and HTTP/2 emerged in 2015. More recently it was announced that the IETF is intending to deliver a new version - <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a>. To some people this is a surprise and has caused a bit of confusion. If you don't track IETF work closely it might seem that HTTP/3 has come out of the blue. However, we can trace its origins through a lineage of experiments and evolution of Web protocols; specifically the QUIC transport protocol.</p><p>If you're not familiar with QUIC, my colleagues have done a great job of tackling different angles. John's <a href="/the-quicening/">blog</a> describes some of the real-world annoyances of today's HTTP, Alessandro's <a href="/the-road-to-quic/">blog</a> tackles the nitty-gritty transport layer details, and Nick's blog covers <a href="/head-start-with-quic/">how to get hands-on</a> with some testing. We've collected these and more at <a href="https://cloudflare-quic.com">https://cloudflare-quic.com</a>. And if that tickles your fancy, be sure to check out <a href="/enjoy-a-slice-of-quic-and-rust/">quiche</a>, our own open-source implementation of the QUIC protocol written in Rust.</p><p>HTTP/3 is the HTTP application mapping to the QUIC transport layer. This name was made official in the recent draft version 17 (<a href="https://tools.ietf.org/html/draft-ietf-quic-http-17">draft-ietf-quic-http-17</a>), which was proposed in late October 2018, with discussion and rough consensus being formed during the IETF 103 meeting in Bangkok in November. 
HTTP/3 was previously known as HTTP over QUIC, which itself was previously known as HTTP/2 over QUIC. Before that we had HTTP/2 over gQUIC, and way back we had SPDY over gQUIC. The fact of the matter, however, is that HTTP/3 is just a new HTTP syntax that works on IETF QUIC, a UDP-based multiplexed and secure transport.</p><p>In this blog post we'll explore the history behind some of HTTP/3's previous names and present the motivation behind the most recent name change. We'll go back to the early days of HTTP and touch on all the good work that has happened along the way. If you're keen to get the full picture you can jump to the end of the article or open this <a href="/content/images/2019/01/web_timeline_large1.svg">highly detailed SVG version</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1rKByCE0o19Q1zSD9Huliu/7f3540be1ff8f02da311c4df909def42/http3-stack.png" />
            
            </figure><p>An HTTP/3 layer cake</p>
    <div>
      <h2>Setting the scene</h2>
      <a href="#setting-the-scene">
        
      </a>
    </div>
    <p>Just before we focus on HTTP, it is worth reminding ourselves that there are two protocols that share the name QUIC. As we explained <a href="/the-road-to-quic/">previously</a>, gQUIC is commonly used to identify Google QUIC (the original protocol), and QUIC is commonly used to represent the IETF standard-in-progress version that diverges from gQUIC.</p><p>Since its early days in the 90s, the web’s needs have changed. We've had new versions of HTTP and added user security in the shape of Transport Layer Security (TLS). We'll only touch on TLS in this post; our other <a href="/tag/tls/">blog posts</a> are a great resource if you want to explore that area in more detail.</p><p>To help me explain the history of HTTP and TLS, I started to collate details of protocol specifications and dates. This information is usually presented in a textual form such as a list of bullet points stating document titles, ordered by date. However, there are branching standards, each overlapping in time, and a simple list cannot express the real complexity of relationships. In HTTP, there has been parallel work that refactors core protocol definitions for easier consumption, extends the protocol for new uses, and redefines how the protocol exchanges data over the Internet for performance. When you're trying to join the dots over nearly 30 years of Internet history across different branching work streams you need a visualisation. So I made one - the Cloudflare Secure Web Timeline. (NB: Technically it is a <a href="https://en.wikipedia.org/wiki/Cladogram">cladogram</a>, but the term timeline is more widely known.)</p><p>I have applied some artistic license when creating this, choosing to focus on the successful branches in the IETF space. 
Some of the things not shown include efforts in the W3 Consortium <a href="https://www.w3.org/Protocols/HTTP-NG/">HTTP-NG</a> working group, along with some exotic ideas whose authors are keen to explain how to pronounce them:  <a href="https://blog.jgc.org/2012/12/speeding-up-http-with-minimal-protocol.html">HMURR (pronounced 'hammer')</a> and <a href="https://github.com/HTTPWorkshop/workshop2017/blob/master/talks/waka.pdf">WAKA (pronounced “wah-kah”)</a>.</p><p>In the next few sections I'll walk this timeline to explain critical chapters in the history of HTTP. To enjoy the takeaways from this post, it helps to have an appreciation of why standardisation is beneficial, and how the IETF approaches it. Therefore we'll start with a very brief overview of that topic before returning to the timeline itself. Feel free to skip the next section if you are already familiar with the IETF.</p>
    <div>
      <h2>Types of Internet standard</h2>
      <a href="#types-of-internet-standard">
        
      </a>
    </div>
    <p>Generally, standards define common terms of reference, scope, constraint, applicability, and other considerations. Standards exist in many shapes and sizes, and can be informal (aka de facto) or formal (agreed/published by a Standards Defining Organisation such as IETF, ISO or MPEG). Standards are used in many fields; there is even a formal British Standard for making tea - BS 6008.</p><p>The early Web used HTTP and SSL protocol definitions that were published outside the IETF; these are marked as <b>red lines</b> on the Secure Web Timeline. The uptake of these protocols by clients and servers made them de facto standards.</p><p>At some point, it was decided to formalise these protocols (some motivating reasons are described in a later section). Internet standards are commonly defined in the IETF, which is guided by the informal principle of "rough consensus and running code". This is grounded in experience of developing and deploying things on the Internet, in contrast to a "clean room" approach of trying to develop perfect protocols in a vacuum.</p><p>IETF Internet standards are commonly known as RFCs. This is a complex area to explain so I recommend reading the blog post "<a href="https://www.ietf.org/blog/how-read-rfc/">How to Read an RFC</a>" by the QUIC Working Group Co-chair Mark Nottingham. A Working Group, or WG, is more or less just a mailing list.</p><p>Each year the IETF holds three meetings that provide the time and facilities for all WGs to meet in person if they wish. The agenda for these weeks can become very congested, with limited time available to discuss highly technical areas in depth. To overcome this, some WGs choose to also hold interim meetings in the months between the general IETF meetings. This can help to maintain momentum on specification development. 
The QUIC WG has held several interim meetings since 2017; a full list is available on their <a href="https://datatracker.ietf.org/wg/quic/meetings/">meeting page</a>.</p><p>These IETF meetings also provide the opportunity for other IETF-related collections of people to meet, such as the <a href="https://www.iab.org/">Internet Architecture Board</a> or <a href="https://irtf.org/">Internet Research Task Force</a>. In recent years, an <a href="https://www.ietf.org/how/runningcode/hackathons/">IETF Hackathon</a> has been held during the weekend preceding the IETF meeting. This provides an opportunity for the community to develop running code and, importantly, to carry out interoperability testing in the same room with others. This helps to find issues in specifications that can be discussed in the following days.</p><p>For the purposes of this blog, the important thing to understand is that RFCs don't just spring into existence. Instead, they go through a process that usually starts with an IETF Internet Draft (I-D) format that is submitted for consideration of adoption. In the case where there is already a published specification, preparation of an I-D might just be a simple reformatting exercise. I-Ds have a 6 month active lifetime from their date of publication. To keep them active, new versions need to be published. In practice, there is not much consequence to letting an I-D elapse and it happens quite often. The documents continue to be hosted on the <a href="https://datatracker.ietf.org/doc/recent">IETF documents website</a> for anyone that wants to read them.</p><p>I-Ds are represented on the Secure Web Timeline as <b>purple lines</b>. Each one has a unique name that takes the form of <i>draft-{author name}-{working group}-{topic}-{version}</i>. The working group field is optional; it might predict the IETF WG that will work on the piece, and sometimes this changes. 
If an I-D is adopted by the IETF, or if the I-D was initiated directly within the IETF, the name is <i>draft-ietf-{working group}-{topic}-{version}</i>. I-Ds may branch, merge or die on the vine. The version starts at 00 and increases by 1 each time a new draft is released. For example, the 4th draft of an I-D will have the version 03. Any time that an I-D changes name, its version resets back to 00.</p><p>It is important to note that anyone can submit an I-D to the IETF; you should not consider these as standards. But, if the IETF standardisation process of an I-D does reach consensus, and the final document passes review, we finally get an RFC. The name changes again at this stage. Each RFC gets a unique number e.g. <a href="https://tools.ietf.org/html/rfc7230">RFC 7230</a>. These are represented as <b>blue lines</b> on the Secure Web Timeline.</p><p>RFCs are immutable documents. This means that changes to the RFC require a completely new number. Changes might be done in order to incorporate fixes for errata (editorial or technical errors that were found and reported) or simply to refactor the specification to improve layout. RFCs may <b>obsolete</b> older versions (complete replacement), or just <b>update</b> them (substantively change them).</p><p>All IETF documents are openly available on <a href="http://tools.ietf.org">http://tools.ietf.org</a>. Personally I find the <a href="https://datatracker.ietf.org">IETF Datatracker</a> a little more user friendly because it provides a visualisation of a document's progress from I-D to RFC.</p><p>Below is an example that shows the development of <a href="https://tools.ietf.org/html/rfc1945">RFC 1945</a> - HTTP/1.0, which was a clear source of inspiration for the Secure Web Timeline.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5SlEeIaoaU7r9PkklUSIOj/c24f0ba70885244920a29bea1daffd68/RFC-1945-datatracker.png" />
            
            </figure><p>IETF Datatracker view of RFC 1945</p><p>Interestingly, in the course of my work I found that the above visualisation is incorrect. It is missing <a href="https://tools.ietf.org/html/draft-ietf-http-v10-spec-05">draft-ietf-http-v10-spec-05</a> for some reason. Since the I-D lifetime is 6 months, there appears to be a gap before it became an RFC, whereas in reality draft 05 was still active through until August 1996.</p>
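The naming and versioning rules described above are regular enough to capture in code. Here is a small, hypothetical Python sketch (the `parse_draft_name` helper and its pattern are my own illustration, not an IETF tool) that splits an I-D name into its parts:

```python
import re

# An I-D name looks like draft-{source}-{topic...}-{version}, where {source}
# is either an author name (individual draft) or "ietf" followed by the
# working group (adopted draft). This regex is a simplification: for adopted
# drafts it lumps the working group into the topic.
ID_PATTERN = re.compile(
    r"^draft-(?P<source>[a-z0-9]+)-(?P<topic>[a-z0-9-]+)-(?P<version>\d{2})$"
)

def parse_draft_name(name):
    """Split an I-D name into its source, topic and version number."""
    m = ID_PATTERN.match(name)
    if m is None:
        raise ValueError(f"not a valid I-D name: {name}")
    source = m.group("source")
    return {
        "adopted": source == "ietf",         # adopted drafts start draft-ietf-...
        "source": source,
        "topic": m.group("topic"),
        "version": int(m.group("version")),  # version 00 is the first draft
    }

# The 4th draft of the adopted HTTP/1.0 document therefore carries version 03:
print(parse_draft_name("draft-ietf-http-v10-spec-03"))
print(parse_draft_name("draft-fielding-http-spec-00"))
```

Note how the same document appears under two names: `draft-fielding-http-spec` before adoption, and `draft-ietf-http-v10-spec` (with its version reset to 00) after.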
    <div>
      <h2>Exploring the Secure Web Timeline</h2>
      <a href="#exploring-the-secure-web-timeline">
        
      </a>
    </div>
    <p>With a small appreciation of how Internet standards documents come to fruition, we can start to walk the Secure Web Timeline. In this section are a number of excerpt diagrams that show an important part of the timeline. Each dot represents the date that a document or capability was made available. For IETF documents, draft numbers are omitted for clarity. However, if you want to see all that detail please check out the <a href="/content/images/2019/01/web_timeline_large1.svg">complete timeline</a>.</p><p>HTTP began life as the so-called HTTP/0.9 protocol in 1991, and in 1994 the I-D <a href="https://tools.ietf.org/html/draft-fielding-http-spec-00">draft-fielding-http-spec-00</a> was published. This was adopted by the IETF soon after, causing the name change to <a href="https://tools.ietf.org/html/draft-ietf-http-v10-spec-00">draft-ietf-http-v10-spec-00</a>. The I-D went through 6 draft versions before being published as <a href="https://tools.ietf.org/html/rfc1945">RFC 1945</a> - HTTP/1.0 in 1996.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6fSsHSEtXc1HA38jJorxhO/d1a86966735f27d3b2ccfbcc1c8ec38d/http11-standardisation.png" />
            
            </figure><p>However, even before the HTTP/1.0 work completed, a separate activity started on HTTP/1.1. The I-D <a href="https://tools.ietf.org/html/draft-ietf-http-v11-spec-00">draft-ietf-http-v11-spec-00</a> was published in November 1995 and was formally published as <a href="https://tools.ietf.org/html/rfc2068">RFC 2068</a> in 1997. The keen-eyed will spot that the Secure Web Timeline doesn't quite capture that sequence of events; this is an unfortunate side effect of the tooling used to generate the visualisation. I tried to minimise such problems where possible.</p><p>An HTTP/1.1 revision exercise was started in mid-1997 in the form of <a href="https://tools.ietf.org/html/draft-ietf-http-v11-spec-rev-00">draft-ietf-http-v11-spec-rev-00</a>. This completed in 1999 with the publication of <a href="https://tools.ietf.org/html/rfc2616">RFC 2616</a>. Things went quiet in the IETF HTTP world until 2007. We'll come back to that shortly.</p>
    <div>
      <h2>A History of SSL and TLS</h2>
      <a href="#a-history-of-ssl-and-tls">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6pnAQiXiXCFpQpSkznYI1T/80a5516f4dafa64d1a5b60733b14c913/ssl-tls-standardisation.png" />
            
            </figure><p>Switching tracks to SSL, we see that the SSL 2.0 specification was released sometime around 1995, and that SSL 3.0 was released in November 1996. Interestingly, SSL 3.0 is described by <a href="https://tools.ietf.org/html/rfc6101">RFC 6101</a>, which was released in August 2011. This sits in the <b>Historic</b> category, which "is usually done to document ideas that were considered and discarded, or protocols that were already historic when it was decided to document them." according to the <a href="https://www.ietf.org/blog/iesg-statement-designating-rfcs-historic/?primary_topic=7&amp;">IETF</a>. In this case it is advantageous to have an IETF-owned document that describes SSL 3.0 because it can be used as a canonical reference elsewhere.</p><p>Of more interest to us is how SSL inspired the development of TLS, which began life as <a href="https://tools.ietf.org/html/draft-ietf-tls-protocol-00">draft-ietf-tls-protocol-00</a> in November 1996. This went through 6 draft versions and was published as <a href="https://tools.ietf.org/html/rfc2246">RFC 2246</a> - TLS 1.0 at the start of 1999.</p><p>Between 1995 and 1999, the SSL and TLS protocols were used to secure HTTP communications on the Internet. This worked just fine as a de facto standard. It wasn't until January 1998 that the formal standardisation process for HTTPS was started with the publication of I-D <a href="https://tools.ietf.org/html/draft-ietf-tls-https-00">draft-ietf-tls-https-00</a>. That work concluded in May 2000 with the publication of <a href="https://tools.ietf.org/html/rfc2818">RFC 2818</a> - HTTP over TLS.</p><p>TLS continued to evolve between 2000 and 2007, with the standardisation of TLS 1.1 and 1.2. 
There was a gap of 7 years until work began on the next version of TLS, which was adopted as <a href="https://tools.ietf.org/html/draft-ietf-tls-tls13-00">draft-ietf-tls-tls13-00</a> in April 2014 and, after 28 drafts, completed as <a href="https://tools.ietf.org/html/rfc8446">RFC 8446</a> - TLS 1.3 in August 2018.</p>
    <div>
      <h2>Internet standardisation process</h2>
      <a href="#internet-standardisation-process">
        
      </a>
    </div>
    <p>After taking a small look at the timeline, I hope you can build a sense of how the IETF works. One generalisation for the way that Internet standards take shape is that researchers or engineers design experimental protocols that suit their specific use case. They experiment with protocols, in public or private, at various levels of scale. The data helps to identify improvements or issues. The work may be published to explain the experiment, to gather wider input or to help find other implementers. Uptake of this early work by others may make it a de facto standard; eventually there may be sufficient momentum that formal standardisation becomes an option.</p><p>The status of a protocol can be an important consideration for organisations that may be thinking about implementing, deploying or in some way using it. A formal standardisation process can make a de facto standard more attractive because it tends to provide stability. The stewardship and guidance is provided by an organisation, such as the IETF, that reflects a wider range of experiences. However, it is worth highlighting that not all formal standards succeed.</p><p>The process of creating a final standard is almost as important as the standard itself. Taking an initial idea and inviting contribution from people with wider knowledge, experience and use cases can help produce something that will be of more use to a wider population. However, the standardisation process is not always easy. There are pitfalls and hurdles. Sometimes the process takes so long that the output is no longer relevant.</p><p>Each Standards Defining Organisation tends to have its own process that is geared around its field and participants. Explaining all of the details about how the IETF works is well beyond the scope of this blog. The IETF's "<a href="https://www.ietf.org/how/">How we work</a>" page is an excellent starting point that covers many aspects. 
The best way to form an understanding, as usual, is to get involved yourself. This can be as easy as joining an email list or adding to a discussion on a relevant GitHub repository.</p>
    <div>
      <h2>Cloudflare's running code</h2>
      <a href="#cloudflares-running-code">
        
      </a>
    </div>
    <p>Cloudflare is proud to be an early adopter of new and evolving protocols. We have a long record of adopting new standards early, such as <a href="/introducing-http2/">HTTP/2</a>. We also test features that are experimental or yet to be final, like <a href="/introducing-tls-1-3/">TLS 1.3</a> and <a href="/introducing-spdy/">SPDY</a>.</p><p>In relation to the IETF standardisation process, deploying this running code on real networks across a diverse body of websites helps us understand how well the protocol will work in practice. We combine our existing expertise with experimental information to help improve the running code and, where it makes sense, feed back issues or improvements to the WG that is standardising a protocol.</p><p>Testing new things is not the only priority. Part of being an innovator is knowing when it is time to move forward and put older innovations in the rear view mirror. Sometimes this relates to security-oriented protocols; for example, Cloudflare <a href="/sslv3-support-disabled-by-default-due-to-vulnerability/">disabled SSLv3 by default</a> due to the POODLE vulnerability. In other cases, protocols become superseded by more technologically advanced ones; Cloudflare <a href="/deprecating-spdy/">deprecated SPDY</a> support in favour of HTTP/2.</p><p>The introduction and deprecation of relevant protocols are represented on the Secure Web Timeline as <b>orange lines</b>. Dotted vertical lines help correlate Cloudflare events to relevant IETF documents. For example, Cloudflare introduced TLS 1.3 support in September 2016, with the final document, <a href="https://tools.ietf.org/html/rfc8446">RFC 8446</a>, being published almost two years later in August 2018.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ptcxVRf8P4wmKMz6uSzAk/4d37f0581a865bc4f120282e7d9d5ebf/cf-events.png" />
            
            </figure>
    <div>
      <h2>Refactoring in HTTPbis</h2>
      <a href="#refactoring-in-httpbis">
        
      </a>
    </div>
    <p>HTTP/1.1 is a very successful protocol and the timeline shows that there wasn't much activity in the IETF after 1999. However, the true reflection is that years of active use gave implementation experience that unearthed latent issues with <a href="https://tools.ietf.org/html/rfc2616">RFC 2616</a>, which caused some interoperability issues. Furthermore, the protocol was extended by other RFCs like 2817 and 2818. It was decided in 2007 to kickstart a new activity to improve the HTTP protocol specification. This was called HTTPbis (where "bis" stems from Latin meaning "two", "twice" or "repeat") and it took the form of a new Working Group. The original <a href="https://tools.ietf.org/wg/httpbis/charters?item=charter-httpbis-2007-10-23.txt">charter</a> does a good job of describing the problems it set out to solve.</p><p>In short, HTTPbis decided to refactor <a href="https://tools.ietf.org/html/rfc2616">RFC 2616</a>. It would incorporate errata fixes and fold in some aspects of other specifications that had been published in the meantime. It was decided to split the document up into parts. This resulted in 6 I-Ds published in December 2007:</p><ul><li><p>draft-ietf-httpbis-p1-messaging</p></li><li><p>draft-ietf-httpbis-p2-semantics</p></li><li><p>draft-ietf-httpbis-p4-conditional</p></li><li><p>draft-ietf-httpbis-p5-range</p></li><li><p>draft-ietf-httpbis-p6-cache</p></li><li><p>draft-ietf-httpbis-p7-auth</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5cCDzKbc2DBLJgD1cdLCor/c928470939df5acd6112503c41c78db3/http11-refactor.png" />
            
            </figure><p>The diagram shows how this work progressed through a lengthy drafting process of 7 years, with 27 draft versions being released, before final standardisation. In June 2014, the so-called RFC 723x series was released (where x ranges from 0 to 5). The Chair of the HTTPbis WG celebrated this achievement with the acclamation "<a href="https://www.mnot.net/blog/2014/06/07/rfc2616_is_dead">RFC2616 is Dead</a>". If it wasn't clear, these new documents obsoleted the older <a href="https://tools.ietf.org/html/rfc2616">RFC 2616</a>.</p>
    <div>
      <h2>What does any of this have to do with HTTP/3?</h2>
      <a href="#what-does-any-of-this-have-to-do-with-http-3">
        
      </a>
    </div>
    <p>While the IETF was busy working on the RFC 723x series the world didn't stop. People continued to enhance, extend and experiment with HTTP on the Internet. Among them were Google, who had started to experiment with something called SPDY (pronounced speedy). This protocol was touted as improving the performance of web browsing, a principal use case for HTTP. At the end of 2009 SPDY v1 was announced, and it was quickly followed by SPDY v2 in 2010.</p><p>I want to avoid going into the technical details of SPDY. That's a topic for another day. What is important is to understand that SPDY took the core paradigms of HTTP and modified the interchange format slightly in order to gain improvements. With hindsight, we can see that HTTP has clearly delimited semantics and syntax. Semantics describe the concept of request and response exchanges including: methods, status codes, header fields (metadata) and bodies (payload). Syntax describes how to map semantics to bytes on the wire.</p><p>HTTP/0.9, 1.0 and 1.1 share many semantics. They also share syntax in the form of character strings that are sent over TCP connections. SPDY took HTTP/1.1 semantics and changed the syntax from strings to binary. This is a really interesting topic but we will go no further down that rabbit hole today.</p><p>Google's experiments with SPDY showed that there was promise in changing HTTP syntax, and value in keeping the existing HTTP semantics. For example, keeping the familiar https:// format of URLs avoided many problems that could have affected adoption.</p><p>Having seen some of the positive outcomes, the IETF decided it was time to consider what HTTP/2.0 might look like. The <a href="https://github.com/httpwg/wg-materials/blob/gh-pages/ietf83/HTTP2.pdf">slides</a> from the HTTPbis session held during IETF 83 in March 2012 show the requirements, goals and measures of success that were set out. 
It also clearly states that "HTTP/2.0 only signifies that the wire format isn't compatible with that of HTTP/1.x".</p>
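To make the semantics/syntax split concrete, here is a hypothetical Python sketch contrasting the two. Only the text form is genuine HTTP/1.1 syntax; the binary encoding below is a deliberately simplified toy of my own, not real SPDY or HTTP/2 framing (which uses frames and HPACK header compression):

```python
import struct

def http11_syntax(req):
    """HTTP/1.1 syntax: human-readable character strings over TCP."""
    lines = [f"{req['method']} {req['path']} HTTP/1.1"]
    lines += [f"{k}: {v}" for k, v in req["headers"].items()]
    return ("\r\n".join(lines) + "\r\n\r\n").encode()

def toy_binary_syntax(req):
    """A toy binary syntax: each field as a length-prefixed byte string.
    Illustrative only; real HTTP/2 framing looks nothing like this."""
    fields = [req["method"], req["path"]]
    for k, v in req["headers"].items():
        fields += [k, v]
    out = b""
    for field in fields:
        data = field.encode()
        out += struct.pack("!H", len(data)) + data  # 2-byte length prefix
    return out

# The same semantics (method, path, headers) rendered in two syntaxes:
request = {"method": "GET", "path": "/", "headers": {"host": "example.com"}}
print(http11_syntax(request))
print(toy_binary_syntax(request))
```

Both encodings carry identical semantics; a receiver parsing either one recovers the same method, path and headers, which is exactly why SPDY could change the wire format without changing what HTTP means.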
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3EQMui9QKRGR8Vzcu5wMZA/229e6a1cb1cc1fa78da96dc34f59c220/http2-standardisation.png" />
            
            </figure><p>During that meeting the community was invited to share proposals. I-Ds that were submitted for consideration included <a href="https://tools.ietf.org/html/draft-mbelshe-httpbis-spdy-00">draft-mbelshe-httpbis-spdy-00</a>, <a href="https://tools.ietf.org/html/draft-montenegro-httpbis-speed-mobility-00">draft-montenegro-httpbis-speed-mobility-00</a> and <a href="https://tools.ietf.org/html/draft-tarreau-httpbis-network-friendly-00">draft-tarreau-httpbis-network-friendly-00</a>. Ultimately, the SPDY draft was adopted and in November 2012 work began on <a href="https://tools.ietf.org/html/draft-ietf-httpbis-http2-00">draft-ietf-httpbis-http2-00</a>. After 18 drafts across a period of just over 2 years, <a href="https://tools.ietf.org/html/rfc7540">RFC 7540</a> - HTTP/2 was published in 2015. During this specification period, the precise syntax of HTTP/2 diverged just enough to make HTTP/2 and SPDY incompatible.</p><p>These years were a very busy period for the HTTP-related work at the IETF, with the HTTP/1.1 refactor and HTTP/2 standardisation taking place in parallel. This is in stark contrast to the many years of quiet in the early 2000s. Be sure to check out the full timeline to really appreciate the amount of work that took place.</p><p>Although HTTP/2 was in the process of being standardised, there was still benefit to be had from using and experimenting with SPDY. Cloudflare <a href="/spdy-now-one-click-simple-for-any-website/">introduced support for SPDY</a> in August 2012 and only deprecated it in February 2018 when our statistics showed that less than 4% of Web clients continued to want SPDY. Meanwhile, we <a href="/introducing-http2/">introduced HTTP/2</a> support in December 2015, not long after the RFC was published, when our analysis indicated that a meaningful proportion of Web clients could take advantage of it.</p><p>Web clients that supported the SPDY and HTTP/2 protocols preferred the secure option of using TLS. 
The introduction of <a href="/introducing-universal-ssl/">Universal SSL</a> in September 2014 helped ensure that all websites signed up to Cloudflare were able to take advantage of these new protocols as we introduced them.</p>
    <div>
      <h3>gQUIC</h3>
      <a href="#gquic">
        
      </a>
    </div>
    <p>Google continued to experiment, and between 2012 and 2015 they released SPDY v3 and v3.1. They also started working on gQUIC (pronounced, at the time, as quick) and the initial public specification was made available in early 2012.</p><p>The early versions of gQUIC made use of the SPDY v3 form of HTTP syntax. This choice made sense because HTTP/2 was not yet finished. The SPDY binary syntax was packaged into QUIC packets that could be sent in UDP datagrams. This was a departure from the TCP transport that HTTP traditionally relied on. When stacked up together this looked like:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/40oAdMSmePG37lMEb8Odfi/b5cf02bbe9256889bd0cc103a34484a9/gquic-stack.png" />
            
            </figure><p>SPDY over gQUIC layer cake</p><p>gQUIC used clever tricks to achieve performance. One of these was to break the clear layering between application and transport. What this meant in practice was that gQUIC only ever supported HTTP. So much so that gQUIC, termed "QUIC" at the time, was synonymous with being the next candidate version of HTTP. Despite the continued changes to QUIC over the last few years, which we'll touch on momentarily, to this day many people understand the term QUIC to mean that initial HTTP-only variant. Unfortunately this is a regular source of confusion when discussing the protocol.</p><p>gQUIC continued to evolve and eventually switched over to a syntax much closer to HTTP/2. So close in fact that most people simply called it "HTTP/2 over QUIC". However, because of technical constraints there were some very subtle differences. One example relates to how the HTTP headers were serialized and exchanged. It is a minor difference but in effect means that HTTP/2 over gQUIC was incompatible with the IETF's HTTP/2.</p><p>Last but not least, we always need to consider the security aspects of Internet protocols. gQUIC opted not to use TLS to provide security. Instead Google developed a different approach called QUIC Crypto. One of the interesting aspects of this was a new method for speeding up security handshakes. A client that had previously established a secure session with a server could reuse information to do a "zero <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">round-trip time</a>", or 0-RTT, handshake. 0-RTT was later incorporated into TLS 1.3.</p>
    <div>
      <h2>Are we at the point where you can tell me what HTTP/3 is yet?</h2>
      <a href="#are-we-at-the-point-where-you-can-tell-me-what-http-3-is-yet">
        
      </a>
    </div>
    <p>Almost.</p><p>By now you should be familiar with how standardisation works, and gQUIC is not much different. There was sufficient interest that the Google specifications were written up in I-D format. In June 2015 <a href="https://tools.ietf.org/html/draft-tsvwg-quic-protocol-00">draft-tsvwg-quic-protocol-00</a>, entitled "QUIC: A UDP-based Secure and Reliable Transport for HTTP/2", was submitted. Keep in mind my earlier statement that the syntax was almost-HTTP/2.</p><p>Google <a href="https://groups.google.com/a/chromium.org/forum/#!topic/proto-quic/otGKB4ytAyc">announced</a> that a Bar BoF would be held at IETF 93 in Prague. For those curious about what a "Bar BoF" is, please consult <a href="https://tools.ietf.org/html/rfc6771">RFC 6771</a>. Hint: BoF stands for Birds of a Feather.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/364tlrFLOtgAYBYxHrOXy8/18f5cebace8c29778e06109fe878c3b8/quic-standardisation.png" />
            
            </figure><p>The outcome of this engagement with the IETF was, in a nutshell, that QUIC seemed to offer many advantages at the transport layer and that it should be decoupled from HTTP. The clear separation between layers should be re-introduced. Furthermore, there was a preference for returning to a TLS-based handshake (which wasn't so bad since TLS 1.3 was underway at this stage, and it was incorporating 0-RTT handshakes).</p><p>About a year later, in 2016, a new set of I-Ds was submitted:</p><ul><li><p><a href="https://tools.ietf.org/html/draft-hamilton-quic-transport-protocol-00">draft-hamilton-quic-transport-protocol-00</a></p></li><li><p><a href="https://tools.ietf.org/html/draft-thomson-quic-tls-00">draft-thomson-quic-tls-00</a></p></li><li><p><a href="https://tools.ietf.org/html/draft-iyengar-quic-loss-recovery-00">draft-iyengar-quic-loss-recovery-00</a></p></li><li><p><a href="https://tools.ietf.org/html/draft-shade-quic-http2-mapping-00">draft-shade-quic-http2-mapping-00</a></p></li></ul><p>Here's where another source of confusion about HTTP and QUIC enters the fray. <a href="https://tools.ietf.org/html/draft-shade-quic-http2-mapping-00">draft-shade-quic-http2-mapping-00</a> is entitled "HTTP/2 Semantics Using The QUIC Transport Protocol" and it describes itself as "a mapping of HTTP/2 semantics over QUIC". However, this is a misnomer. HTTP/2 was about changing syntax while maintaining semantics. Furthermore, "HTTP/2 over gQUIC" was never an accurate description of the syntax either, for the reasons I outlined earlier. Hold that thought.</p><p>This IETF version of QUIC was to be an entirely new transport protocol. That's a large undertaking and before diving head-first into such commitments, the IETF likes to gauge actual interest from its members. To do this, a formal <a href="https://www.ietf.org/how/bofs/">Birds of a Feather</a> meeting was held at the IETF 96 meeting in Berlin in 2016. 
I was lucky enough to attend the session in person and the <a href="https://datatracker.ietf.org/meeting/96/materials/slides-96-quic-0">slides</a> don't do it justice. The meeting was attended by hundreds, as shown by Adam Roach's <a href="https://www.flickr.com/photos/adam-roach/28343796722/in/photostream/">photograph</a>. At the end of the session consensus was reached; QUIC would be adopted and standardised at the IETF.</p><p>The first IETF QUIC I-D for mapping HTTP to QUIC, <a href="https://tools.ietf.org/html/draft-ietf-quic-http-00">draft-ietf-quic-http-00</a>, took the Ronseal approach and simplified its name to "HTTP over QUIC". Unfortunately, it didn't finish the job completely and there were many instances of the term HTTP/2 throughout the body. Mike Bishop, the I-D's new editor, identified this and started to fix the HTTP/2 misnomer. In the 01 draft, the description changed to "a mapping of HTTP semantics over QUIC".</p><p>Gradually, over time and versions, the use of the term "HTTP/2" decreased and the instances became mere references to parts of <a href="https://tools.ietf.org/html/rfc7540">RFC 7540</a>. Roll forward two years to October 2018 and the I-D is now at version 16. While HTTP over QUIC bears similarity to HTTP/2, it ultimately is an independent, non-backwards-compatible HTTP syntax. However, to those who don't track IETF development very closely (a very, very large percentage of the Earth's population), the document name doesn't capture this difference. One of the main points of standardisation is to aid communication and interoperability. Yet a simple thing like naming is a major contributor to confusion in the community.</p><p>Recall what was said in 2012, "HTTP/2.0 only signifies that the wire format isn't compatible with that of HTTP/1.x". The IETF followed that existing cue. After much deliberation in the lead up to, and during, IETF 103, consensus was reached to rename "HTTP over QUIC" to HTTP/3. 
The world is now in a better place and we can move on to more important debates.</p>
    <div>
      <h2>But RFC 7230 and 7231 disagree with your definition of semantics and syntax!</h2>
      <a href="#but-rfc-7230-and-7231-disagree-with-your-definition-of-semantics-and-syntax">
        
      </a>
    </div>
    <p>Sometimes document titles can be confusing. The present HTTP documents that describe syntax and semantics are:</p><ul><li><p><a href="https://tools.ietf.org/html/rfc7230">RFC 7230</a> - Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing</p></li><li><p><a href="https://tools.ietf.org/html/rfc7231">RFC 7231</a> - Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content</p></li></ul><p>It is possible to read too much into these names and believe that fundamental HTTP semantics are specific to versions of HTTP, i.e. HTTP/1.1. However, this is an unintended side effect of the HTTP family tree. The good news is that the HTTPbis Working Group are trying to address this. Some brave members are going through another round of document revision, as Roy Fielding put it, "one more time!". This work is underway right now and is known as the HTTP Core activity (you may also have heard of this under the moniker HTTPtre or HTTPter; naming things is hard). This will condense the six drafts down to three:</p><ul><li><p>HTTP Semantics (draft-ietf-httpbis-semantics)</p></li><li><p>HTTP Caching (draft-ietf-httpbis-caching)</p></li><li><p>HTTP/1.1 Message Syntax and Routing (draft-ietf-httpbis-messaging)</p></li></ul><p>Under this new structure, it becomes more evident that HTTP/2 and HTTP/3 are syntax definitions for the common HTTP semantics. This doesn't mean they don't have their own features beyond syntax, but it should help frame discussion going forward.</p>
    <div>
      <h2>Pulling it all together</h2>
      <a href="#pulling-it-all-together">
        
      </a>
    </div>
    <p>This blog post has taken a shallow look at the standardisation process for HTTP in the IETF across the last three decades. Without touching on many technical details, I've tried to explain how we have ended up with HTTP/3 today. If you skipped the good bits in the middle and are looking for a one-liner, here it is: HTTP/3 is just a new HTTP syntax that works on IETF QUIC, a UDP-based multiplexed and secure transport. There are many interesting technical areas to explore further but that will have to wait for another day.</p><p>In the course of this post, we explored important chapters in the development of HTTP and TLS but did so in isolation. We close out the blog by pulling them all together into the complete Secure Web Timeline presented below. You can use this to investigate the detailed history at your leisure. And for the super sleuths, be sure to check out the <a href="/content/images/2019/01/web_timeline_large1.svg">full version including draft numbers</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3eL8vJkYylVmR4T5Aa1Zdf/2f2929308ee42e450917639874835c1d/cf-secure-web-timeline-1.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">1upTpaZ3pXyoDXxMvZoEC8</guid>
            <dc:creator>Lucas Pardue</dc:creator>
        </item>
        <item>
            <title><![CDATA[The Road to QUIC]]></title>
            <link>https://blog.cloudflare.com/the-road-to-quic/</link>
            <pubDate>Thu, 26 Jul 2018 15:04:36 GMT</pubDate>
            <description><![CDATA[ QUIC (Quick UDP Internet Connections) is a new encrypted-by-default Internet transport protocol that provides a number of improvements designed to accelerate HTTP traffic as well as make it more secure, with the intended goal of eventually replacing TCP and TLS on the web. ]]></description>
            <content:encoded><![CDATA[ <p>QUIC (Quick UDP Internet Connections) is a new encrypted-by-default Internet transport protocol that provides a number of improvements designed to accelerate HTTP traffic as well as make it more secure, with the intended goal of eventually replacing TCP and TLS on the web. In this blog post we are going to outline some of the key features of QUIC and how they benefit the web, as well as some of the challenges of supporting this radical new protocol.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3XUtUi1ckk243XN0PWk2zo/68dcab940f1bcbb37cb3d3b7f10d1ddb/QUIC-Badge-Dark-RGB-Horiz.png" />
            
            </figure><p>There are in fact two protocols that share the same name: “Google QUIC” (“gQUIC” for short), is the original protocol that was designed by Google engineers several years ago, which, after years of experimentation, has now been adopted by the <a href="https://ietf.org/">IETF</a> (Internet Engineering Task Force) for standardization.</p><p>“IETF QUIC” (just “QUIC” from now on) has already diverged from gQUIC quite significantly such that it can be considered a separate protocol. From the wire format of the packets, to the handshake and the mapping of HTTP, QUIC has improved the original gQUIC design thanks to open collaboration from many organizations and individuals, with the shared goal of making the Internet faster and more secure.</p><p>So, what are the improvements QUIC provides?</p>
    <div>
      <h3>Built-in security (and performance)</h3>
      <a href="#built-in-security-and-performance">
        
      </a>
    </div>
    <p>One of QUIC’s more radical deviations from the now venerable TCP is the stated design goal of providing a secure-by-default transport protocol. QUIC accomplishes this by providing security features, like authentication and encryption, that are typically handled by a higher-layer protocol (like TLS), from within the transport protocol itself.</p><p>The initial QUIC handshake combines the typical three-way handshake that you get with TCP with the TLS 1.3 handshake, which provides authentication of the end-points as well as negotiation of cryptographic parameters. For those familiar with the TLS protocol, QUIC replaces the TLS record layer with its own framing format, while keeping the same TLS handshake messages.</p><p>Not only does this ensure that the connection is always authenticated and encrypted, but it also makes the initial connection establishment faster as a result: the typical QUIC handshake only takes a single round-trip between client and server to complete, compared to the two round-trips required for the TCP and TLS 1.3 handshakes combined.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1CWL4LYn5pKIq6nJjHfPeK/0683aa058799594bf35d41605e05b4c1/http-request-over-tcp-tls_2x.png" />
            
            </figure><p> </p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5rT3ge707TKiFEIiyEL4IW/3588b3d9d434ca9b705d75c2a72de090/http-request-over-quic_2x.png" />
            
            </figure><p>But QUIC goes even further, and also encrypts additional connection metadata that could be abused by middle-boxes to interfere with connections. For example, packet numbers could be used by passive on-path attackers to correlate users' activity over multiple network paths when connection migration is employed (see below). By encrypting packet numbers QUIC ensures that they can't be used to correlate activity by any entity other than the end-points in the connection.</p><p>Encryption can also be an effective remedy to ossification, which makes flexibility built into a protocol (like for example being able to negotiate different versions of that protocol) impossible to use in practice due to wrong assumptions made by implementations (ossification is what <a href="/why-tls-1-3-isnt-in-browsers-yet/">delayed deployment of TLS 1.3</a> for so long, which <a href="/you-get-tls-1-3-you-get-tls-1-3-everyone-gets-tls-1-3">was only possible</a> after several changes, designed to prevent ossified middle-boxes from incorrectly blocking the new revision of the TLS protocol, were adopted).</p>
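To make the handshake round-trip saving concrete, here is a toy latency model. It is a sketch with illustrative numbers (function names and the 100 ms RTT are made up for the example), not measurements:

```python
# Rough time-to-first-request model, assuming no packet loss and ignoring
# server processing time. The handshake round-trip counts come from the text:
# TCP (1 RTT) + TLS 1.3 (1 RTT) versus QUIC's combined 1-RTT handshake.

def time_to_first_request(rtt_ms: float, handshake_rtts: int) -> float:
    """The client can send its first HTTP request once the handshake completes."""
    return rtt_ms * handshake_rtts

rtt = 100.0  # e.g. a mobile connection with a 100 ms round-trip time

tcp_tls13 = time_to_first_request(rtt, 2)  # TCP handshake + TLS 1.3 handshake
quic = time_to_first_request(rtt, 1)       # combined transport + crypto handshake

print(tcp_tls13)  # 200.0
print(quic)       # 100.0
```

On this toy model, QUIC saves exactly one RTT before the first request byte can be sent; on high-latency paths that saving is what users perceive as faster page loads.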
    <div>
      <h3>Head-of-line blocking</h3>
      <a href="#head-of-line-blocking">
        
      </a>
    </div>
    <p>One of the main improvements delivered by <a href="/introducing-http2/">HTTP/2</a> was the ability to multiplex different HTTP requests onto the same TCP connection. This allows HTTP/2 applications to process requests concurrently and better utilize the network bandwidth available to them.</p><p>This was a big improvement over the then status quo, which required applications to initiate multiple TCP+TLS connections if they wanted to process multiple HTTP/1.1 requests concurrently (e.g. when a browser needs to fetch both CSS and Javascript assets to render a web page). Creating new connections requires repeating the initial handshakes multiple times, as well as going through the initial congestion window ramp-up, which means that rendering of web pages is slowed down. Multiplexing HTTP exchanges avoids all that.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5FcudeYzlfQKPqRI8YXk45/d271d04cc9150d0debd301c6481157d5/multiplexing.svg" />
            
            </figure><p>This, however, has a downside: since multiple requests/responses are transmitted over the same TCP connection, they are all equally affected by packet loss (e.g. due to network congestion), even if the data that was lost only concerned a single request. This is called “head-of-line blocking”.</p><p>QUIC goes a bit deeper and provides first-class support for multiplexing, such that different HTTP streams can in turn be mapped to different QUIC transport streams. The streams still share the same QUIC connection, so no additional handshakes are required and congestion state is shared, but QUIC streams are delivered independently, such that in most cases packet loss affecting one stream doesn't affect the others.</p><p>This can dramatically reduce the time required to, for example, render complete web pages (with CSS, Javascript, images, and other kinds of assets), particularly when crossing highly congested networks with high packet loss rates.</p>
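The difference can be sketched with a toy delivery model. The packets, stream names, and loss pattern below are hypothetical; this is not a real transport implementation:

```python
# Toy model of head-of-line blocking. Packets are (stream_id, seq); one packet
# from stream "A" is lost on first transmission and arrives after everything else.

packets = [("A", 0), ("B", 0), ("B", 1), ("A", 1)]
lost = {("A", 0)}  # dropped, retransmitted last

arrival_order = [p for p in packets if p not in lost] + list(lost)

# TCP-style delivery: one global byte stream; nothing after a gap is handed to
# the application until the gap is filled, so the loss stalls stream B too.
def tcp_deliverable(arrivals, order):
    got = set(arrivals)
    delivered = []
    for p in order:
        if p in got:
            delivered.append(p)
        else:
            break  # gap: everything behind it is blocked
    return delivered

# QUIC-style delivery: ordering is enforced per stream, so stream B proceeds.
def quic_deliverable(arrivals):
    by_stream = {}
    for sid, seq in arrivals:
        by_stream.setdefault(sid, set()).add(seq)
    delivered = []
    for sid, seqs in by_stream.items():
        seq = 0
        while seq in seqs:
            delivered.append((sid, seq))
            seq += 1
    return delivered

before_retx = arrival_order[:-1]  # everything except the late retransmission
print(tcp_deliverable(before_retx, packets))  # [] - the one loss blocks it all
print(sorted(quic_deliverable(before_retx)))  # [('B', 0), ('B', 1)]
```

In the TCP-style model a single lost packet keeps both requests waiting; in the QUIC-style model stream B's data is delivered as soon as it arrives.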
    <div>
      <h3>That easy, uh?</h3>
      <a href="#that-easy-uh">
        
      </a>
    </div>
    <p>In order to deliver on its promises, the QUIC protocol needs to break some of the assumptions that were taken for granted by many network applications, potentially making implementations and deployment of QUIC more difficult.</p><p>QUIC is designed to be delivered on top of UDP datagrams, to ease deployment and avoid problems coming from network appliances that drop packets from unknown protocols, since most appliances already support UDP. This also allows QUIC implementations to live in user-space, so that, for example, browsers will be able to implement new protocol features and ship them to their users without having to wait for operating system updates.</p><p>However, despite the intended goal of avoiding breakage, it also makes preventing abuse and correctly routing packets to the correct end-points more challenging.</p>
    <div>
      <h3>One NAT to bring them all and in the darkness bind them</h3>
      <a href="#one-nat-to-bring-them-all-and-in-the-darkness-bind-them">
        
      </a>
    </div>
    <p>Typical NAT routers can keep track of TCP connections passing through them by using the traditional 4-tuple (source IP address and port, and destination IP address and port), and by observing TCP SYN, ACK and FIN packets transmitted over the network, they can detect when a new connection is established and when it is terminated. This allows them to precisely manage the lifetime of NAT bindings, the association between the internal IP address and port, and the external ones.</p><p>With QUIC this is not yet possible, since NAT routers deployed in the wild today do not yet understand QUIC, so they typically fall back to the default and less precise handling of UDP flows, which usually involves using <a href="https://conferences.sigcomm.org/imc/2010/papers/p260.pdf">arbitrary, and at times very short, timeouts</a>, which could affect long-running connections.</p><p>When a NAT rebinding happens (due to a timeout, for example), the end-point on the outside of the NAT perimeter will see packets coming from a different source port than the one observed when the connection was originally established, which makes it impossible to track connections using only the 4-tuple.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2nozSxNZ6rsrj02Tb1IACy/fa90467738d8e89865b59be3ee4e5cf6/NAT-timeout-_2x.png" />
            
            </figure><p>And it's not just NAT! One of the features QUIC is intended to deliver is called “connection migration” and will allow QUIC end-points to migrate connections to different IP addresses and network paths at will. For example, a mobile client will be able to migrate QUIC connections between cellular data networks and WiFi when a known WiFi network becomes available (like when its user enters their favorite coffee shop).</p><p>QUIC tries to address this problem by introducing the concept of a connection ID: an arbitrary opaque blob of variable length, carried by QUIC packets, that can be used to identify a connection. End-points can use this ID to track connections that they are responsible for without the need to check the 4-tuple (in practice there might be multiple IDs identifying the same connection, for example to avoid linking different paths when connection migration is used, but that behavior is controlled by the end-points, not the middle-boxes).</p><p>However this also poses a problem for network operators that use anycast addressing and <a href="/path-mtu-discovery-in-practice/">ECMP routing</a>, where a single destination IP address can potentially identify hundreds or even thousands of servers. Since edge routers used by these networks also don't yet know how to handle QUIC traffic, UDP packets belonging to the same QUIC connection (that is, with the same connection ID) but with different 4-tuples (due to NAT rebinding or connection migration) might end up being routed to different servers, thus breaking the connection.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3y7Ae7Hm6zUy6SyL315Oxa/366eee8f4d5a89458556b77c53aacecb/anycast-cdn.png" />
            
            </figure><p>In order to address this, network operators might need to employ smarter layer 4 load balancing solutions, which can be implemented in software and deployed without the need to touch edge routers (see for example Facebook's <a href="https://github.com/facebookincubator/katran">Katran</a> project).</p>
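As a rough illustration of why routing on the connection ID helps, here is a toy hash-based packet router. It is in the spirit of such L4 load balancers, not actual Katran code, and the server names, addresses, and connection ID are made up:

```python
# Toy L4 routing: the same hash function applied to two different keys.
import hashlib

SERVERS = ["server-1", "server-2", "server-3"]

def pick_server(key: bytes) -> str:
    """Deterministically map a routing key to a back-end server."""
    digest = hashlib.sha256(key).digest()
    return SERVERS[int.from_bytes(digest[:8], "big") % len(SERVERS)]

# Routing on the 4-tuple: a NAT rebinding changes the source port, so the key
# changes mid-connection and the packet may land on a different server.
before_rebind = b"198.51.100.7:40000->203.0.113.1:443"
after_rebind = b"198.51.100.7:49152->203.0.113.1:443"

# Routing on the connection ID: the ID travels inside the QUIC packet and is
# unchanged by NAT rebinding or migration, so the mapping stays stable.
conn_id = b"example-connection-id"

print(pick_server(before_rebind), pick_server(after_rebind))  # may differ
print(pick_server(conn_id) == pick_server(conn_id))  # always True
```

The key design point is that the routing key must be something the end-points control and keep stable across path changes, which is exactly what the connection ID provides.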
    <div>
      <h3>QPACK</h3>
      <a href="#qpack">
        
      </a>
    </div>
    <p>Another benefit introduced by HTTP/2 was <a href="/hpack-the-silent-killer-feature-of-http-2/">header compression (or HPACK)</a>, which allows HTTP/2 end-points to reduce the amount of data transmitted over the network by removing redundancies from HTTP requests and responses.</p><p>In particular, among other techniques, HPACK employs dynamic tables populated with headers that were sent (or received) from previous HTTP requests (or responses), allowing end-points to reference previously encountered headers in new requests (or responses), rather than having to transmit them all over again.</p><p>HPACK's dynamic tables need to be synchronized between the encoder (the party that sends an HTTP request or response) and the decoder (the one that receives them), otherwise the decoder will not be able to decode what it receives.</p><p>With HTTP/2 over TCP this synchronization is transparent: since the transport layer (TCP) takes care of delivering HTTP requests and responses in the same order they were sent in, the instructions for updating the tables can simply be sent by the encoder as part of the request (or response) itself, making the encoding very simple. 
But for QUIC this is more complicated.</p><p>QUIC can deliver multiple HTTP requests (or responses) over different streams independently, which means that while it takes care of delivering data in order as far as a single stream is concerned, there are no ordering guarantees across multiple streams.</p><p>For example, if a client sends HTTP request A over QUIC stream A, and request B over stream B, it might happen, due to packet reordering or loss in the network, that request B is received by the server before request A, and if request B was encoded such that it referenced a header from request A, the server will be unable to decode it since it didn't yet see request A.</p><p>In the gQUIC protocol this problem was solved by simply serializing all HTTP request and response headers (but not the bodies) over the same gQUIC stream, which meant headers would get delivered in order no matter what. This is a very simple scheme that allows implementations to reuse a lot of their existing HTTP/2 code, but on the other hand it increases the head-of-line blocking that QUIC was designed to reduce. The IETF QUIC working group thus designed a new mapping between HTTP and QUIC (“HTTP/QUIC”) as well as a new header compression scheme called “QPACK”.</p><p>In the latest draft of the HTTP/QUIC mapping and the QPACK spec, each HTTP request/response exchange uses its own bidirectional QUIC stream, so there's no head-of-line blocking. In addition, in order to support QPACK, each peer creates two additional unidirectional QUIC streams, one used to send QPACK table updates to the other peer, and one to acknowledge updates received by the other side. This way, a QPACK encoder can use a dynamic table reference only after it has been explicitly acknowledged by the decoder.</p>
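The "reference only acknowledged entries" rule can be sketched with a toy encoder. This is an illustrative model only (the class and its methods are invented for this example; the real wire encoding in the QPACK spec is far more involved):

```python
# Toy model of a QPACK-style encoder that only emits an index reference once
# the decoder has acknowledged the corresponding dynamic table entry.

class ToyQpackEncoder:
    def __init__(self):
        self.table = []     # dynamic table: list of (name, value) entries
        self.acked = set()  # indices the decoder has acknowledged

    def encode(self, name, value):
        """Return either an index reference or a literal header."""
        entry = (name, value)
        if entry in self.table and self.table.index(entry) in self.acked:
            return ("indexed", self.table.index(entry))
        if entry not in self.table:
            # In real QPACK this insertion travels on the encoder's
            # unidirectional stream; here we just record it.
            self.table.append(entry)
        return ("literal", name, value)

    def on_ack(self, index):
        """The decoder acknowledged an entry on its unidirectional stream."""
        self.acked.add(index)

enc = ToyQpackEncoder()
print(enc.encode("x-example", "1"))  # ('literal', 'x-example', '1') - not acked yet
print(enc.encode("x-example", "1"))  # still literal: the ack hasn't arrived
enc.on_ack(0)
print(enc.encode("x-example", "1"))  # ('indexed', 0) - now safe to reference
```

Because a reference is only ever emitted after the acknowledgement round-trip, a request carrying that reference can be decoded no matter which order the streams arrive in, which is what removes the head-of-line blocking that gQUIC's serialized header stream reintroduced.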
    <div>
      <h3>Deflecting Reflection</h3>
      <a href="#deflecting-reflection">
        
      </a>
    </div>
    <p>A common problem among <a href="/ssdp-100gbps/">UDP-based</a> <a href="/memcrashed-major-amplification-attacks-from-port-11211/">protocols</a> is their susceptibility to <a href="/reflections-on-reflections/">reflection attacks</a>, where an attacker tricks an otherwise innocent server into sending large amounts of data to a third-party victim, by spoofing the source IP address of packets targeted to the server to make them look like they came from the victim.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7LfQICIemcnM4AgyhDb6kD/0649164a476896ca3a58de5390c08f09/ip-spoofing.png" />
            
            </figure><p>This kind of attack can be very effective when the response sent by the server happens to be larger than the request it received, in which case we talk of “amplification”.</p><p>TCP is not usually used for this kind of attack due to the fact that the initial packets transmitted during its handshake (SYN, SYN+ACK, …) have the same length so they don’t provide any amplification potential.</p><p>QUIC’s handshake on the other hand is very asymmetrical: like for TLS, in its first flight the QUIC server generally sends its own certificate chain, which can be very large, while the client only has to send a few bytes (the TLS ClientHello message embedded into a QUIC packet). For this reason, the initial QUIC packet sent by a client has to be padded to a specific minimum length (even if the actual content of the packet is much smaller). However this mitigation is still not sufficient, since the typical server response spans multiple packets and can thus still be far larger than the padded client packet.</p><p>The QUIC protocol also defines an explicit source-address verification mechanism, in which the server, rather than sending its long response, only sends a much smaller “retry” packet which contains a unique cryptographic token that the client will then have to echo back to the server inside a new initial packet. This way the server has a higher confidence that the client is not spoofing its own source IP address (since it received the retry packet) and can complete the handshake. The downside of this mitigation is that it increases the initial handshake duration from a single round-trip to two.</p><p>An alternative solution involves reducing the server's response to the point where a reflection attack becomes less effective, for example by using <a href="/ecdsa-the-digital-signature-algorithm-of-a-better-internet/">ECDSA certificates</a> (which are typically much smaller than their RSA counterparts). 
We have also been experimenting with a mechanism for <a href="https://tools.ietf.org/html/draft-ietf-tls-certificate-compression">compressing TLS certificates</a> using off-the-shelf compression algorithms like zlib and brotli, which is a feature originally introduced by gQUIC but not currently available in TLS.</p>
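The arithmetic behind the padding mitigation can be sketched as follows. The request and response sizes are illustrative, not measured; the 1200-byte figure is the minimum client Initial size the QUIC drafts settled on:

```python
# Illustrative amplification arithmetic: bytes reflected to the victim per
# spoofed byte the attacker sends.

MIN_CLIENT_INITIAL = 1200  # bytes; an unpadded ClientHello could be far smaller

def amplification(request_bytes: int, response_bytes: int) -> float:
    return response_bytes / request_bytes

# Hypothetical server first flight: a large certificate chain over several packets.
server_first_flight = 6000

print(amplification(120, server_first_flight))                # 50.0 unpadded
print(amplification(MIN_CLIENT_INITIAL, server_first_flight))  # 5.0 with padding
```

Padding shrinks the ratio but cannot eliminate it while the server's flight is larger than the client's, which is why the retry-token verification and smaller certificates (or compressed ones) are still needed.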
    <div>
      <h3>UDP performance</h3>
      <a href="#udp-performance">
        
      </a>
    </div>
    <p>One of the recurring issues with QUIC involves existing hardware and software deployed in the wild not being able to understand it. We've already looked at how QUIC tries to address network middle-boxes like routers, but another potentially problematic area is the performance of sending and receiving data over UDP on the QUIC end-points themselves. Over the years a lot of work has gone into optimizing TCP implementations as much as possible, including building off-loading capabilities in both software (like in operating systems) and hardware (like in network interfaces), but none of that is currently available for UDP.</p><p>However it’s only a matter of time until QUIC implementations can take advantage of these capabilities as well. Look for example at the recent efforts to implement <a href="https://lwn.net/Articles/752184/">Generic Segmentation Offloading for UDP on Linux</a>, which would allow applications to bundle and transfer multiple UDP segments between user-space and the kernel-space networking stack at the cost of a single one (or close enough), as well as the effort to add <a href="https://lwn.net/Articles/655299/">zerocopy socket support on Linux</a>, which would allow applications to avoid the cost of copying user-space memory into kernel-space.</p>
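As a back-of-the-envelope sketch of why segmentation offload matters, count the syscalls needed to push a large response through the kernel. All numbers here are illustrative assumptions (the 64 KiB per-call figure is the usual maximum UDP payload, used as a stand-in for the per-call batch limit):

```python
# Syscall count for sending a 1 MiB response in 1350-byte UDP datagrams,
# with and without batching multiple segments per send call.
import math

PAYLOAD = 1_048_576   # 1 MiB response
SEGMENT = 1350        # bytes of QUIC payload per UDP datagram (illustrative)
BATCH_MAX = 65_507    # max UDP payload per call, used as the batch ceiling

without_offload = math.ceil(PAYLOAD / SEGMENT)    # one send call per datagram
with_offload = math.ceil(PAYLOAD / BATCH_MAX)     # one call per ~64 KiB burst

print(without_offload)  # 777
print(with_offload)     # 17
```

Cutting per-packet syscalls by an order of magnitude is the same class of win that TCP already gets from its segmentation offloads, which is why UDP GSO matters so much for QUIC server efficiency.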
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Like HTTP/2 and TLS 1.3, QUIC is set to deliver a lot of new features designed to improve performance and security of web sites, as well as other Internet-based properties. The IETF working group is currently set to deliver the first version of the QUIC specifications by the end of the year and Cloudflare engineers are already hard at work to provide the benefits of QUIC to all of our customers.</p> ]]></content:encoded>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[HTTP2]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">4ZyUVtsRDEiNCkr2iwov88</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[Changing Internet Standards to Build A Secure Internet]]></title>
            <link>https://blog.cloudflare.com/dk-dnssec/</link>
            <pubDate>Wed, 12 Apr 2017 15:06:07 GMT</pubDate>
            <description><![CDATA[ We’ve been working with registrars and registries in the IETF on making DNSSEC easier for domain owners, and over the next two weeks we’ll be starting out by enabling DNSSEC automatically for .dk domains. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We’ve been working with <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">registrars</a> and registries in the IETF on making DNSSEC easier for domain owners, and over the next two weeks we’ll be starting out by enabling DNSSEC automatically for .dk domains.</p>
    <div>
      <h3>DNSSEC: A Primer</h3>
      <a href="#dnssec-a-primer">
        
      </a>
    </div>
    <p>Before we get into the details of how we've improved the DNSSEC experience, we should explain why DNSSEC is important and the function it plays in keeping the web safe.</p><p>DNSSEC’s role is to verify the integrity of DNS answers. When DNS was written in the early 1980s, the Internet was populated by only a few researchers and academics. They all knew and trusted each other, and couldn’t imagine a world in which someone malicious would try to operate online. As a result, DNS relies on trust to operate. When a client asks for the address of a hostname like <a href="http://www.cloudflare.com">www.cloudflare.com</a>, without DNSSEC it will trust basically any server that returns the response, even if it wasn’t the same server it originally asked. With DNSSEC, every DNS answer is signed so clients can verify that answers haven’t been manipulated in transit.</p>
    <div>
      <h3>The Trouble With DNSSEC</h3>
      <a href="#the-trouble-with-dnssec">
        
      </a>
    </div>
    <p>If DNSSEC is so important, why do so few domains support it? First, for a domain to have the opportunity to enable DNSSEC, not only do its DNS provider, its registrar and its registry all have to support DNSSEC, but all three of those parties also have to support the same encryption algorithms.</p><p>For domains that do have the ability to enable DNSSEC, DNSSEC is just not easy enough -- domain owners need to first enable DNSSEC with their DNS provider, and then copy and paste some values (called a DS record) from their DNS provider’s dashboard to their registrar’s dashboard, making sure not to miss any characters when copying and pasting, because a mistake would cut off traffic to their whole domain. What we need here is automation.</p>
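For illustration, here is roughly how the DS digest that gets pasted into the registrar's dashboard is derived from a DNSKEY, following the SHA-256 construction in RFC 4034 (digest over the owner name in wire format plus the DNSKEY RDATA). The domain and key bytes below are placeholders, not a real zone's key:

```python
# Sketch of DS digest derivation from a DNSKEY record.
import hashlib
import struct

def wire_name(name: str) -> bytes:
    """Domain name in DNS wire format: length-prefixed labels, zero-terminated."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.lower().encode("ascii")
    return out + b"\x00"

def ds_digest_sha256(owner: str, flags: int, protocol: int,
                     algorithm: int, pubkey: bytes) -> str:
    # DNSKEY RDATA = flags (2 bytes) | protocol (1) | algorithm (1) | public key
    rdata = struct.pack("!HBB", flags, protocol, algorithm) + pubkey
    return hashlib.sha256(wire_name(owner) + rdata).hexdigest().upper()

fake_key = b"\x03\x01\x00\x01" + bytes(64)  # placeholder bytes, not a real key
digest = ds_digest_sha256("example.dk", 257, 3, 13, fake_key)
print(len(digest))  # 64 hex characters - the value you'd paste at the registrar
```

Every byte of the key and owner name feeds the hash, which is why a single character dropped during copy-and-paste produces a DS record that validates nothing, and why automating the hand-off matters.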
    <div>
      <h3>Changing an outdated model</h3>
      <a href="#changing-an-outdated-model">
        
      </a>
    </div>
    <p>It's been Cloudflare's long-standing position that, as the DNS operator, we would like to update the DS automatically for a user, but <a href="/updating-the-dns-registration-model-to-keep-pace-with-todays-internet/">DNS operates on a legacy model</a> where the registrar is able to talk directly to the registry, while the DNS operator (Cloudflare) is left completely out of that model.</p><p>Here at Cloudflare, we’re determined it’s time to change that outdated system. We have <a href="https://tools.ietf.org/html/draft-ietf-regext-dnsoperator-to-rrr-protocol">published an Internet Draft</a> to propose a new model for how DNS operators, registries and registrars could operate and communicate to make specific user-authorized changes to domains. It’s important to point out that the IETF works on the principle of rough consensus and running code. Cloudflare, in conjunction with the .dk registry, has produced running code, and we’re very close to getting consensus. That Internet Draft is now making its way through the Standards Track within the IETF and is on its way to becoming a fully-fledged RFC.</p>
    <div>
      <h3>How .dk and Cloudflare are working together</h3>
      <a href="#how-dk-and-cloudflare-are-working-together">
        
      </a>
    </div>
    <p>The ccTLD operator for Denmark (i.e. the .dk domains) has also realized that the model is outdated. They provide their users (and the operators of nameservers associated with .dk domains) a programmatic way of installing and updating DS records. This is exactly what operators like Cloudflare need.</p><p>Cloudflare has been testing their API and is now ready to kick off an automated, clean, safe and reliable way of updating DS records for our .dk customers. Over the next two weeks we will enable DNSSEC for .dk domains that started to enable it in the past, but haven’t finished the process.</p><p>Of course, for Cloudflare, there’s no surprise that Denmark is home to forward thinkers like this!</p>
    <div>
      <h3>Onwards!</h3>
      <a href="#onwards">
        
      </a>
    </div>
    <p>If you have a .dk domain on Cloudflare, you don’t need to do anything except flip the switch enabling DNSSEC within the Cloudflare login console before we perform the migration on Tuesday, April 18, 2017.</p><p>We are excited to work with the .dk registry on this first step toward making DNSSEC automatic, and we are looking for other TLDs that want to make DNSSEC easy to use.</p> ]]></content:encoded>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">5laIvCdz888qNvdBTd86rl</guid>
            <dc:creator>Dani Grant</dc:creator>
        </item>
        <item>
            <title><![CDATA[IETF Hackathon: Getting TLS 1.3 working in the browser]]></title>
            <link>https://blog.cloudflare.com/ietf-hackathon-getting-tls-1-3-working-in-the-browser-2/</link>
            <pubDate>Mon, 18 Apr 2016 08:47:06 GMT</pubDate>
            <description><![CDATA[ Over the last few years, the IETF community has been focused on improving and expanding the use of the technical foundations for Internet security. ]]></description>
            <content:encoded><![CDATA[ <p>Over the last few years, the IETF community has been focused on improving and expanding the use of the technical foundations for Internet security. Part of that work has been updating and deploying protocols such as Transport Layer Security (TLS), with the first draft of the latest version of TLS, <a href="https://tools.ietf.org/html/draft-ietf-tls-tls13">TLS 1.3</a>, published a bit more than two years ago on 17 April 2014. Since then, work on TLS 1.3 has continued with expert review and initial implementations aimed at providing a solid base for broad deployment of improved security on the global Internet.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2q6hwXnGFOzOw3Z24g6JQs/eb139b21073327e36b0af6b9c8d3b636/5131980208_6e8180784c_z.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/emseetwo/5131980208/in/photolist-8PuJVo-5dkfSs-5jrRMd-5jNzsz-935dYZ-JFGSJ-5dkfGQ-5jrRAo-6c43U-5hpYtP-9BaKi1-9ny7kL-69duMS-5hpYdc-7JRBkz-fdAJcK-7Je6wv-92N3gT-5kqXmf-92FKUb-o8s7-tBPiD-94ozmR-5mVBd4-9h8PNR-ddyws-92RaFd-cZWXfG-b3kW-Y3k57-5huiaw-59mgfv-9hbYN3-5D1RNc-ainBvM-coWoD-7hMstW-oghPD3-9hBZKC-6Kn8vP-9pamgz-nVn14t-5cu7BW-7Je6v2-sbpck2-8WWxDJ-ehnWE4-9hbZsA-9h8S3x-9h8QMg">image</a> by <a href="https://www.flickr.com/photos/emseetwo/">Marie-Claire Camp</a></p><p>In February of this year, the Internet Society hosted the <a href="http://www.internetsociety.org/events/ndss-symposium-2016/tls-13-ready-or-not-tron-workshop-programme">TRON</a> (TLS 1.3 Ready Or Not) workshop. The main goal of TRON was to gather feedback from developers and academics about the security of TLS 1.3. The conclusion of the workshop was that TLS 1.3 was, unfortunately, not ready yet.</p><p>One of the reasons it was deemed not yet ready was that there needed to be more real-world testing of independently written implementations. There were some implementations of the core protocol, but nobody had put together a full browser-to-server test. And some of the more exciting new features like PSK-based resumption (which brings improved forward secrecy to session tickets) and 0-RTT (which reduces latency for resumed connections) were still unimplemented.</p><p>The latest <a href="https://ietf.org/hackathon/95-hackathon.html">IETF Hackathon</a>, held two days before IETF 95, provided the kind of focused and collaborative environment that is conducive to working through implementation and interoperability without distraction. In Buenos Aires, I was joined by key members of the Mozilla team (Eric Rescorla, Richard Barnes and Martin Thomson) as well as some other great people who joined the team during the Hackathon. 
We had two main stacks to work with: NSS, the cryptography library that powers Firefox; and <a href="https://github.com/bifurcation/mint">Mint</a>, a Go-based implementation created by Richard Barnes that I had set up on <a href="http://tls13.cloudflare.com/">tls13.cloudflare.com</a>.</p><p>The goals were:</p><ul><li><p>Finish integration with Firefox so we can do an HTTPS request</p></li><li><p>Demonstrate Firefox-&gt;CloudFlare interoperability (with <a href="http://tls13.cloudflare.com/">tls13.cloudflare.com</a>)</p></li><li><p>Resumption-PSK between NSS and Mint</p></li><li><p>0-RTT between NSS and Mint</p></li><li><p>0-RTT in Firefox</p></li></ul><p>We also had a stretch goal of getting 0-RTT working between Firefox and CloudFlare’s test site.</p><p>Getting TLS 1.3 integrated in Firefox took until late Saturday night (we continued in the hotel bar after the Hackathon room closed), but after fighting through segmentation faults, C++11 lambda issues, and obtaining a trusted certificate through Let’s Encrypt, we were able to see a glorious “Hi there!” with a lock icon in Firefox. By the end of the Hackathon on Sunday, we were able to browse the TLS 1.3 specification on <a href="http://tls13.cloudflare.com/">tls13.cloudflare.com</a> with PSK-based session resumption in Firefox.</p><p>Although we were not able to get 0-RTT working between Firefox and CloudFlare in time for the demo (we were <i>so</i> very close), the Hackathon was deemed a success and we were given the “Best Achievement” award. It was a great experience and proved invaluable for understanding how TLS 1.3 will work in practice. I’d like to thank the IETF for hosting this event and Huawei for sponsoring it.</p><p>The work at this Hackathon and the subsequent meetings at IETF 95 have helped solidify the core features of TLS 1.3. 
In the coming months, the remaining issues will be discussed on the TLS Working Group mailing list with the hope that a final draft can be completed soon after IETF 96 in Berlin.</p><p><i>This originally appeared as a </i><a href="https://www.ietf.org/blog/2016/04/ietf-hackathon-getting-tls-1-3-working-in-the-browser/"><i>guest blog</i></a><i> on the IETF web site. CloudFlare is grateful to the IETF for allowing its republication here.</i></p> ]]></content:encoded>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[Hackathon]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">1GBxUNd1x6XSls1TWoDXUE</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Going to IETF 95? Join the TLS 1.3 hackathon]]></title>
            <link>https://blog.cloudflare.com/going-to-ietf-95-join-the-tls-1-3-hackathon/</link>
            <pubDate>Mon, 28 Mar 2016 21:00:00 GMT</pubDate>
            <description><![CDATA[ If you’re in Buenos Aires on April 2-3 and are interested in building, come join the IETF Hackathon. CloudFlare and Mozilla will be working on TLS 1.3, the first new version of TLS in eight years! ]]></description>
            <content:encoded><![CDATA[ <p>If you’re in Buenos Aires on April 2-3 and are interested in building, come join the <a href="https://ietf.org/hackathon/95-hackathon.html">IETF Hackathon</a>. CloudFlare and Mozilla will be working on TLS 1.3, the first new version of TLS in eight years!</p><p>At the hackathon we’ll be focusing on implementing the latest draft of TLS 1.3 and testing interoperability between <a href="https://github.com/tlswg/tls13-spec/wiki/Implementations">existing implementations</a> written in C, Go, OCaml, JavaScript and F*. If you have experience with network programming and cryptography, come hack on the latest and greatest protocol and help find problems before it is finalized. If you’re planning on attending, add your name to the <a href="https://www.ietf.org/registration/MeetingWiki/wiki/95hackathon">Hackathon wiki</a>. If you can’t make it, but implementing cryptographic protocols is your cup of tea, apply to join the <a href="https://careers.jobscore.com/careers/cloudflare/jobs/cryptography-engineer-c0wW9i590r5BqSeMg-44q7">CloudFlare team</a>!</p><p>We’re very excited about TLS 1.3, which brings both security and performance improvements to HTTPS. In fact, if you have a client that speaks TLS 1.3 draft 10, you can read this blog on our TLS 1.3 mirror: tls13.cloudflare.com.</p><p>We hope to see you there!</p> ]]></content:encoded>
            <category><![CDATA[Hackathon]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Events]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[South America]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">102GYKj79ImGxiHFBljhoU</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
    </channel>
</rss>