
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 10:59:16 GMT</lastBuildDate>
        <item>
            <title><![CDATA[A QUICker SASE client: re-building Proxy Mode]]></title>
            <link>https://blog.cloudflare.com/faster-sase-proxy-mode-quic/</link>
            <pubDate>Thu, 05 Mar 2026 06:00:00 GMT</pubDate>
            <description><![CDATA[ By transitioning the Cloudflare One Client to use QUIC streams for Proxy Mode, we eliminated the overhead of user-space TCP stacks, resulting in a 2x increase in throughput and significant latency reduction for end users.  ]]></description>
            <content:encoded><![CDATA[ <p>When you need to use a <a href="https://blog.cloudflare.com/a-primer-on-proxies/"><u>proxy</u></a> to keep your zero trust environment secure, it often comes with a cost: poor performance for your users. Soon after deploying a client proxy, security teams are generally slammed with support tickets from users frustrated with sluggish browser speed, slow file transfers, and video calls glitching at just the wrong moment. After a while, you start to chalk it up to the proxy — potentially blinding yourself to other issues affecting performance. </p><p>We knew it didn’t have to be this way. We knew users could go faster, without sacrificing security, if we completely re-built our approach to <a href="https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/warp/configure-warp/warp-modes/#local-proxy-mode"><u>proxy mode</u></a>. So we did.</p><p>In the early days of developing the device client for our <a href="https://www.cloudflare.com/learning/access-management/what-is-sase/"><u>SASE</u></a> platform, <a href="https://www.cloudflare.com/sase/"><u>Cloudflare One</u></a>, we prioritized universal compatibility. When an admin enabled proxy mode, the Client acted as a local SOCKS5 or HTTP proxy. However, because our underlying tunnel architecture was built on WireGuard, a Layer 3 (L3) protocol, we faced a technical hurdle: how to get transport-layer (L4) TCP traffic into an L3 tunnel. Moving from L4 to L3 was especially difficult because our desktop Client works across multiple platforms (Windows, macOS, and Linux), so we couldn’t <a href="https://blog.cloudflare.com/from-ip-packets-to-http-the-many-faces-of-our-oxy-framework/#from-an-ip-flow-to-a-tcp-stream"><u>use the kernel </u></a>to achieve this.</p><p>To get over this hurdle, we used smoltcp, a Rust-based user-space TCP implementation.
When a stream hit the local proxy, the Client used smoltcp to convert the L4 stream into L3 packets for the WireGuard tunnel.</p><p>While this worked, it wasn't efficient. Smoltcp is optimized for embedded systems and does not support modern TCP features. In addition, at the Cloudflare edge, we had to convert the L3 packets back into an L4 stream. For users, this manifested as a performance ceiling. On media-heavy sites, where a browser might open dozens of concurrent connections for images and video, the lack of a high-performing TCP stack led to high latency and sluggish load times. Even on high-speed fiber connections, proxy mode felt significantly slower than all the other device client modes.</p>
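<p>For intuition, the kind of work that translation layer has to repeat for every chunk of data can be sketched in a few lines. This is an illustrative toy, not the Client’s actual smoltcp code: each piece of the byte stream must be wrapped in a checksummed IP header before it can enter an L3 tunnel, and the edge must undo that work again.</p>

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """Ones'-complement sum over 16-bit words, per RFC 791."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def wrap_segment(src_ip: bytes, dst_ip: bytes, payload: bytes) -> bytes:
    """Wrap one chunk of an L4 stream in a minimal IPv4 header (proto 6 = TCP).
    A real stack also handles TCP headers, retransmits, windowing, and more."""
    total_len = 20 + len(payload)
    hdr = struct.pack("!BBHHHBBH4s4s",
                      0x45, 0, total_len,  # version/IHL, TOS, total length
                      0, 0,                # identification, flags/fragment
                      64, 6, 0,            # TTL, protocol=TCP, checksum placeholder
                      src_ip, dst_ip)
    csum = ipv4_checksum(hdr)
    return hdr[:10] + struct.pack("!H", csum) + hdr[12:] + payload

pkt = wrap_segment(bytes([192, 0, 2, 1]), bytes([198, 51, 100, 7]),
                   b"GET / HTTP/1.1\r\n")
```

<p>Every single chunk pays this header-building and checksumming cost in user space on the way in, and the edge pays it again on the way out.</p>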
    <div>
      <h3>Introducing direct L4 proxying with QUIC</h3>
      <a href="#introducing-direct-l4-proxying-with-quic">
        
      </a>
    </div>
    <p>To solve this, we’ve re-built the Cloudflare One Client’s proxy mode from the ground up and deprecated the use of WireGuard for proxy mode, so we could capitalize on the capabilities of QUIC. We were already leveraging <a href="https://blog.cloudflare.com/zero-trust-warp-with-a-masque/"><u>MASQUE</u></a> (which runs over QUIC) for proxying IP packets, and added support for QUIC streams for direct L4 proxying.</p><p>By leveraging HTTP/3 (<a href="https://datatracker.ietf.org/doc/rfc9114"><u>RFC 9114</u></a>) with the CONNECT method, we can now keep traffic at Layer 4, where it belongs. When your browser sends a SOCKS5 or HTTP request to the Client, it is no longer broken down into L3 packets.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/w9mIuKa8usLgxDxVqaHax/9861604fc84508b7fc6666bf8b82a874/image1.png" />
          </figure><p>Instead, it is encapsulated directly into a QUIC stream.</p><p>This architectural shift provides three immediate technical advantages:</p><ul><li><p>Bypassing smoltcp: By removing the L3 translation layer, we eliminate IP packet handling and the limitations of smoltcp’s TCP implementation.</p></li><li><p>Native QUIC benefits: We gain modern congestion control and flow control, handled natively by the transport layer.</p></li><li><p>Tuneability: The Client and Cloudflare’s edge can tune QUIC’s parameters to optimize performance.</p></li></ul><p>In our internal testing, the results were clear: <b>download and upload speeds doubled, and latency decreased significantly</b>.</p>
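<p>For readers unfamiliar with CONNECT semantics, here is an illustrative sketch of the classic HTTP/1.1 wire form of the request; over HTTP/3, the same method and authority travel as encoded header fields on a QUIC stream, which then carries the tunneled L4 bytes:</p>

```python
def build_connect(host: str, port: int) -> bytes:
    """The textual HTTP/1.1 form of a CONNECT request (RFC 9110).
    Over HTTP/3 these same semantics are carried as encoded header
    fields on a QUIC stream rather than as text."""
    authority = f"{host}:{port}"
    return (f"CONNECT {authority} HTTP/1.1\r\n"
            f"Host: {authority}\r\n"
            f"\r\n").encode("ascii")

request = build_connect("example.com", 443)
```

<p>Once the proxy answers with a 2xx response, the connection simply carries the application’s bytes in both directions — no packetization required.</p>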
    <div>
      <h3>Who benefits the most</h3>
      <a href="#who-benefits-the-most">
        
      </a>
    </div>
    <p>While faster is always better, this update specifically unblocks three key common use cases.</p><p>First, in <b>coexistence with third-party VPNs </b>where a legacy VPN is still required for specific on-prem resources or where having a dual SASE setup is required for redundancy/compliance, the local proxy mode is the go-to solution for adding zero trust security to web traffic. This update ensures that "layering" security doesn't mean sacrificing the user experience.</p><p>Second, for <b>high-bandwidth application partitioning</b>, proxy mode is often used to steer specific browser traffic through Cloudflare Gateway while leaving the rest of the OS on the local network. Users can now stream high-definition content or handle large datasets without sacrificing performance.</p><p>Finally, <b>developers and power users</b> who rely on the SOCKS5 secondary listener for CLI tools or scripts will see immediate improvements. Remote API calls and data transfers through the proxy now benefit from the same low-latency connection as the rest of the Cloudflare global network.</p>
    <div>
      <h3>How to get started</h3>
      <a href="#how-to-get-started">
        
      </a>
    </div>
    <p>The proxy mode improvements are available with minimum client version 2025.8.779.0 for Windows, macOS, and Linux devices. To take advantage of these performance gains, ensure you are running the <a href="https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/warp/download-warp/"><u>latest version of the Cloudflare One Client</u></a>.</p><ol><li><p>Log in to the <b>Cloudflare One dashboard</b>.</p></li><li><p>Navigate to <b>Teams &amp; Resources &gt; Devices &gt; Device profiles &gt; General profiles</b>.</p></li><li><p>Select a profile to edit or create a new one and ensure the <b>Service mode</b> is set to <b>Local proxy mode</b> and the <b>Device tunnel protocol</b> is set to <b>MASQUE</b>.</p></li></ol><p>You can verify your active protocol on a client machine by running the following command in your terminal: </p>
            <pre><code>warp-cli settings | grep protocol</code></pre>
            <p>Visit our <a href="https://developers.cloudflare.com/cloudflare-one/team-and-resources/devices/warp/configure-warp/warp-modes/#set-up-local-proxy-mode"><u>documentation</u></a> for detailed guidance on enabling proxy mode for your devices.</p><p>If you haven't started your SASE journey yet, you can sign up for a <a href="https://dash.cloudflare.com/sign-up/zero-trust"><u>free Cloudflare One account</u></a> for up to 50 users today. Simply <a href="https://dash.cloudflare.com/sign-up/zero-trust"><u>create an account</u></a>, download the <a href="https://1.1.1.1/"><u>Cloudflare One Client</u></a>, and follow our <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/"><u>onboarding guide</u></a> to experience a faster, more stable connection for your entire team.</p> ]]></content:encoded>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[Proxying]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Cloudflare One Client]]></category>
            <category><![CDATA[Connectivity]]></category>
            <category><![CDATA[TCP]]></category>
            <guid isPermaLink="false">11I7Snst3LH2T0tJC5HLbN</guid>
            <dc:creator>Koko Uko</dc:creator>
            <dc:creator>Logan Praneis</dc:creator>
            <dc:creator>Gregor Maier</dc:creator>
        </item>
        <item>
            <title><![CDATA[Using machine learning to detect bot attacks that leverage residential proxies]]></title>
            <link>https://blog.cloudflare.com/residential-proxy-bot-detection-using-machine-learning/</link>
            <pubDate>Mon, 24 Jun 2024 13:00:17 GMT</pubDate>
            <description><![CDATA[ Cloudflare's Bot Management team has released a new Machine Learning model for bot detection (v8), focusing on bots and abuse from residential proxies ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Bots using residential proxies are a major source of frustration for security engineers trying to fight online abuse. These engineers often see a similar pattern of abuse when well-funded, modern botnets target their applications. Advanced bots bypass country blocks, <a href="https://www.cloudflare.com/en-gb/learning/network-layer/what-is-an-autonomous-system/">ASN</a> blocks, and rate-limiting. Every time, the bot operator moves to a new IP address space until they blend in perfectly with the “good” traffic, mimicking real users’ behavior and request patterns. Our new Bot Management machine learning model (v8) identifies residential proxy abuse without resorting to IP blocking, which can cause false positives for legitimate users.  </p>
    <div>
      <h2>Background</h2>
      <a href="#background">
        
      </a>
    </div>
    <p>One of the main sources of Cloudflare’s <a href="https://developers.cloudflare.com/bots/concepts/bot-score/">bot score</a> is our bot detection machine learning model which analyzes, on average, over 46 million HTTP requests per second in real time. Since our first Bot Management ML model was released in 2019, we have continuously evolved and improved the model. Nowadays, our models leverage features based on request fingerprints, behavioral signals, and global statistics and trends that we see across our network.</p><p>Each iteration of the model focuses on certain areas of improvement. This process starts with a rigorous R&amp;D phase to identify the emerging patterns of <a href="https://www.cloudflare.com/learning/bots/what-is-a-bot-attack/">bot attacks</a> by reviewing <a href="https://developers.cloudflare.com/bots/concepts/feedback-loop/">feedback from our customers</a> and reports of missed attacks. In v8, we mainly focused on two areas of abuse. First, we analyzed the campaigns that leverage residential IP proxies, which are proxies on residential networks commonly used to launch widely distributed attacks against high profile targets. In addition to that, we improved model accuracy for detecting attacks that originate from cloud providers.</p>
    <div>
      <h3>Residential IP proxies</h3>
      <a href="#residential-ip-proxies">
        
      </a>
    </div>
    <p>Proxies allow attackers to hide their identity and distribute their attack. Moreover, IP address rotation allows attackers to directly bypass traditional defenses such as IP reputation and IP rate limiting. Knowing this, defenders use a plethora of signals to identify malicious use of proxies. In its simplest form, IP reputation signals (e.g., data center IP addresses, known open proxies, etc.) can lead to the detection of such distributed attacks.</p><p>However, in the past few years, bot operators have started favoring proxies operating in residential network IP address space. By using residential IP proxies, attackers can masquerade as legitimate users by sending their traffic through residential networks. Nowadays, residential IP proxies are offered by companies that facilitate access to large pools of IP addresses for attackers. Residential proxy providers claim to offer 30-100 million IPs belonging to residential and mobile networks across the world. Most commonly, these IPs are sourced by partnering with free VPN providers, as well as by embedding proxy SDKs in popular browser extensions and mobile applications. This allows residential proxy providers to gain a foothold on victims’ devices and abuse their residential network connections.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Oob9FiycD6yIfYf8Xtdny/06e74f2dd0032ea4610dcab55b0ec38a/residential-proxy-architecture.jpg" />
            
            </figure><p>Figure 1: Architecture of a residential proxy network</p><p>Figure 1 depicts the architecture of a residential proxy. By subscribing to these services, attackers gain access to an authenticated proxy gateway address commonly using the HTTPS/<a href="https://datatracker.ietf.org/doc/html/rfc1928">SOCKS5</a> proxy protocol. Some residential proxy providers allow their users to select the country or region for the proxy exit nodes. Alternatively, users can choose to keep the same IP address throughout their session or rotate to a new one for each outgoing request. Residential proxy providers then identify active exit nodes on their network (on devices that they control within residential networks across the world) and route the proxied traffic through them.</p><p>The large pool of IP addresses and the diversity of networks poses a challenge to traditional bot defense mechanisms that rely on IP reputation and rate limiting. Moreover, the diversity of IPs enables the attackers to rotate through them indefinitely. This shrinks the window of opportunity for bot detection systems to effectively detect and stop the attacks. Effective defense against residential proxy attacks should be able to detect this type of bot traffic either based on single request features to stop the attack immediately, or identify unique fingerprints from the browsing agent to track and mitigate the bot traffic regardless of the IP source. Overly broad blocking actions, such as IP block-listing, by definition, would result in blocking legitimate traffic from residential networks where at least one device is acting as a residential proxy node.</p>
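<p>For concreteness, the first bytes such a subscriber’s client sends to an authenticated proxy gateway follow the RFC 1928 wire format. This is an illustrative sketch of those messages, not any particular provider’s client:</p>

```python
import struct

def socks5_greeting() -> bytes:
    """VER=5, one offered auth method: 0x02 (username/password), since
    residential proxy gateways are typically authenticated (RFC 1928/1929)."""
    return b"\x05\x01\x02"

def socks5_connect(host: str, port: int) -> bytes:
    """CONNECT request: VER=5, CMD=1 (connect), RSV=0, ATYP=3 (domain),
    a length-prefixed hostname, then the port in network byte order."""
    name = host.encode()
    return (b"\x05\x01\x00\x03" + bytes([len(name)]) + name
            + struct.pack("!H", port))

request = socks5_connect("example.com", 443)
```

<p>After this exchange, the provider picks an exit node (optionally pinned to a country or kept stable for the session) and relays the tunneled bytes through it.</p>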
    <div>
      <h3>ML model training</h3>
      <a href="#ml-model-training">
        
      </a>
    </div>
    <p>At its heart, our model is built using a chain of modules that work together. Initially, we fetch and prepare training and validation datasets from our Clickhouse data storage. We use datasets with high confidence labels as part of our training. For model validation, we use datasets consisting of missed attacks reported by our customers, known sources of bot traffic (e.g., <a href="https://developers.cloudflare.com/bots/reference/verified-bots-policy/">verified bots</a>), and high confidence detections from other bot management modules (e.g., heuristics engine). We orchestrate these steps using Apache Airflow, which enables us to customize each stage of the ML model training and define the interdependencies of our training, validation, and reporting modules in the form of directed acyclic graphs (DAGs).</p><p>The first step of training a new model is fetching labeled training data from our data store. Under the hood, our dataset definitions are SQL queries that will materialize by fetching data from our Clickhouse cluster where we store feature values and calculate aggregates from the traffic on our network. Figure 2 depicts these steps as train and validation dataset fetch operations. Introducing new datasets can be as straightforward as writing the SQL queries to filter the desired subset of requests.</p>
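<p>As a sketch of what such a dataset definition might look like — the table, column, and dataset names here are hypothetical, not our production schema — each entry is just a parameterized SQL query that the pipeline materializes for a time window:</p>

```python
from string import Template

# Hypothetical dataset registry: each entry is a SQL query materialized
# against the feature store for a given training or validation window.
DATASETS = {
    "verified_bots": Template(
        "SELECT features, 1 AS label FROM requests "
        "WHERE verified_bot = 1 AND ts BETWEEN '$start' AND '$end'"),
    "heuristic_detections": Template(
        "SELECT features, 1 AS label FROM requests "
        "WHERE heuristic_block = 1 AND ts BETWEEN '$start' AND '$end'"),
}

def materialize(name: str, start: str, end: str) -> str:
    """Adding a new dataset is just adding a query to the registry."""
    return DATASETS[name].substitute(start=start, end=end)
```

<p>The training DAG then fetches each materialized query result and wires it into the train, validation, and reporting stages.</p>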
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3xmgPc689UWAHZi29CfIrp/c0d5dca3b3a497423b00c9456ea5dc7c/airflow-dag.jpg" />
            
            </figure><p>Figure 2: Airflow DAG for model training and validation</p><p>After fetching the datasets, we train our <a href="https://catboost.ai/">Catboost model</a> and tune its <a href="https://catboost.ai/en/docs/references/training-parameters/">hyper parameters</a>. During evaluation, we compare the performance of the newly trained model against the current default version running for our customers. To capture the intricate patterns in subsets of our data, we split certain validation datasets into smaller slivers called specializations. For instance, we use the detections made by our heuristics engine and managed rulesets as ground truth for bot traffic. To ensure that larger sources of traffic (large <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/">ASNs</a>, different HTTP versions, etc.) do not mask our visibility into patterns for the rest of the traffic, we define specializations for these sources of traffic. As a result, improvements in accuracy of the new model can be evaluated for common patterns (e.g., HTTP/1.1 and HTTP/2) as well as less common ones. Our model training DAG will provide a breakdown report for the accuracy, score distribution, feature importance, and <a href="https://shap.readthedocs.io/en/latest/generated/shap.Explainer.html">SHAP explainers</a> for each validation dataset and its specializations.</p><p>Once we are happy with the validation results and model accuracy, we evaluate our model against a checklist of steps to ensure the correctness and validity of our model. We start by ensuring that our results and observations are reproducible over multiple non-overlapping training and validation time ranges. 
Moreover, we check for the following factors:</p><ul><li><p>Check for the distribution of feature values to identify irregularities such as missing or skewed values.</p></li><li><p>Check for overlaps between training and validation datasets and feature values.</p></li><li><p>Verify the diversity of training data and the balance between labels and datasets.</p></li><li><p>Evaluate performance changes in the accuracy of the model on validation datasets based on their order of importance.</p></li><li><p>Check for model overfitting by evaluating the feature importance and SHAP explainers.</p></li></ul><p>After the model passes the readiness checks, we deploy it in shadow mode. We can observe the behavior of the model on live traffic in log-only mode (i.e., without affecting the <a href="https://developers.cloudflare.com/bots/concepts/bot-score/">bot score</a>). After gaining confidence in the model's performance on live traffic, we start onboarding beta customers, and gradually switch the model to active mode all while closely <a href="/monitoring-machine-learning-models-for-bot-detection">monitoring the real-world performance of our new model</a>.</p>
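<p>Two of the checks above — train/validation overlap and label balance — can be sketched as simple guards. The thresholds here are illustrative, not the values we use in production:</p>

```python
def readiness_checks(train_ids, val_ids, train_labels,
                     max_overlap=0.0, max_imbalance=0.8):
    """Pre-deployment guards: no leakage between training and validation
    sets, and no single label dominating the training data."""
    train_set, val_set = set(train_ids), set(val_ids)
    overlap = len(train_set & val_set) / max(len(val_set), 1)
    majority = max(train_labels.count(l)
                   for l in set(train_labels)) / len(train_labels)
    return {"no_leakage": overlap <= max_overlap,
            "balanced": majority <= max_imbalance}

checks = readiness_checks(train_ids=[1, 2, 3], val_ids=[4, 5],
                          train_labels=[0, 0, 1, 1])
```

<p>A model only proceeds to shadow-mode deployment once every guard in the checklist passes.</p>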
    <div>
      <h3>ML features for bot detection</h3>
      <a href="#ml-features-for-bot-detection">
        
      </a>
    </div>
    <p>Each of our models uses a set of features to make inferences about the incoming requests. We compute our features based on single request properties (single request features) and patterns from multiple requests (i.e., inter-request features). We can categorize these features into the following groups:</p><ul><li><p><b>Global features:</b> inter-request features that are computed based on global aggregates for different types of fingerprints and traffic sources (e.g., for an ASN) seen across our global network. Given the relatively lower cardinality of these features, we can scalably calculate global aggregates for each of them.</p></li><li><p><b>High cardinality features:</b> inter-request features focused on fine-grained aggregate data from local traffic patterns and behaviors (e.g., for an individual IP address)</p></li><li><p><b>Single request features:</b> features derived from each individual request (e.g., user agent).</p></li></ul><p>Our Bot Management system (named <a href="/scalable-machine-learning-at-cloudflare">BLISS</a>) is responsible for fetching and computing these feature values and making them available on our servers for inference by active versions of our ML models.</p>
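<p>The distinction between these three groups can be illustrated with a toy example (the field names are hypothetical):</p>

```python
from collections import Counter

requests = [
    {"ip": "203.0.113.5", "asn": 64500, "user_agent": "curl/8.0"},
    {"ip": "203.0.113.5", "asn": 64500, "user_agent": "curl/8.0"},
    {"ip": "198.51.100.9", "asn": 64501, "user_agent": "Mozilla/5.0"},
]

# Global feature: a low-cardinality aggregate (share of traffic per ASN),
# cheap enough to compute network-wide.
asn_counts = Counter(r["asn"] for r in requests)
asn_share = {asn: n / len(requests) for asn, n in asn_counts.items()}

# High-cardinality feature: a fine-grained per-IP aggregate.
ip_counts = Counter(r["ip"] for r in requests)

# Single-request feature: derived from one request in isolation.
ua_is_cli = [r["user_agent"].startswith("curl") for r in requests]
```

<p>The cardinality of the key (one entry per ASN versus one per IP) is what separates a globally computable aggregate from a locally computed one.</p>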
    <div>
      <h2>Detecting residential proxies using network and behavioral signals</h2>
      <a href="#detecting-residential-proxies-using-network-and-behavioral-signals">
        
      </a>
    </div>
    <p>Attacks originating from residential IP addresses are commonly characterized by a spike in the overall traffic towards sensitive endpoints on the target websites from a large number of residential ASNs. Our approach for detecting residential IP proxies is twofold. First, we start by comparing direct vs proxied requests and looking for network level discrepancies. Revisiting Figure 1, we notice that a request routed through residential proxies (red dotted line) has to traverse through multiple hops before reaching the target, which affects the network latency of the request.</p><p>Based on this observation alone, we are able to characterize residential proxy traffic with a high true positive rate (i.e., all residential proxy requests have high network latency). While we were able to replicate this in our lab environment, we quickly realized that at the scale of the Internet, we run into numerous exceptions with false positive detections (i.e., non-residential proxy traffic with high latency). For instance, countries and regions that predominantly use satellite Internet would exhibit a high network latency for the majority of their requests due to the use of <a href="https://datatracker.ietf.org/doc/html/rfc3135">performance enhancing proxies</a>.</p><p>Realizing that relying solely on network characteristics of connections to detect residential proxies is inadequate given the diversity of the connections on the Internet, we switched our focus to the behavior of residential IPs. To that end, we observe that the IP addresses from residential proxies express a distinct behavior during periods of peak activity. While this observation singles out highly active IPs over their peak activity time, given the pool size of residential IPs, it is not uncommon to only observe a small number of requests from the majority of residential proxy IPs.</p><p>These periods of inactivity can be attributed to the temporary nature of residential proxy exit nodes. 
For instance, when the client software (i.e., browser or mobile application) that runs the exit nodes of these proxies is closed, the node leaves the residential proxy network. One way to filter out periods of inactivity is to increase the monitoring time and punish each IP address that exhibits residential proxy behavior for a period of time. This block-listing approach, however, has certain limitations. Most importantly, by relying only on IP-based behavioral signals, we would block traffic from legitimate users that may unknowingly run mobile applications or browser extensions that turn their devices into proxies. This is further detrimental for mobile networks where many users share their IPs behind <a href="https://en.wikipedia.org/wiki/Carrier-grade_NAT">CGNATs</a>. Figure 3 demonstrates this by comparing the share of direct vs proxied requests that we received from active residential proxy IPs over a 24-hour period. Overall, we see that 4 out of 5 requests from these networks belong to direct and benign connections from residential devices.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5vdkbz94Am2EHlL7tu9t3t/3256ae94bfee66c60ad71c6a6dbc7fc0/mlv8-blog-proxy-vs-direct.jpg" />
            
            </figure><p>Figure 3: Percentage of direct vs proxied requests from residential proxy IPs.</p><p>Using this insight, we combined behavioral and latency-based features along with new datasets to train a new <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/">machine learning model</a> that detects residential proxy traffic on a per-request basis. This scheme allows us to block residential proxy traffic while allowing benign residential users to visit Cloudflare-protected websites from the same residential network.</p>
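<p>As a toy illustration of combining the two signal families on a per-request basis — the weights and saturation points below are invented for this sketch, not the model’s learned parameters:</p>

```python
def proxy_score(latency_ms: float, baseline_ms: float,
                ip_burst_rate: float) -> float:
    """Combine a network signal (latency inflation over the ASN baseline,
    caused by the extra proxy hops) with a behavioral signal (how bursty
    the IP's recent activity is, in [0, 1]). Returns a score in [0, 1]."""
    inflation = max(0.0, (latency_ms - baseline_ms) / baseline_ms)
    latency_signal = min(inflation, 2.0) / 2.0  # saturate at 3x baseline
    behavior_signal = min(max(ip_burst_rate, 0.0), 1.0)
    return 0.5 * latency_signal + 0.5 * behavior_signal

# Direct residential request: baseline latency, quiet IP -> low score.
direct = proxy_score(latency_ms=50, baseline_ms=50, ip_burst_rate=0.0)
# Proxied request: inflated latency, bursty IP -> high score.
proxied = proxy_score(latency_ms=200, baseline_ms=50, ip_burst_rate=0.9)
```

<p>Because the score is computed per request, direct traffic from the same residential IP scores low even while proxied traffic from it scores high — which is exactly what lets us avoid block-listing the IP.</p>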
    <div>
      <h2>Detection results and case studies</h2>
      <a href="#detection-results-and-case-studies">
        
      </a>
    </div>
    <p>We started testing v8 in shadow mode in March 2024. Every hour, v8 classifies more than 17 million unique IPs that participate in residential proxy attacks. Figure 4 shows the geographic distribution of IPs with residential proxy activity, belonging to more than 45 thousand ASNs in 237 countries/regions. Among the most commonly requested endpoints from residential proxies, we observe patterns of account takeover attempts, such as requests to /login, /auth/login, and /api/login.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gGGhqngSzEkThZUoV3Fiw/d807f5456e3277e4aa24972588c59cf7/mlv8-blog-map-1.jpg" />
            
            </figure><p>Figure 4: Countries and regions with residential network activity. Marker size is proportional to the number of IPs with residential proxy activity.</p><p>Furthermore, we see significant improvements when evaluating our new machine learning model on previously missed attacks reported by our customers. In one case, v8 was able to correctly classify 95% of requests from distributed residential proxy attacks targeting the voucher redemption endpoint of the customer’s website. In another case, our new model successfully detected a previously missed <a href="https://www.cloudflare.com/learning/bots/what-is-content-scraping/">content scraping attack</a>, evidenced by increased detection during traffic spikes depicted in Figure 5. We are continuing to monitor the behavior of residential proxy attacks in the wild and work with our customers to ensure that we can provide robust detection against these distributed attacks.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1jstvGe6HudxtAS54UJPW6/9bde592cb0efc64fb7a86ba33bb25047/shadowmode-v8.jpg" />
            
            </figure><p>Figure 5: Spikes in bot requests from residential proxies detected by ML v8</p>
    <div>
      <h2>Improving detection for bots from cloud providers</h2>
      <a href="#improving-detection-for-bots-from-cloud-providers">
        
      </a>
    </div>
    <p>In addition to residential IP proxies, bot operators commonly use cloud providers to host and run bot scripts that attack our customers. To combat these attacks, we improved our ground truth labels for cloud provider attacks in our latest ML training datasets. Early results show that v8 detects 20% more bots from cloud providers, with up to 70% more bots detected on zones that are marked as <a href="https://developers.cloudflare.com/fundamentals/reference/under-attack-mode/">under attack</a>. We further plan to expand the list of cloud providers that v8 detects as part of our ongoing updates.</p>
    <div>
      <h2>Check out ML v8</h2>
      <a href="#check-out-ml-v8">
        
      </a>
    </div>
    <p>For existing Bot Management customers we recommend <a href="https://developers.cloudflare.com/bots/reference/machine-learning-models/#enable-auto-updates-to-the-machine-learning-models">toggling “Auto-update machine learning model”</a> to instantly gain the benefits of ML v8 and its residential proxy detection, and to stay up to date with our future ML model updates. If you’re not a Cloudflare Bot Management customer, <a href="https://www.cloudflare.com/application-services/products/bot-management/">contact our sales team</a> to try out <a href="https://www.cloudflare.com/application-services/products/bot-management/">Bot Management</a>.</p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Machine Learning]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Proxying]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[Application Services]]></category>
            <guid isPermaLink="false">2EZrHNgKqLkaTGqoRM9pMS</guid>
            <dc:creator>Bob AminAzad</dc:creator>
            <dc:creator>Santiago Vargas</dc:creator>
            <dc:creator>Adam Martinetti</dc:creator>
        </item>
        <item>
            <title><![CDATA[Oxy: Fish/Bumblebee/Splicer subsystems to improve reliability]]></title>
            <link>https://blog.cloudflare.com/oxy-fish-bumblebee-splicer-subsystems-to-improve-reliability/</link>
            <pubDate>Thu, 20 Apr 2023 13:00:00 GMT</pubDate>
            <description><![CDATA[ We split a proxy application into multiple services to improve development agility and reliability. This blog also shares some common patterns we are leveraging to design a system supporting zero-downtime restart ]]></description>
            <content:encoded><![CDATA[ <p>At Cloudflare, we are building proxy applications on top of <a href="/introducing-oxy/">Oxy</a> that must be able to handle a <i>huge</i> amount of traffic. Besides high performance requirements, the applications must also be resilient against crashes or reloads. As the framework evolves, the complexity also increases. While migrating WARP to support soft-unicast (<a href="/cloudflare-servers-dont-own-ips-anymore/">Cloudflare servers don't own IPs anymore</a>), we needed to add different functionalities to our proxy framework. Those additions increased not only the code size but also resource usage and the state that must be <a href="/oxy-the-journey-of-graceful-restarts/">preserved between process upgrades</a>.</p><p>To address those issues, we opted to split a big proxy process into smaller, specialized services. Following the Unix philosophy, each service should have a single responsibility, and it must do it well. In this blog post, we will talk about how our proxy interacts with three different services - Splicer (which pipes data between sockets), Bumblebee (which upgrades an IP flow to a TCP socket), and Fish (which handles layer 3 egress using soft-unicast IPs). Those three services help us to improve system reliability and efficiency as we migrated WARP to support soft-unicast.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7BnvngSFKAe8Lo7PrjUbIQ/770f180a0fd67ad8ad7c5914046614f8/image2-8.png" />
            
            </figure>
    <div>
      <h3>Splicer</h3>
      <a href="#splicer">
        
      </a>
    </div>
    <p>Most transmission tunnels in our proxy forward <a href="https://www.cloudflare.com/learning/network-layer/what-is-a-packet/">packets</a> without making any modifications. In other words, given two sockets, the proxy just relays the data between them: read from one socket and write to the other. This is a common pattern within Cloudflare, and we reimplement very similar functionality in separate projects. These projects often have their own tweaks for buffering, flushing, and terminating connections, but they also have to coordinate long-running proxy tasks with their process restart or upgrade handling.</p><p>Turning this into a service allows other applications to hand a long-running proxying task to Splicer. The applications pass the two sockets to Splicer, and they no longer need to worry about keeping the connection alive across restarts. After finishing the task, Splicer will return the two original sockets and the original metadata attached to the request, so the original application can inspect the final state of the sockets - <a href="/when-tcp-sockets-refuse-to-die/">for example using TCP_INFO</a> - and finalize audit logging if required.</p>
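<p>The core of such a service is a bidirectional relay loop. Here is a minimal sketch in Python — Splicer itself is written in Rust and additionally handles buffering policy, graceful restarts, and request metadata, none of which is modeled here:</p>

```python
import socket
import threading

def splice(a: socket.socket, b: socket.socket) -> None:
    """Relay bytes in both directions until each side closes its write
    half: the same read-from-one, write-to-the-other loop Splicer runs."""
    def pump(src: socket.socket, dst: socket.socket) -> None:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
        try:
            dst.shutdown(socket.SHUT_WR)  # propagate EOF downstream
        except OSError:
            pass

    pumps = [threading.Thread(target=pump, args=pair)
             for pair in ((a, b), (b, a))]
    for t in pumps:
        t.start()
    for t in pumps:
        t.join()

# Two in-process "connections": eyeball <-> proxy and proxy <-> origin.
eyeball, left = socket.socketpair()
right, origin = socket.socketpair()
task = threading.Thread(target=splice, args=(left, right))
task.start()
eyeball.sendall(b"hello")
eyeball.shutdown(socket.SHUT_WR)
received = origin.recv(5)
origin.shutdown(socket.SHUT_WR)
task.join()
```

<p>When either side closes its write half, the relay propagates the EOF and returns, at which point the caller gets its sockets back for inspection.</p>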
    <div>
      <h3>Bumblebee</h3>
      <a href="#bumblebee">
        
      </a>
    </div>
    <p>Many of Cloudflare’s on-ramps are IP-based (layer 3) but most of our services operate on <a href="https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/">TCP</a> or <a href="https://www.cloudflare.com/learning/ddos/glossary/user-datagram-protocol-udp/">UDP</a> sockets (layer 4). To handle TCP termination, we want to create a <i>kernel</i> TCP socket from the IP packets received from the client (we can later forward this socket and an upstream socket to Splicer to proxy data between the eyeball and origin). Bumblebee performs the upgrade by spawning a thread in an anonymous network namespace with the <a href="https://man7.org/linux/man-pages/man2/unshare.2.html">unshare</a> syscall, NAT-ing the IP packets, and using a tun device there to perform TCP three-way handshakes to a listener. You can find a more detailed write-up on how we upgrade an IP flow to a TCP stream <a href="/from-ip-packets-to-http-the-many-faces-of-our-oxy-framework/">here</a>.</p><p>In short, other services just need to pass a socket carrying the IP flow, and Bumblebee will upgrade it to a TCP socket, with no user-space TCP stack involved! After the socket is created, Bumblebee will return it to the application requesting the upgrade. Again, the proxy can restart without breaking the connection, as Bumblebee pipes the IP socket while Splicer handles the TCP ones.</p>
    <div>
      <h3>Fish</h3>
      <a href="#fish">
        
      </a>
    </div>
    <p>Fish forwards IP packets using <a href="/cloudflare-servers-dont-own-ips-anymore/">soft-unicast</a> IP space without upgrading them to layer 4 sockets. We previously implemented packet forwarding on shared IP space using iptables and conntrack. However, IP/port mapping management is not simple when you have many possible IPs to egress from and variable port assignments. Conntrack is highly configurable, but applying configuration through iptables rules requires careful coordination, and debugging iptables execution can be challenging. Plus, relying on configuration when sending a packet through the network stack results in arcane failure modes when conntrack is unable to rewrite a packet to the exact IP or port range specified.</p><p>Fish overcomes this problem by rewriting the packets and configuring conntrack using the netlink protocol. Put differently, a proxy application sends a socket carrying IP packets from the client, together with the desired soft-unicast IP and port range, to Fish. Fish then forwards those packets to their destination. The client’s choice of IP address does not matter; Fish ensures that egressed IP packets have a unique five-tuple within the root network namespace and performs the necessary packet rewriting to maintain this isolation. Fish’s internal state also survives restarts.</p>
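            <p>The actual packet rewriting happens in conntrack, but the uniqueness invariant Fish maintains is easy to illustrate. The sketch below, with all names hypothetical, allocates a source port from a tenant’s soft-unicast port range so that every egress five-tuple is unique:</p>
            <pre><code>use std::collections::HashSet;
use std::net::Ipv4Addr;
use std::ops::RangeInclusive;

/// (protocol, source IP, source port, destination IP, destination port)
type FiveTuple = (u8, Ipv4Addr, u16, Ipv4Addr, u16);

struct EgressAllocator {
    in_use: HashSet&lt;FiveTuple&gt;,
}

impl EgressAllocator {
    fn new() -&gt; Self {
        Self { in_use: HashSet::new() }
    }

    /// Pick the first free source port in the tenant's assigned range so
    /// that the rewritten five-tuple is unique in the root namespace.
    /// Returns None when the range is exhausted for this destination.
    fn allocate(
        &amp;mut self,
        proto: u8,
        egress_ip: Ipv4Addr,
        ports: RangeInclusive&lt;u16&gt;,
        dst: (Ipv4Addr, u16),
    ) -&gt; Option&lt;u16&gt; {
        for port in ports {
            if self.in_use.insert((proto, egress_ip, port, dst.0, dst.1)) {
                return Some(port);
            }
        }
        None
    }
}

fn main() {
    const TCP: u8 = 6;
    let egress = Ipv4Addr::new(192, 0, 2, 1); // stand-in soft-unicast IP
    let origin = (Ipv4Addr::new(203, 0, 113, 9), 443);
    let mut alloc = EgressAllocator::new();

    // With only ports 4000..=4001 assigned, a tenant can hold exactly two
    // concurrent flows to the same origin; a third must be rejected.
    assert_eq!(alloc.allocate(TCP, egress, 4000..=4001, origin), Some(4000));
    assert_eq!(alloc.allocate(TCP, egress, 4000..=4001, origin), Some(4001));
    assert_eq!(alloc.allocate(TCP, egress, 4000..=4001, origin), None);

    // A different destination can reuse the same ports without ambiguity.
    let other = (Ipv4Addr::new(198, 51, 100, 2), 80);
    assert_eq!(alloc.allocate(TCP, egress, 4000..=4001, other), Some(4000));
    println!("five-tuple allocation behaves as expected");
}</code></pre>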
    <div>
      <h3>The Unix philosophy, manifest</h3>
      <a href="#the-unix-philosophy-manifest">
        
      </a>
    </div>
    <p>To sum up what we have so far: instead of adding the functionality directly to the proxy application, we create smaller, reusable services. It becomes possible to understand the failure cases present in a smaller system and design it to exhibit reliable behavior. And when we can carve subsystems out of a larger system, we can apply the same logic to each of them. By focusing on making the smaller services work correctly, we improve the whole system's reliability and development agility.</p><p>Although the three services’ business logic differs, you may notice what they have in common: they receive sockets, or file descriptors, from other applications, so that those applications can restart. The services themselves can also be restarted without dropping connections. Let’s take a look at how graceful restart and file descriptor passing work in our cases.</p>
    <div>
      <h3>File descriptor passing</h3>
      <a href="#file-descriptor-passing">
        
      </a>
    </div>
    <p>We use Unix domain sockets for inter-process communication, a common pattern. Besides sending raw data, Unix sockets also allow passing file descriptors between different processes. This is essential for our architecture as well as for graceful restarts.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/43M4QXoTMDFZLlb9idMPbs/5f9ea51f3c055e0b7ee8723c97c9188a/image4-6.png" />
            
            </figure><p>There are two main ways to transfer a file descriptor: the pidfd_getfd syscall or <a href="/know-your-scm_rights/">SCM_RIGHTS</a>. The latter is the better choice for us here, as our use cases lean toward the proxy application “giving” the sockets rather than the microservices “taking” them. Moreover, the first method would require special permissions and a way for the proxy to signal which file descriptor to take.</p><p>Currently we have our own internal library named hot-potato to pass the file descriptors around, as we use stable Rust in production. If you are fine with using nightly Rust, you may want to consider the <a href="https://doc.rust-lang.org/std/os/unix/net/struct.SocketAncillary.html">unix_socket_ancillary_data</a> feature. The blog post linked above about SCM_RIGHTS also explains how that can be implemented. Still, we want to add some “interesting” details you may want to know before using SCM_RIGHTS in production:</p><ul><li><p>There is a maximum number of file descriptors you can pass per message. The limit is defined by the constant SCM_MAX_FD in the kernel, which has been set to 253 since kernel version 2.6.38.</p></li><li><p>Getting the peer credentials of a socket may be quite useful for observability in multi-tenant settings.</p></li><li><p>An SCM_RIGHTS ancillary message forms a message boundary.</p></li><li><p>It is possible to send any file descriptor, not only sockets. We use this trick together with memfd_create to get around the maximum buffer size without implementing something like length-encoded frames. This also makes zero-copy message passing possible.</p></li></ul>
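            <p>To make the mechanics concrete, here is a stripped-down sketch of SCM_RIGHTS passing written against the <code>libc</code> crate (Linux-specific and illustrative only: production code, like our hot-potato library, must also handle truncated control messages, multiple descriptors, and error paths):</p>
            <pre><code>use std::fs::File;
use std::io::{Read, Write};
use std::os::fd::{AsRawFd, FromRawFd, RawFd};
use std::os::unix::net::UnixStream;

/// Send one file descriptor over a Unix socket as SCM_RIGHTS ancillary
/// data, alongside a single placeholder payload byte.
fn send_fd(sock: RawFd, fd: RawFd) -&gt; std::io::Result&lt;()&gt; {
    unsafe {
        let mut byte = [0u8; 1];
        let mut iov = libc::iovec {
            iov_base: byte.as_mut_ptr() as *mut _,
            iov_len: 1,
        };
        let mut cmsg_buf = [0u64; 8]; // u64 keeps the buffer aligned for cmsghdr
        let mut msg: libc::msghdr = std::mem::zeroed();
        msg.msg_iov = &amp;mut iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cmsg_buf.as_mut_ptr() as *mut _;
        msg.msg_controllen = libc::CMSG_SPACE(4) as usize;
        let cmsg = libc::CMSG_FIRSTHDR(&amp;msg);
        (*cmsg).cmsg_level = libc::SOL_SOCKET;
        (*cmsg).cmsg_type = libc::SCM_RIGHTS;
        (*cmsg).cmsg_len = libc::CMSG_LEN(4) as usize;
        std::ptr::copy_nonoverlapping(&amp;fd as *const RawFd as *const u8, libc::CMSG_DATA(cmsg), 4);
        if libc::sendmsg(sock, &amp;msg, 0) &lt; 0 {
            return Err(std::io::Error::last_os_error());
        }
    }
    Ok(())
}

/// Receive a descriptor sent by send_fd. The kernel installs a duplicate
/// of the sender's descriptor into our fd table.
fn recv_fd(sock: RawFd) -&gt; std::io::Result&lt;RawFd&gt; {
    unsafe {
        let mut byte = [0u8; 1];
        let mut iov = libc::iovec {
            iov_base: byte.as_mut_ptr() as *mut _,
            iov_len: 1,
        };
        let mut cmsg_buf = [0u64; 8];
        let mut msg: libc::msghdr = std::mem::zeroed();
        msg.msg_iov = &amp;mut iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cmsg_buf.as_mut_ptr() as *mut _;
        msg.msg_controllen = std::mem::size_of_val(&amp;cmsg_buf);
        if libc::recvmsg(sock, &amp;mut msg, 0) &lt; 0 {
            return Err(std::io::Error::last_os_error());
        }
        let cmsg = libc::CMSG_FIRSTHDR(&amp;msg);
        let mut fd: RawFd = -1;
        std::ptr::copy_nonoverlapping(libc::CMSG_DATA(cmsg), &amp;mut fd as *mut RawFd as *mut u8, 4);
        Ok(fd)
    }
}

fn demo() -&gt; std::io::Result&lt;String&gt; {
    let (left, right) = UnixStream::pair()?;
    // Put some bytes in a pipe, then pass its read end across the socket.
    let mut pipe_fds = [0 as RawFd; 2];
    assert_eq!(unsafe { libc::pipe(pipe_fds.as_mut_ptr()) }, 0);
    let mut writer = unsafe { File::from_raw_fd(pipe_fds[1]) };
    writer.write_all(b"state")?;
    drop(writer); // close the write end so the reader will see EOF
    send_fd(left.as_raw_fd(), pipe_fds[0])?;
    let mut reader = unsafe { File::from_raw_fd(recv_fd(right.as_raw_fd())?) };
    let mut contents = String::new();
    reader.read_to_string(&amp;mut contents)?;
    Ok(contents)
}

fn main() -&gt; std::io::Result&lt;()&gt; {
    assert_eq!(demo()?, "state");
    println!("descriptor passed and read back successfully");
    Ok(())
}</code></pre>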
    <div>
      <h3>Graceful restart</h3>
      <a href="#graceful-restart">
        
      </a>
    </div>
    <p>We explored the general strategy for graceful restart in the “<a href="/oxy-the-journey-of-graceful-restarts/">Oxy: the journey of graceful restarts</a>” blog post. Let’s dive into how we leverage tokio and file descriptor passing to migrate all important state from the old process to the new one. This lets us terminate the old process almost instantly without leaving any connection behind.</p>
    <div>
      <h3>Passing states and file descriptors</h3>
      <a href="#passing-states-and-file-descriptors">
        
      </a>
    </div>
    <p>Applications like NGINX can be reloaded with no downtime. However, if there are pending requests, then there will be lingering processes that handle those connections before terminating. This is not ideal for observability. It can also cause performance degradation when old processes build up after consecutive restarts.</p><p>In the three microservices in this blog post, we use the state-passing concept, where pending requests are paused and transferred to the new process. The new process picks up both new requests and the old ones immediately on start. This method is admittedly more complex than keeping the old process running. At a high level, we have the following extra steps when the application receives an upgrade request (usually SIGHUP): pause all tasks, wait until all tasks (in groups) are paused, and send them to the new process.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4JKw9eZUPQspxBsmCKcwm6/5a83e36bcc5fb03531a89cd75da73158/Graceful-restart.png" />
            
            </figure>
    <div>
      <h3>WaitGroup using JoinSet</h3>
      <a href="#waitgroup-using-joinset">
        
      </a>
    </div>
    <p>Problem statement: we dynamically spawn different concurrent tasks, and each task can spawn new child tasks. We must wait for some of them to complete before continuing.</p><p>In other words, tasks can be managed as groups. In Go, waiting for a collection of tasks to complete is a solved problem with WaitGroup. We discussed a way to implement WaitGroup in Rust using channels in a <a href="/oxy-the-journey-of-graceful-restarts/">previous blog post</a>. There are also crates like waitgroup that simply use AtomicWaker. Another approach is using JoinSet, which may make the code more readable. In the example below, we group the requests using a JoinSet.</p>
            <pre><code>    let mut task_group = JoinSet::new();

    loop {
        // Receive the request from a listener
        let Some(request) = listener.recv().await else {
            println!("There is no more request");
            break;
        };
        // Spawn a task that will process request.
        // This returns immediately
        task_group.spawn(process_request(request));
    }

    // Wait for all requests to be completed before continue
    while task_group.join_next().await.is_some() {}</code></pre>
            <p>However, an obvious problem with this is that if we receive a lot of requests, the JoinSet will need to keep the results for all of them. Let’s change the code to clean up the JoinSet as the application processes new requests, so that memory pressure stays lower:</p>
            <pre><code>    loop {
        tokio::select! {
            biased; // This is optional

            // Clean up the JoinSet as we go
        // Note: the is_empty check matters. join_next() on an empty
        // JoinSet completes immediately with None, so without the
        // guard this branch would be selected in a busy loop.
            _task_result = task_group.join_next(), if !task_group.is_empty() =&gt; {}

            req = listener.recv() =&gt; {
                let Some(request) = req else {
                    println!("There is no more request");
                    break;
                };
                task_group.spawn(process_request(request));
            }
        }
    }

    while task_group.join_next().await.is_some() {}</code></pre>
            
    <div>
      <h3>Cancellation</h3>
      <a href="#cancellation">
        
      </a>
    </div>
    <p>We want to pass the pending requests to the new process as soon as possible once the upgrade signal is received. This requires us to pause all requests we are processing. In other words, to be able to implement graceful restart, we need to implement graceful shutdown. The <a href="https://tokio.rs/tokio/topics/shutdown">official tokio tutorial</a> already covers how this can be achieved using channels. Of course, we must guarantee the tasks we are pausing are cancellation-safe. The paused results will be collected into the JoinSet, and we just need to pass them to the new process using file descriptor passing.</p><p>For example, in Bumblebee, a paused state will include the environment’s file descriptors, the client socket, and the socket proxying the IP flow. We also need to transfer the current NAT table to the new process, which could be larger than the socket buffer. So the NAT table state is encoded into an anonymous file descriptor, and we just need to pass that file descriptor to the new process.</p>
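            <p>The “encode large state into an anonymous file descriptor” step is small enough to sketch with <code>libc::memfd_create</code> (Linux-specific; the toy payload below stands in for the real NAT table encoding):</p>
            <pre><code>use std::fs::File;
use std::io::{Read, Seek, SeekFrom, Write};
use std::os::fd::FromRawFd;

/// Write a state blob into an anonymous in-memory file and rewind it.
/// The resulting descriptor can be handed to the new process via
/// SCM_RIGHTS like any other fd, with no socket buffer size limit.
fn state_to_memfd(state: &amp;[u8]) -&gt; std::io::Result&lt;File&gt; {
    let fd = unsafe { libc::memfd_create(b"nat-state\0".as_ptr() as *const _, 0) };
    if fd &lt; 0 {
        return Err(std::io::Error::last_os_error());
    }
    let mut file = unsafe { File::from_raw_fd(fd) };
    file.write_all(state)?;
    file.seek(SeekFrom::Start(0))?; // the receiver reads from the beginning
    Ok(file)
}

fn demo() -&gt; std::io::Result&lt;Vec&lt;u8&gt;&gt; {
    // Stand-in for an encoded NAT table; it could be many megabytes.
    let state = vec![42u8; 1 &lt;&lt; 20];
    let mut file = state_to_memfd(&amp;state)?;
    // ...here the fd would be passed to the new process...
    let mut restored = Vec::new();
    file.read_to_end(&amp;mut restored)?;
    Ok(restored)
}

fn main() -&gt; std::io::Result&lt;()&gt; {
    assert_eq!(demo()?.len(), 1 &lt;&lt; 20);
    println!("restored 1 MiB of state through a memfd");
    Ok(())
}</code></pre>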
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We considered how a complex proxy application can be divided into smaller components. Those components can run as separate processes with different lifetimes. Still, this type of architecture does incur additional costs: distributed tracing and inter-process communication. However, the costs are acceptable considering the performance, maintainability, and reliability improvements. In upcoming blog posts, we will talk about different debugging tricks we learned while working with a large codebase with complex service interactions, using tools like strace and eBPF.</p>
            <category><![CDATA[Proxying]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Edge]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Oxy]]></category>
            <guid isPermaLink="false">3xNFwkSFuO8BXQtaddgoVq</guid>
            <dc:creator>Quang Luong</dc:creator>
            <dc:creator>Chris Branch</dc:creator>
        </item>
        <item>
            <title><![CDATA[Oxy: the journey of graceful restarts]]></title>
            <link>https://blog.cloudflare.com/oxy-the-journey-of-graceful-restarts/</link>
            <pubDate>Tue, 04 Apr 2023 13:00:00 GMT</pubDate>
            <description><![CDATA[ Deploying new versions of long-lived server software while maintaining a reliable experience is challenging. For oxy, we established several development and operational patterns to increase reliability and reduce friction in deployments ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Any software under continuous development and improvement will eventually need a new version deployed to the systems running it. This can happen in several ways, depending on how much you care about things like reliability, availability, and correctness. When I started out in web development, I didn’t think about any of these qualities; I simply blasted my new code over FTP directly to my <code>/cgi-bin/</code> directory, which was the style at the time. For those of us producing desktop software, often you sidestep this entirely by having the user save their work, close the program and install an update – but they usually get to decide when this happens.</p><p>At Cloudflare we have to take this seriously. Our software is in constant use and cannot simply be stopped abruptly. A dropped HTTP request can cause an entire webpage to load incorrectly, and a broken connection can kick you out of a video call. Taking away reliability creates a vacuum filled only by user frustration.</p>
    <div>
      <h3>The limitations of the typical upgrade process</h3>
      <a href="#the-limitations-of-the-typical-upgrade-process">
        
      </a>
    </div>
    <p>There is no one right way to upgrade software reliably. <a href="https://www.erlang.org">Some programming languages</a> and environments make it easier than others, but in a Turing-complete language <a href="https://en.wikipedia.org/wiki/Halting_problem">few things are impossible</a>.</p><p>One popular and generally applicable approach is to start a new version of the software, make it responsible for a small number of tasks at first, and then gradually increase its workload until the new version is responsible for everything and the old version responsible for nothing. At that point, you can stop the old version.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3AkYGHsqbNozEYjlnqYAPA/4abd834e1a5117244cb5f79967e88937/image2-1.png" />
            
            </figure><p>Most of Cloudflare’s proxies follow a similar pattern: they receive connections or requests from many clients over the Internet, communicate with other internal services to decide how to serve the request, and fetch content over the Internet if we cannot serve it locally. In general, all of this work happens within the lifetime of a client’s connection. If we aren’t serving any clients, we aren’t doing any work.</p><p>The safest time to restart, therefore, is when there is nobody to interrupt. But does such a time really exist? The Internet operates 24 hours a day and many users rely on long-running connections for things like backups, real-time updates or remote shell sessions. Even if you defer restarts to a “quiet” period, the next-best strategy of “interrupt the fewest number of people possible” will fail when you have a critical security fix that needs to be deployed immediately.</p><p>Despite this challenge, we have to start somewhere. You rarely arrive at the perfect solution on your first try.</p>
    <div>
      <h3><a href="https://knowyourmeme.com/memes/flipping-tables-%E2%95%AF%E2%96%A1%E2%95%AF%EF%B8%B5-%E2%94%BB%E2%94%81%E2%94%BB">(╯°□°）╯︵ ┻━┻</a></h3>
      <a href="#">
        
      </a>
    </div>
    <p>We have previously blogged about <a href="/graceful-upgrades-in-go/">implementing graceful restarts in Cloudflare’s Go projects</a>, using a library called <a href="https://github.com/cloudflare/tableflip">tableflip</a>. This starts a new version of your program and allows the new version to signal to the old version that it started successfully, then lets the old version clear its workload. For a proxy like any Oxy application, that means the old version stops accepting new connections once the new version starts accepting connections, then drives its remaining connections to completion.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2yUYfctQn4Vpmwc3xiBEPi/4c5717abe754fd365aea93ea9bc2c933/image6-1.png" />
            
            </figure><p>This is the simplest case of the migration strategy previously described: the new version immediately takes all new connections, instead of a gradual rollout. But in aggregate across Cloudflare’s server fleet the upgrade process is spread across several hours and the result is as gradual as a deployment orchestrated by Kubernetes or similar.</p><p>tableflip also allows your program to bind to sockets, or to reuse the sockets opened by a previous instance. This enables the new instance to accept new connections on the same socket and let the old instance release that responsibility.</p><p>Oxy is a Rust project, so we can’t reuse tableflip. We rewrote the spawning/signaling section in Rust, but not the socket code. For that we had an alternative approach.</p>
    <div>
      <h3>Socket management with systemd</h3>
      <a href="#socket-management-with-systemd">
        
      </a>
    </div>
    <p>systemd is a widely used suite of programs for starting and managing all of the system software needed to run a useful Linux system. It is responsible for running software in the correct order – for example ensuring the network is ready before starting a program that needs network access – or running it only if it is needed by another program.</p><p>Socket management falls in this latter category, under the term ‘socket activation’. Its <a href="https://mgdm.net/weblog/systemd-socket-activation/">intended and original use is interesting</a> but ultimately irrelevant here; for our purposes, systemd is a mere socket manager. Many Cloudflare services configure their sockets using systemd .socket files, and when their service is started the socket is brought into the process with it. This is how we deploy most Oxy-based services, and Oxy has first-class support for sockets opened by systemd.</p>
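            <p>For illustration, a socket-activated Oxy deployment is wired up with a pair of unit files along these lines (names and paths hypothetical):</p>
            <pre><code># oxy-app.socket: systemd owns the listening socket
[Unit]
Description=Listening socket for an Oxy application

[Socket]
ListenStream=443

[Install]
WantedBy=sockets.target

# oxy-app.service: the process inherits the socket as a file descriptor
# (starting at fd 3, with LISTEN_FDS and LISTEN_PID set by systemd)
[Service]
ExecStart=/usr/local/bin/oxy-app</code></pre>
            <p>Because the socket unit outlives the service, stopping or restarting the service leaves the socket open, and the kernel queues incoming connections until the new process picks them up.</p>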
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/374DW87mqCYpK3NDSWWWQe/2395b888ef72a70713da9b7cadb361cc/image4-1.png" />
            
            </figure><p>Using systemd decouples the lifetime of sockets from the lifetime of the Oxy application. When Oxy creates its sockets on startup, if you restart or temporarily stop the Oxy application the sockets are closed. When clients attempt to connect to the proxy during this time, they will get a very unfriendly “connection refused” error. If, however, systemd manages the socket, that socket remains open even while the Oxy application is stopped. Clients can still connect to the socket and those connections will be served as soon as the Oxy application starts up successfully.</p>
    <div>
      <h3>Channeling your inner WaitGroup</h3>
      <a href="#channeling-your-inner-waitgroup">
        
      </a>
    </div>
    <p>A useful piece of library code our Go projects use is <a href="https://gobyexample.com/waitgroups">WaitGroups</a>. These are essential in Go, where goroutines - asynchronously-running code blocks - are pervasive. Waiting for goroutines to complete before continuing another task is a common requirement. Even the example for tableflip uses them, to demonstrate how to wait for tasks to shut down cleanly before quitting your process.</p><p>There is not an out-of-the-box equivalent in <a href="https://tokio.rs">tokio</a> – the async Rust runtime Oxy uses – or async/await generally, so we had to create one ourselves. Fortunately, most of the building blocks to roll your own exist already. Tokio has <a href="https://docs.rs/tokio/latest/tokio/sync/mpsc/index.html">multi-producer, single consumer (MPSC) channels</a>, generally used by multiple tasks to push the results of work onto a queue for a single task to process, but we can exploit the fact that it signals to that single receiver when all the sender channels have been closed and no new messages are expected.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/696myOP10jhM6nQdMPOerD/59613bd231ef16df30683b277f5cb86f/image5-1.png" />
            
            </figure><p>To start, we create an MPSC channel. Each task takes a clone of the producer end of the channel, and when that task completes it closes its instance of the producer. When we want to wait for all of the tasks to complete, we await a result on the consumer end of the MPSC channel. When every instance of the producer channel is closed - i.e. all tasks have completed - the consumer receives a notification that all of the channels are closed. Closing the channel when a task completes is an automatic consequence of Rust’s <a href="https://doc.rust-lang.org/rust-by-example/scope/raii.html">RAII</a> rules. Because the language enforces this rule it is harder to write incorrect code, though in fact we need to write very little code at all.</p>
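            <p>The same trick works with any channel that signals “all senders are gone”, including the synchronous channel in std. A compact sketch with threads standing in for tokio tasks:</p>
            <pre><code>use std::sync::mpsc;
use std::thread;

/// Spawn n tasks and block until every one has finished, using channel
/// closure as the completion signal: the Rust analogue of a WaitGroup.
fn wait_group(n: usize) -&gt; usize {
    let (done, all_done) = mpsc::channel::&lt;usize&gt;();
    for task_id in 0..n {
        let done = done.clone();
        thread::spawn(move || {
            // ...task work would happen here...
            done.send(task_id).unwrap();
            // `done` is dropped here automatically (RAII).
        });
    }
    // Drop the original sender so only the tasks' clones keep it open.
    drop(done);
    // The iterator ends once every sender clone has been dropped,
    // i.e. once all tasks have completed.
    all_done.iter().count()
}

fn main() {
    assert_eq!(wait_group(4), 4);
    println!("all 4 tasks completed");
}</code></pre>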
    <div>
      <h3>Getting feedback on failure</h3>
      <a href="#getting-feedback-on-failure">
        
      </a>
    </div>
    <p>Many programs that implement a graceful reload/restart mechanism use Unix signals to trigger the process to perform an action. Signals are an ancient technique introduced in early versions of Unix to <a href="https://lwn.net/Articles/414618/">solve a specific problem while creating dozens more</a>. A common pattern is to change a program’s configuration on disk, then send it a signal (often SIGHUP) which the program handles by reloading those configuration files.</p><p>The limitations of this technique are obvious as soon as you make a mistake in the configuration, or when an important file referenced in the configuration is deleted. You reload the program and wonder why it isn’t behaving as you expect. If an error is raised, you have to look in the program’s log output to find out.</p><p>This problem compounds when you use <a href="/future-proofing-saltstack/">an automated configuration management tool</a>. It is not useful if that tool makes a configuration change and reports that it successfully reloaded your program, when in fact the program failed to read the change. The only thing that was successful was sending the reload signal!</p><p>We solved this in Oxy by creating a Unix socket specifically for coordinating restarts, and adding a new mode to Oxy that triggers a restart. 
In this mode:</p><ol><li><p>The restarter process validates the configuration file.</p></li><li><p>It connects to the restart coordination socket defined in that file.</p></li><li><p>It sends a “restart requested” message.</p></li><li><p>The current proxy instance receives this message.</p></li><li><p>A new instance is started, inheriting a pipe it will use to notify its parent instance.</p></li><li><p>The current instance waits for the new instance to report success or failure.</p></li><li><p>The current instance sends a “restart response” message back to the restarter process, containing the result.</p></li><li><p>The restarter process reports this result back to the user, using exit codes so automated systems can detect failure.</p></li></ol>
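            <p>A toy version of the restarter’s side of this exchange fits in a few lines; the one-byte wire format and the socket path below are invented for the example:</p>
            <pre><code>use std::io::{Read, Write};
use std::os::unix::net::{UnixListener, UnixStream};
use std::thread;

const RESTART_REQUESTED: u8 = 1;
const RESTART_OK: u8 = 0;

/// The restarter: ask the running instance to restart and return the
/// result byte, which maps directly onto the process exit code.
fn request_restart(socket_path: &amp;str) -&gt; std::io::Result&lt;u8&gt; {
    let mut conn = UnixStream::connect(socket_path)?;
    conn.write_all(&amp;[RESTART_REQUESTED])?;
    let mut response = [0u8; 1];
    conn.read_exact(&amp;mut response)?;
    Ok(response[0])
}

fn demo() -&gt; std::io::Result&lt;u8&gt; {
    // Stand-in for the running proxy instance: accept one request,
    // pretend the new instance started fine, and report success.
    let path = "/tmp/oxy-restart-demo.sock";
    let _ = std::fs::remove_file(path);
    let listener = UnixListener::bind(path)?;
    thread::spawn(move || {
        let (mut conn, _) = listener.accept().unwrap();
        let mut request = [0u8; 1];
        conn.read_exact(&amp;mut request).unwrap();
        assert_eq!(request[0], RESTART_REQUESTED);
        // ...spawn the new instance and wait for its readiness report...
        conn.write_all(&amp;[RESTART_OK]).unwrap();
    });
    request_restart(path)
}

fn main() -&gt; std::io::Result&lt;()&gt; {
    let result = demo()?;
    assert_eq!(result, RESTART_OK);
    // Exit code 0 on success, so automation can detect failures.
    std::process::exit(i32::from(result));
}</code></pre>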
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/24v87tIgWVdtxWE80HINrJ/4e64fff6fdb5591c8f8f36d2deb0ae7b/image3-1.png" />
            
            </figure><p>Now when we make a change to any of our Oxy applications, we can be confident that failures are detected using nothing more than our SREs’ existing tooling. This lets us discover failures earlier, narrow down root causes sooner, and avoid our systems getting into an inconsistent state.</p><p>This technique is described more generally in a coworker’s blog, <a href="https://blog.adamchalmers.com/signals-vs-servers/#a-better-way-control-servers">using an internal HTTP endpoint instead</a>. Yet HTTP is missing one important property of Unix sockets for the purpose of replacing signals. A user may only send a signal to a process if the process belongs to them - i.e. they started it - or if the user is root. This prevents another user logged into the same machine as you from terminating all of your processes. As Unix sockets are files, they also follow the Unix permission model. <a href="https://man7.org/linux/man-pages/man7/unix.7.html">Write permissions are required to connect to a socket</a>. Thus we can trivially reproduce the signals security model by making the restart coordination socket writable only by its owning user. (Root, as always, bypasses all permission checks.)</p>
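            <p>Reproducing the signals security model is then a matter of one permission change on the socket file (path illustrative):</p>
            <pre><code>use std::fs;
use std::os::unix::fs::PermissionsExt;
use std::os::unix::net::UnixListener;

/// Bind a coordination socket and restrict connections to the owning
/// user (and root): connecting to a Unix socket needs write permission.
fn bind_private(path: &amp;str) -&gt; std::io::Result&lt;u32&gt; {
    let _ = fs::remove_file(path);
    let _listener = UnixListener::bind(path)?;
    fs::set_permissions(path, fs::Permissions::from_mode(0o600))?;
    Ok(fs::metadata(path)?.permissions().mode() &amp; 0o777)
}

fn main() -&gt; std::io::Result&lt;()&gt; {
    let mode = bind_private("/tmp/oxy-coordination-demo.sock")?;
    assert_eq!(mode, 0o600);
    println!("socket mode: {mode:o}");
    Ok(())
}</code></pre>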
    <div>
      <h3>Leave no connection behind</h3>
      <a href="#leave-no-connection-behind">
        
      </a>
    </div>
    <p>We have put a lot of effort into making restarts as graceful as possible, but there are still certain limitations. After restarting, eventually the old process has to terminate, to prevent a build-up of old processes after successive restarts consuming excessive memory and reducing the performance of other running services. There is an upper bound to how long we’ll let the old process run for; when this is reached, any connections remaining are forcibly broken.</p><p>The configuration changes that can be applied using graceful restart are limited by the design of systemd. While some configuration, like resource limits, can now be applied without restarting the service it applies to, others cannot; most significantly, new sockets. This is a problem inherent to the fork-and-inherit model.</p><p>For UDP-based protocols like HTTP/3, there is not even a concept of a listener socket. The new process may open UDP sockets, but by default incoming packets are balanced between all open unconnected UDP sockets for a given address. How does the old process drain existing sessions without receiving packets intended for the new process, and vice versa?</p><p>Is there a way to carry existing state to a new process to avoid some of these limitations? This is a hard problem to solve generally, and even in languages designed to support hot code upgrades there is some degree of running old tasks with old versions of code. Yet there are some common useful tasks that can be carried between processes so we can “interrupt the fewest number of people possible”.</p><p>Let’s not forget the unplanned outages: segfaults, the OOM killer, and other crashes. Thankfully rare in Rust code, but not impossible.</p><p>You can find the source for our Rust implementation of graceful restarts, named shellflip, in <a href="https://github.com/cloudflare/shellflip">its GitHub repository</a>. However, restarting correctly is just the first step of many needed to achieve our ultimate reliability goals. 
In a follow-up blog post we’ll talk about some creative solutions to these limitations.</p> ]]></content:encoded>
            <category><![CDATA[Oxy]]></category>
            <category><![CDATA[Proxying]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Edge]]></category>
            <guid isPermaLink="false">nebmQQHCFE8esMLxchEw9</guid>
            <dc:creator>Chris Branch</dc:creator>
        </item>
        <item>
            <title><![CDATA[From IP packets to HTTP: the many faces of our Oxy framework]]></title>
            <link>https://blog.cloudflare.com/from-ip-packets-to-http-the-many-faces-of-our-oxy-framework/</link>
            <pubDate>Thu, 30 Mar 2023 13:00:00 GMT</pubDate>
            <description><![CDATA[ We have recently introduced Oxy, our Rust-based framework for proxies powering many Cloudflare services and products. Today, we will explain why and how it spans various layers of the OSI model, by handling directly raw IP packets, TCP connections and UDP payloads ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Grcs0D3SWn2QqZ6Clo5ua/4c71120829ce6d992a02faab0119b20c/image2-27.png" />
            
            </figure><p>We have recently <a href="/introducing-oxy/">introduced Oxy</a>, our Rust-based framework for proxies powering many Cloudflare services and products. Today, we will explain why and how it spans various layers of the <a href="https://en.wikipedia.org/wiki/OSI_model">OSI model</a>, by handling directly raw IP packets, TCP connections and UDP payloads, all the way up to application protocols such as HTTP and <a href="https://www.cloudflare.com/learning/access-management/what-is-ssh/">SSH</a>.</p>
    <div>
      <h3>On-ramping IP packets</h3>
      <a href="#on-ramping-ip-packets">
        
      </a>
    </div>
    <p>An application built on top of Oxy defines — in a configuration file — the on-ramps that will accept ingress traffic to be proxied to some off-ramp. One of the possibilities is to on-ramp raw IP packets. But why operate at that layer?</p><p>The answer is: <a href="/introducing-cloudflare-one/">to power Cloudflare One</a>, our network offering for customers to extend their private networks — such as offices, data centers, <a href="https://www.cloudflare.com/learning/cloud/what-is-cloud-networking/">cloud networks</a> and roaming users — with the Cloudflare global network. Such private networks operate based on <a href="/stronger-bridge-to-zero-trust/">Zero Trust principles</a>, which means every access is authenticated and authorized, contrasting with legacy approaches where you can reach every private service after authenticating once with the Virtual Private Network.</p><p>To effectively extend our customer’s private network into ours, we need to support arbitrary protocols that rely on the Internet Protocol (IP). Hence, we on-ramp Cloudflare One customers’ traffic at (OSI model) layer 3, as a stream of IP packets. Naturally, those will often encapsulate TCP streams and UDP sessions. But nothing precludes other traffic from flowing through.</p>
    <div>
      <h3>IP tunneling</h3>
      <a href="#ip-tunneling">
        
      </a>
    </div>
    <p>Cloudflare’s operational model dictates that <a href="/how-cloudflares-architecture-allows-us-to-scale-to-stop-the-largest-attacks/">every service, machine and network</a> be operated in a homogeneous way, usable by every one of our customers the same way. We essentially have a gigantic multi-tenanted system. Simply on-ramping raw IP packets does not suffice: we must always move the IP packets within the scope of the tenant they belong to.</p><p>This is why we introduced the concept of IP tunneling in Oxy: every IP packet handled has context associated with it; at the very least, the tenant that it belongs to. Other arbitrary contexts can be added, but that is up to each application (built on top of Oxy) to define, parse and consume in its Oxy hooks. This allows applications to <a href="/introducing-oxy/">extend and customize</a> Oxy’s behavior.</p><p>You have probably heard of (or even used!) <a href="/warp-for-desktop/">Cloudflare Zero Trust WARP</a>: client software that you can install on your device(s) to create virtual private networks managed and handled by Cloudflare. You begin by authenticating with your Cloudflare One account, and then the software will on-ramp your device’s traffic through the nearest Cloudflare data center: either to be upstreamed to Internet public IPs, or to other Cloudflare One connectors, such as <a href="/warp-to-warp/">another WARP device</a>.</p><p>Today, WARP routes the traffic captured in your device (e.g. your smartphone) via a WireGuard tunnel that is terminated in a server in the nearest Cloudflare data center. That server then opens an IP tunnel to an Oxy instance running on the same server. 
To convey context about that traffic, namely the <a href="/gateway-swg-3/">identity of the tenant</a>, some context must be attached to the IP tunnel.</p><p>For this, we use a <a href="https://man7.org/linux/man-pages/man7/unix.7.html">Unix SOCK_SEQPACKET</a>, which is a datagram-oriented socket exposing a connection-based interface with reliable and ordered delivery — it only accepts connections locally within the machine where it is bound. Oxy receives the context in the first datagram, which the application parses — it could be any format the application using Oxy desires. Then all subsequent datagrams are assumed to be raw self-describing IP packets, with no overhead whatsoever.</p><p>Other examples are the on-ramps of <a href="/magic-wan-firewall/">Magic WAN</a>, such as <a href="https://www.cloudflare.com/learning/network-layer/what-is-gre-tunneling/">GRE</a> or <a href="https://www.cloudflare.com/learning/network-layer/what-is-ipsec/">IPsec</a> tunnels, which also bring raw IP packets from customers’ networks to Cloudflare data centers. Unlike WARP, whose IP packets are decapsulated in user space, for GRE and IPsec we rely on the Linux kernel to do the job for us. Hence, we have no state whatsoever between two consecutive IP packets coming from the same customer, as the Linux kernel is routing them independently.</p><p>To accommodate the differences between IP packet handling in user space and the kernel, Oxy differentiates two types of IP tunnels:</p><ul><li><p><i>Connected IP tunnels</i> — as explained for WARP above, where the context is passed once, in the first datagram of the IP Tunnel SEQPACKET connection</p></li><li><p><i>Unconnected IP tunnels</i> — used by Magic WAN, where each IP packet is encapsulated (using GUE, i.e. 
<a href="https://datatracker.ietf.org/meeting/91/materials/slides-91-nvo3-1">Generic UDP Encapsulation</a>) to accommodate the context and unconnected UDP sockets are used</p></li></ul><p>Encapsulating every IP packet comes at the cost of extra CPU usage. But moving the packet around to and from an Oxy instance does not change much regardless of the encapsulation, as we do not have <a href="https://www.cloudflare.com/learning/network-layer/what-is-mtu/">MTU limitations</a> inside our data centers. This way we avoid causing IP packet fragmentation, whose reassembly takes a toll on CPU and memory usage.</p>
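The connected-tunnel convention can be sketched as follows. This is a hypothetical illustration, not Oxy's real API: it assumes the context is a plain UTF-8 tenant id, and feeds datagrams from an in-memory iterator rather than a real SOCK_SEQPACKET socket; `TunnelContext` and `accept_tunnel` are made-up names.

```rust
// Sketch of a connected IP tunnel: over the SEQPACKET connection, the first
// datagram carries the tenant context, and every subsequent datagram is a
// raw IP packet with no extra framing.

#[derive(Debug, PartialEq)]
struct TunnelContext {
    tenant_id: String,
}

/// Parse the first datagram as context (here simply a UTF-8 tenant id; the
/// real format is whatever the application defines) and treat the rest as
/// raw IP packets.
fn accept_tunnel(
    mut datagrams: impl Iterator<Item = Vec<u8>>,
) -> Option<(TunnelContext, Vec<Vec<u8>>)> {
    let first = datagrams.next()?;
    let tenant_id = String::from_utf8(first).ok()?;
    Some((TunnelContext { tenant_id }, datagrams.collect()))
}

fn main() {
    let wire = vec![
        b"tenant-42".to_vec(),        // context, sent once per connection
        vec![0x45, 0x00, 0x00, 0x14], // first bytes of a raw IPv4 packet
    ];
    let (ctx, packets) = accept_tunnel(wire.into_iter()).unwrap();
    println!("tenant={} packets={}", ctx.tenant_id, packets.len());
}
```

Because the context is sent exactly once, every later datagram is forwarded with zero per-packet overhead, which is the point of the connected variant.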
    <div>
      <h3>Tracking IP flows</h3>
      <a href="#tracking-ip-flows">
        
      </a>
    </div>
    <p>Once IP packets arrive at Oxy, regardless of how they on-ramp, we must decide what to do with them. We decided to rely on the idea of IP flows, as that is inherent to most protocols: a point-to-point interaction will generally be bounded in time and follow some type of state machine, either known by the transport or by the application protocol.</p><p>We perform flow tracking to detect IP flows. When handling an on-ramped IP packet, we parse its IP header and possible transport (i.e. OSI Model layer 4) header. We use the excellent <a href="https://crates.io/crates/etherparse">etherparse Rust crate</a> for this purpose, which extracts the flow signature: source and destination IP addresses, ports (where applicable) and protocol. We then look up whether there is already a known IP flow for that signature: if so, then the packet is proxied through the path already determined for that flow towards its off-ramp. If the flow is new, then its upstream route is computed and memoized for future packets. This is in essence what routers do, and to some extent Oxy’s handling of IP packets is meant to operate like a router.</p><p>The interesting thing about tracking IP flows is that we can now expose their lifetime events to the application built on top of Oxy, via its hooks. 
Applications can then use these events for interesting operations, such as:</p><ul><li><p>Applying <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust principles</a> before allowing the IP flow through, such as <a href="https://developers.cloudflare.com/cloudflare-one/policies/filtering/">our Secure Web Gateway policies</a></p></li><li><p>Emitting <a href="https://developers.cloudflare.com/cloudflare-one/analytics/logs/gateway-logs/">audit logs</a> that collect the decisions taken at the start of the IP flow</p></li><li><p>Collecting metadata about the traffic processed by the time the IP flow ends, e.g., to support billing calculations</p></li><li><p>Computing routing decisions of where to send the IP flow next, e.g. to another Cloudflare product/service, or off-ramped to the Internet, or to another Cloudflare One connector</p></li></ul>
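The lookup-or-memoize step described above can be sketched with a small flow table. This is an illustrative sketch, not Oxy's internals: `FlowSignature`, `Route` and `FlowTracker` are hypothetical names, and in reality the signature comes from parsing headers with etherparse and the route from an application hook.

```rust
use std::collections::HashMap;
use std::net::IpAddr;

// A flow signature keys a table of memoized routing decisions: the first
// packet of a flow computes its route, later packets reuse it.

#[derive(Hash, PartialEq, Eq, Clone, Debug)]
struct FlowSignature {
    src: IpAddr,
    dst: IpAddr,
    protocol: u8,              // e.g. 6 = TCP, 17 = UDP
    ports: Option<(u16, u16)>, // None for port-less protocols like ICMP
}

#[derive(Clone, Debug, PartialEq)]
enum Route {
    Internet,
    Connector(String), // e.g. another Cloudflare One connector
}

struct FlowTracker {
    flows: HashMap<FlowSignature, Route>,
}

impl FlowTracker {
    fn new() -> Self {
        Self { flows: HashMap::new() }
    }

    /// Return the memoized route for this signature, computing it once for
    /// new flows (this is where an application's routing hook would run).
    fn route_for(&mut self, sig: FlowSignature, compute: impl FnOnce() -> Route) -> Route {
        self.flows.entry(sig).or_insert_with(compute).clone()
    }
}

fn main() {
    let mut tracker = FlowTracker::new();
    let sig = FlowSignature {
        src: "10.0.0.1".parse().unwrap(),
        dst: "192.0.2.7".parse().unwrap(),
        protocol: 6,
        ports: Some((44832, 443)),
    };
    // First packet computes the route; the second lookup reuses it.
    let r1 = tracker.route_for(sig.clone(), || Route::Internet);
    let r2 = tracker.route_for(sig, || Route::Connector("unused".into()));
    assert_eq!(r1, r2);
    println!("{} tracked flow(s)", tracker.flows.len());
}
```

A real tracker would also expire idle flows and emit the start/end lifetime events listed above.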
    <div>
      <h3>From an IP flow to a TCP stream</h3>
      <a href="#from-an-ip-flow-to-a-tcp-stream">
        
      </a>
    </div>
    <p>You would think that most applications do not handle IP packets directly. That is a good hunch, and also a fact at Cloudflare: many systems operate at the application layer (OSI Model layer 7) where they can inspect traffic in a way much closer to what the end user is perceiving.</p><p>To get closer to that reality, Oxy can upgrade an IP flow to the transport layer (OSI Model layer 4). We first consider what this means for the case of TCP traffic. The problem that we want to solve is to take a stream of raw IP packets with the same TCP flow signature, initiating a <a href="https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/">TCP handshake</a>, and obtain as a result a TCP connection streaming data. Hence, we need a TCP protocol implementation that can be used from userspace.</p><p>The best Rust-native implementation is the <a href="https://crates.io/crates/smoltcp">smoltcp</a> crate. However, its stated objectives do not match our needs, as it does not implement many of the performance and reliability enhancements that are expected of a first-class TCP implementation, and therefore it does not suffice for the sheer amount of traffic and the demands we have.</p><p>Instead, we rely on the Linux kernel to help us here. After all, it has the most battle-tested TCP protocol implementation in the world.</p><p>To leverage that, we set up a <a href="https://www.kernel.org/doc/html/v5.8/networking/tuntap.html">TUN interface</a>, and add an IP route to forward traffic to that interface (more details below as to what IPs to use). A TUN interface is a virtual network device whose network data is generated by user-programmable software, rather than a device driver for a physically-connected network adapter. But otherwise it looks and works like a physical network adapter for all practical purposes.</p><p>We write the IP packets — meant to be <i>upgraded</i> to a TCP stream — to the file descriptor backing the TUN interface. 
However, that’s not enough, as the kernel in our machines will drop those packets since customers’ IP addresses only make sense in their own infrastructure.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4tFfWzjpKd9CB0An7BwDY5/e77a2dd9597c5a26bded5b251c66ab40/From-IP-flow-to-TCP-stream.png" />
            
            </figure><p>Transforming raw IP packets into a TCP stream</p><p>The step we are missing is that those packets must be transformed, i.e. <a href="https://en.wikipedia.org/wiki/Network_address_translation">Network Address Translated</a> (NAT), so that the kernel routes them into the TUN interface. Hence, Oxy maintains its own stateful NAT: every IP flow to be upgraded to a TCP stream must claim a NAT slot (returned when the TCP stream finishes), and have its packets’ addresses rewritten to the IPs that the TUN interface route encompasses.</p><p>Once packets flow into the TUN interface with the right addresses, the kernel will process them as if they had entered the machine through your network card. This means that you can now bind a TCP listener to accept TCP connections on the IP address for which the NAT-ed IP packets are destined, and <i>voilà</i>, we have our IP flows upgraded to TCP streams.</p><p>We are left with one question: what IP address should the NAT use? One option is to just reserve some machine-local IP address and hope that no other application running on that machine uses it, as otherwise unexpected traffic will show up in our TUN device.</p><p>Instead, we chose not to have to worry about that at all by relying on <a href="https://man7.org/linux/man-pages/man7/network_namespaces.7.html">Linux network namespaces</a>. A network namespace provides you with an isolated network in a machine, acting as a virtualization layer provided by the kernel. Even if you do not know what this is, you are likely using it, e.g. via Docker.</p><p>Hence, Oxy dynamically starts a network namespace to run its TUN interface for upgrading IP flows, where it can use all the local IP space and ports freely. 
After all, those TCP connections only matter locally, between Oxy’s NAT and Oxy’s L4 proxy.</p><p>An interesting aspect here is that the Oxy application itself runs in the default/root namespace, making it easily reachable for on-ramping traffic, and also able to off-ramp traffic to other services operating on the same machine in the default/root namespace. But that raises the question: how is Oxy able to operate simultaneously in the root namespace as well as in the namespace dedicated to upgrading IP flows to TCP connections? The trick is to:</p><ul><li><p>Run the Oxy-based process in the root namespace, without any special permissions (no elevated permissions required).</p></li><li><p>That process calls <a href="https://man7.org/linux/man-pages/man2/clone.2.html">clone</a> into a new unnamed user and network namespace.</p></li><li><p>The child (cloned) and parent (original) processes communicate via a paired pipe.</p></li><li><p>The child brings up the TUN interface and establishes the IP routes to it.</p></li><li><p>The child process binds a TCP listener on an IP address that is bound to the TUN interface and passes that file descriptor to the parent process using <a href="/know-your-scm_rights/">SCM_RIGHTS</a>.</p></li></ul><p>This way, the Oxy process will now have a TCP listener, to obtain the upgraded IP flow connections from, while running in the default namespace and despite that TCP listener — and any connections accepted from it — operating in an unnamed dynamically created namespace.</p>
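The claim-and-release slot discipline of the stateful NAT can be sketched as below. This is a simplified illustration under assumed names (`NatTable`, `FlowId`): it models a slot as just a local port drawn from a pool, whereas the real NAT rewrites full source/destination addresses.

```rust
use std::collections::HashMap;

// Each IP flow being upgraded claims a NAT slot (here, a port on the
// TUN-side address) and returns it when its TCP stream finishes.

type FlowId = u64; // stand-in for the real flow signature

struct NatTable {
    free_ports: Vec<u16>,
    slots: HashMap<FlowId, u16>,
}

impl NatTable {
    fn new(range: std::ops::Range<u16>) -> Self {
        Self { free_ports: range.collect(), slots: HashMap::new() }
    }

    /// Claim a NAT slot for a new flow; this flow's packets would be
    /// rewritten to use the returned port before entering the TUN interface.
    fn claim(&mut self, flow: FlowId) -> Option<u16> {
        let port = self.free_ports.pop()?;
        self.slots.insert(flow, port);
        Some(port)
    }

    /// Release the slot once the upgraded TCP stream ends.
    fn release(&mut self, flow: FlowId) {
        if let Some(port) = self.slots.remove(&flow) {
            self.free_ports.push(port);
        }
    }
}

fn main() {
    let mut nat = NatTable::new(40000..40002);
    let a = nat.claim(1).unwrap();
    let b = nat.claim(2).unwrap();
    assert_ne!(a, b);
    assert!(nat.claim(3).is_none()); // pool exhausted
    nat.release(1);
    assert!(nat.claim(3).is_some()); // slot reusable after release
    println!("ok");
}
```

Running inside a dedicated network namespace is what makes a simple pool like this safe: the whole local address and port space belongs to Oxy.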
    <div>
      <h3>From a TCP stream to HTTP</h3>
      <a href="#from-a-tcp-stream-to-http">
        
      </a>
    </div>
    <p>Once Oxy has a TCP stream, it may also <i>upgrade</i> it, in a sense, to be handled as HTTP traffic. Again, the framework provides the capabilities, but it is up to the application (built on top of Oxy) to make the decision. Analogously to the IP flow, the start of a TCP stream also triggers a hook to let the application know about a new connection, and to let it decide what to do with it. One of the choices is to treat it as HTTP(S) traffic, at which point Oxy will pass the connection through a <a href="https://crates.io/crates/hyper">Hyper server</a> (possibly also doing TLS if necessary). If you are curious about this part, then rest assured we will have a blog post focused just on that soon.</p>
    <div>
      <h3>What about UDP</h3>
      <a href="#what-about-udp">
        
      </a>
    </div>
    <p>While we have focused on TCP so far, all of the capabilities implemented for TCP are also supported for UDP. We’ve glossed over it because it is easier to handle, since converting an IP packet to UDP payloads requires only stripping the IP and UDP headers. We do this in Oxy logic, in user space, thereby replacing the idea employed for TCP that relies on the TUN interface. Everything else works the same way across TCP and UDP, with UDP traffic potentially being HTTPS for the case of QUIC-based HTTP/3.</p>
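Stripping the headers is simple enough to show directly. The sketch below is a deliberately minimal IPv4+UDP parser for illustration only (it honors IHL but ignores options content and checksums); Oxy does this with etherparse rather than hand-rolled parsing.

```rust
// Converting a raw IPv4 packet into its UDP payload means skipping the IP
// header (IHL * 4 bytes) and the 8-byte UDP header.

fn udp_payload(packet: &[u8]) -> Option<&[u8]> {
    // IPv4 first byte: version (high nibble) and IHL in 32-bit words.
    let ihl = (packet.first()? & 0x0f) as usize * 4;
    if *packet.get(9)? != 17 {
        return None; // IP protocol field: 17 = UDP
    }
    let udp = packet.get(ihl..)?;
    let len = u16::from_be_bytes([*udp.get(4)?, *udp.get(5)?]) as usize;
    udp.get(8..len) // UDP length field covers the 8-byte header + payload
}

fn main() {
    // Hand-built IPv4 (20-byte header, protocol 17) + UDP (8-byte header)
    // packet carrying the payload b"hi".
    let mut packet = vec![0u8; 30];
    packet[0] = 0x45; // version 4, IHL 5 (20 bytes)
    packet[9] = 17;   // UDP
    packet[24..26].copy_from_slice(&10u16.to_be_bytes()); // UDP length = 8 + 2
    packet[28..30].copy_from_slice(b"hi");
    assert_eq!(udp_payload(&packet), Some(&b"hi"[..]));
    println!("payload = {:?}", udp_payload(&packet).unwrap());
}
```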
    <div>
      <h3>From TCP/UDP back to IP flow</h3>
      <a href="#from-tcp-udp-back-to-ip-flow">
        
      </a>
    </div>
    <p>We have been looking at IP packets on-ramping in Oxy and converting from IP flows to TCP/UDP. Eventually that traffic is sent to an upstream that will respond back, and so we ought to obtain resulting IP packets to send to the client. This happens quite naturally in the code base as we only need to revert the operation done in the <i>upgrade</i>:</p><ul><li><p>For UDP, we add the IP and UDP headers to the payload of each datagram and thereby obtain the IP packet to send to the client.</p></li><li><p>For TCP, writing to the upgraded TCP socket causes the kernel to generate IP packets routed to the TUN interface. We read these packets from the TUN interface and <i>undo</i> the NAT operation explained above — applied to packets being written to the TUN interface — thereby obtaining the IP packet to send to the client.</p></li></ul><p>More interestingly, the application built on top of Oxy may also define that TCP/UDP traffic (handled as layer 4) is to be <i>downgraded</i> to IP flow (i.e. layer 3). 
To see where this is useful, consider another Cloudflare One example, where a WARP client establishes an SSH session to a remote WARP device (which is <a href="/warp-to-warp/">now possible</a>) and has configured <a href="/ssh-command-logging/">SSH command audit logging</a> — in that case, we will have the following steps:</p><ol><li><p>On-ramp the IP packets from the WARP client device into the Oxy application.</p></li><li><p>Oxy tracks the IP flows; per the application’s mandate, Oxy then checks whether it is a TCP flow with destination port 22, and if so upgrades it to a TCP connection.</p></li><li><p>The application is given control of the TCP connection and, in this case, <a href="https://developers.cloudflare.com/cloudflare-one/policies/filtering/network-policies/ssh-logging/">our Secure Web Gateway</a> (an Oxy application) parses the traffic to perform the SSH command logging.</p></li><li><p>Since the upstream is determined to be another WARP device, Oxy is mandated to <i>downgrade</i> the TCP connection to IP packets, so that they can be off-ramped to the upstream as such.</p></li></ol><p>Therefore, we need to provide the capability to do step 4, which we haven’t described yet. For UDP the operation is trivial: add or remove the IP/UDP headers as necessary.</p><p>For TCP, we will again resort to (another) TUN interface. This is slightly more complicated than upgrading, because when upgrading we use a single TCP listener from the network namespace where all upgraded connections appear, whereas to downgrade we need a TCP client connection from the network namespace per downgraded connection. 
Therefore, we need to interact with the network namespace at runtime to obtain these <i>on-demand</i> TCP client connections, as explained next, which makes the downgrade process more involved.</p><p>To enable that, we rely on the paired pipe maintained between the Oxy (parent) process and the cloned (child) process that operates inside the dynamic namespace: it is used for requesting the TCP client socket for a specific IP flow. This entails the following steps:</p><ol><li><p>The Oxy process reserves a NAT mapping for that IP flow for downgrade.</p></li><li><p>It requests (via a <a href="https://man7.org/linux/man-pages/man2/send.2.html">pipe sendmsg</a>) the cloned child process to establish a TCP connection to the NAT-ed addresses.</p></li><li><p>By doing so, the child process inherently makes the Linux kernel TCP implementation issue a TCP handshake to the upstream, causing a SYN IP packet to show up in the TUN interface.</p></li><li><p>The Oxy process is consuming packets from the downgrading namespace’s TUN interface, and hence will consume that packet, for which it promptly reverts the NAT. The IP packet is then off-ramped as explained in the next section.</p></li><li><p>In the meantime, the child process will have sent back (via the paired pipe) the file descriptor for the TCP client socket, again using SCM_RIGHTS. The Oxy application will now proxy the client TCP connection (meant to be downgraded) into that obtained TCP connection, to result in the raw IP packets read from the TUN interface.</p></li></ol><p>Despite being elaborate, this is quite intuitive, particularly if you’ve read through the upgrade section earlier, which is a simpler version of this idea.</p>
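For the trivial UDP direction, re-adding the headers can be sketched concretely. This is an illustrative IPv4-only sketch (no IP options; the UDP checksum is left zero, which RFC 768 permits over IPv4), not Oxy's actual encapsulation code; `encapsulate_udp` is a made-up name.

```rust
use std::net::Ipv4Addr;

// Reverse of stripping: wrap a datagram payload in IPv4 + UDP headers to
// obtain the raw IP packet sent back to the client.
fn encapsulate_udp(
    src: Ipv4Addr,
    sport: u16,
    dst: Ipv4Addr,
    dport: u16,
    payload: &[u8],
) -> Vec<u8> {
    let udp_len = 8 + payload.len() as u16;
    let total_len = 20 + udp_len;

    let mut p = vec![0u8; 20];
    p[0] = 0x45; // version 4, IHL 5 (20-byte header, no options)
    p[2..4].copy_from_slice(&total_len.to_be_bytes());
    p[8] = 64; // TTL
    p[9] = 17; // protocol: UDP
    p[12..16].copy_from_slice(&src.octets());
    p[16..20].copy_from_slice(&dst.octets());

    // IPv4 header checksum: one's-complement sum of the 16-bit header words.
    let mut sum: u32 = p
        .chunks(2)
        .map(|w| u16::from_be_bytes([w[0], w[1]]) as u32)
        .sum();
    while sum > 0xffff {
        sum = (sum & 0xffff) + (sum >> 16);
    }
    p[10..12].copy_from_slice(&(!(sum as u16)).to_be_bytes());

    // UDP header: source port, destination port, length, checksum (0 = none).
    p.extend_from_slice(&sport.to_be_bytes());
    p.extend_from_slice(&dport.to_be_bytes());
    p.extend_from_slice(&udp_len.to_be_bytes());
    p.extend_from_slice(&[0, 0]);
    p.extend_from_slice(payload);
    p
}

fn main() {
    let pkt = encapsulate_udp(
        "192.0.2.1".parse().unwrap(),
        53,
        "10.0.0.2".parse().unwrap(),
        40000,
        b"hi",
    );
    println!("built a {}-byte IPv4/UDP packet", pkt.len());
}
```

For TCP, as described above, no such hand-assembly happens: the kernel produces the packets, and Oxy only reverts the NAT on what it reads from the TUN interface.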
    <div>
      <h3>The overall picture</h3>
      <a href="#the-overall-picture">
        
      </a>
    </div>
    <p>In the sections above we have covered the life of an IP packet entering Oxy and what happens to it until exiting towards its upstream destination. This is summarized in the following diagram illustrating the life cycle of such packets.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1PQ1zLHMeqHLCiTaLfdDXT/e68ba49b2f7015e0e31bb173cc8ac54a/The-overall-picture.png" />
            
            </figure><p>Life cycle of IP packets in and out of an Oxy instance</p><p>We are left with how to exit the traffic. Sending the proxied traffic towards its destination (referred to as upstream) is what we call off-ramping it. We support off-ramping traffic across the same OSI Model layers that we allow to on-ramp: that is, as IP packets, TCP or UDP sockets, or HTTP(S) directly.</p><p>It is up to the application logic (that uses the Oxy framework) to make that decision and instruct Oxy on which layer to use. There is a lot to be said about this part, such as what <a href="/cloudflare-servers-dont-own-ips-anymore/">IPs to use when egressing to the Internet</a> — so if you are curious for more details, then stay tuned for more blog posts about Oxy.</p><p>No software overview is complete without its tests. The one interesting thing to think about here is that, to test all of the above, we need to generate raw IP packets in our tests. That’s not ideal, as one would like to just write plain Rust logic that establishes TCP connections towards the Oxy proxy. Hence, to simplify all of this, our tests actually reuse our internal library (described above) to create dynamic network namespaces and downgrade/upgrade the TCP connections as necessary.</p><p>Therefore, our tests talk normal TCP against a TCP downgrader running together with the tests, which outputs raw IP packets that we pipe to the Oxy instance being tested. It is an elegant and simple way to work around the challenge while further battle-testing the TUN interface logic.</p>
    <div>
      <h3>Wrapping up</h3>
      <a href="#wrapping-up">
        
      </a>
    </div>
    <p>A framework that covers proxying IP packets all the way up to HTTP requests may feel overly broad. We felt the same at first at Cloudflare, particularly because Oxy was not born in a day: in fact, it started with HTTP proxying and then worked its way down the OSI Model layers. In hindsight, doing it all feels like the right decision: being able to upgrade and downgrade traffic as necessary has been very useful, and in fact our proxying logic shares the majority of its code despite handling different layers (socket primitives, observability, security aspects, configurability, etc.).</p><p>Today, all of the ideas above are powering Cloudflare One Zero Trust as well as <a href="/geoexit-improving-warp-user-experience-larger-network/">plain WARP</a>. This means they are battle-tested across millions of daily users exchanging most of their traffic (both to the Internet as well as towards private/corporate networks) through the Cloudflare global network.</p><p>If you’ve enjoyed reading this and are interested in working on similar challenges with Rust, then be sure to check our open positions as we continue to grow our team. Likewise, there will be more blog posts related to our learnings developing Oxy, so come along for the ride!</p>
            <category><![CDATA[Proxying]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Oxy]]></category>
            <guid isPermaLink="false">6QDZq0bfVTtMajuHkH0UFA</guid>
            <dc:creator>Nuno Diegues</dc:creator>
        </item>
        <item>
            <title><![CDATA[Oxy is Cloudflare's Rust-based next generation proxy framework]]></title>
            <link>https://blog.cloudflare.com/introducing-oxy/</link>
            <pubDate>Thu, 02 Mar 2023 15:05:00 GMT</pubDate>
            <description><![CDATA[ In this blog post, we are proud to introduce Oxy - our modern proxy framework, developed using the Rust programming language ]]></description>
            <content:encoded><![CDATA[ <p></p><p>In this blog post, we are proud to introduce Oxy - our modern proxy framework, developed using the Rust programming language. Oxy is a foundation of several Cloudflare projects, including the <a href="https://www.cloudflare.com/products/zero-trust/gateway/">Zero Trust Gateway</a>, the iCloud Private Relay <a href="/icloud-private-relay/">second hop proxy</a>, and the internal <a href="/cloudflare-servers-dont-own-ips-anymore/">egress routing service</a>.</p><p>Oxy leverages our years of experience building high-load proxies to implement the latest communication protocols, enabling us to effortlessly build sophisticated services that can accommodate massive amounts of daily traffic.</p><p>We will be exploring Oxy in greater detail in upcoming technical blog posts, providing a comprehensive and in-depth look at its capabilities and potential applications. For now, let us embark on this journey and discover what Oxy is and how we built it.</p>
    <div>
      <h2>What Oxy does</h2>
      <a href="#what-oxy-does">
        
      </a>
    </div>
    <p>We refer to Oxy as our "next-generation proxy framework". But what do we really mean by “proxy framework”? Picture a server (like NGINX, which many readers might be familiar with) that can proxy traffic with an array of protocols, including various predefined common traffic flow scenarios that enable you to route traffic to specific destinations or even egress with a different protocol than the one used for ingress. This server can be configured in many ways for specific flows and boasts tight integration with the surrounding infrastructure, whether telemetry consumers or networking services.</p><p>Now, take all of that and add in the ability to programmatically control every aspect of the proxying: protocol decapsulation, traffic analysis, routing, tunneling logic, DNS resolution, and so much more. And this is what the Oxy proxy framework is: a feature-rich proxy server tightly integrated with our internal infrastructure that's customizable to meet application requirements, allowing engineers to tweak every component.</p><p>This design is in line with our belief in an iterative approach to development, where a basic solution is built first and then gradually improved over time. With Oxy, you can start with a basic solution that can be deployed to our servers and then add additional features as needed, taking advantage of the many extensibility points offered by Oxy. In fact, you can avoid writing any code, besides a few lines of bootstrap boilerplate, and get a production-ready server with a wide variety of startup configuration options and traffic flow scenarios.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5nk7Ri6viC85BdWoRSiB9v/f40a9971fdad71cb07ee0b3aebf99fd9/image3-2.png" />
            
            </figure><p><i>High-level Oxy architecture</i></p><p>For example, suppose you'd like to implement an HTTP firewall. With Oxy, you can proxy HTTP(S) requests right out of the box, eliminating the need to write any code related to production services, such as request metrics and logs. You simply need to implement an Oxy hook handler for HTTP requests and responses. If you've used <a href="https://developers.cloudflare.com/workers/examples/respond-with-another-site/">Cloudflare Workers</a> before, then you should be familiar with this extensibility model.</p><p>Similarly, you can implement a <a href="https://en.wikipedia.org/wiki/OSI_model">layer 4</a> firewall by providing application hooks that handle ingress and egress connections. This goes beyond a simple block/accept scenario, as you can build authentication functionality or a traffic router that sends traffic to different destinations based on the geographical information of the ingress connection. The capabilities are incredibly rich, and we've made the extensibility model as ergonomic and flexible as possible. As an example, if information obtained from layer 4 is insufficient to make an informed firewall decision, the app can simply ask Oxy to decapsulate the traffic and process it with HTTP firewall.</p><p>The aforementioned scenarios are prevalent in many products we build at Cloudflare, so having a foundation that incorporates ready solutions is incredibly useful. This foundation has absorbed lots of experience we've gained over the years, taking care of many sharp and dark corners of high-load service programming. As a result, application implementers can stay focused on the business logic of their application with Oxy taking care of the rest. In fact, we've been able to create a few privacy proxy applications using Oxy that now serve massive amounts of traffic in production with less than a couple of hundred lines of code. 
This is something that would have taken multiple orders of magnitude more time and lines of code before.</p><p>As previously mentioned, we'll dive deeper into the technical aspects in future blog posts. However, for now, we'd like to provide a brief overview of Oxy's capabilities. This will give you a glimpse of the many ways in which Oxy can be customized and used.</p>
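The hook-based extensibility model has roughly this shape. To be clear, this is NOT Oxy's real (internal) API: `AppHooks`, `Decision` and `Firewall` are hypothetical names, sketching the idea that the framework drives the proxying and calls back into application-defined handlers, with sensible defaults when a hook is not implemented.

```rust
// A tiny "HTTP firewall" in the hook style: the framework would call
// on_http_request for each proxied request and act on the decision.

#[derive(Debug, PartialEq)]
enum Decision {
    Allow,
    Block,
}

struct HttpRequest {
    host: String,
}

/// Hooks an application may implement; the framework supplies defaults,
/// so an empty application is already a working proxy.
trait AppHooks {
    fn on_http_request(&self, _req: &HttpRequest) -> Decision {
        Decision::Allow
    }
}

/// An application overriding just one hook: block requests to listed hosts.
struct Firewall {
    blocklist: Vec<String>,
}

impl AppHooks for Firewall {
    fn on_http_request(&self, req: &HttpRequest) -> Decision {
        if self.blocklist.iter().any(|h| h == &req.host) {
            Decision::Block
        } else {
            Decision::Allow
        }
    }
}

fn main() {
    let app = Firewall { blocklist: vec!["evil.example".into()] };
    let blocked = app.on_http_request(&HttpRequest { host: "evil.example".into() });
    let allowed = app.on_http_request(&HttpRequest { host: "ok.example".into() });
    println!("{:?} / {:?}", blocked, allowed);
}
```

Everything else (metrics, logs, protocol handling) stays in the framework, which is why such applications can be only a few hundred lines.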
    <div>
      <h3>On-ramps</h3>
      <a href="#on-ramps">
        
      </a>
    </div>
    <p>On-ramp defines a combination of transport layer socket type and protocols that server listeners can use for ingress traffic.</p><p>Oxy supports a wide variety of traffic on-ramps:</p><ul><li><p>HTTP 1/2/3 (including various CONNECT protocols for layer 3 and 4 traffic)</p></li><li><p>TCP and UDP traffic over Proxy Protocol</p></li><li><p>general purpose IP traffic, including ICMP</p></li></ul><p>With Oxy, you have the ability to analyze and manipulate traffic at multiple layers of the OSI model - from layer 3 to layer 7. This allows for a wide range of possibilities in terms of how you handle incoming traffic.</p><p>One of the most notable and powerful features of Oxy is the ability for applications to force decapsulation. This means that an application can analyze traffic at a higher level, even if it originally arrived at a lower level. For example, if an application receives IP traffic, it can choose to analyze the UDP traffic encapsulated within the IP packets. With just a few lines of code, the application can tell Oxy to upgrade the IP flow to a UDP tunnel, effectively allowing the same code to be used for different on-ramps.</p><p>The application can even go further and ask Oxy to sniff UDP packets and check if they contain <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3 traffic</a>. In this case, Oxy can upgrade the UDP traffic to HTTP and handle HTTP/3 requests that were originally received as raw IP packets. This allows for the simultaneous processing of traffic at all three layers (L3, L4, L7), enabling applications to analyze, filter, and manipulate the traffic flow from multiple perspectives. This provides a robust toolset for developing advanced traffic processing applications.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4tVlLbQeNVeN2lYN9ovJNH/d87cc5adb53ff0fc441530520540f781/image1-1.png" />
            
            </figure><p><i>Multi-layer traffic processing in Oxy applications</i></p>
    <div>
      <h3>Off-ramps</h3>
      <a href="#off-ramps">
        
      </a>
    </div>
    <p>Off-ramp defines a combination of transport layer socket type and protocols that proxy server connectors can use for egress traffic.</p><p>Oxy offers versatility in its egress methods, supporting a range of protocols including HTTP 1 and 2, UDP, TCP, and IP. It is equipped with internal DNS resolution and caching, as well as customizable resolvers, with automatic fallback options for maximum system reliability. Oxy implements <a href="https://www.rfc-editor.org/rfc/rfc8305">happy eyeballs</a> for TCP, advanced tunnel timeout logic and has the ability to route traffic to internal services with accompanying metadata.</p><p>Additionally, through collaboration with one of our internal services (which is an Oxy application itself!) <a href="/geoexit-improving-warp-user-experience-larger-network/">Oxy is able to offer geographical egress</a> — allowing applications to route traffic to the public Internet from various locations in our extensive network covering numerous cities worldwide. This complex and powerful feature can be easily utilized by Oxy application developers at no extra cost, simply by adjusting configuration settings.</p>
    <div>
      <h3>Tunneling and request handling</h3>
      <a href="#tunneling-and-request-handling">
        
      </a>
    </div>
    <p>We've discussed Oxy's communication capabilities with the outside world through on-ramps and off-ramps. In the middle, Oxy handles efficient stateful tunneling of various traffic types including TCP, UDP, QUIC, and IP, while giving applications full control over traffic blocking and redirection.</p><p>Additionally, Oxy effectively handles HTTP traffic, providing full control over requests and responses, and allowing it to serve as a direct HTTP or API service. With built-in tools for streaming analysis of HTTP bodies, Oxy makes it easy to extract and process data, such as form data from uploads and downloads.</p><p>In addition to its multi-layer traffic processing capabilities, Oxy also supports advanced HTTP tunneling methods, such as <a href="https://datatracker.ietf.org/doc/html/rfc9298">CONNECT-UDP</a> and <a href="https://datatracker.ietf.org/doc/draft-ietf-masque-connect-ip/">CONNECT-IP</a>, using the latest extensions to HTTP 3 and 2 protocols. It can even process HTTP CONNECT request payloads on layer 4 and recursively process the payload as HTTP if the encapsulated traffic is HTTP.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4a80AwmzUmUyxx7q8j2hcK/c2bcd1903e037852e57186510f6bac58/image2-2.png" />
            
            </figure><p><i>Recursive processing of HTTP CONNECT body payload in HTTP pipeline</i></p>
    <div>
      <h3>TLS</h3>
      <a href="#tls">
        
      </a>
    </div>
    <p>The modern Internet is unimaginable without traffic encryption, and Oxy, of course, provides this essential aspect. Oxy's cryptography and TLS are based on BoringSSL, providing both a FIPS-compliant version with a limited set of certified features and the latest version that supports all the currently available TLS features. Oxy also allows applications to switch between the two versions in real-time, on a per-request or per-connection basis.</p><p>Oxy's TLS client is designed to make HTTPS requests to <a href="https://en.wikipedia.org/wiki/Upstream_server">upstream servers</a>, with the functionality and security of a browser-grade client. This includes the reconstruction of certificate chains, certificate revocation checks, and more. In addition, Oxy applications can be secured with TLS v1.3, and optionally mTLS, allowing for the extraction of client authentication information from x509 certificates.</p><p>Oxy has the ability to inspect and filter HTTPS traffic, including HTTP/3, and provides the means for dynamically generating certificates, serving as a foundation for implementing data loss prevention (DLP) products. Additionally, Oxy's internal fork of BoringSSL, which is not FIPS-compliant, supports the use of <a href="https://datatracker.ietf.org/doc/html/rfc7250">raw public keys</a> as an alternative to WebPKI, making it ideal for internal service communication. This allows for all the benefits of TLS without the hassle of managing root certificates.</p>
    <div>
      <h3>Gluing everything together</h3>
      <a href="#gluing-everything-together">
        
      </a>
    </div>
    <p>Oxy is more than just a set of building blocks for network applications. It acts as a cohesive glue, handling the bootstrapping of the entire proxy application with ease, including parsing and applying configurations, setting up an asynchronous runtime, applying seccomp hardening, and providing automated graceful restart functionality.</p><p>With built-in support for panic reporting to Sentry, Prometheus metrics with a Rust-macro-based API, Kibana logging, distributed tracing, and memory and runtime profiling, Oxy offers comprehensive <a href="https://www.cloudflare.com/application-services/solutions/app-performance-monitoring/">monitoring</a> and analysis capabilities. It can also generate detailed audit logs for layer 4 traffic, useful for billing and network analysis.</p><p>To top it off, Oxy includes an integration testing framework, allowing for easy testing of application interactions using TypeScript-based tests.</p>
    <div>
      <h3>Extensibility model</h3>
      <a href="#extensibility-model">
        
      </a>
    </div>
    <p>To take full advantage of Oxy's capabilities, one must understand how to extend and configure its features. Oxy applications are configured using YAML configuration files, offering numerous options for each feature. Additionally, application developers can extend these options by leveraging the convenient macros provided by the framework, making customization a breeze.</p><p>Suppose the Oxy application uses a key-value database to retrieve user information. In that case, it would be beneficial to expose a YAML configuration settings section for this purpose. With Oxy, defining a structure and annotating it with the <code>#[oxy_app_settings]</code> attribute is all it takes to accomplish this:</p>
            <pre><code>/// Application’s key-value (KV) database settings
#[oxy_app_settings]
pub struct MyAppKVSettings {
    /// Key prefix.
    pub prefix: Option&lt;String&gt;,
    /// Path to the UNIX domain socket for the appropriate KV 
    /// server instance.
    pub socket: Option&lt;String&gt;,
}</code></pre>
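            <p>For illustration only, the generated configuration section for the hypothetical settings above could look something like this (the exact key names, defaults, and layout here are our assumptions, not Oxy’s actual output):</p>

```yaml
# Application’s key-value (KV) database settings
kv:
  # Key prefix.
  prefix: ~
  # Path to the UNIX domain socket for the appropriate KV
  # server instance.
  socket: ~
```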
            <p>Oxy can then generate a default YAML configuration file listing available options and their default values, including those extended by the application. The configuration options are automatically documented in the generated file using the Rust doc comments, following Rust best practices.</p><p>Moreover, Oxy supports multi-tenancy, allowing a single application instance to expose multiple on-ramp endpoints, each with a unique configuration. But sometimes even a YAML configuration file is not enough to build the desired application; this is where Oxy's comprehensive set of hooks comes in handy. These hooks can be used to extend the application with Rust code and cover almost all aspects of traffic processing.</p><p>To give you an idea of how easy it is to write an Oxy application, here is an example of basic Oxy code:</p>
            <pre><code>struct MyApp;

// Defines types for various application extensions to Oxy's
// data types. Contexts provide information and control knobs for
// the different parts of the traffic flow, and applications can
// extend all of them with their custom data. As mentioned before,
// applications can also define their custom configuration.
// It’s just a matter of defining a configuration object with the
// `#[oxy_app_settings]` attribute and providing the object type here.
impl OxyExt for MyApp {
    type AppSettings = MyAppKVSettings;
    type EndpointAppSettings = ();
    type EndpointContext = ();
    type IngressConnectionContext = MyAppIngressConnectionContext;
    type RequestContext = ();
    type IpTunnelContext = ();
    type DnsCacheItem = ();

}
   
#[async_trait]
impl OxyApp for MyApp {
    fn name() -&gt; &amp;'static str {
        "My app"
    }

    fn version() -&gt; &amp;'static str {
        env!("CARGO_PKG_VERSION")
    }

    fn description() -&gt; &amp;'static str {
        "This is an example of Oxy application"
    }

    async fn start(
        settings: ServerSettings&lt;MyAppKVSettings, ()&gt;
    ) -&gt; anyhow::Result&lt;Hooks&lt;Self&gt;&gt; {
        // Here the application initializes various hooks, with each
        // hook being a trait implementation containing multiple
        // optional callbacks invoked during the lifecycle of the
        // traffic processing.
        let ingress_hook = create_ingress_hook(&amp;settings);
        let egress_hook = create_egress_hook(&amp;settings);
        let tunnel_hook = create_tunnel_hook(&amp;settings);
        let http_request_hook = create_http_request_hook(&amp;settings);
        let ip_flow_hook = create_ip_flow_hook(&amp;settings);

        Ok(Hooks {
            ingress: Some(ingress_hook),
            egress: Some(egress_hook),
            tunnel: Some(tunnel_hook),
            http_request: Some(http_request_hook),
            ip_flow: Some(ip_flow_hook),
            ..Default::default()
        })
    }
}

// The entry point of the application
fn main() -&gt; OxyResult&lt;()&gt; {
    oxy::bootstrap::&lt;MyApp&gt;()
}</code></pre>
            
    <div>
      <h2>Technology choice</h2>
      <a href="#technology-choice">
        
      </a>
    </div>
    <p>Oxy leverages the safety and performance benefits of Rust as its implementation language. At Cloudflare, Rust has emerged as a popular choice for new product development, and there are ongoing efforts to migrate some of the existing products to the language as well.</p><p>Rust offers memory and concurrency safety through its ownership and borrowing system, preventing issues like null pointers and data races. This safety is achieved without sacrificing performance, as Rust provides low-level control and the ability to write code with minimal runtime overhead. Rust's balance of safety and performance has made it popular for building safe, performance-critical applications, like proxies.</p><p>We intentionally tried to stand on the shoulders of giants with this project and avoid reinventing the wheel. Oxy heavily relies on open-source dependencies, with <a href="https://github.com/hyperium/hyper">hyper</a> and <a href="https://github.com/tokio-rs/tokio">tokio</a> being the backbone of the framework. Our philosophy is that we should pull from existing solutions as much as we can, allowing for faster iteration, but also use widely battle-tested code. If something doesn't work for us, we try to collaborate with maintainers and contribute back our fixes and improvements. In fact, two of our team members now serve on the core teams of the tokio and hyper projects.</p><p>Even though Oxy is a proprietary project, we try to give back some love to the open-source community, without which the project wouldn’t be possible, by open-sourcing some of the building blocks, such as <a href="https://github.com/cloudflare/boring">https://github.com/cloudflare/boring</a> and <a href="https://github.com/cloudflare/quiche">https://github.com/cloudflare/quiche</a>.</p>
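    <p>To make the concurrency-safety point above concrete, here is a small, self-contained example (ours, not Oxy code). Rust refuses to compile a version of this program where threads mutate the counter without synchronization, so the shared state must be wrapped in reference-counted, lock-protected types:</p>

```rust
use std::sync::{Arc, Mutex};
use std::thread;

/// Increment a shared counter from several threads. The compiler
/// refuses to share mutable state across threads without
/// synchronization, so a data race cannot compile, let alone run.
fn parallel_count(threads: u64, per_thread: u64) -> u64 {
    let counter = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            // Each thread gets its own reference-counted handle.
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    // The lock guarantees exclusive access to the counter.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    // Always prints 4000: no increments can be lost.
    println!("{}", parallel_count(4, 1000));
}
```

    <p>Deleting the <code>Mutex</code> and incrementing the integer directly from the spawned threads is rejected at compile time; this is exactly the class of bug that C or C++ would let through to production.</p>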
    <div>
      <h2>The road to implementation</h2>
      <a href="#the-road-to-implementation">
        
      </a>
    </div>
    <p>At the beginning of our journey, we set out to implement a proof-of-concept HTTP firewall in Rust for what would eventually become the Zero Trust Gateway product. This project was originally part of the <a href="/1111-warp-better-vpn/">WARP</a> service repository. However, as the PoC rapidly advanced, it became clear that it needed to be separated into its own Gateway proxy for both technical and operational reasons.</p><p>Later on, when tasked with implementing a relay proxy for iCloud Private Relay, we saw the opportunity to reuse much of the code from the Gateway proxy. The Gateway project could also benefit from the HTTP/3 support that was being added for the Private Relay project. In fact, early iterations of the relay service were forks of the Gateway server.</p><p>It was then that we realized we could extract common elements from both projects to create a new framework, Oxy. The history of Oxy can be traced through the commit history of the Gateway and Private Relay projects, up until its separation into a standalone framework.</p><p>Since its inception, we have leveraged the power of Oxy to efficiently roll out multiple projects that would have required a significant amount of time and effort without it. Our iterative development approach has been a strength of the project, as we have been able to identify common, reusable components through hands-on testing and implementation.</p><p>Our small core team is supplemented by internal contributors from across the company, ensuring that the best subject-matter experts are working on the relevant parts of the project. This contribution model also allows us to shape the framework's API to meet the functional and ergonomic needs of its users, while the core team ensures that the project stays on track.</p>
    <div>
      <h2>Relation to <a href="/how-we-built-pingora-the-proxy-that-connects-cloudflare-to-the-internet/">Pingora</a></h2>
      <a href="#relation-to">
        
      </a>
    </div>
    <p>Although Pingora, another proxy server developed by us in Rust, shares some similarities with Oxy, it was intentionally designed as a separate proxy server with a different objective. Pingora was created to serve traffic from millions of our clients’ upstream servers, including those with ancient and unusual configurations. Non-UTF-8 URLs and TLS settings that are not supported by most TLS libraries are just a few such quirks among many others. This focus on handling technically challenging, unusual configurations sets Pingora apart from other proxy servers.</p><p>The concept of Pingora came about during the same period when we were beginning to develop Oxy, and we initially considered merging the two projects. However, we quickly realized that their objectives were too different to do that. Pingora is specifically designed to establish Cloudflare’s HTTP connectivity with the Internet, even in its most technically obscure corners. On the other hand, Oxy is a multipurpose platform that supports a wide variety of communication protocols and aims to provide a simple way to develop high-performance proxy applications with business logic.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Oxy is a proxy framework that we have developed to meet the demanding needs of modern services. It has been designed to provide a flexible and scalable solution that can be adapted to the unique requirements of each project, and by leveraging the power of Rust, we made it both safe and fast.</p><p>Looking forward, Oxy is poised to play a critical role in our company's larger effort to modernize and improve our architecture. It provides a solid building block in the foundation on which we can keep building a better Internet.</p><p>As the framework continues to evolve and grow, we remain committed to our iterative approach to development, constantly seeking out new opportunities to reuse existing solutions and improve our codebase. This collaborative, community-driven approach has already yielded impressive results, and we are confident that it will continue to drive the future success of Oxy.</p><p>Stay tuned for more deep-dive blog posts on the subject!</p> ]]></content:encoded>
            <category><![CDATA[Proxying]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Edge]]></category>
            <category><![CDATA[iCloud Private Relay]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[Oxy]]></category>
            <guid isPermaLink="false">1HAnoThlPiFQ4Bgpn04CM0</guid>
            <dc:creator>Ivan Nikulin</dc:creator>
        </item>
        <item>
            <title><![CDATA[Unlocking QUIC’s proxying potential with MASQUE]]></title>
            <link>https://blog.cloudflare.com/unlocking-quic-proxying-potential/</link>
            <pubDate>Sun, 20 Mar 2022 16:58:37 GMT</pubDate>
            <description><![CDATA[ We continue our technical deep dive into traditional TCP proxying over HTTP ]]></description>
            <content:encoded><![CDATA[ <p>In the <a href="/a-primer-on-proxies/">last post</a>, we discussed how HTTP CONNECT can be used to proxy TCP-based applications, including DNS-over-HTTPS and generic HTTPS traffic, between a client and target server. This provides significant benefits for those applications, but it doesn’t lend itself to non-TCP applications. And if you’re wondering whether or not we care about these, the answer is an emphatic yes!</p><p>For instance, <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> is based on QUIC, which runs on top of UDP. What if we wanted to speak HTTP/3 to a target server? That requires two things: (1) the means to encapsulate a UDP payload between client and proxy (which the proxy decapsulates and forwards to the target in an actual UDP datagram), and (2) a way to instruct the proxy to open a UDP association to a target so that it knows where to forward the decapsulated payload. In this post, we’ll discuss answers to these two questions, starting with encapsulation.</p>
    <div>
      <h3>Encapsulating datagrams</h3>
      <a href="#encapsulating-datagrams">
        
      </a>
    </div>
    <p>While TCP provides a reliable and ordered byte stream for applications to use, UDP instead provides unreliable messages called datagrams. Datagrams sent or received on a connection are loosely associated; each one is independent from a transport perspective. Applications that are built on top of UDP can leverage the unreliability for good. For example, low-latency media streaming often does so to avoid lost packets getting retransmitted. This makes sense: on a live teleconference, it is better to receive the most recent audio or video rather than starting to lag behind while you're waiting for stale data.</p><p>QUIC is designed to run on top of an unreliable protocol such as UDP. QUIC provides its own layer of security, packet loss detection, methods of data recovery, and congestion control. If the layer underneath QUIC duplicates those features, they can cause wasted work or, worse, create destructive interference. For instance, QUIC <a href="https://www.rfc-editor.org/rfc/rfc9002.html#section-7">congestion control</a> defines a number of signals that provide input to sender-side algorithms. If layers underneath QUIC affect its packet flows (loss, timing, pacing, etc), they also affect the algorithm output. Input and output run in a feedback loop, so perturbation of signals can get amplified. All of this can cause congestion control algorithms to be more conservative in the data rates they use.</p><p>If we could speak HTTP/3 to a proxy, and leverage a reliable QUIC stream to carry encapsulated datagram payloads, then everything <i>can</i> work. However, the reliable stream interferes with expectations; the most likely outcome is slower end-to-end UDP throughput than we could achieve without tunneling. 
Stream reliability runs counter to our goals.</p><p>Fortunately, QUIC's <a href="https://datatracker.ietf.org/doc/draft-ietf-quic-datagram/">unreliable datagram extension</a> adds a new <a href="https://datatracker.ietf.org/doc/html/draft-ietf-quic-datagram-07#section-4">DATAGRAM frame</a> that, as its name plainly says, is unreliable. It has several uses; the one we care about is that it provides a building block for performant UDP tunneling. In particular, this extension has the following properties:</p><ul><li><p>DATAGRAM frames are individual messages, unlike a long QUIC stream.</p></li><li><p>DATAGRAM frames do not contain a multiplexing identifier, unlike QUIC's stream IDs.</p></li><li><p>Like all QUIC frames, DATAGRAM frames must fit completely inside a QUIC packet.</p></li><li><p>DATAGRAM frames are subject to congestion control, helping senders to avoid overloading the network.</p></li><li><p>DATAGRAM frames are acknowledged by the receiver but, importantly, if the sender detects a loss, QUIC does not retransmit the lost data.</p></li></ul><p>The "Unreliable Datagram Extension to QUIC" specification will be published as an RFC soon. Cloudflare's <a href="https://github.com/cloudflare/quiche">quiche</a> library has supported it since October 2020.</p><p>Now that QUIC has primitives that support sending unreliable messages, we have a standard way to effectively tunnel UDP inside it. QUIC provides the STREAM and DATAGRAM transport primitives that support our proxying goals. It is now the application layer's responsibility to describe <b>how</b> to use them for proxying. Enter MASQUE.</p>
    <div>
      <h3>MASQUE: Unlocking QUIC’s potential for proxying</h3>
      <a href="#masque-unlocking-quics-potential-for-proxying">
        
      </a>
    </div>
    <p>Now that we’ve described how encapsulation works, let’s turn our attention to the second question listed at the start of this post: How does an application initialize an end-to-end tunnel, informing a proxy server where to send UDP datagrams to, and where to receive them from? This is the focus of the <a href="https://datatracker.ietf.org/wg/masque/about/">MASQUE Working Group</a>, which was formed in June 2020 and has been designing answers since. Many people across the Internet ecosystem have been contributing to the standardization activity. At Cloudflare, that includes Chris (as co-chair), Lucas (as co-editor of one WG document), and several other colleagues.</p><p>MASQUE started solving the UDP tunneling problem with a pair of specifications: a definition for how <a href="https://datatracker.ietf.org/doc/draft-ietf-masque-h3-datagram/">QUIC datagrams are used with HTTP/3</a>, and <a href="https://datatracker.ietf.org/doc/draft-ietf-masque-connect-udp/">a new kind of HTTP request</a> that initiates a UDP socket to a target server. These build on the concept of extended CONNECT, which was first introduced for HTTP/2 in <a href="https://datatracker.ietf.org/doc/html/rfc8441">RFC 8441</a> and has now been <a href="https://datatracker.ietf.org/doc/draft-ietf-httpbis-h3-websockets/">ported to HTTP/3</a>. Extended CONNECT defines the :protocol pseudo-header that can be used by clients to indicate the intention of the request. The initial use case was WebSockets, but we can repurpose it for UDP and it looks like this:</p>
            <pre><code>:method = CONNECT
:protocol = connect-udp
:scheme = https
:path = /target.example.com/443/
:authority = proxy.example.com</code></pre>
            <p>A client sends an extended CONNECT request to a proxy server, which identifies a target server in the :path. If the proxy succeeds in opening a UDP socket, it responds with a 2xx (Successful) status code. After this, an end-to-end flow of unreliable messages between the client and target is possible; the client and proxy exchange QUIC DATAGRAM frames with an encapsulated payload, and the proxy and target exchange UDP datagrams bearing that payload.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3BXgtvSVPvNMa3CkKqqioK/1530306e4007e30b6d7b995c2b01f823/image3-34.png" />
            
            </figure>
    <div>
      <h3>Anatomy of Encapsulation</h3>
      <a href="#anatomy-of-encapsulation">
        
      </a>
    </div>
    <p>UDP tunneling has a constraint that TCP tunneling does not – namely, the size of messages and how that relates to the path MTU (Maximum Transmission Unit; for more background, see our <a href="https://www.cloudflare.com/learning/network-layer/what-is-mtu/">Learning Center article</a>). The path MTU is the maximum size that is allowed on the path between client and server. The actual maximum is the smallest maximum across all elements at every hop and at every layer, from the network up to the application. All it takes is one component with a small MTU to constrain the MTU of the entire path. On the Internet, <a href="https://www.cloudflare.com/learning/network-layer/what-is-mtu/">1,500 bytes</a> is a common practical MTU. When considering tunneling using QUIC, we need to appreciate the anatomy of QUIC packets and frames in order to understand how they add bytes of overhead. This overhead consumes bytes and subtracts from our theoretical maximum.</p><p>We've been talking in terms of HTTP/3, which normally has its own frames (HEADERS, DATA, etc.) that have a common type and length overhead. However, there is no HTTP/3 framing when it comes to DATAGRAM; instead, the bytes are placed directly into the QUIC frame. This frame is composed of two fields. The first field is a variable number of bytes, called the <a href="https://datatracker.ietf.org/doc/html/draft-ietf-masque-h3-datagram-05#section-3">Quarter Stream ID</a> field, which is an encoded identifier that supports independent multiplexed DATAGRAM flows. It does so by binding each DATAGRAM to the HTTP request stream ID. In QUIC, stream IDs use two bits to encode four types of stream. Since request streams are always of one type (client-initiated bidirectional, to be exact), we can divide their ID by four to save space on the wire. Hence the name Quarter Stream ID. The second field is the payload, which contains the end-to-end message payload. Here's how it might look on the wire.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7fzMrcBgH540C9SizGskfh/2107ffe432c189ec089e5663615bc814/image2-75.png" />
            
            </figure><p>If you recall our lesson from the <a href="/a-primer-on-proxies/">last post</a>, DATAGRAM frames (like all frames) must fit completely inside a QUIC packet. Moreover, since QUIC requires that <a href="https://www.rfc-editor.org/rfc/rfc9000.html#section-14-7">fragmentation is disabled</a>, QUIC packets must fit completely inside a UDP datagram. This all combines to limit the maximum size of things that we can actually send: the path MTU determines the size of the UDP datagram, then we need to subtract the overheads of the UDP datagram header, QUIC packet header, and QUIC DATAGRAM frame header. For a better understanding of QUIC's wire image and overheads, see <a href="https://www.rfc-editor.org/rfc/rfc8999.html#section-5">Section 5 of RFC 8999</a> and <a href="https://www.rfc-editor.org/rfc/rfc9000.html#section-12.4">Section 12.4 of RFC 9000</a>.</p><p>If a sender has a message that is too big to fit inside the tunnel, there are only two options: discard the message or fragment it. Neither of these are good options. Clients create the UDP tunnel and are more likely to accurately calculate the real size of encapsulated UDP datagram payload, thus avoiding the problem. However, a target server is most likely unaware that a client is behind a proxy, so it cannot accommodate the tunneling overhead. It might send a UDP datagram payload that is too big for the proxy to encapsulate. This conundrum is common to all proxy protocols! There's an art in picking the right MTU size for UDP-based traffic in the face of tunneling overheads. While approaches like path MTU discovery can help, they are <a href="/path-mtu-discovery-in-practice/">not a silver bullet</a>. Choosing conservative maximum sizes can reduce the chances of tunnel-related problems. However, this needs to be weighed against being too restrictive. 
Given a theoretical path MTU of 1,500, once we consider QUIC encapsulation overheads, tunneled messages with a limit between 1,200 and 1,300 bytes can be effective. This is especially important when we think about tunneling QUIC itself. <a href="https://datatracker.ietf.org/doc/html/rfc9000#section-8.1">RFC 9000 Section 8.1</a> details how clients that initiate new QUIC connections must send UDP datagrams of at least 1,200 bytes. If a proxy can't support that, then QUIC will not work in a tunnel.</p>
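<p>To make the arithmetic concrete, here is a rough sketch (our illustration; the exact header sizes vary with IP version, connection ID length, and packet number encoding, so the per-header values below are assumptions):</p>

```rust
/// HTTP request streams are always client-initiated bidirectional,
/// so their QUIC stream IDs are 0, 4, 8, ...; dividing by four
/// yields the Quarter Stream ID carried with each DATAGRAM.
fn quarter_stream_id(request_stream_id: u64) -> u64 {
    request_stream_id / 4
}

/// Rough upper bound on the end-to-end payload that fits in one
/// tunneled DATAGRAM. The header sizes below are assumed typical
/// values, not exact figures.
fn max_datagram_payload(path_mtu: usize) -> usize {
    let ip_header = 20; // IPv4; 40 for IPv6
    let udp_header = 8;
    let quic_overhead = 25; // short header + AEAD tag; varies
    let dgram_frame_header = 3; // frame type + Quarter Stream ID varint; varies
    path_mtu - ip_header - udp_header - quic_overhead - dgram_frame_header
}

fn main() {
    println!("{}", quarter_stream_id(4)); // request stream 4 -> Quarter Stream ID 1
    println!("{}", max_datagram_payload(1500)); // 1444
}
```

<p>Note that this single-layer bound of roughly 1,444 bytes sits above the conservative 1,200–1,300 byte range: the gap is headroom for IPv6 headers, longer connection IDs, or nested tunnels.</p>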
    <div>
      <h3>Nested tunneling for Improved Privacy Proxying</h3>
      <a href="#nested-tunneling-for-improved-privacy-proxying">
        
      </a>
    </div>
    <p>MASQUE gives us the application layer building blocks to support efficient tunneling of TCP or UDP traffic. What's cool about this is that we can combine these blocks into different deployment architectures for different scenarios or different needs.</p><p>One example is nested tunneling via multiple proxies, which can minimize the connection metadata available to each individual proxy or server (this type of deployment is described in our recent post on <a href="/icloud-private-relay/">iCloud Private Relay</a>). In this kind of setup, a client might manage at least three logical connections. First, a QUIC connection between Client and Proxy 1. Second, a QUIC connection between Client and Proxy 2, which runs via a CONNECT tunnel in the first connection. Third, an end-to-end byte stream between Client and Server, which runs via a CONNECT tunnel in the second connection. A real TCP connection only exists between Proxy 2 and Server. If additional Client to Server logical connections are needed, they can be created inside the existing pair of QUIC connections.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3zIR70pmMrWlgjGongXqkH/9409a30dfba8a603d9a917b20e8a3d3a/image4-16.png" />
            
            </figure>
    <div>
      <h3>Towards a full tunnel with IP tunneling</h3>
      <a href="#towards-a-full-tunnel-with-ip-tunneling">
        
      </a>
    </div>
    <p>Proxy support for UDP and TCP already unblocks a huge assortment of use cases, including TLS, QUIC, HTTP, DNS, and so on. But it doesn’t help traffic that uses other <a href="https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml">IP protocols</a>, like <a href="https://en.wikipedia.org/wiki/Internet_Control_Message_Protocol">ICMP</a> or IPsec <a href="https://en.wikipedia.org/wiki/IPsec#Encapsulating_Security_Payload">Encapsulating Security Payload</a> (ESP). Fortunately, the MASQUE Working Group has also been working on IP tunneling. This is a lot more complex than UDP tunneling, so they first spent some time defining a common set of <a href="https://datatracker.ietf.org/doc/draft-ietf-masque-ip-proxy-reqs/">requirements</a>. The group has recently adopted a new specification to support <a href="https://datatracker.ietf.org/doc/draft-ietf-masque-connect-ip/">IP proxying over HTTP</a>. This behaves similarly to the other CONNECT designs we've discussed but with a few differences. Indeed, IP proxying support using HTTP as a substrate would unlock many applications that existing protocols like IPsec and WireGuard enable.</p><p>At this point, it would be reasonable to ask: “A complete HTTP/3 stack is a bit excessive when all I need is a simple end-to-end tunnel, right?” Our answer is, it depends! CONNECT-based IP proxies use TLS and rely on well-established PKIs for creating secure channels between endpoints, whereas protocols like WireGuard use a simpler cryptographic protocol for key establishment and defer authentication to the application. WireGuard does not support proxying over TCP but <a href="https://www.wireguard.com/known-limitations/">can be adapted to work over TCP</a> transports, if necessary. In contrast, CONNECT-based proxies do support TCP and UDP transports, depending on what version of HTTP is used. Despite these differences, these protocols do share similarities. 
In particular, the actual framing used by both protocols – be it the TLS record layer or QUIC packet protection for CONNECT-based proxies, or WireGuard encapsulation – is not interoperable but differs only slightly in wire format. Thus, from a performance perspective, there’s not really much difference.</p><p>In general, comparing these protocols is like comparing apples and oranges – they’re fit for different purposes, have different implementation requirements, and assume different ecosystem participants and threat models. At the end of the day, CONNECT-based proxies are better suited to an ecosystem and environment that is already heavily invested in TLS and the existing WebPKI, so we expect CONNECT-based solutions for IP tunnels to become the norm in the future. Nevertheless, it's early days, so be sure to watch this space if you’re interested in learning more!</p>
    <div>
      <h3>Looking ahead</h3>
      <a href="#looking-ahead">
        
      </a>
    </div>
    <p>The IETF has chartered the MASQUE Working Group to help design an HTTP-based solution for UDP and IP that complements the existing CONNECT method for TCP tunneling. Using HTTP semantics allows us to use features like request methods, response statuses, and header fields to enhance tunnel initialization, for example by allowing reuse of existing authentication mechanisms or the <a href="https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-proxy-status">Proxy-Status</a> field. By using HTTP/3, UDP and IP tunneling can benefit from QUIC's secure transport, native unreliable datagram support, and other features. Through a flexible design, older versions of HTTP can also be supported, which helps widen the potential deployment scenarios. Collectively, this work brings proxy protocols to the masses.</p><p>While the design details of the MASQUE specifications continue to be iterated upon, so far several implementations have been developed, some of which have been interoperability tested during IETF hackathons. This running code helps inform the continued development of the specifications. Details are likely to continue changing before the end of the process, but we should expect the overarching approach to remain similar. Join us during the MASQUE WG meeting in <a href="https://www.ietf.org/how/meetings/113/">IETF 113</a> to learn more!</p> ]]></content:encoded>
            <category><![CDATA[Proxying]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[QUIC]]></category>
            <guid isPermaLink="false">7uf0jVn0IMKFbqLxWODDNM</guid>
            <dc:creator>Lucas Pardue</dc:creator>
            <dc:creator>Christopher Wood</dc:creator>
        </item>
        <item>
            <title><![CDATA[A Primer on Proxies]]></title>
            <link>https://blog.cloudflare.com/a-primer-on-proxies/</link>
            <pubDate>Sat, 19 Mar 2022 17:01:15 GMT</pubDate>
            <description><![CDATA[ A technical dive into traditional TCP proxying over HTTP ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4icLjzJ8inC97t9zh3LiWw/e0e75625752de444e0fd2a32f627112e/image2-73.png" />
            
            </figure><p>Traffic proxying, the act of encapsulating one flow of data inside another, is a valuable privacy tool for establishing boundaries on the Internet. Encapsulation has an overhead, but Cloudflare and our Internet peers strive to avoid turning it into a performance cost. MASQUE is the latest collaboration effort to design efficient proxy protocols based on IETF standards. We're already running these at scale in production; see our recent blog post about Cloudflare's role in <a href="/icloud-private-relay/">iCloud Private Relay</a> for an example.</p><p>In this blog post series, we’ll dive into proxy protocols.</p><p>To begin, let’s start with a simple question: what is proxying? In this case, we are focused on <b>forward</b> proxying — a client establishes an end-to-end tunnel to a target server via a proxy server. This contrasts with the Cloudflare CDN, which operates as a <a href="https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/"><b>reverse</b> proxy</a> that terminates client connections and then takes responsibility for actions such as caching, security (including <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/">WAF</a>), load balancing, etc. With forward proxying, the details about the tunnel, such as how it is established and used, whether it provides confidentiality via authenticated encryption, and so on, vary by proxy protocol. Before going into specifics, let’s start with one of the most common tunnels used on the Internet: TCP.</p>
    <div>
      <h3>Transport basics: TCP provides a reliable byte stream</h3>
      <a href="#transport-basics-tcp-provides-a-reliable-byte-stream">
        
      </a>
    </div>
    <p>The TCP transport protocol is a rich topic. For the purposes of this post, we will focus on one aspect: TCP provides a readable and writable, reliable, and ordered byte stream. Some protocols like HTTP and TLS require reliable transport underneath them, and TCP's single byte stream is an ideal fit. The application layer reads or writes to this byte stream, but the details about how TCP sends this data "on the wire" are typically abstracted away.</p><p>Large application objects are written into a stream, split into many small packets, and sent in order to the network. At the receiver, packets are read from the network and combined back into an identical stream. Networks are not perfect and packets can be lost or reordered. TCP is clever at dealing with this and not worrying the application with details. It just works. A way to visualize this is to imagine a magic paper shredder that can both shred documents and convert shredded papers back into whole documents. Then imagine you and your friend bought a pair of these and decided that it would be fun to send each other shreds.</p><p>The one problem with TCP is that when a packet is lost, the sender needs to detect the loss and retransmit it. This takes time and can delay reconstruction of the byte stream at the receiver. This is known as TCP head-of-line blocking. Applications regularly use TCP via a socket API that abstracts away protocol details; they often can't tell whether delays occur because the other end is slow at sending or because the network is dropping packets.</p>
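<p>A quick way to see the byte-stream abstraction in action is the short sketch below. A connected Unix stream socket pair stands in for a TCP connection (the socket semantics are the same, and no network is required): several separate writes arrive as one undifferentiated run of bytes, because the stream preserves byte order but not message boundaries.</p>

```python
import socket

# A connected stream socket pair stands in for a TCP connection.
a, b = socket.socketpair()

# Three separate writes on one end...
for part in (b"sh", b"red", b"s"):
    a.sendall(part)
a.close()

# ...are read back as one ordered run of bytes on the other end:
# the stream preserves byte order, not write boundaries.
received = b""
while chunk := b.recv(4096):
    received += chunk
b.close()

assert received == b"shreds"
```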
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/xnUbBQnb4droA0xmJMJ69/c07b46a941cdc3a7cbb75dc696050f38/image1-84.png" />
            
            </figure>
    <div>
      <h3>Proxy Protocols</h3>
      <a href="#proxy-protocols">
        
      </a>
    </div>
    <p>Proxying TCP is immensely useful for many applications, including, though certainly not limited to, HTTPS, <a href="https://www.cloudflare.com/learning/access-management/what-is-ssh/">SSH</a>, and RDP. In fact, <a href="/oblivious-dns/">Oblivious DoH</a>, which is a proxy protocol for DNS messages, could very well be implemented using a TCP proxy, though there are reasons <a href="https://datatracker.ietf.org/doc/html/draft-pauly-dprive-oblivious-doh-11#appendix-A">why this may not be desirable</a>. Today, there are a number of different options for proxying TCP end-to-end, including:</p><ul><li><p>SOCKS, which runs in cleartext and requires an expensive connection establishment step.</p></li><li><p>Transparent TCP proxies, commonly referred to as performance enhancing proxies (PEPs), which must be on path, offer no additional transport security, and, definitionally, are limited to TCP protocols.</p></li><li><p>Layer 4 proxies such as Cloudflare <a href="https://developers.cloudflare.com/spectrum/">Spectrum</a>, which might rely on side carriage of metadata via something like the <a href="https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt">PROXY protocol</a>.</p></li><li><p>HTTP CONNECT, which transforms HTTPS connections into opaque byte streams.</p></li></ul><p>While SOCKS and PEPs are viable options for some use cases, when choosing which proxy protocol to build future systems upon, it made the most sense to choose a reusable and general-purpose protocol that provides well-defined and standard abstractions. As such, the IETF chose to focus on using HTTP as a substrate via the CONNECT method.</p><p>The concept of using HTTP as a substrate for proxying is not new. Indeed, HTTP/1.1 and HTTP/2 have supported proxying TCP-based protocols for a long time. 
In the following sections of this post, we’ll explain in detail how CONNECT works across different versions of HTTP, including HTTP/1.1, HTTP/2, and the <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">recently standardized HTTP/3</a>.</p>
    <div>
      <h3>HTTP/1.1 and CONNECT</h3>
      <a href="#http-1-1-and-connect">
        
      </a>
    </div>
    <p>In HTTP/1.1, the <a href="https://www.rfc-editor.org/rfc/rfc7231#section-4.3.6">CONNECT method</a> can be used to establish an end-to-end TCP tunnel to a target server via a proxy server. This is commonly applied to use cases where there is a benefit to protecting the traffic between the client and the proxy, or where the proxy can provide access control at network boundaries. For example, a Web browser can be configured to issue all of its HTTP requests via an HTTP proxy.</p><p>A client sends a CONNECT request to the proxy server, asking it to open a TCP connection to the target server on the desired port. It looks something like this:</p>
            <pre><code>CONNECT target.example.com:80 HTTP/1.1
Host: target.example.com</code></pre>
            <p>If the proxy succeeds in opening a TCP connection to the target, it responds with a 2xx range status code. If there is some kind of problem, an error status in the 5xx range can be returned. Once a tunnel is established there are two independent TCP connections; one on either side of the proxy. If the tunnel needs to stop, you can simply terminate both connections.</p><p>HTTP CONNECT proxies forward data between the client and the target server. The TCP packets themselves are not tunneled, only the data on the logical byte stream. Although the proxy is supposed to forward data and not process it, if the data is plaintext nothing would stop it from doing so. In practice, CONNECT is often used to create an end-to-end TLS connection where only the client and target server have access to the protected content; the proxy sees only TLS records and can't read their content because it doesn't have access to the keys.</p>
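<p>To make the exchange above concrete, here is a minimal sketch of a CONNECT client using raw sockets in Python. It is illustrative only, not Cloudflare's implementation: the proxy address is an assumed parameter, and a production client would also need timeouts, TLS, and a proper HTTP parser.</p>

```python
import socket

def build_connect_request(target_host, target_port):
    """Serialize the CONNECT request head exactly as shown above."""
    return (f"CONNECT {target_host}:{target_port} HTTP/1.1\r\n"
            f"Host: {target_host}:{target_port}\r\n\r\n").encode("ascii")

def open_tunnel(proxy_addr, target_host, target_port):
    """Ask the proxy for a TCP tunnel; return the socket once it answers 2xx.
    After this, bytes written to the socket flow end-to-end to the target."""
    sock = socket.create_connection(proxy_addr)
    sock.sendall(build_connect_request(target_host, target_port))
    head = b""
    while b"\r\n\r\n" not in head:        # read the proxy's response head
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("proxy closed the connection")
        head += chunk
    status = head.split(b"\r\n", 1)[0].split(b" ")[1]
    if not status.startswith(b"2"):       # a 2xx status means the tunnel is up
        sock.close()
        raise ConnectionError(f"CONNECT failed with status {status.decode()}")
    return sock
```

<p>From here a client would typically start a TLS handshake over the returned socket, so the proxy only ever sees opaque records.</p>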
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ParPDOxCyJFT2m3UYsLtR/d76fbce62a99c53fa68bc86773c231bd/image8-1.png" />
            
            </figure><p>Finally, it's worth noting that after a successful CONNECT request, the HTTP connection (and the TCP connection underpinning it) has been converted into a tunnel. There is no further possibility of issuing other HTTP messages to the proxy itself on that connection.</p>
    <div>
      <h3>HTTP/2 and CONNECT</h3>
      <a href="#http-2-and-connect">
        
      </a>
    </div>
    <p><a href="https://www.rfc-editor.org/rfc/rfc7540.html">HTTP/2</a> adds logical streams above the TCP layer in order to support concurrent requests and responses on a single connection. Streams are also reliable and ordered byte streams, operating on top of TCP. Returning to our magic shredder analogy: imagine you wanted to send a book. Shredding each page one after another and rebuilding the book one page at a time is slow, but handling multiple pages at the same time might be faster. HTTP/2 streams allow us to do that. But, as we all know, trying to put too much into a shredder can sometimes cause it to jam.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/37F1LF53K2Plk0vYbcdmgg/d7d57e19b13ebce0aabb910568f4d9f6/image3-33.png" />
            
            </figure><p>In HTTP/2, each request and response is sent on a different stream. To support this, HTTP/2 defines frames that contain the stream identifier that they are associated with. Requests and responses are composed of HEADERS and DATA frames which contain HTTP header fields and HTTP content, respectively. Frames can be large. When they are sent on the wire they might span multiple TLS records or TCP segments. Side note: the HTTP WG has been working on a new revision of the document that defines HTTP semantics that are common to all HTTP versions. The terms message, header fields, and content all come from <a href="https://www.ietf.org/archive/id/draft-ietf-httpbis-semantics-19.html#name-message-abstraction">this description</a>.</p><p>HTTP/2 concurrency allows applications to read and write multiple objects at different rates, which can improve HTTP application performance, such as web browsing. HTTP/1.1 traditionally dealt with this concurrency by opening multiple TCP connections in parallel and striping requests across these connections. In contrast, HTTP/2 multiplexes frames belonging to different streams onto the single byte stream provided by one TCP connection. Reusing a single connection has benefits, but it still leaves HTTP/2 at risk of TCP head-of-line blocking. For more details, refer to this <a href="https://calendar.perfplanet.com/2020/head-of-line-blocking-in-quic-and-http-3-the-details/">Perf Planet blog post</a>.</p><p><a href="https://datatracker.ietf.org/doc/html/rfc7540#section-8.3">HTTP/2 also supports the CONNECT method</a>. In contrast to HTTP/1.1, CONNECT requests do not take over an entire HTTP/2 connection. Instead, they convert a single stream into an end-to-end tunnel. It looks something like this:</p>
            <pre><code>:method = CONNECT
:authority = target.example.com:443</code></pre>
            <p>If the proxy succeeds in opening a TCP connection, it responds with a 2xx (Successful) status code. After this, the client sends DATA frames to the proxy, and the content of these frames is put into TCP packets sent to the target. In the return direction, the proxy reads from the TCP byte stream and populates DATA frames. If a tunnel needs to stop, you can simply terminate the stream; there is no need to terminate the HTTP/2 connection.</p><p>By using HTTP/2, a client can create multiple CONNECT tunnels in a single connection. This can help reduce resource usage (saving the global count of TCP connections) and allows related tunnels to be logically grouped together, ensuring that they "share fate" when either client or proxy needs to gracefully close. On the proxy-to-server side there are still multiple independent TCP connections.</p>
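<p>Each HTTP/2 frame begins with a fixed 9-byte header carrying its length, type, flags, and stream identifier (RFC 7540, Section 4.1); the proxy demultiplexes tunnels by that identifier. A small sketch of the header encoding, for illustration:</p>

```python
import struct

DATA, HEADERS = 0x0, 0x1  # frame type codes from RFC 7540

def frame_header(length, frame_type, flags, stream_id):
    """Encode the 9-byte HTTP/2 frame header: a 24-bit length, an 8-bit
    type, 8 bits of flags, then a reserved bit plus a 31-bit stream id."""
    if length >= 1 << 24:
        raise ValueError("frame payload too large")
    return (struct.pack(">I", length)[1:]          # low 3 bytes of length
            + bytes([frame_type, flags])
            + struct.pack(">I", stream_id & 0x7FFFFFFF))

# A DATA frame carrying 5 tunneled bytes on client-initiated stream 3:
header = frame_header(5, DATA, 0, 3)
```

<p>Because every frame is tagged this way, DATA belonging to one CONNECT tunnel is never confused with another's, even though they share one TCP byte stream.</p>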
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2zNstK2cYrqJKofOIiOeII/2c4bda54b181ce6c171b264589511421/image7.png" />
            
            </figure><p>One challenge of multiplexing tunnels on concurrent streams is how to effectively prioritize them. We've talked in the past about <a href="/better-http-2-prioritization-for-a-faster-web/">prioritization for web pages</a>, but the story is a bit different for CONNECT. We've been thinking about this and captured <a href="https://httpwg.org/http-extensions/draft-ietf-httpbis-priority.html#name-scheduling-and-the-connect-">considerations</a> in the new <a href="/adopting-a-new-approach-to-http-prioritization/">Extensible Priorities</a> draft.</p>
    <div>
      <h3>QUIC, HTTP/3 and CONNECT</h3>
      <a href="#quic-http-3-and-connect">
        
      </a>
    </div>
    <p>QUIC is a new secure and multiplexed transport protocol from the IETF. QUIC version 1 was published as <a href="https://www.rfc-editor.org/rfc/rfc9000.html">RFC 9000</a> in May 2021 and, <a href="/quic-version-1-is-live-on-cloudflare/">the next day</a>, we enabled it for all Cloudflare customers.</p><p>QUIC is composed of several foundational features. You can think of these like individual puzzle pieces that interlink to form a transport service. This service needs one more piece, an application mapping, to bring it all together.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7gGK0LOkLSU64zmxkd9gVd/d02d3a3f1f7e13d7c2819342f61f6a4a/image4-15.png" />
            
            </figure><p>Similar to HTTP/2, QUIC version 1 provides reliable and ordered streams. But QUIC streams live at the transport layer and they are the only type of QUIC primitive that can carry application data. QUIC has no opinion on how streams get used. Applications that wish to use QUIC must define that themselves.</p><p>QUIC streams can be long (up to 2^62 - 1 bytes). Stream data is sent on the wire in the form of <a href="https://www.rfc-editor.org/rfc/rfc9000.html#name-stream-frames">STREAM frames</a>. All QUIC frames must fit completely inside a QUIC packet. QUIC packets must fit entirely in a UDP datagram; fragmentation is prohibited. These requirements mean that a long stream is serialized to a series of QUIC packets sized roughly to the path <a href="https://en.wikipedia.org/wiki/Maximum_transmission_unit">MTU</a> (Maximum Transmission Unit). STREAM frames provide reliability via QUIC loss detection and recovery. Frames are acknowledged by the receiver and if the sender detects a loss (via missing acknowledgments), QUIC will retransmit the lost data. In contrast, TCP retransmits packets. This difference is an important feature of QUIC, letting implementations decide how to repacketize and reschedule lost data.</p><p>When multiplexing streams, different packets can contain <a href="https://www.rfc-editor.org/rfc/rfc9000.html#name-stream-frames">STREAM frames</a> belonging to different stream identifiers. This creates independence between streams and helps avoid the head-of-line blocking caused by packet loss that we see in TCP. If a UDP packet containing data for one stream is lost, other streams can continue to make progress without being blocked by retransmission of the lost stream.</p><p>To use our magic shredder analogy one more time: we're sending a book again, but this time we parallelise our task by using independent shredders. 
We need to logically associate them together so that the receiver knows the pages and shreds are all for the same book, but otherwise they can progress with less chance of jamming.</p>
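<p>The offsets and lengths in STREAM frames, like most integers in QUIC, are carried as variable-length integers whose two high bits announce the encoded size, which is also why streams top out at 2^62 - 1 bytes. A sketch of that encoding, following RFC 9000, Section 16:</p>

```python
def quic_varint(value):
    """Encode an integer as a QUIC variable-length integer (RFC 9000 §16).
    The two most significant bits of the first byte select a 1-, 2-, 4-,
    or 8-byte encoding, capping values at 2^62 - 1."""
    if value < 1 << 6:
        return value.to_bytes(1, "big")
    if value < 1 << 14:
        return ((1 << 14) | value).to_bytes(2, "big")
    if value < 1 << 30:
        return ((2 << 30) | value).to_bytes(4, "big")
    if value < 1 << 62:
        return ((3 << 62) | value).to_bytes(8, "big")
    raise ValueError("too large for a QUIC varint")

# Worked examples from RFC 9000 encode as expected:
assert quic_varint(37) == b"\x25"
assert quic_varint(15293) == b"\x7b\xbd"
```

<p>Small values stay compact on the wire while still allowing the enormous stream offsets mentioned above.</p>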
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2KPS4I6E6zvfPnrY4zNlxB/912d09cf787fa46d6dc77140919ba607/image6-5.png" />
            
            </figure><p><a href="https://datatracker.ietf.org/doc/draft-ietf-quic-http/">HTTP/3</a> is an example of an application mapping that describes how streams are used to exchange HTTP settings, <a href="https://datatracker.ietf.org/doc/html/draft-ietf-quic-qpack-21">QPACK</a> state, and request and response messages. HTTP/3 still defines its own frames like HEADERS and DATA, but it is overall simpler than HTTP/2 because QUIC deals with the hard stuff. Since HTTP/3 just sees a logical byte stream, its frames can be arbitrarily sized. The QUIC layer handles segmenting HTTP/3 frames over STREAM frames for sending in packets. HTTP/3 <a href="https://datatracker.ietf.org/doc/html/draft-ietf-quic-http-34#section-4.2">also supports the CONNECT method</a>. It functions identically to CONNECT in HTTP/2, with each request stream converting into an end-to-end tunnel.</p>
    <div>
      <h3>HTTP packetization comparison</h3>
      <a href="#http-packetization-comparison">
        
      </a>
    </div>
    <p>We've talked about HTTP/1.1, HTTP/2 and HTTP/3. The diagram below is a convenient way to summarize how HTTP requests and responses get serialized for transmission over a secure transport. The main difference is that with TLS, protected records can be split across several TCP segments, while with QUIC there is no record layer; each packet has its own protection.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Ixri8ytJ113sldUVNplLm/47ffbc388a4a45ae481a78e39b2986a0/image5-18.png" />
            
            </figure>
    <div>
      <h3>Limitations and looking ahead</h3>
      <a href="#limitations-and-looking-ahead">
        
      </a>
    </div>
    <p>HTTP CONNECT is a simple and elegant protocol that has a tremendous number of application use cases, especially for privacy-enhancing technology. In particular, applications can use it to proxy <a href="https://www.cloudflare.com/learning/dns/dns-over-tls/">DNS-over-HTTPS</a> similar to what’s been done for Oblivious DoH, or more generic HTTPS traffic (based on HTTP/1.1 or HTTP/2), and many more.</p><p>However, what about non-TCP traffic? Recall that HTTP/3 is an application mapping for QUIC, and therefore runs over UDP as well. What if we wanted to proxy QUIC? What if we wanted to proxy entire IP datagrams, similar to VPN technologies like IPsec or WireGuard? This is where <a href="/unlocking-quic-proxying-potential/">MASQUE</a> comes in. In the next post, we’ll discuss how the <a href="https://datatracker.ietf.org/wg/masque/about/">MASQUE Working Group</a> is standardizing technologies to enable proxying for datagram-based protocols like UDP and IP.</p> ]]></content:encoded>
            <category><![CDATA[Deep Dive]]></category>
            <category><![CDATA[TCP]]></category>
            <category><![CDATA[Proxying]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[iCloud Private Relay]]></category>
            <guid isPermaLink="false">2YU980GMLipuAmzDuDgrTc</guid>
            <dc:creator>Lucas Pardue</dc:creator>
            <dc:creator>Christopher Wood</dc:creator>
        </item>
        <item>
            <title><![CDATA[Orange Clouding with Secondary DNS]]></title>
            <link>https://blog.cloudflare.com/orange-clouding-with-secondary-dns/</link>
            <pubDate>Thu, 20 Aug 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ Secondary DNS Override is a great option for any users that want to take advantage of the Cloudflare network, without transferring all of their zones to Cloudflare DNS as a primary provider. ]]></description>
            <content:encoded><![CDATA[ 
    <div>
      <h3>What is secondary DNS?</h3>
      <a href="#what-is-secondary-dns">
        
      </a>
    </div>
    <p>In a traditional sense, secondary DNS servers act as a backup to the primary authoritative DNS server. When a change is made to the records on the primary server, a zone transfer occurs, synchronizing the secondary DNS servers with the primary server. The secondary servers can then serve the records as if they were the primary server; however, changes can only be made by the primary server, not the secondary servers. This creates redundancy across many different servers that can be distributed as necessary.</p><p>There are many common ways to take advantage of Secondary DNS, some of which are:</p><ol><li><p>Secondary DNS as passive backup - The secondary DNS server sits idle until the primary server goes down, at which point a failover can occur and the secondary can start serving records.</p></li><li><p>Secondary DNS as active backup - The secondary DNS server works alongside the primary server to serve records.</p></li><li><p>Secondary DNS with a hidden primary - The nameserver records at the <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">registrar</a> point towards the secondary servers only, essentially treating them as the primary nameservers.</p></li></ol>
    <div>
      <h3>What is secondary DNS Override?</h3>
      <a href="#what-is-secondary-dns-override">
        
      </a>
    </div>
    <p>Secondary DNS Override builds on the Secondary DNS with a hidden primary model by allowing our customers not only to have us serve records as they tell us to, but also to proxy any A/AAAA/CNAME records through <a href="https://www.cloudflare.com/network/">Cloudflare's network</a>. This is similar to how Cloudflare as a primary DNS provider currently works.</p><p>Consider the following example:</p><p><code>example.com Cloudflare IP - 192.0.2.0</code><br><code>example.com origin IP - 203.0.113.0</code></p><p>In order to take advantage of Cloudflare's security and performance services, we need to make sure that the origin IP stays hidden from the Internet.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2YAO6PZ3BqRCD5IK5NEL1t/fc4d45f62075d3d0531734273c707e20/image1-10.png" />
            
            </figure><p>Figure 1: Secondary DNS without a hidden primary nameserver</p><p>Figure 1 shows that without a hidden primary nameserver, the resolver can choose to query either one. This opens up two issues:</p><ol><li><p>It violates <a href="https://tools.ietf.org/html/rfc1034">RFC 1034</a> and <a href="https://tools.ietf.org/html/rfc2182">RFC 2182</a> because the Cloudflare server will respond differently from the primary nameserver.</p></li><li><p>The origin IP will be exposed to the Internet.</p></li></ol>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3WlzPk7YGcEbVgVPWRsVNP/b36c9763423f18a1559922a772a8e5ce/image2-6.png" />
            
            </figure><p>Figure 2: Secondary DNS with a hidden primary nameserver</p><p>Figure 2 shows the resolver always querying the Cloudflare Secondary DNS server.</p>
    <div>
      <h3>How does Secondary DNS Override work?</h3>
      <a href="#how-does-secondary-dns-override-work">
        
      </a>
    </div>
    <p>The Secondary DNS Override UI looks similar to the primary UI, the only difference is that records cannot be edited.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/23aOCWhXUNvk3rlDKomIUH/134a8410ac677f4c9341bd4ca757225f/image3-5.png" />
            
            </figure><p>Figure 3: Secondary DNS Override Dashboard</p><p>In figure 3, all of the records have been transferred from the primary DNS server. test-orange and test-aaaa-orange have been overridden to proxy through the Cloudflare network, while test-grey and test-mx are treated as regular DNS records.</p><p>Behind the scenes we store override records that pair with transferred records based on the name. For Secondary DNS Override we don’t care about the record type when overriding records, for two reasons:</p><ol><li><p>According to <a href="https://tools.ietf.org/html/rfc1912#section-2.4">RFC 1912</a> you cannot have a CNAME record with the same name as any other record. (This does not apply to some DNSSEC records, see <a href="https://tools.ietf.org/html/rfc2181">RFC 2181</a>.)</p></li><li><p>A and AAAA records are both address type records which should be either all proxied or all not proxied under the same name.</p></li></ol><p>This means that if you have several A and several AAAA records all with the name “example.com”, and one of them is proxied, all of them will be proxied. The UI helps abstract the idea that we are storing additional override records through the “orange cloud” button, which, when clicked, creates an override record that applies to all A/AAAA or CNAME records with that name.</p>
    <div>
      <h3>CNAME at the Apex</h3>
      <a href="#cname-at-the-apex">
        
      </a>
    </div>
    <p>Normally, putting a CNAME at the apex of a zone is not allowed. For example:</p><p><code>example.com CNAME other-domain.com</code></p><p>This is not allowed because it means that there will be at least one other SOA record and one other NS record with the same name, disobeying RFC 1912 as mentioned above. Cloudflare can overcome this through the use of <a href="https://support.cloudflare.com/hc/en-us/articles/200169056-CNAME-Flattening-RFC-compliant-support-for-CNAME-at-the-root">CNAME Flattening</a>, which is a common technique used within the primary DNS product today. CNAME flattening allows us to return address records instead of the CNAME record when a query comes into our authoritative server.</p><p>Contrary to what was said above about records not being editable through the Secondary DNS Override UI, the CNAME at the apex is the one exception to this rule. Users are able to create a CNAME at the apex in addition to the regular secondary DNS records; however, the same rules defined in RFC 1912 also apply here. This means that the CNAME at the apex record can be treated as a regular DNS record or a proxied record, depending on what the user decides. Regardless of the proxy status of the CNAME at the apex record, it will override any other A/AAAA records that have been transferred from the primary DNS server.</p>
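<p>The flattening idea can be sketched as follows. The record shapes here are hypothetical and the real implementation resolves chains recursively at our edge, but the principle is the same: follow the CNAME chain from the apex until address records are found, then answer with those.</p>

```python
def flatten_cname(name, records):
    """Follow CNAMEs from `name` until address records are found, and
    return those A/AAAA records to serve in place of the apex CNAME."""
    seen = set()
    while True:
        cname = next((r for r in records
                      if r["name"] == name and r["type"] == "CNAME"), None)
        if cname is None:
            return [r for r in records
                    if r["name"] == name and r["type"] in ("A", "AAAA")]
        if name in seen:
            raise ValueError("CNAME loop detected")
        seen.add(name)
        name = cname["content"]

# Hypothetical zone: a CNAME at the apex pointing at a name with an A record.
records = [
    {"name": "example.com", "type": "CNAME", "content": "other-domain.com"},
    {"name": "other-domain.com", "type": "A", "content": "192.0.2.1"},
]
answers = flatten_cname("example.com", records)
```

<p>A query for the apex now yields the flattened address record rather than the CNAME, keeping the response RFC-compliant.</p>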
    <div>
      <h3>Merging Secondary, Override and CNAME at Apex Records</h3>
      <a href="#merging-secondary-override-and-cname-at-apex-records">
        
      </a>
    </div>
    <p>At record edit time we do all of the merging of the secondary, override and CNAME at the apex records. This means that when a DNS request comes in to our authoritative server at the edge, we can still return the records in <a href="https://www.dnsperf.com/">blazing fast times</a>. The workflow is shown in figure 4.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/28Q1ApTislcuCz1JZlE57M/ce770c426fe865fbed639a06440d8412/image4-6.png" />
            
            </figure><p>Figure 4: Record Merging process</p><p>Within the zone builder the steps are as follows:</p><ol><li><p>Check if there is a CNAME at the apex; if so, override all other A/AAAA secondary records at the apex.</p></li><li><p>For each secondary record, check if there is a matching override record; if so, apply the proxy status of the override record to all secondary records with that name.</p></li><li><p>Leave all other secondary records as is.</p></li></ol>
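<p>The three steps above can be sketched in code. The record shapes below are hypothetical (the zone builder's internals are not public), but they illustrate the merge order:</p>

```python
def merge_zone(secondary, overrides, apex, apex_cname=None):
    """Merge transferred secondary records with override (proxy) records,
    letting a CNAME at the apex supersede any apex A/AAAA records."""
    merged = []
    for rec in secondary:
        # Step 1: a CNAME at the apex overrides apex A/AAAA records.
        if apex_cname and rec["name"] == apex and rec["type"] in ("A", "AAAA"):
            continue
        # Step 2: apply a matching override's proxy status, by name,
        # to address-type (A/AAAA/CNAME) records.
        if rec["type"] in ("A", "AAAA", "CNAME") and rec["name"] in overrides:
            rec = dict(rec, proxied=overrides[rec["name"]])
        # Step 3: every other secondary record passes through as-is.
        merged.append(rec)
    if apex_cname:
        merged.append(dict(apex_cname, name=apex))
    return merged
```

<p>Because merging happens at record edit time, the authoritative servers at the edge serve the already-merged zone with no extra work per query.</p>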
    <div>
      <h3>Getting Started</h3>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>Secondary DNS Override is a great option for any users who want to take advantage of the Cloudflare network without transferring all of their zones to Cloudflare DNS as a primary provider. Security and access control can be managed on the primary side, without worrying about unauthorized edits of information on the Cloudflare side.</p><p>Secondary DNS Override is currently available on the Enterprise plan; if you’d like to take advantage of it, please let your account team know. For additional documentation on Secondary DNS Override, please refer to our <a href="https://support.cloudflare.com/hc/en-us/articles/360042169091-Understanding-Secondary-DNS-Override#:~:text=Secondary%20Override%20allows%20customers%20to,record%20at%20the%20root%20domain.">support article</a>.</p>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Proxying]]></category>
            <guid isPermaLink="false">3RBRtub1POeiDTrcMdBcCN</guid>
            <dc:creator>Alex Fattouche</dc:creator>
        </item>
    </channel>
</rss>