
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 09:31:27 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Some TXT about, and A PTR to, new DNS insights on Cloudflare Radar]]></title>
            <link>https://blog.cloudflare.com/new-dns-section-on-cloudflare-radar/</link>
            <pubDate>Thu, 27 Feb 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ The new Cloudflare Radar DNS page provides increased visibility into aggregate traffic and usage trends seen by our 1.1.1.1 resolver ]]></description>
            <content:encoded><![CDATA[ <p>No joke – Cloudflare's <a href="https://www.cloudflare.com/en-gb/learning/dns/what-is-1.1.1.1/"><u>1.1.1.1 resolver</u></a> was <a href="https://blog.cloudflare.com/dns-resolver-1-1-1-1"><u>launched</u></a> on April Fool's Day in 2018. Over the last seven years, this highly <a href="https://www.dnsperf.com/#!dns-resolvers"><u>performant</u></a> and <a href="https://developers.cloudflare.com/1.1.1.1/privacy/public-dns-resolver/"><u>privacy</u></a>-<a href="https://blog.cloudflare.com/announcing-the-results-of-the-1-1-1-1-public-dns-resolver-privacy-examination"><u>conscious</u></a> service has grown to handle an average of 1.9 trillion queries per day from approximately 250 locations (countries/regions) around the world. Aggregated analysis of this traffic provides us with unique insight into Internet activity that goes beyond simple Web traffic trends, and we currently use analysis of 1.1.1.1 data to power Radar's <a href="https://radar.cloudflare.com/domains"><u>Domains</u></a> page, as well as the <a href="https://blog.cloudflare.com/radar-domain-rankings"><u>Radar Domain Rankings</u></a>.</p><p>In December 2022, Cloudflare <a href="https://blog.cloudflare.com/the-as112-project/"><u>joined the AS112 Project</u></a>, which helps the Internet deal with misdirected DNS queries. In March 2023, we launched an <a href="https://radar.cloudflare.com/as112"><u>AS112 statistics</u></a> page on Radar, providing insight into traffic trends and query types for this misdirected traffic. Extending the basic analysis presented on that page, and building on the analysis of resolver data used for the Domains page, today we are excited to launch a dedicated DNS page on Cloudflare Radar to provide increased visibility into aggregate traffic and usage trends seen across 1.1.1.1 resolver traffic. 
In addition to looking at global, location, and <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system (ASN)</u></a> traffic trends, we are also providing perspectives on protocol usage, query and response characteristics, and <a href="https://www.cloudflare.com/learning/dns/dnssec/how-dnssec-works/"><u>DNSSEC</u></a> usage.</p><p>The traffic analyzed for this new page may come from users that have manually configured their devices or local routers to use 1.1.1.1 as a resolver, ISPs that set 1.1.1.1 as the default resolver for their subscribers, ISPs that use 1.1.1.1 as a resolver upstream from their own, or users that have installed Cloudflare’s <a href="https://one.one.one.one/"><u>1.1.1.1/WARP app</u></a> on their device. The traffic analysis is based on anonymized DNS query logs, in accordance with <a href="https://www.cloudflare.com/privacypolicy/"><u>Cloudflare’s Privacy Policy</u></a>, as well as our <a href="https://developers.cloudflare.com/1.1.1.1/privacy/public-dns-resolver/"><u>1.1.1.1 Public DNS Resolver privacy commitments</u></a>.</p><p>Below, we walk through the sections of Radar’s new DNS page, reviewing the included graphs and the importance of the metrics they present. The data and trends shown within these graphs will vary based on the location or network that the aggregated queries originate from, as well as on the selected time frame.</p>
    <div>
      <h3>Traffic trends</h3>
      <a href="#traffic-trends">
        
      </a>
    </div>
    <p>As with many Radar metrics, the <a href="https://radar.cloudflare.com/dns"><u>DNS page</u></a> leads with traffic trends, showing normalized query volume at a worldwide level (default), or from the selected location or <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system (ASN)</u></a>. Similar to other Radar traffic-based graphs, the time period shown can be adjusted using the date picker, and for the default selections (last 24 hours, last 7 days, etc.), a comparison with traffic seen over the previous period is also plotted.</p><p>For location-level views (such as <a href="https://radar.cloudflare.com/dns/lv"><u>Latvia</u></a>, in the example below), a table showing the top five ASNs by query volume is displayed alongside the graph. Showing the network’s share of queries from the selected location, the table provides insights into the providers whose users are generating the most traffic to 1.1.1.1.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4tFv24QhHPReek393iHte7/03894de5973a1fed2805f69dcd9323c6/01.png" />
            
            </figure><p>When a country/region is selected, in addition to showing an aggregate traffic graph for that location, we also show query volumes for the <a href="https://en.wikipedia.org/wiki/Country_code_top-level_domain"><u>country code top level domain (ccTLD)</u></a> associated with that country. The graph includes a line showing worldwide query volume for that ccTLD, as well as a line showing the query volume based on queries from the associated location. <a href="https://radar.cloudflare.com/dns/ai#dns-query-volume-for-ai-domains"><u>Anguilla’s</u></a> ccTLD is .ai, which is a popular choice among the growing universe of AI-focused companies. While most locations see a gap between the worldwide and “local” query volume for their ccTLD, Anguilla’s is rather significant — as the graph below illustrates, the size of the gap is driven by both the popularity of the ccTLD and Anguilla’s comparatively small user base. (Traffic for <a href="https://www.cloudflare.com/application-services/products/registrar/buy-ai-domains/">.ai domains</a> from Anguilla is shown by the dark blue line at the bottom of the graph.) Similarly, sizable gaps are seen with other “popular” ccTLDs as well, such as .io (<a href="https://radar.cloudflare.com/dns/io#dns-query-volume-for-io-domains"><u>British Indian Ocean Territory</u></a>), .fm (<a href="https://radar.cloudflare.com/dns/fm#dns-query-volume-for-fm-domains"><u>Federated States of Micronesia</u></a>), and .co (<a href="https://radar.cloudflare.com/dns/co#dns-query-volume-for-co-domains"><u>Colombia</u></a>). A higher “local” ccTLD query volume in other locations results in smaller gaps when compared to the worldwide query volume.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6LXc2OLAoHqAVbgspo5cjb/c01b9f7e90d1d27f66eb3dcb35bf2622/02.png" />
            
            </figure><p>Depending on the strength of the signal (that is, the volume of traffic) from a given location or ASN, this data can also be used to corroborate reported Internet outages or shutdowns, or reported blocking of 1.1.1.1. For example, the <a href="https://radar.cloudflare.com/dns/as8048?dateStart=2025-01-10&amp;dateEnd=2025-02-06"><u>graph below</u></a> illustrates the result of Venezuelan provider <a href="https://radar.cloudflare.com/as8048"><u>CANTV</u></a> reportedly <a href="https://x.com/vesinfiltro/status/1879943715537711233"><u>blocking access to 1.1.1.1</u></a> for its subscribers. A <a href="https://radar.cloudflare.com/dns/as22313?dateStart=2025-01-10&amp;dateEnd=2025-01-23"><u>comparable drop</u></a> is visible for <a href="https://radar.cloudflare.com/as22313"><u>Supercable</u></a>, another Venezuelan provider that also reportedly blocked access to Cloudflare’s resolver around the same time.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1hR11TuJDhzWDFhoCo3Uh7/970ecbc951edd352f80a3b87f607e580/03.png" />
            
            </figure><p>Individual domain pages (like the one for <a href="https://radar.cloudflare.com/domains/domain/cloudflare.com"><u>cloudflare.com</u></a>, for example) have long had a choropleth map and accompanying table showing the <a href="https://radar.cloudflare.com/domains/domain/cloudflare.com#visitor-location"><u>popularity of the domain by location</u></a>, based on the share of DNS queries for that domain from each location. A <a href="https://radar.cloudflare.com/dns#geographical-distribution"><u>similar view</u></a> is included at the bottom of the worldwide overview page, based on the share of total global queries to 1.1.1.1 from each location.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2kchGpH4fmYxmX4up953VC/744632815d78a9a77526e97d8c4d1664/04.png" />
            
            </figure>
    <div>
      <h3>Query and response characteristics</h3>
      <a href="#query-and-response-characteristics">
        
      </a>
    </div>
    <p>While traffic trends are always interesting and important to track, analysis of the characteristics of queries to 1.1.1.1 and the associated responses can provide insights into the adoption of underlying transport protocols, record type popularity, cacheability, and security.</p><p>Published in November 1987, <a href="https://datatracker.ietf.org/doc/html/rfc1035#section-4.2"><u>RFC 1035 notes</u></a> that “<i>The Internet supports name server access using TCP [</i><a href="https://datatracker.ietf.org/doc/html/rfc793"><i><u>RFC-793</u></i></a><i>] on server port 53 (decimal) as well as datagram access using UDP [</i><a href="https://datatracker.ietf.org/doc/html/rfc768"><i><u>RFC-768</u></i></a><i>] on UDP port 53 (decimal).</i>” Over the subsequent three-plus decades, UDP has been the primary transport protocol for DNS queries, falling back to TCP for a limited number of use cases, such as when the response is too big to fit in a single UDP packet. However, as privacy has become a significantly greater concern, encrypted queries have been made possible through the specification of <a href="https://datatracker.ietf.org/doc/html/rfc7858"><u>DNS over TLS</u></a> (DoT) in 2016 and <a href="https://datatracker.ietf.org/doc/html/rfc8484"><u>DNS over HTTPS</u></a> (DoH) in 2018. Cloudflare’s 1.1.1.1 resolver has <a href="https://blog.cloudflare.com/announcing-1111/#toward-a-better-dns-infrastructure"><u>supported both of these privacy-preserving protocols since launch</u></a>. The <a href="https://radar.cloudflare.com/dns#dns-transport-protocol"><b><u>DNS transport protocol</u></b></a> graph shows the distribution of queries to 1.1.1.1 over these four protocols. (Setting up 1.1.1.1 <a href="https://one.one.one.one/dns/"><u>on your device or router</u></a> uses DNS over UDP by default, although recent versions of <a href="https://developers.cloudflare.com/1.1.1.1/setup/android/#configure-1111-manually"><u>Android</u></a> support DoT and DoH. 
The <a href="https://one.one.one.one/"><u>1.1.1.1 app</u></a> uses DNS over HTTPS by default, and users can also <a href="https://developers.cloudflare.com/1.1.1.1/encryption/dns-over-https/encrypted-dns-browsers/"><u>configure their browsers</u></a> to use DNS over HTTPS.)</p><p>Note that Cloudflare's resolver also services queries over DoH and <a href="https://blog.cloudflare.com/oblivious-dns/"><u>Oblivious DoH (ODoH)</u></a> for <a href="https://developers.cloudflare.com/1.1.1.1/privacy/cloudflare-resolver-firefox/"><u>Mozilla</u></a> and other large platforms, but this traffic is not currently included in our analysis. As such, DoH adoption is under-represented in this graph.</p><p>Aggregated worldwide between February 19 and February 26, the distribution of transport protocols was 86.6% for UDP, 9.6% for DoT, 2.0% for TCP, and 1.7% for DoH. However, in some locations, these ratios may shift if users are more privacy conscious. For example, the graph below shows the distribution for <a href="https://radar.cloudflare.com/dns/eg"><u>Egypt</u></a> over the same time period. In that country, the UDP and TCP shares are significantly lower than the global level, while the DoT and DoH shares are significantly higher, suggesting that users there may be more concerned about the privacy of their DNS queries than the global average, or that there is a larger concentration of 1.1.1.1 users on Android devices who have set up 1.1.1.1 using DoT manually. (The 2024 Cloudflare Radar Year in Review found that <a href="https://radar.cloudflare.com/year-in-review/2024/eg#ios-vs-android"><u>Android had an 85% mobile device traffic share in Egypt</u></a>, so mobile device usage in the country leans very heavily toward Android.)</p>
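<p>Whichever transport carries it, the query inside is the same RFC 1035 wire format; DoT and DoH simply wrap those bytes in a TLS or HTTPS channel. Below is a minimal, illustrative Python sketch (standard library only; the helper name is our own, not a Cloudflare API) of building such a query:</p>

```python
import struct

QTYPE_A, QTYPE_AAAA = 1, 28  # IANA RR type codes

def build_query(qname: str, qtype: int, txid: int = 0x1234) -> bytes:
    """Build a minimal RFC 1035 DNS query. The same bytes travel over
    UDP or TCP port 53, or inside a DoT/DoH channel."""
    # Header: ID, flags (RD=1 -> 0x0100), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question: QNAME as length-prefixed labels, then QTYPE and QCLASS=IN (1)
    labels = b"".join(
        bytes([len(part)]) + part.encode("ascii")
        for part in qname.rstrip(".").split(".")
    )
    return header + labels + b"\x00" + struct.pack(">HH", qtype, 1)

query = build_query("example.com", QTYPE_A)
```

<p>Sending <code>query</code> to 1.1.1.1 on UDP port 53 (e.g. via <code>socket.sendto</code>) would return a response that begins with the same header layout.</p>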
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1srd6prVQCUxHvxw8eFNjL/987f2d925120be867174fd04a8c7eb2c/05-b.png" />
            
            </figure><p>RFC 1035 also defined a number of <a href="https://datatracker.ietf.org/doc/html/rfc1035#section-3.3"><u>standard</u></a> and <a href="https://datatracker.ietf.org/doc/html/rfc1035#section-3.4"><u>Internet specific</u></a> resource record types that return the associated information about the submitted query name. The most common record types are <code>A</code> and <code>AAAA</code>, which return the hostname’s IPv4 and IPv6 addresses respectively (assuming they exist). The <a href="https://radar.cloudflare.com/dns#dns-query-type"><b><u>DNS query type</u></b></a> graph below shows that globally, these two record types comprise on the order of 80% of the queries received by 1.1.1.1. Among the others shown in the graph, <a href="https://blog.cloudflare.com/speeding-up-https-and-http-3-negotiation-with-dns/#service-bindings-via-dns"><code><u>HTTPS</u></code></a> records can be used to signal HTTP/3 and HTTP/2 support, <a href="https://www.cloudflare.com/learning/dns/dns-records/dns-ptr-record/"><code><u>PTR</u></code></a> records are used in reverse DNS to look up a domain name based on a given IP address, and <a href="https://www.cloudflare.com/learning/dns/dns-records/dns-ns-record/"><code><u>NS</u></code></a> records indicate authoritative nameservers for a domain.</p>
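<p>For reference, the record types named above correspond to numeric codes in the IANA DNS parameters registry (A=1, NS=2, PTR=12, AAAA=28, HTTPS=65). A small illustrative sketch of turning raw per-type query counts into the percentage shares a graph like this plots (the sample numbers are made up, chosen only to roughly echo the ~80% A+AAAA split):</p>

```python
# IANA RR type codes for the record types discussed above
RR_TYPES = {1: "A", 2: "NS", 12: "PTR", 28: "AAAA", 65: "HTTPS"}

def type_shares(qtype_counts: dict[int, int]) -> dict[str, float]:
    """Convert raw per-QTYPE counts into percentage shares."""
    total = sum(qtype_counts.values())
    return {
        RR_TYPES.get(qtype, f"TYPE{qtype}"): round(100 * count / total, 1)
        for qtype, count in qtype_counts.items()
    }

# Hypothetical sample: A + AAAA together on the order of 80%
shares = type_shares({1: 55, 28: 25, 65: 10, 12: 6, 2: 4})
```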
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3LI2EW249EtFFX5FvlONDg/4b150dfbdd8de5c0e9def9eb18c81d70/06.png" />
            
            </figure><p>A response code is sent with each response from 1.1.1.1 to the client. Six possible values were <a href="https://datatracker.ietf.org/doc/html/rfc1035#section-4.1.1"><u>originally defined in RFC 1035</u></a>, with the list <a href="https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-6"><u>further extended</u></a> in <a href="https://datatracker.ietf.org/doc/html/rfc2136"><u>RFC 2136</u></a> and <a href="https://datatracker.ietf.org/doc/html/rfc2671"><u>RFC 2671</u></a>. <code>NOERROR</code>, as the name suggests, means that no error condition was encountered with the query. Others, such as <code>NXDOMAIN</code>, <code>SERVFAIL</code>, <code>REFUSED</code>, and <code>NOTIMP</code> define specific error conditions encountered when trying to resolve the requested query name. The response codes may be generated by 1.1.1.1 itself (like <code>REFUSED</code>) or may come from an upstream authoritative nameserver (like <code>NXDOMAIN</code>).</p><p>The <a href="https://radar.cloudflare.com/dns#dns-response-code"><b><u>DNS response code</u></b></a> graph shown below highlights that the vast majority of queries seen globally do not encounter an error during the resolution process (<code>NOERROR</code>), and that when errors are encountered, most are <a href="https://datatracker.ietf.org/doc/html/rfc8020"><code><u>NXDOMAIN</u></code></a> (no such record). It is worth noting that <code>NOERROR</code> also includes empty responses, which occur when there are no records for the query name and query type, but there are records for the query name and some other query type.</p>
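<p>On the wire, the response code occupies the low four bits of the flags field in the DNS header. An illustrative sketch of extracting it (helper names are ours, not a real API):</p>

```python
import struct

# The six response codes originally defined in RFC 1035
RCODES = {0: "NOERROR", 1: "FORMERR", 2: "SERVFAIL",
          3: "NXDOMAIN", 4: "NOTIMP", 5: "REFUSED"}

def rcode_of(message: bytes) -> str:
    """Extract the RCODE from the low 4 bits of a DNS header's flags."""
    _txid, flags = struct.unpack(">HH", message[:4])
    return RCODES.get(flags & 0x000F, "EXTENDED")

# Flags 0x8183: QR=1, RD=1, RA=1, RCODE=3 -> NXDOMAIN
nxdomain_header = struct.pack(">HHHHHH", 0xABCD, 0x8183, 1, 0, 0, 0)
```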
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ZXQ8kcT0H7zfb8najn42C/df8c8c2f54c492484bb5d59f437eee5d/07.png" />
            
            </figure><p>With DNS being a first-step dependency for many other protocols, the number of queries of particular types can be used to indirectly measure the adoption of those protocols. But to effectively measure adoption, we should also consider the fraction of those queries that are met with useful responses, as represented in the <a href="https://radar.cloudflare.com/dns#dns-record-adoption"><b><u>DNS record adoption</u></b></a> graphs.</p><p>The example below shows that queries for <code>A</code> records are met with a useful response nearly 88% of the time. As IPv4 is an established protocol, the remaining 12% are likely to be queries for valid hostnames that have no <code>A</code> records (e.g. email domains that only have MX records). But the same graph also shows that there’s still a <a href="https://blog.cloudflare.com/ipv6-from-dns-pov/"><u>significant adoption gap</u></a> where IPv6 is concerned.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6blxaHcK6UtPp67V3SGNML/daed03be6793aab32ec21b2bb2f08374/08.png" />
            
            </figure><p>When Cloudflare’s DNS resolver gets a response back from an upstream authoritative nameserver, it caches it for a specified amount of time — more on that below. By caching these responses, it can more efficiently serve subsequent queries for the same name. The <a href="https://radar.cloudflare.com/dns#dns-cache-hit-ratio"><b><u>DNS cache hit ratio</u></b></a> graph provides insight into how frequently responses are served from cache. At a global level, as seen below, over 80% of queries have a response that is already cached. These ratios will vary by location or ASN, as the query patterns differ across geographies and networks.</p>
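<p>Conceptually, the cache behind this metric maps a (query name, query type) pair to a response that remains valid until its TTL runs out. A toy sketch of that behavior (a deliberately simplified model, not Cloudflare's implementation):</p>

```python
import time

class TTLCache:
    """Toy resolver cache: entries expire when their TTL elapses."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._store = {}  # (qname, qtype) -> (expires_at, response)
        self.hits = self.misses = 0

    def put(self, key, response, ttl):
        self._store[key] = (self._clock() + ttl, response)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > self._clock():
            self.hits += 1
            return entry[1]
        self._store.pop(key, None)  # expired or never cached
        self.misses += 1
        return None
```

<p>The plotted ratio is then simply hits / (hits + misses) over the aggregation window.</p>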
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/sj0gBv53GdPF0slfGlKlr/fa86ff6fc610aefad2e675c5dc926f54/09.png" />
            
            </figure><p>As noted in the preceding paragraph, when an authoritative nameserver sends a response back to 1.1.1.1, each record inside it includes information about how long it should be cached/considered valid for. This piece of information is known as the <a href="https://developers.cloudflare.com/dns/manage-dns-records/reference/ttl/"><u>Time-To-Live (TTL)</u></a> and, as a response may contain multiple records, the smallest of these TTLs (the “minimum” TTL) defines how long 1.1.1.1 can cache the entire response for. The TTLs on each response served from 1.1.1.1’s cache decrease towards zero as time passes, at which point 1.1.1.1 needs to go back to the authoritative nameserver. Hostnames with relatively low TTL values suggest that the records may be somewhat dynamic, possibly due to traffic management of the associated resources; longer TTL values suggest that the associated resources are more stable and expected to change infrequently.</p><p>The <a href="https://radar.cloudflare.com/dns#dns-minimum-ttl"><b><u>DNS minimum TTL</u></b></a> graphs show the aggregate distribution of TTL values for five popular DNS record types, broken out across seven buckets ranging from under one minute to over one week. During the third week of February, for example, <code>A</code> and <code>AAAA</code> responses had a concentration of low TTLs, with over 80% below five minutes. In contrast, <code>NS</code> and <code>MX</code> responses were more concentrated across 15 minutes to one hour and one hour to one day. Because <code>MX</code> and <code>NS</code> records change infrequently, they are generally configured with higher TTLs. This allows them to be cached for longer periods in order to achieve faster DNS resolution.</p>
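<p>In code, picking the minimum TTL of a response and assigning it to a bucket might look like the following sketch (the bucket boundaries are inferred from the ranges mentioned above and are our assumption, not Radar's exact definition):</p>

```python
# Seven TTL buckets; boundaries inferred from the text above (an assumption)
BUCKETS = [
    (60, "< 1 min"), (300, "1-5 min"), (900, "5-15 min"),
    (3600, "15 min - 1 hour"), (86400, "1 hour - 1 day"),
    (604800, "1 day - 1 week"), (float("inf"), "> 1 week"),
]

def min_ttl_bucket(record_ttls):
    """The smallest TTL in a response caps how long the whole response
    may be cached; bucket that minimum for aggregation."""
    ttl = min(record_ttls)
    for upper_bound, label in BUCKETS:
        if ttl < upper_bound:
            return label
```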
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3r6ppahpkqyfAHi89LWNA1/6dc6f52e92c1d7aa2dfaeaa411deb982/10.png" />
            
            </figure>
    <div>
      <h3>DNS security</h3>
      <a href="#dns-security">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/learning/dns/dns-security/"><u>DNS Security Extensions (DNSSEC)</u></a> add an extra layer of authentication to DNS, establishing the integrity and authenticity of a DNS response. This ensures subsequent HTTPS requests are not routed to a spoofed domain. When sending a query to 1.1.1.1, a DNS client can indicate that it is DNSSEC-aware by setting a specific flag (the “DO” bit) in the query, which lets our resolver know that it is OK to return DNSSEC data in the response. The <a href="https://radar.cloudflare.com/dns#dnssec-client-awareness"><b><u>DNSSEC client awareness</u></b></a> graph breaks down the share of queries that 1.1.1.1 sees from clients that understand DNSSEC and can require validation of responses vs. those that don’t. (Note that by default, 1.1.1.1 tries to protect clients by always validating DNSSEC responses from authoritative nameservers and not forwarding invalid responses to clients, unless the client has explicitly told it not to by setting the “CD” (checking-disabled) bit in the query.)</p><p>Unfortunately, as the graph below shows, nearly 90% of the queries seen by Cloudflare’s resolver are made by clients that are not DNSSEC-aware. This broad lack of client awareness may be due to several factors. On the client side, DNSSEC is not enabled by default for most users, and enabling DNSSEC requires extra work, even for technically savvy and security-conscious users. 
On the authoritative side, for domain owners, supporting DNSSEC requires extra operational maintenance and knowledge, and a mistake can cause your domain to <a href="https://blog.cloudflare.com/dnssec-issues-fiji/"><u>disappear from the Internet</u></a>, causing significant (including financial) issues.</p><p>The companion <a href="https://radar.cloudflare.com/dns#end-to-end-security"><b><u>End-to-end security</u></b></a> graph represents the fraction of DNS interactions that were protected from tampering, when considering the client’s DNSSEC capabilities and use of encryption (use of DoT or DoH). This shows an even greater imbalance at a global level, and highlights the importance of further adoption of encryption and DNSSEC.</p>
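<p>The “DO” bit mentioned above is carried in the EDNS0 OPT pseudo-record (RFC 6891) that a client appends to its query; the “CD” bit, by contrast, lives in the header flags. An illustrative sketch of building that OPT record (helper name and defaults are ours):</p>

```python
import struct

def build_opt_record(payload_size=4096, dnssec_ok=True):
    """EDNS0 OPT pseudo-record (RFC 6891) for a query's additional section.
    The DO ("DNSSEC OK") bit is the high bit of the 16-bit EDNS flags,
    carried in the record's repurposed TTL field."""
    name = b"\x00"                                       # root domain
    rtype_rclass = struct.pack(">HH", 41, payload_size)  # TYPE=OPT(41); CLASS carries UDP payload size
    flags = 0x8000 if dnssec_ok else 0x0000              # DO bit
    ttl = struct.pack(">BBH", 0, 0, flags)               # extended RCODE, version 0, flags
    rdlen = struct.pack(">H", 0)                         # no EDNS options
    return name + rtype_rclass + ttl + rdlen

opt = build_opt_record(dnssec_ok=True)
```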
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6nErpp8o9tPuE0jt5PQ3fg/3e509065967a8f43c6679d400fd31454/11.png" />
            
            </figure><p>For DNSSEC validation to occur, the query name being requested must be part of a DNSSEC-enabled domain, and the <a href="https://radar.cloudflare.com/dns#dnssec-validation-status"><b><u>DNSSEC validation status</u></b></a> graph represents the share of queries where that was the case under the <b>Secure</b> and <b>Invalid</b> labels. Queries for domains without DNSSEC are labeled as <b>Insecure</b>, and queries where DNSSEC validation was not applicable (such as various kinds of errors) fall under the <b>Other</b> label. Although nearly 93% of generic Top Level Domains (TLDs) and 65% of country code Top Level Domains (ccTLDs) are <a href="https://ithi.research.icann.org/graph-m7.html"><u>signed with DNSSEC</u></a> (as of February 2025), the adoption rate across individual (child) domains lags significantly: as the graph below shows, over 80% of queries were labeled as <b>Insecure</b>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3shBkfRYcpHKgXI6Y9jcjq/26929261c5c6800fa1fee562dad5ce53/12.png" />
            
            </figure>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>DNS is a fundamental, foundational part of the Internet. While most Internet users don’t think of DNS beyond its role in translating easy-to-remember hostnames to IP addresses, there’s a lot going on to make even that happen, from privacy to performance to security. The new DNS page on Cloudflare Radar endeavors to provide visibility into what’s going on behind the scenes, at a global, national, and network level.</p><p>While the graphs shown above are taken from the DNS page, all the underlying data is available via the <a href="https://developers.cloudflare.com/api/resources/radar/subresources/dns/"><u>API</u></a> and can be interactively explored in more detail across locations, networks, and time periods using Radar’s <a href="https://radar.cloudflare.com/explorer?dataSet=dns"><u>Data Explorer and AI Assistant</u></a>. And as always, Radar and Data Explorer charts and graphs are downloadable for sharing, and embeddable for use in your own blog posts, websites, or dashboards.</p><p>If you share our DNS graphs on social media, be sure to tag us: <a href="https://x.com/CloudflareRadar"><u>@CloudflareRadar</u></a> and <a href="https://x.com/1111Resolver"><u>@1111Resolver</u></a> (X), <a href="https://noc.social/@cloudflareradar"><u>noc.social/@cloudflareradar</u></a> (Mastodon), and <a href="https://bsky.app/profile/radar.cloudflare.com"><u>radar.cloudflare.com</u></a> (Bluesky). If you have questions or comments, you can reach out to us on social media, or contact us via <a><u>email</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[1.1.1.1]]></category>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Resolver]]></category>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[DoH]]></category>
            <category><![CDATA[Traffic]]></category>
            <guid isPermaLink="false">2aI8Y4m36DD0HQghRNFZ2n</guid>
            <dc:creator>David Belson</dc:creator>
            <dc:creator>Carlos Rodrigues</dc:creator>
            <dc:creator>Vicky Shrestha</dc:creator>
            <dc:creator>Hannes Gerhart</dc:creator>
        </item>
        <item>
            <title><![CDATA[Remediating new DNSSEC resource exhaustion vulnerabilities]]></title>
            <link>https://blog.cloudflare.com/remediating-new-dnssec-resource-exhaustion-vulnerabilities/</link>
            <pubDate>Thu, 29 Feb 2024 14:00:57 GMT</pubDate>
            <description><![CDATA[ Cloudflare recently fixed two critical DNSSEC vulnerabilities: CVE-2023-50387 and CVE-2023-50868. Both of these vulnerabilities can exhaust computational resources of validating DNS resolvers. These vulnerabilities do not affect our Authoritative DNS or DNS firewall products ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4aQzvD1YJLHbGjaALKlC8e/23b4147ceed9f1d364101fe3fcbda244/image1-13.png" />
            
            </figure><p>Cloudflare has been part of a multivendor, industry-wide effort to mitigate two critical <a href="https://www.cloudflare.com/dns/dnssec/how-dnssec-works/">DNSSEC</a> vulnerabilities. These vulnerabilities exposed significant risks to critical infrastructures that provide DNS resolution services. Cloudflare provides DNS resolution for anyone to use for free with our <a href="/dns-resolver-1-1-1-1">public resolver 1.1.1.1 service</a>. Mitigations for Cloudflare’s public resolver 1.1.1.1 service were applied before these vulnerabilities were disclosed publicly. Internal resolvers using <a href="https://nlnetlabs.nl/projects/unbound/about/">unbound</a> (open source software) were upgraded promptly after a new software version fixing these vulnerabilities was released.</p><p>All Cloudflare DNS infrastructure was protected from both of these vulnerabilities before they were <a href="https://www.athene-center.de/fileadmin/content/PDF/Technical_Report_KeyTrap.pdf">disclosed</a> and is safe today. These vulnerabilities do not affect our <a href="https://www.cloudflare.com/application-services/products/dns/">Authoritative DNS</a> or <a href="https://www.cloudflare.com/dns/dns-firewall/">DNS firewall</a> products.</p><p>All major DNS software vendors have released new versions of their software. All other major DNS resolver providers have also applied appropriate mitigations. Please update your DNS resolver software immediately, if you haven’t done so already.</p>
    <div>
      <h2>Background</h2>
      <a href="#background">
        
      </a>
    </div>
    <p>Domain name system (DNS) security extensions, commonly known as <a href="https://www.cloudflare.com/learning/dns/dnssec/ecdsa-and-dnssec/">DNSSEC</a>, are extensions to the DNS protocol that add authentication and integrity capabilities. DNSSEC uses cryptographic keys and signatures that allow DNS responses to be validated as authentic. DNSSEC protocol specifications have certain requirements that prioritize availability at the cost of increased complexity and computational cost for the validating DNS resolvers. The mitigations for the vulnerabilities discussed in this blog require local policies to be applied that relax these requirements in order to avoid exhausting the resources of validators.</p><p>The design of the DNS and DNSSEC protocols follows the <a href="https://datatracker.ietf.org/doc/html/rfc761#section-2.10">Robustness principle</a>: “be conservative in what you do, be liberal in what you accept from others”. There have been many vulnerabilities in the past that have taken advantage of protocol requirements following this principle. Malicious actors can exploit these vulnerabilities to attack DNS infrastructure, in this case by causing additional work for DNS resolvers by crafting DNSSEC responses with complex configurations. As is often the case, we find ourselves having to create a pragmatic balance between the flexibility that allows a protocol to adapt and evolve and the need to safeguard the stability and security of the services we operate.</p><p>Cloudflare’s public resolver 1.1.1.1 is a <a href="https://developers.cloudflare.com/1.1.1.1/privacy/public-dns-resolver/">privacy-centric</a> public resolver service. We have been using stricter validations and limits aimed at protecting our own infrastructure in addition to shielding authoritative DNS servers operated outside our network. As a result, we often receive complaints about resolution failures. 
Experience shows us that strict validations and limits can impact availability in some edge cases, especially when DNS domains are improperly configured. However, these strict validations and limits are necessary to improve the overall reliability and resilience of the DNS infrastructure.</p><p>The vulnerabilities and how we mitigated them are described below.</p>
    <div>
      <h2>Keytrap vulnerability (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-50387">CVE-2023-50387</a>)</h2>
      <a href="#keytrap-vulnerability">
        
      </a>
    </div>
    
    <div>
      <h3>Introduction</h3>
      <a href="#introduction">
        
      </a>
    </div>
    <p>A DNSSEC signed zone can contain multiple keys (DNSKEY) to sign the contents of a DNS zone and a Resource Record Set (RRSET) in a DNS response can have multiple signatures (RRSIG). Multiple keys and signatures are required to support things like key rollover, algorithm rollover, and <a href="https://datatracker.ietf.org/doc/html/rfc8901">multi-signer DNSSEC</a>. DNSSEC protocol specifications require a validating DNS resolver to <a href="https://datatracker.ietf.org/doc/html/rfc4035#section-5.3.3">try every possible combination of keys and signatures</a> when validating a DNS response.</p><p>During validation, a resolver looks at the key tag of every signature and tries to find the associated key that was used to sign it. A key tag is an unsigned 16-bit number <a href="https://datatracker.ietf.org/doc/html/rfc4034#appendix-B">calculated as a checksum</a> over the key’s resource data (RDATA). Key tags are intended to allow efficient pairing of a signature with the key which has supposedly created it.  However, key tags are not unique, and it is possible that multiple keys can have the same key tag. A malicious actor can easily craft a DNS response with multiple keys having the same key tag together with multiple signatures, none of which might validate. A validating resolver would have to try every combination (number of keys multiplied by number of signatures) when trying to validate this response. This increases the computational cost of the validating resolver many-fold, degrading performance for all its users. This is known as the Keytrap vulnerability.</p><p>Variations of this vulnerability include using multiple signatures with one key, using one signature with multiple keys having colliding key tags, and using multiple keys with corresponding hashes added to the parent delegation signer record.</p>
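<p>The key tag checksum from RFC 4034 Appendix B is simple enough to sketch, and the sketch makes it obvious why collisions are cheap to construct (a minimal Python illustration, not production DNS code; the 16-bit checksum below applies to all modern DNSKEY algorithms, not the historic algorithm 1):</p>

```python
def key_tag(rdata: bytes) -> int:
    """RFC 4034 Appendix B: a 16-bit checksum over the DNSKEY RDATA.

    Even-indexed bytes contribute as the high byte of a 16-bit word,
    odd-indexed bytes as the low byte; carries are folded back in.
    """
    ac = 0
    for i, b in enumerate(rdata):
        ac += b << 8 if i % 2 == 0 else b
    ac += (ac >> 16) & 0xFFFF
    return ac & 0xFFFF

# Key tags are not unique: distinct RDATA trivially produce the same tag.
assert key_tag(b"\x01\x00") == key_tag(b"\x00\x00\x01\x00") == 256
```

<p>Because a resolver pairing signatures with keys has only this tag to go on, a response carrying <i>k</i> colliding keys and <i>s</i> signatures can force up to <i>k</i> × <i>s</i> expensive cryptographic validations — the multiplication at the heart of Keytrap.</p>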
    <div>
      <h3>Mitigation</h3>
      <a href="#mitigation">
        
      </a>
    </div>
    <p>We have limited the maximum number of keys we will accept at a zone cut. A zone cut is where a parent zone delegates to a child zone, e.g. where the .com zone delegates cloudflare.com to Cloudflare nameservers. Even with this limit already in place and various other protections built for our platform, we realized that it would still be computationally costly to process a malicious DNS answer from an authoritative DNS server.</p><p>To address and further mitigate this vulnerability, we added a limit on signature validations per RRSET and a limit on total signature validations per resolution task. One resolution task might include multiple recursive queries to external authoritative DNS servers in order to answer a single DNS question. Client queries exceeding these limits will fail to resolve and will receive a response with an Extended DNS Error (<a href="/unwrap-the-servfail/">EDE</a>) <a href="https://datatracker.ietf.org/doc/html/rfc8914#name-extended-dns-error-code-0-o">code 0</a>. Furthermore, we added metrics that allow us to detect attacks attempting to exploit this vulnerability.</p>
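<p>The budget approach can be sketched as follows. The class name and limit values are hypothetical, chosen for illustration only — they are not Cloudflare’s actual limits — but the shape is the same: every signature validation spends from a finite budget, and an exhausted budget ends resolution with SERVFAIL and EDE code 0 rather than more crypto work:</p>

```python
class ValidationBudget:
    """Hypothetical per-resolution-task budget for DNSSEC signature checks.

    Two caps: one per RRset (bounding the keys-times-signatures blow-up for
    a single answer) and one for the whole resolution task (bounding the
    total across all recursive queries needed for one DNS question).
    """

    def __init__(self, max_per_rrset: int = 8, max_per_task: int = 16):
        self.max_per_rrset = max_per_rrset
        self.remaining = max_per_task

    def spend(self, rrset_attempts: int) -> bool:
        """Return True if one more validation may run for this RRset;
        False means the resolver should give up (SERVFAIL + EDE 0)."""
        if rrset_attempts >= self.max_per_rrset or self.remaining <= 0:
            return False
        self.remaining -= 1
        return True
```

<p>With this in place, a crafted response can no longer dictate how much work the validator performs: the attacker pays to build a large combinatorial answer, but the resolver only ever spends a constant amount validating it.</p>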
    <div>
      <h2>NSEC3 iteration and closest encloser proof vulnerability (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-50868">CVE-2023-50868</a>)</h2>
      <a href="#nsec3-iteration-and-closest-encloser-proof-vulnerability">
        
      </a>
    </div>
    
    <div>
      <h3>Introduction</h3>
      <a href="#introduction">
        
      </a>
    </div>
    <p><a href="https://datatracker.ietf.org/doc/html/rfc5155">NSEC3</a> is an alternative approach for authenticated denial of existence. You can learn more about authenticated denial of existence <a href="/black-lies/">here</a>. NSEC3 uses hashes derived from DNS names, rather than the DNS names directly, in an attempt to prevent zone enumeration, and the standard supports multiple iterations of the hash calculation. However, because the full DNS name is used as input to the hash calculation, additional hashing iterations beyond the first provide no extra protection and are not recommended in <a href="https://datatracker.ietf.org/doc/html/rfc9276#name-iterations">RFC 9276</a>. The cost of these hash calculations is compounded when computing the <a href="https://datatracker.ietf.org/doc/html/rfc5155#section-8.3">closest encloser proof</a>, which requires hashing multiple candidate names. A malicious authoritative DNS server can therefore combine a high NSEC3 iteration count with long DNS names containing many labels to exhaust the computing resources of a validating resolver by making it perform unnecessary hash computations.</p>
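<p>The per-name cost follows directly from the iterated hash construction in RFC 5155 §5 (a minimal sketch using SHA-1, the only hash algorithm the standard defines for NSEC3):</p>

```python
import hashlib


def nsec3_hash(owner_wire: bytes, salt: bytes, iterations: int) -> bytes:
    """RFC 5155 §5: IH(0) = H(owner || salt); IH(k) = H(IH(k-1) || salt).

    `owner_wire` is the owner name in DNS wire format. Each name costs
    iterations + 1 SHA-1 computations.
    """
    digest = hashlib.sha1(owner_wire + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return digest
```

<p>During a closest encloser proof the resolver hashes candidate ancestors of the query name, so a crafted response pairing a high iteration count with a query name of many labels multiplies the work to roughly labels × (iterations + 1) hash computations — both factors chosen by the attacker.</p>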
    <div>
      <h3>Mitigation</h3>
      <a href="#mitigation">
        
      </a>
    </div>
    <p>For this vulnerability, we applied a mitigation technique similar to the one used for Keytrap: we added a limit on total hash calculations per resolution task to answer a single DNS question. Client queries exceeding this limit will fail to resolve and will receive a response with an EDE <a href="https://datatracker.ietf.org/doc/html/rfc9276.html#section-6">code 27</a>. We also added metrics to track hash calculations, allowing early detection of attacks attempting to exploit this vulnerability.</p>
    <div>
      <h2>Timeline</h2>
      <a href="#timeline">
        
      </a>
    </div>
    <table>
	<tbody>
		<tr>
			<td>
			<p><strong>Date and time in UTC</strong></p>
			</td>
			<td>
			<p><strong>Event</strong></p>
			</td>
		</tr>
		<tr>
			<td>
			<p>2023-11-03 16:05</p>
			</td>
			<td>
			<p>John Todd from <a href="https://quad9.net/"><u>Quad9</u></a> invites Cloudflare to participate in a joint task force to discuss a new DNS vulnerability.</p>
			</td>
		</tr>
		<tr>
			<td>
			<p>2023-11-07 14:30</p>
			</td>
			<td>
			<p>A group of DNS vendors and service providers meet to discuss the vulnerability during <a href="https://www.ietf.org/blog/ietf118-highlights/"><u>IETF 118</u></a>. Discussions and collaboration continue in a closed chat group hosted at <a href="https://dns-oarc.net/oarc/services/chat"><u>DNS-OARC</u></a>.</p>
			</td>
		</tr>
		<tr>
			<td>
			<p>2023-12-08 20:20</p>
			</td>
			<td>
			<p>Cloudflare public resolver 1.1.1.1 is fully patched to mitigate the Keytrap vulnerability (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-50387"><u>CVE-2023-50387</u></a>).</p>
			</td>
		</tr>
		<tr>
			<td>
			<p>2024-01-17 22:39</p>
			</td>
			<td>
			<p>Cloudflare public resolver 1.1.1.1 is fully patched to mitigate the NSEC3 iteration count and closest encloser vulnerability (<a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-50868"><u>CVE-2023-50868</u></a>).</p>
			</td>
		</tr>
		<tr>
			<td>
			<p>2024-02-13 13:04</p>
			</td>
			<td>
			<p><a href="https://nlnetlabs.nl/news/2024/Feb/13/unbound-1.19.1-released/"><u>Unbound</u></a> 1.19.1 is released.</p>
			</td>
		</tr>
		<tr>
			<td>
			<p>2024-02-13 23:00</p>
			</td>
			<td>
			<p>Cloudflare internal CDN resolver is fully patched to mitigate both <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-50387"><u>CVE-2023-50387</u></a> and <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-50868"><u>CVE-2023-50868</u></a>.</p>
			</td>
		</tr>
	</tbody>
</table>
    <div>
      <h2>Credits</h2>
      <a href="#credits">
        
      </a>
    </div>
    <p>We would like to thank Elias Heftrig, Haya Schulmann, Niklas Vogel, and Michael Waidner from the German National Research Center for Applied Cybersecurity <a href="https://www.athene-center.de/en/">ATHENE</a> for discovering the <a href="https://www.athene-center.de/fileadmin/content/PDF/Technical_Report_KeyTrap.pdf">Keytrap vulnerability</a> and responsibly disclosing it.</p><p>We would like to thank Petr Špaček from the Internet Systems Consortium (<a href="https://www.isc.org/">ISC</a>) for discovering the <a href="https://www.isc.org/blogs/2024-bind-security-release/">NSEC3 iteration and closest encloser proof vulnerability</a> and responsibly disclosing it.</p><p>We would like to thank John Todd from <a href="https://quad9.net/">Quad9</a> and the DNS Operations Analysis and Research Center (<a href="https://dns-oarc.net/">DNS-OARC</a>) for facilitating coordination amongst the various stakeholders.</p><p>And finally, we would like to thank the DNS-OARC community members, representing various DNS vendors and service providers, who came together and worked tirelessly to fix these vulnerabilities, working towards a common goal of making the Internet resilient and secure.</p> ]]></content:encoded>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Resolver]]></category>
            <category><![CDATA[1.1.1.1]]></category>
            <category><![CDATA[Vulnerabilities]]></category>
            <category><![CDATA[KeyTrap]]></category>
            <category><![CDATA[NSEC3]]></category>
            <category><![CDATA[CVE-2023-50387]]></category>
            <guid isPermaLink="false">5KGfAQ21FRucS2X625z4FX</guid>
            <dc:creator>Vicky Shrestha</dc:creator>
            <dc:creator>Anbang Wen</dc:creator>
        </item>
        <item>
            <title><![CDATA[Connection errors in Asia Pacific region on July 9, 2023]]></title>
            <link>https://blog.cloudflare.com/connection-errors-in-asia-pacific-region-on-july-9-2023/</link>
            <pubDate>Tue, 11 Jul 2023 08:48:13 GMT</pubDate>
            <description><![CDATA[ On July 9, 2023, users in the Asia Pacific region experienced connection errors due to origin DNS resolution failures to .com and .net TLD nameservers ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4XSclbffVsXyJvs6H28PzZ/8ca3e3e580eecf4e762af00eb94eb8d4/image2-5.png" />
            
            </figure><p>On Sunday, July 9, 2023, early morning UTC time, we observed a high number of DNS resolution failures — up to 7% of all DNS queries across the Asia Pacific region — caused by invalid DNSSEC signatures from Verisign .com and .net Top Level Domain (TLD) nameservers. This resulted in connection errors for visitors of Internet properties on Cloudflare in the region.</p><p>The local instances of Verisign’s nameservers started to respond with expired DNSSEC signatures in the Asia Pacific region. In order to remediate the impact, we have rerouted upstream DNS queries towards Verisign to locations on the US west coast which are returning valid signatures.</p><p>We have already reached out to Verisign to get more information on the root cause. Until their issues have been resolved, we will keep our DNS traffic to .com and .net TLD nameservers rerouted, which might cause slightly increased latency for the first visitor to domains under .com and .net in the region.</p>
    <div>
      <h3>Background</h3>
      <a href="#background">
        
      </a>
    </div>
    <p>In order to proxy a domain’s traffic through Cloudflare’s network, there are two components involved with respect to the Domain Name System (DNS) from the perspective of a Cloudflare data center: external DNS resolution, and upstream or origin DNS resolution.</p><p>To understand this, let’s look at the domain name <code>blog.cloudflare.com</code> — which is proxied through Cloudflare’s network — and let’s assume it is configured to use <code>origin.example</code> as the origin server:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5nnwNUFQmflIHHioGDISlg/250c388a2d796d0dc8139b4eddda6c05/image5-1.png" />
            
            </figure><p>Here, the external DNS resolution is the part where DNS queries to <code>blog.cloudflare.com</code> sent by public resolvers like <code>1.1.1.1</code> or <code>8.8.8.8</code> on behalf of a visitor return a set of Cloudflare Anycast IP addresses. This ensures that the visitor’s browser knows where to send an HTTPS request to load the website. Under the hood your browser performs a DNS query that looks something like this (the trailing dot indicates the <a href="https://en.wikipedia.org/wiki/DNS_root_zone">DNS root zone</a>):</p>
            <pre><code>$ dig blog.cloudflare.com. +short
104.18.28.7
104.18.29.7</code></pre>
            <p>(Your computer doesn’t actually use the <code>dig</code> command internally; we’ve used it here to illustrate the process.) When the next closest Cloudflare data center receives the HTTPS request for <code>blog.cloudflare.com</code>, it needs to fetch the content from the origin server (assuming it is not cached).</p><p>There are two ways Cloudflare can reach the origin server. If the DNS settings in Cloudflare contain IP addresses, then we can connect directly to the origin. In some cases, our customers use a CNAME, which means Cloudflare has to perform another DNS query to get the IP addresses associated with the CNAME. In the example above, <code>blog.cloudflare.com</code> has a CNAME record instructing us to look at <code>origin.example</code> for IP addresses. During the incident, only customers with CNAME records like this pointing to .com and .net domain names may have been affected.</p><p>The domain <code>origin.example</code> needs to be resolved by Cloudflare as part of the upstream or origin DNS resolution. This time, the Cloudflare data center needs to perform an outbound DNS query that looks like this:</p>
            <pre><code>$ dig origin.example. +short
192.0.2.1</code></pre>
            <p>DNS is a hierarchical protocol, so the recursive DNS resolver, which usually handles DNS resolution for whoever wants to resolve a <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domain name</a>, needs to talk to several involved nameservers until it finally gets an answer from the authoritative nameservers of the domain (assuming no DNS responses are cached). This is the same process during the external DNS resolution and the origin DNS resolution. Here is an example for the origin DNS resolution:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7E1GLN7i8qGi3oB6Zentug/8b55136d3c67a79d0d9c711c428911b4/image6-1.png" />
            
            </figure><p>Inherently, DNS is a public system that was initially published without any means to validate the integrity of the DNS traffic. So in order to prevent someone from spoofing DNS responses, <a href="/dnssec-an-introduction/">DNS Security Extensions (DNSSEC)</a> were introduced as a means to authenticate that DNS responses really come from who they claim to come from. This is achieved by generating cryptographic signatures alongside existing DNS records like A, AAAA, MX, CNAME, etc. By validating a DNS record’s associated signature, it is possible to verify that a requested DNS record really comes from its authoritative nameserver and wasn’t altered en-route. If a signature cannot be validated successfully, recursive resolvers usually return an error indicating the invalid signature. This is exactly what happened on Sunday.</p>
    <div>
      <h3>Incident timeline and impact</h3>
      <a href="#incident-timeline-and-impact">
        
      </a>
    </div>
    <p>On Saturday, July 8, 2023, at 21:10 UTC our logs show the first instances of DNSSEC validation errors that happened during upstream DNS resolution from multiple Cloudflare data centers in the Asia Pacific region. It appeared DNS responses from the TLD nameservers of .com and .net of the type NSEC3 (a DNSSEC mechanism to <a href="/black-lies/">prove non-existing DNS records</a>) included invalid signatures. About an hour later, at 22:16 UTC, the first internal alerts went off (alerting requires errors to persist over a certain period of time), but error rates were still at around 0.5% of all upstream DNS queries.</p><p>After several hours, the error rate had increased to the point where ~13% of our upstream DNS queries in affected locations were failing. This percentage continued to fluctuate over the duration of the incident between 10-15% of upstream DNS queries, and roughly 5-7% of all DNS queries (external &amp; upstream resolution) to affected Cloudflare data centers in the Asia Pacific region.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/acWvj718KdxYfGx33fBZT/7ee25b63cf83ee9ff18e6734aeb1cc3e/image1-6.png" />
            
            </figure><p>Initially it appeared as though only a single upstream nameserver was having issues with DNS resolution, however upon further investigation it was discovered that the issue was more widespread. Ultimately, we were able to verify that the Verisign nameservers for .com and .net were returning expired DNSSEC signatures on a portion of DNS queries in the Asia Pacific region. Based on our tests, other nameserver locations were correctly returning valid DNSSEC signatures.</p><p>In response, we rerouted our DNS traffic to the .com and .net TLD nameserver IP addresses to Verisign’s US west locations. After this change was propagated, the issue very quickly subsided and origin resolution error rates returned to normal levels.</p><p>All times are in UTC:</p><p><b>2023-07-08 21:10</b> First instances of DNSSEC validation errors appear in our logs for origin DNS resolution.</p><p><b>2023-07-08 22:16</b> First internal alerts for Asia Pacific data centers go off indicating origin DNS resolution error on our test domain. Very few errors for other domains at this point.</p><p><b>2023-07-09 02:58</b> Error rates have increased substantially since the first instance. An incident is declared.</p><p><b>2023-07-09 03:28</b> DNSSEC validation issues seem to be isolated to a single upstream provider. It is not abnormal that a single upstream has issues that propagate back to us, and in this case our logs were predominantly showing errors from domains that resolve to this specific upstream.</p><p><b>2023-07-09 04:52</b> We realize that DNSSEC validation issues are more widespread and affect multiple .com and .net domains. Validation issues continue to be isolated to the Asia Pacific region only, and appear to be intermittent.</p><p><b>2023-07-09 05:15</b> DNS queries via popular recursive resolvers like 8.8.8.8 and 1.1.1.1 do not return invalid DNSSEC signatures at this point. 
DNS queries using the local stub resolver continue to return DNSSEC errors.</p><p><b>2023-07-09 06:24</b> Responses from .com and .net Verisign nameservers in Singapore contain expired DNSSEC signatures, but responses from Verisign TLD nameservers in other locations are fine.</p><p><b>2023-07-09 06:41</b> We contact Verisign and notify them about expired DNSSEC signatures.</p><p><b>2023-07-09 06:50</b> In order to remediate the impact, we reroute DNS traffic via IPv4 for the .com and .net Verisign nameserver IPs to US west IPs instead. This immediately leads to a substantial drop in the error rate.</p><p><b>2023-07-09 07:06</b> We also reroute DNS traffic via IPv6 for the .com and .net Verisign nameserver IPs to US west IPs. This leads to the error rate going down to zero.</p><p><b>2023-07-10 09:23</b> The rerouting is still in place, but the underlying issue with expired signatures in the Asia Pacific region has still not been resolved.</p><p><b>2023-07-10 18:23</b> Verisign gets back to us confirming that they “were serving stale data” at their local site and have resolved the issues.</p>
    <div>
      <h3>Technical description of the error and how it happened</h3>
      <a href="#technical-description-of-the-error-and-how-it-happened">
        
      </a>
    </div>
    <p>As mentioned in the introduction, the underlying cause of this incident was expired DNSSEC signatures for the .net and .com zones. Expired DNSSEC signatures will cause a DNS response to be interpreted as invalid. There are two main scenarios in which this error was observed by a user:</p><ol><li><p><a href="https://developers.cloudflare.com/dns/cname-flattening/">CNAME flattening</a> for external DNS resolution. This is when our authoritative nameservers attempt to return the IP address(es) that a CNAME record target resolves to, rather than the CNAME record itself.</p></li><li><p>CNAME target lookup for origin DNS resolution. This is most commonly used when an HTTPS request is sent to a Cloudflare anycast IP address, and we need to determine what IP address to forward the request to. See <a href="https://developers.cloudflare.com/fundamentals/get-started/concepts/how-cloudflare-works/">How Cloudflare works</a> for more details.</p></li></ol><p>In both cases, behind the scenes the DNS query goes through an in-house recursive DNS resolver in order to look up what a hostname resolves to. The purpose of this resolver is to cache queries, optimize future queries, and provide DNSSEC validation. If the query from this resolver fails for whatever reason, our authoritative DNS system will not be able to perform the two scenarios outlined above.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5qE0JFXaLHPwt3orOsNBm5/37d2d34d396d4cc4c2a22b7241e4120f/image3-1.png" />
            
            </figure><p>During the incident, the recursive resolver was failing to validate the DNSSEC signatures in DNS responses for domains ending in .com and .net. These signatures are sent in the form of an RRSIG record alongside the other DNS records they cover. Together they form a Resource Record set (RRset). Each RRSIG has the corresponding fields:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3aZjsnpM6WSE70sPrnkHbr/6af366bd2accafc3f06296e241ceaba5/image4.png" />
            
            </figure><p>As you can see, the main part of the payload is associated with the signature and its corresponding metadata. Each recursive resolver is responsible for checking not only the signature but also the expiration time of the signature. It is important to obey the expiration time in order to avoid returning responses for RRsets that have been signed by old keys, which could potentially have been compromised by that time.</p><p>During this incident, Verisign, the authoritative operator for the .com and .net TLD zones, was occasionally returning expired signatures in its DNS responses in the Asia Pacific region. As a result, our recursive resolver was not able to validate the corresponding RRset. Ultimately this meant that a percentage of DNS queries would return SERVFAIL as the response code to our authoritative nameserver. This in turn caused connection issues for users trying to connect to a domain on Cloudflare, because we weren't able to resolve the upstream target of affected domain names and thus didn’t know where to send proxied HTTPS requests.</p>
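<p>The time check a validator applies to each RRSIG is straightforward. A simplified sketch (both timestamps are the 32-bit seconds-since-epoch fields from the RRSIG RDATA; for brevity this ignores the serial-number arithmetic RFC 4034 §3.1.5 prescribes for times near the 32-bit wrap):</p>

```python
import time


def rrsig_time_ok(inception: int, expiration: int, now: int = None) -> bool:
    """A resolver may only accept an RRSIG between its inception and
    expiration times; outside that window the RRset fails validation."""
    if now is None:
        now = int(time.time())
    return inception <= now <= expiration
```

<p>During the incident the affected responses carried signatures whose expiration time was already in the past, so this check failed and the RRset was rejected, regardless of whether the cryptographic signature itself was otherwise valid.</p>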
    <div>
      <h3>Remediation and follow-up steps</h3>
      <a href="#remediation-and-follow-up-steps">
        
      </a>
    </div>
    <p>Once we had identified the root cause we started to look at different ways to remedy the issue. We came to the conclusion that the fastest way to work around this very regionalized issue was to stop using the local route to Verisign's nameservers. This means that, at the time of writing this, our outgoing DNS traffic towards Verisign's nameservers in the Asia Pacific region now traverses the Pacific and ends up at the US west coast, rather than being served by closer nameservers. Due to the nature of DNS and the important role of DNS caching, this has less impact than you might initially expect. Frequently looked up names will be cached after the first request, and this only needs to happen once per data center, as we share and pool the local DNS recursor caches to maximize their efficiency.</p><p>Ideally, we would have been able to fix the issue right away as it potentially affected others in the region too, not just our customers. We will therefore work diligently to improve our incident communications channels with other providers in order to ensure that the DNS ecosystem remains robust against issues such as this. Being able to quickly get hold of other providers that can take action is vital when urgent issues like these arise.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>This incident <a href="/october-2021-facebook-outage/">once again</a> shows how impactful DNS failures are and how crucial this technology is for the Internet. We will investigate how we can improve our systems to detect and resolve issues like this more efficiently and quickly if they occur again in the future.</p><p>While Cloudflare was not the cause of this issue, and we are certain that we were not the only ones affected by this, we are still sorry for the disruption to our customers and to all the users who were unable to access Internet properties during this incident.</p><p><b>Update</b>: On Tue Jul 11 22:24:21 UTC 2023, Verisign posted an <a href="https://lists.dns-oarc.net/pipermail/dns-operations/2023-July/022174.html">announcement</a>, providing more details:</p><blockquote><p><i>Last week, during a migration of one of our DNS resolution sites in Singapore, from one provider to another, we unexpectedly lost management access and the ability to deliver changes and DNS updates to the site. Following our standard procedure, we disabled all transit links to the affected site. Unfortunately, a peering router remained active, which was not immediately obvious to our teams due to the lack of connectivity there.</i></p></blockquote><blockquote><p><i>Over the weekend, this caused an issue that may have affected the ability of some internet users in the region to reach some .com and .net domains, as DNSSEC signatures on the site began expiring. The issue was resolved by powering off the site’s peering router, causing the anycast route announcement to be withdrawn and traffic to be directed to other sites.</i></p></blockquote><blockquote><p><i>We are updating our processes and procedures and will work to prevent such issues from recurring in the future.</i></p></blockquote><blockquote><p><i>The Singapore site is part of a highly redundant constellation of more than 200 sites that make up our global network. 
This issue had no effect on the core resolution of .com and .net resolution globally. We apologize to those who may have been affected.</i></p></blockquote><p></p> ]]></content:encoded>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Post Mortem]]></category>
            <guid isPermaLink="false">3ZlvMILKrfS2Z4IQ0qumTD</guid>
            <dc:creator>Christian Elmerot</dc:creator>
            <dc:creator>Alex Fattouche</dc:creator>
            <dc:creator>Hannes Gerhart</dc:creator>
        </item>
        <item>
            <title><![CDATA[DNSSEC issues take Fiji domains offline]]></title>
            <link>https://blog.cloudflare.com/dnssec-issues-fiji/</link>
            <pubDate>Wed, 09 Mar 2022 10:11:24 GMT</pubDate>
            <description><![CDATA[ DNSSEC issues with the .fj ccTLD caused problems reaching websites on the island nation ]]></description>
            <content:encoded><![CDATA[ <p></p><p>On the morning of March 8, a <a href="https://news.ycombinator.com/item?id=30596404">post to Hacker News</a> stated that “All .fj domains have gone offline”, listing several hostnames in domains within the Fiji top level domain (known as a ccTLD) that had become unreachable. Commenters in the associated discussion thread had mixed results in being able to reach .fj hostnames—some were successful, while others saw failures. The fijivillage news site also <a href="https://www.fijivillage.com/news/All-websites--apps-in-Fiji-with-dotcomfj-suffix-are-down-and-this-has-also-affected-M-PAiSA-services-x854fr/">highlighted the problem</a>, noting that the issue also impacted Vodafone’s M-PAiSA app/service, preventing users from completing financial transactions.</p><p>The impact of this issue can be seen in traffic to Cloudflare customer zones in the .com.fj second-level domain. The graph below shows that HTTP traffic to these zones dropped by approximately 40% almost immediately starting around midnight UTC on March 8. Traffic volumes continued to decline throughout the rest of the morning.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/MlEPEaiJ0hsxd5pPkBJKr/aa486dd964a65ebb60857c4f95c9c1cc/image1-6.png" />
            
            </figure><p>Looking at Cloudflare’s 1.1.1.1 resolver data for queries for .com.fj hostnames, we can also see that error volume associated with those queries climbs significantly starting just after midnight as well. This means that our resolvers encountered issues with the answers from .fj servers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1j8wWMQ23FgK4nojhFGMr1/3fd0624e9191cfc00504905e7f5981d9/image3-2-1.png" />
            
            </figure><p>This observation suggests that the problem was strictly DNS related, rather than connectivity related—Cloudflare Radar does not show any indication of an Internet disruption in Fiji coincident with the start of this problem.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/doKQ7vFJj4JJTDumpq7Zq/eb4febc5689aea7c99dbae1717362d62/image4.png" />
            
            </figure><p>It was suggested within the <a href="https://news.ycombinator.com/item?id=30597091">Hacker News comments</a> that the problem could be DNSSEC related. Upon further investigation, it appears that may be the cause. In verifying the DNSSEC record for the .fj ccTLD, shown in the <code>dig</code> output below, we see that it states <code>EDE: 9 (DNSKEY Missing): 'no SEP matching the DS found for fj.'</code></p>
            <pre><code>kdig fj. soa +dnssec @1.1.1.1 
;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY; status: SERVFAIL; id: 12710
;; Flags: qr rd ra; QUERY: 1; ANSWER: 0; AUTHORITY: 0; ADDITIONAL: 1
 
;; EDNS PSEUDOSECTION:
;; Version: 0; flags: do; UDP size: 1232 B; ext-rcode: NOERROR
;; EDE: 9 (DNSKEY Missing): 'no SEP matching the DS found for fj.'
 
;; QUESTION SECTION:
;; fj.                          IN      SOA
 
;; Received 73 B
;; Time 2022-03-08 08:57:41 EST
;; From 1.1.1.1@53(UDP) in 17.2 ms</code></pre>
            <p>Extended DNS Error 9 (EDE: 9) is <a href="https://datatracker.ietf.org/doc/html/draft-ietf-dnsop-extended-error-16#section-4.10">defined</a> as “A DS record existed at a parent, but no supported matching DNSKEY record could be found for the child.” The Cloudflare Learning Center <a href="https://www.cloudflare.com/learning/dns/dns-records/dnskey-ds-records/">article on DNSKEY and DS records</a> explains this relationship:</p><blockquote><p><i>The DS record is used to verify the authenticity of child zones of DNSSEC zones. The DS key record on a parent zone contains a hash of the KSK in a child zone. A DNSSEC resolver can therefore verify the authenticity of the child zone by hashing its KSK record, and comparing that to what is in the parent zone's DS record.</i></p></blockquote><p>Ultimately, it appears that around midnight UTC, the .fj zone started to be signed with a key that was not represented in the root zone's DS record, possibly as the result of a scheduled rollover performed without first confirming that IANA had updated the root zone. (IANA owns contact with the TLD operators, and instructs the Root Zone Publisher on the changes to make in the next version of the root zone.)</p><p>DNSSEC problems as the root cause of the observed issue align with the observation in the Hacker News comments that some were able to access .fj websites, while others were not. Users behind resolvers doing strict DNSSEC validation would have seen an error in their browser, while users behind less strict resolvers would have been able to access the sites without a problem.</p>
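            <p>The DS-to-KSK relationship described above can be sketched in a few lines of Python: a minimal illustration of the RFC 4034 DS digest, computed as a SHA-256 hash over the owner name in wire format plus the DNSKEY RDATA. The key bytes below are hypothetical placeholders, not the actual .fj key material:</p>

```python
import hashlib
import struct

def name_to_wire(name: str) -> bytes:
    """Encode a domain name in DNS wire format (length-prefixed labels)."""
    wire = b""
    for label in name.rstrip(".").split("."):
        wire += bytes([len(label)]) + label.lower().encode()
    return wire + b"\x00"

def ds_digest(owner: str, flags: int, protocol: int, algorithm: int,
              pubkey: bytes) -> bytes:
    """RFC 4034: DS digest = hash(owner name in wire format | DNSKEY RDATA)."""
    rdata = struct.pack("!HBB", flags, protocol, algorithm) + pubkey
    return hashlib.sha256(name_to_wire(owner) + rdata).digest()

# Hypothetical key material for illustration -- not the real .fj keys.
old_ksk = b"old-key-bytes"
new_ksk = b"new-key-bytes"

# The parent (root) zone holds a DS computed from the OLD key...
parent_ds = ds_digest("fj.", 257, 3, 8, old_ksk)

# ...but after the rollover, the child signs with the NEW key.
child_ds = ds_digest("fj.", 257, 3, 8, new_ksk)

# No SEP matching the DS: this is the EDE 9 condition.
print(parent_ds == child_ds)  # False
```

            <p>When a rollover replaces the KSK, the parent's DS must be updated first (via IANA, for a TLD); otherwise validating resolvers have no key that matches the published digest and answers fail exactly as seen here.</p>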
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Further analysis of Cloudflare resolver metrics indicates that the problem was resolved around 1400 UTC, when the DS was updated. When DNSSEC is improperly configured for a single domain name, it can cause problems accessing websites or applications in that zone. However, when the misconfiguration occurs at a ccTLD level, the impact is much more significant. Unfortunately, this seems to <a href="https://ianix.com/pub/dnssec-outages.html">occur</a> all too often.</p><p>(Thank you to Ólafur Guðmundsson for his DNSSEC expertise.)</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[Outage]]></category>
            <guid isPermaLink="false">3vh0IyUNZyNMMXK7xmbeRR</guid>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Is BGP Safe Yet? No. But we are tracking it carefully]]></title>
            <link>https://blog.cloudflare.com/is-bgp-safe-yet-rpki-routing-security-initiative/</link>
            <pubDate>Fri, 17 Apr 2020 15:00:00 GMT</pubDate>
            <description><![CDATA[ BGP leaks and hijacks have been accepted as an unavoidable part of the Internet for far too long. Today, we are releasing isBGPSafeYet.com, a website to track deployments and filtering of invalid routes by the major networks. ]]></description>
            <content:encoded><![CDATA[ <p>BGP leaks and hijacks have been accepted as an unavoidable part of the Internet for far too long. We relied on protection at the upper layers like TLS and DNSSEC to ensure an untampered delivery of packets, but a hijacked route often results in an unreachable IP address, which results in an Internet outage.</p><p>The Internet is too vital to allow this known problem to continue any longer. It's time networks prevented leaks and hijacks from having any impact. It's time to make BGP safe. No more excuses.</p><p>Border Gateway Protocol (BGP), the protocol used to exchange routes, has existed and evolved since the 1980s. Over the years it has gained security features. The most notable security addition is Resource Public Key Infrastructure (RPKI), a security framework for routing. It has been the subject of <a href="/tag/rpki/">a few blog posts</a> following our deployment in mid-2018.</p><p>Today, the industry considers RPKI mature enough for widespread use, with a sufficient ecosystem of <a href="https://github.com/cloudflare/gortr">software</a> and <a href="https://rpki.cloudflare.com">tools</a>, including tools we've written and open sourced. We have fully deployed Origin Validation on all our BGP sessions with our peers and signed our prefixes.</p><p>However, the Internet can only be safe if the major network operators <a href="https://www.ndss-symposium.org/wp-content/uploads/2017/09/ndss2017_06A-3_Gilad_paper.pdf">deploy RPKI</a>. Those networks have the ability to spread a leak or hijack far and wide, and it's vital that they take part in stamping out the scourge of BGP problems, whether inadvertent or deliberate.</p><p>Major networks like AT&amp;T and Telia pioneered global deployments of RPKI in 2019. They were successfully followed by Cogent and NTT in 2020. 
Hundreds of networks of all sizes have done a tremendous job over the last few years, but there is still work to be done.</p><p>If we observe the customer cones of the networks that have deployed RPKI, we see that around 50% of the Internet is better protected against route leaks. That's great, but it's nothing like enough.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Uh9D7PVerLfXQ68eqAnlh/ae920ca803b130eae81550f7c36a3c7c/isbgpsafeyet.png" />
            
            </figure><p>Today, we are releasing <a href="https://isbgpsafeyet.com/">isBGPSafeYet.com</a>, a website to track deployments and filtering of invalid routes by the major networks.</p><p>We are hoping this will help the community, and we will crowdsource the information on the website. The source code is available on <a href="https://github.com/cloudflare/isbgpsafeyet.com">GitHub</a>; we welcome suggestions and contributions.</p><p>We expect this initiative will make RPKI more accessible to everyone and ultimately will reduce the impact of route leaks. Share the message with your Internet Service Providers (ISPs), hosting providers, and transit networks to build a safer Internet.</p><p>Additionally, to monitor and test deployments, we decided to announce two bad prefixes from our 200+ data centers and via the 233+ Internet Exchange Points (IXPs) we are connected to:</p><ul><li><p>103.21.244.0/24</p></li><li><p>2606:4700:7000::/48</p></li></ul><p>Both these prefixes should be considered <i>invalid</i> and should not be routed by your provider if RPKI is implemented within their network. This makes it easy to demonstrate how far a bad route can go, and to test whether RPKI is working in the real world.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/76jlyQka8osAb5vX9EuzKf/6709edfcecc29419cca07da0748ed3de/Screen-Shot-2020-04-16-at-6.36.48-PM.png" />
            
            </figure><p><a href="https://rpki.cloudflare.com/?validateRoute=13335_103.21.244.0%2F24">A Route Origin Authorization for 103.21.244.0/24 on rpki.cloudflare.com</a></p><p>In the test you can run on <a href="https://isBGPSafeYet.com">isBGPSafeYet.com</a>, your browser will attempt to fetch two pages: the first one, valid.rpki.cloudflare.com, is behind an RPKI-valid prefix, and the second one, invalid.rpki.cloudflare.com, is behind the RPKI-invalid prefix.</p><p>The test has two outcomes:</p><ul><li><p>If both pages were correctly fetched, your ISP accepted the invalid route. It does not implement RPKI.</p></li><li><p>If only valid.rpki.cloudflare.com was fetched, your ISP implements RPKI. You will be less sensitive to route leaks.</p></li></ul>
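            <p>The browser test above observes origin validation from the data plane; the control-plane check itself is simple enough to sketch. The following is a minimal RFC 6811-style sketch; the ROA uses documentation prefixes and a private ASN, not Cloudflare's actual ROAs:</p>

```python
import ipaddress

def validate_route(prefix: str, origin_asn: int, roas) -> str:
    """RFC 6811-style origin validation: 'valid' if a covering ROA matches
    the origin ASN and the announced prefix length is within maxLength,
    'invalid' if ROAs cover the prefix but none match, 'unknown' otherwise."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_length, roa_asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True
            if roa_asn == origin_asn and net.prefixlen <= max_length:
                return "valid"
    return "invalid" if covered else "unknown"

# Hypothetical ROA: (prefix, maxLength, origin ASN) -- documentation values.
roas = [("192.0.2.0/23", 24, 64500)]

print(validate_route("192.0.2.0/24", 64500, roas))     # valid
print(validate_route("192.0.2.0/24", 64501, roas))     # invalid: wrong origin AS
print(validate_route("192.0.2.0/25", 64500, roas))     # invalid: more specific than maxLength
print(validate_route("198.51.100.0/24", 64500, roas))  # unknown: no covering ROA
```

            <p>A router that drops "invalid" routes would refuse both a wrong-origin hijack and an overly specific more-specific announcement, which is exactly the filtering the site tests for.</p>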
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ihoJvfZjpxRs6GGJ2cZL6/90dcb2481e75ce40c4800c3e99ae9fc4/blogpost2-01.png" />
            
            </figure><p>A simple test of RPKI invalid reachability</p><p>We will be performing tests using those prefixes to check for propagation. <a href="https://en.wikipedia.org/wiki/Traceroute">Traceroutes</a> and probing have helped us in the past by creating visualizations of deployment.</p><p>A simple indicator is the number of networks sending the accepted route to their peers and collectors:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/wdk01mSlPLv5TVHzuyDHp/4b56692fa0b12154d860023723290c9d/invalid-prefixes-reachability.png" />
            
            </figure><p>Routing status from online route collection tool <a href="https://stat.ripe.net/widget/routing-status#w.resource=103.21.244.0/24&amp;w.min_peers_seeing=0">RIPE Stat</a></p><p>In December 2019, we released a <a href="https://xkcd.com/195/">Hilbert curve</a> map of the IPv4 address space. Every pixel represents a /20 prefix. If a dot is yellow, the prefix responded only to probes from an RPKI-valid IP space. If it is blue, the prefix responded to probes from both RPKI-valid and RPKI-invalid IP space.</p><p>To summarize, the yellow areas are IP space behind networks that drop RPKI-invalid prefixes. The Internet isn't safe until the blue becomes yellow.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2tiSr4Gzbk0qrtdGfMgIw/9be9478693140ad1132542148afe02ce/blogpost-hilbert-01.png" />
            
            </figure><p>Hilbert Curve Map of IP address space behind networks filtering RPKI invalid prefixes</p><p>Last but not least, we would like to thank every network that has already deployed RPKI and every developer that contributed to validator-software code bases. The last two years have shown that the Internet can become safer and we are looking forward to the day where we can call route leaks and hijacks an incident of the past.</p> ]]></content:encoded>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[RPKI]]></category>
            <category><![CDATA[DNSSEC]]></category>
            <guid isPermaLink="false">1gQz546mYPaLOpkdVu7O1K</guid>
            <dc:creator>Louis Poinsignon</dc:creator>
        </item>
        <item>
            <title><![CDATA[RFC8482 - Saying goodbye to ANY]]></title>
            <link>https://blog.cloudflare.com/rfc8482-saying-goodbye-to-any/</link>
            <pubDate>Fri, 15 Mar 2019 17:01:17 GMT</pubDate>
            <description><![CDATA[ Ladies and gentlemen, I would like you to welcome the new shiny RFC8482, which effectively deprecates the DNS ANY query type. DNS ANY was a "meta-query" - think about it as a similar thing to the common A, AAAA, MX or SRV query types, but unlike these it wasn't a real query type - it was special. ]]></description>
            <content:encoded><![CDATA[ <p>Ladies and gentlemen, I would like you to welcome the new shiny <a href="https://tools.ietf.org/html/rfc8482">RFC8482</a>, which effectively deprecates the DNS ANY query type. DNS ANY was a "meta-query" - think of it as a similar thing to the common A, AAAA, MX or SRV query types, but unlike these it wasn't a real query type - it was special. Unlike the standard query types, ANY didn't age well. It was hard to implement on modern DNS servers, the semantics were poorly understood by the community and it unnecessarily exposed the DNS protocol to abuse. RFC8482 allows us to clean it up - it's a good thing.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/PCmdGHjKE7wTlBgmp6igp/e77a20411c26932eba98e8fcede95bfd/Screenshot-from-2019-03-15-14-22-51.png" />
            
            </figure><p>But let's rewind a bit.</p>
    <div>
      <h2>Historical context</h2>
      <a href="#historical-context">
        
      </a>
    </div>
    <p>It all started in 2015, when we were looking at the code of our authoritative DNS server. The code flow was generally fine, but it was all peppered with naughty statements like this:</p>
            <pre><code>if qtype == "ANY" {
    // special case
}</code></pre>
            <p>This special code was ugly and error prone. This got us thinking: do we really need it? "ANY" is not a popular query type - no legitimate software uses it (with the notable exception of qmail).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5oOaG7G1WJfwM7TX5qGESz/c6e5c5359f4eb3c499d6b3e891d4f19b/11235945713_5bf22a701d_z.jpg" />
            
            </figure><p><a href="https://www.flickr.com/photos/cmichel67/11235945713/">Image</a> by <a href="https://www.flickr.com/photos/cmichel67/">Christopher Michel</a>, CC BY 2.0</p>
    <div>
      <h2>ANY is hard for modern DNS servers</h2>
      <a href="#any-is-hard-for-modern-dns-servers">
        
      </a>
    </div>
    <p>"ANY" queries, also called "* queries" in old RFCs, are supposed to return "all records" (citing <a href="https://tools.ietf.org/html/rfc1035">RFC1035</a>). There are two problems with this notion.</p><p>First, it assumes the server is able to retrieve "all records". In our implementation - we can't. Our DNS server, like many modern implementations, doesn't have a single "zone" file listing all properties of a DNS zone. This design allows us to respond fast and with information always up to date, but it makes it incredibly hard to retrieve "all records". Correct handling of "ANY" adds unreasonable code complexity for an obscure, rarely used query type.</p><p>Second, many of the DNS responses are generated on-demand. To mention just two use cases:</p><ul><li><p>Some of our DNS responses <a href="/dnssec-done-right/">are based on location</a></p></li><li><p><a href="/black-lies/">We are using black lies and DNS shotgun for DNSSEC</a></p></li></ul><p>Storing data in modern databases and dynamically generating responses poses a fundamental problem to ANY.</p>
    <div>
      <h2>ANY is hard for clients</h2>
      <a href="#any-is-hard-for-clients">
        
      </a>
    </div>
    <p>Around the same time a catastrophe happened - <a href="https://lists.dns-oarc.net/pipermail/dns-operations/2015-March/012899.html">Firefox started shipping with DNS code issuing "ANY" queries</a>. The intention was, as usual, benign. Firefox developers wanted to get the TTL value for A and AAAA queries.</p><p>To cite DNS guru <a href="https://icannwiki.org/Andrew_Sullivan">Andrew Sullivan</a>:</p><blockquote><p>In general, ANY is useful for troubleshooting but should never be used for regular operation. Its output is unpredictable given the effects of caches. It can return enormous result sets.</p></blockquote><p>In user code you can't rely on anything sane to come out of an "ANY" query. While an "ANY" query has somewhat defined semantics on the DNS authoritative side, it's undefined on the DNS resolver side. Such a query can confuse the resolver:</p><ul><li><p>Should it forward the "ANY" query to the authoritative?</p></li><li><p>Should it respond with any record that is already in cache?</p></li><li><p>Should it do some mixture of the above behaviors?</p></li><li><p>Should it cache the result of an "ANY" query and re-use the data for other queries?</p></li></ul><p>Different implementations do different things. "ANY" does not mean "ALL", which is the main source of confusion. To our joy, <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1093983#c14">Firefox quickly backpedaled</a> on the change and stopped issuing ANY queries.</p>
    <div>
      <h2>ANY is hard for network operators</h2>
      <a href="#any-is-hard-for-network-operators">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7FBoDQ01mFhkylbffAGioR/b08d10f697d24c84f232a1a2ca9bff02/Screenshot-from-2019-03-15-14-44-58.png" />
            
            </figure><p>A typical 50Gbps DNS amplification attack targeting one of our customers. The attack lasted about 4 hours.</p><p>Furthermore, since "ANY" queries can generate large responses, they were often used for DNS reflection attacks. Authoritative providers receive a spoofed ANY query and send the large answer to a target, potentially causing DoS damage. We have blogged about that many times:</p><ul><li><p><a href="https://blog.cloudflare.com/the-ddos-that-knocked-spamhaus-offline-and-ho/">The DDoS that knocked Spamhaus offline</a></p></li><li><p><a href="https://blog.cloudflare.com/deep-inside-a-dns-amplification-ddos-attack/">Deep inside a DNS amplification attack</a></p></li><li><p><a href="https://blog.cloudflare.com/reflections-on-reflections/">Reflections on reflections</a></p></li><li><p><a href="https://blog.cloudflare.com/how-the-consumer-product-safety-commission-is-inadvertently-behind-the-internets-largest-ddos-attacks/">How the CPSC is inadvertently behind the largest attacks</a></p></li></ul><p>The DoS problem with ANY is ancient. Here is a discussion about a <a href="https://lists.dns-oarc.net/pipermail/dns-operations/2013-May/010178.html">patch to BIND tweaking ANY from 2013</a>.</p><p>There is also a second angle to the ANY DoS problem. Some reports suggested that performant DNS servers (authoritative or resolvers) <a href="https://fanf.livejournal.com/140566.html">can fill their outbound network capacity</a> with numerous ANY responses.</p><p>The recommendation is simple - network operators must use <a href="https://kb.isc.org/docs/aa-00994">"response rate limiting"</a> when answering large DNS queries; otherwise, they pose a DoS threat. The "ANY" query type just happens to often give such large responses, while providing little value to legitimate users.</p>
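            <p>Response rate limiting can be as simple as a per-(client subnet, response type) token bucket. The sketch below illustrates the idea only, assuming a drop-on-empty policy; real implementations such as BIND's RRL are considerably more nuanced (for example, occasionally "slipping" a truncated reply so legitimate clients retry over TCP):</p>

```python
import time

class ResponseRateLimiter:
    """A minimal token-bucket sketch of DNS Response Rate Limiting (RRL):
    each (client subnet, response type) bucket earns `rate` tokens per
    second up to `burst`; when the bucket is empty, the response is
    dropped. This is an illustration, not BIND's actual algorithm."""

    def __init__(self, rate: float = 5.0, burst: float = 10.0):
        self.rate, self.burst = rate, burst
        self.buckets = {}  # (subnet, rtype) -> (tokens, last_timestamp)

    def allow(self, client_subnet: str, rtype: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get((client_subnet, rtype),
                                        (self.burst, now))
        # Refill tokens for the elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[(client_subnet, rtype)] = (tokens - 1.0, now)
            return True
        self.buckets[(client_subnet, rtype)] = (tokens, now)
        return False

rrl = ResponseRateLimiter(rate=5, burst=10)
# A burst of spoofed ANY queries from one subnet: the first 10 pass,
# the rest are dropped until the bucket refills.
results = [rrl.allow("203.0.113.0/24", "ANY", now=0.0) for _ in range(15)]
print(results.count(True))  # 10
```

            <p>Keying the bucket on the spoofed source subnet is what limits the amplification a reflector can deliver to any single victim.</p>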
    <div>
      <h2>Terminating ANY</h2>
      <a href="#terminating-any">
        
      </a>
    </div>
    <p>In 2015, frustrated with the experience, we announced that we would like to stop giving responses to "ANY" queries, and wrote a (controversial at the time) blog post:</p><ul><li><p><a href="https://blog.cloudflare.com/deprecating-dns-any-meta-query-type/">Deprecating DNS ANY meta-query type</a></p></li></ul><p>A year later we followed up explaining possible solutions:</p><ul><li><p><a href="https://blog.cloudflare.com/what-happened-next-the-deprecation-of-any/">What happened next - the deprecation of ANY</a></p></li></ul><p>And here we are today! With <a href="https://tools.ietf.org/html/rfc8482">RFC8482</a> we have a Proposed Standard clarifying that controversial query type.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1VXjlDZD1THaZzfyH53uo0/59098d02034b3b987d143ba9d5936a90/Screenshot-from-2019-03-15-13-19-21.png" />
            
            </figure><p>ANY queries are background noise. Under normal circumstances, we see a very small volume of ANY queries.</p>
    <div>
      <h2>The future for our users</h2>
      <a href="#the-future-for-our-users">
        
      </a>
    </div>
    <p>What precisely can be done about "ANY" queries? RFC8482 specifies that:</p><blockquote><p>A DNS responder that receives an ANY query MAY decline to provide a conventional ANY response or MAY instead send a response with a single RRset (or a larger subset of available RRsets) in the answer section.</p></blockquote><p>This clearly defines the corner case - from now on the authoritative server may respond with, well, almost anything to an "ANY" query. Sometimes simple stuff like this matters most.</p><p>This opens a gate for implementers - we can prepare a simple answer to these queries. As an implementer you may stick "A", or "AAAA", or anything else in the response if you wish. Furthermore, the spec recommends returning a special (and rarely used thus far) HINFO type. This is in fact what we do:</p>
            <pre><code>$ dig ANY cloudflare.com @ns3.cloudflare.com. 
;; ANSWER SECTION:
cloudflare.com.		3789	IN	HINFO	"ANY obsoleted" "See draft-ietf-dnsop-refuse-any"</code></pre>
            <p>Oh, we need to update the message to mention the fresh RFC number! NS1 agrees with our implementation:</p>
            <pre><code>$ dig ANY nsone.net @dns1.p01.nsone.net.
;; ANSWER SECTION:
nsone.net.		3600	IN	HINFO	"ANY not supported." "See draft-ietf-dnsop-refuse-any"</code></pre>
            <p>Our ultimate hero is <code>wikipedia.org</code>, which does exactly what the RFC recommends:</p>
            <pre><code>$ dig ANY wikipedia.org @ns0.wikimedia.org.
;; ANSWER SECTION:
wikipedia.org.		3600	IN	HINFO	"RFC8482" ""</code></pre>
            <p>On our resolver service we stop ANY queries with a NOTIMP code. This makes us more confident that the resolver isn't being used to perform DNS reflection attacks:</p>
            <pre><code>$ dig ANY cloudflare.com @1.1.1.1
;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NOTIMP, id: 14151</code></pre>
            
    <div>
      <h2>The future for developers</h2>
      <a href="#the-future-for-developers">
        
      </a>
    </div>
    <p>On the client side, just don't use ANY DNS queries. On the DNS server side - you are allowed to rip out all the gory QTYPE::ANY handling code, and replace it with a top-level HINFO response or the first RRset found. Enjoy cleaning your codebase!</p>
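    <p>The server-side cleanup can be sketched like this - a hypothetical toy zone model, not our production code, showing the RFC 8482 option of answering ANY with a single synthetic HINFO RRset:</p>

```python
def answer(qname: str, qtype: str, rrsets: dict) -> list:
    """Answer a query from a toy zone: `rrsets` maps record type to a list
    of record data strings. Per RFC 8482, ANY no longer enumerates every
    RRset; we return a single synthetic HINFO record instead."""
    if qtype == "ANY":
        # No more special-case enumeration of all records.
        return [(qname, "HINFO", 'RFC8482 ""')]
    return [(qname, qtype, data) for data in rrsets.get(qtype, [])]

zone = {"A": ["192.0.2.1"], "AAAA": ["2001:db8::1"], "MX": ["10 mail.example."]}
print(answer("example.com.", "A", zone))    # normal answer
print(answer("example.com.", "ANY", zone))  # single HINFO, per RFC 8482
```

    <p>Note that the ANY branch never touches the zone data at all, which is exactly why it is so easy on servers that generate responses dynamically.</p>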
    <div>
      <h2>Summary</h2>
      <a href="#summary">
        
      </a>
    </div>
    <p>It took the DNS community some time to agree on the specifics, but here we are at the end. RFC8482 cleans up the last remaining DNS meta-qtype, and allows for simpler DNS authoritative and DNS resolver implementations. It finally clearly defines the semantics of ANY queries going through resolvers and reduces the DoS risk for the whole Internet.</p><p>Not all the effort must go to shiny new protocols and developments; sometimes cleaning up the bitrot is just as important. Similar cleanups are being done <a href="https://tools.ietf.org/html/draft-davidben-tls-grease-00">in other areas</a>. Keep up the good work!</p><p>We would like to thank the co-authors of RFC8482, and the community for its scrutiny and feedback. For us, RFC8482 is definitely a good thing, and allowed us to simplify our codebase and make the Internet safer.</p><p>Mission accomplished! One step at a time we can help make the Internet a better place.</p>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">58CcadkrVsMxWuJbB7efLi</guid>
            <dc:creator>Marek Majkowski</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Registrar at three months]]></title>
            <link>https://blog.cloudflare.com/registrar-after-three-months/</link>
            <pubDate>Fri, 22 Feb 2019 19:42:56 GMT</pubDate>
            <description><![CDATA[ We’re excited to make Cloudflare Registrar available to all of our customers and we’d like to share some insights and data about domain registration that we learned during the early access period. ]]></description>
            <content:encoded><![CDATA[ <p>We <a href="/cloudflare-registrar/">announced</a> Cloudflare Registrar in September. We launched the product by making it available in waves to our existing customers. During that time we gathered feedback and continued making improvements to the product while also adding more TLDs.</p><p>Starting today, we’re excited to make <a href="https://www.cloudflare.com/products/registrar/">Cloudflare Registrar</a> available to all of our customers. Cloudflare Registrar only charges you what we pay to the registry for your domain, and any user can now rely on that at-cost pricing to manage their domain. As part of this announcement, we’d like to share some insights and data about domain registration that we learned during the early access period.</p>
    <div>
      <h3>One-click DNS security makes a difference</h3>
      <a href="#one-click-dns-security-makes-a-difference">
        
      </a>
    </div>
    <p>When you launch your domain to the world, you rely on the Domain Name System (DNS) to direct your users to the address for your site. However, DNS cannot guarantee that your visitors reach your content because DNS, in its basic form, lacks authentication. If someone was able to poison the DNS responses for your site, they could hijack your visitors' DNS requests.</p><p>The Domain Name System Security Extensions (DNSSEC) can help prevent that type of attack by adding a chain of trust to DNS queries. When you enable DNSSEC for your site, you can ensure that the DNS response your users receive is the authentic IP address of your domain.</p><p>Across the industry, adoption of DNSSEC is abysmal. According to Verisign, 1% of .com domains use DNSSEC; less than 0.8% of .net domains do. Why is adoption so low? It’s inconvenient to enable DNSSEC for a site. Additionally, some registrars charge for the feature. APNIC <a href="https://blog.apnic.net/2017/12/06/dnssec-deployment-remains-low/">observed</a> that registrars who charge for DNSSEC see significantly lower adoption.</p><p>Cloudflare has made DNSSEC available for free for years, but we could not address the convenience factor until we launched our registrar. While we can create DS records, your registrar has to post them to the registry. Now that Cloudflare is a registrar, in addition to an authoritative DNS provider, we can make it one-click. We <a href="/one-click-dnssec-with-cloudflare-registrar/">announced</a> that feature in January. Since launching, 25% of domains on Cloudflare Registrar now use DNSSEC.</p><p>We’re going to keep working to make it even easier to enable for your domains. We want to help our customers reach 100% DNSSEC enablement by removing the need for even a single click.</p>
    <div>
      <h3>Users do not want to wait for transfers</h3>
      <a href="#users-do-not-want-to-wait-for-transfers">
        
      </a>
    </div>
    <p>When you begin a <a href="https://www.cloudflare.com/learning/dns/how-to-transfer-a-domain-name/">domain transfer to Cloudflare</a>, we ask that you input an auth code that your current registrar provides and that is unique to each domain that you transfer. We use that auth code to send your request to the registry, which manages all domain names for a given TLD. The registry confirms that the code is valid and then tells your current registrar to release the domain.</p><p>Once your current registrar receives that request, you have two options: manually approve the transfer or wait five days. If you wait five days and do nothing, the transfer will complete. While that might feel easier, we’ve been surprised to see that 62% of transfers were completed by manual approval.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/50cXIZ5PqBNt12b0aRUZ5v/0d615f0b9ef25099043f71694f21cad3/Elli3ybddHONVtZykyguKNYyj4BbduVRhzrLq4LJV5H0-TC3U20m6Qhu16eU8I5jJTMxfuwu1X6I6c1KxJZwEBJE_6p8PIp3QrQlB-d1GMdNQJ5XrK3f1oMr-xue.png" />
            
            </figure>
    <div>
      <h3>gTLDs continue to dominate registrations</h3>
      <a href="#gtlds-continue-to-dominate-registrations">
        
      </a>
    </div>
    <p>Historically, domains used either country-code TLDs (ccTLDs) or generic TLDs (gTLDs). The generic ones include the 4 extensions behind the world’s most popular domains: .com, .net, <a href="https://www.cloudflare.com/application-services/products/registrar/buy-org-domains/">.org</a> and .info. In 2005, ICANN <a href="https://newgtlds.icann.org/en/about/program">began</a> considering adding new top-level domain extensions. In 2012, ICANN started accepting applications from registries, current and prospective, who wanted to manage TLDs. They received 1,930.</p><p>Of those 1,930 applications, 1,232 <a href="https://newgtlds.icann.org/en/program-status/statistics">became</a> supported extensions and were classified as new gTLDs (ngTLDs). Today, Cloudflare Registrar supports all 4 legacy gTLDs, 1 ccTLD and 241 ngTLDs. gTLDs continue to represent the vast majority of domains registered with Cloudflare. That distribution is consistent with <a href="https://www.verisign.com/en_GB/domain-names/dnib/index.xhtml">trends</a> in the domain name industry. We expect that to change a bit as we expand into more ccTLDs.</p>
    <div>
      <h3>A world of TLDs and we want to support them</h3>
      <a href="#a-world-of-tlds-and-we-want-to-support-them">
        
      </a>
    </div>
    <p>2,081 different TLDs are represented on Cloudflare and use our authoritative DNS. I imagine that number has grown in the time it took to publish this post. We <a href="https://www.cloudflare.com/tld-policies/">support 246 TLDs</a> on Registrar today. We know that many of you have domains you want to transfer that use TLDs we do not support currently, particularly amongst ccTLDs. From massive ccTLDs like .uk, to more obscure ngTLDs like .boutique, we’ve received a lot of requests to expand the list. For a reason I don’t understand yet, members of the Cloudflare engineering team own over 2% of all active .horse domains in the world and use them for internal testing projects. We’re working on that one, too, so we can make <a href="https://doescloudflaresupport.horse/">this page built on</a> Workers return a Yes.</p><p>We’re working on it. Most ccTLDs require a unique accreditation and validation flow. We’re working every day to add to that list of supported TLDs, starting with the largest ones on Cloudflare.</p>
    <div>
      <h3>Available to all users</h3>
      <a href="#available-to-all-users">
        
      </a>
    </div>
    <p>Cloudflare Registrar is now <a href="https://blog.cloudflare.com/registrar-for-everyone/">available to all users</a>. You can start transferring your domains by following this link <a href="https://dash.cloudflare.com/domains">here</a>. Have questions? Instructions are available <a href="https://developers.cloudflare.com/registrar/">here</a>.</p> ]]></content:encoded>
            <category><![CDATA[Registrar]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">6vOYjygFwHucLJTdTuGtIJ</guid>
            <dc:creator>Sam Rhea</dc:creator>
        </item>
        <item>
            <title><![CDATA[One-Click DNSSEC with Cloudflare Registrar]]></title>
            <link>https://blog.cloudflare.com/one-click-dnssec-with-cloudflare-registrar/</link>
            <pubDate>Wed, 16 Jan 2019 17:01:00 GMT</pubDate>
            <description><![CDATA[ When you launch a domain, you rely on the Domain Name System to direct your users to your site. However, DNS can't guarantee that visitors reach your content because basic DNS lacks authentication. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>When you launch your domain to the world, you rely on the Domain Name System (DNS) to direct your users to the address for your site. However, DNS cannot guarantee that your visitors reach your content because DNS, in its basic form, lacks authentication. If someone was able to poison the DNS responses for your site, they could hijack your visitors' requests.</p><p>The Domain Name System Security Extensions (DNSSEC) can help prevent that type of attack by adding a chain of trust to DNS queries. When you enable DNSSEC for your site, you can ensure that the DNS response your users receive is the authentic address of your site.</p><p>We <a href="/dnssec-an-introduction/">launched</a> support for DNSSEC in 2014. We made it free for all users, but we couldn’t make it easy to set up. Turning on DNSSEC for a domain was still a multistep, manual process. With the <a href="/cloudflare-registrar/">launch</a> of Cloudflare Registrar, we can finish the work to make it simple to enable for your domain.</p><p>You can now enable DNSSEC with a single click if your domain is registered with <a href="https://www.cloudflare.com/products/registrar/">Cloudflare Registrar</a>. Visit the DNS tab in the Cloudflare dashboard, click "Enable DNSSEC", and we'll handle the rest. If you are not on Cloudflare Registrar, you can read more about transferring your domain <a href="https://www.cloudflare.com/learning/dns/how-to-transfer-a-domain-name/">here</a>.</p>
    <div>
      <h2>A quick introduction to DNSSEC</h2>
      <a href="#a-quick-introduction-to-dnssec">
        
      </a>
    </div>
    <p>The <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">Domain Name System (DNS)</a> translates a site’s <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domain name</a>, like cloudflare.com, to the address of the server hosting that site. When users request your website, their browser starts with a <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS query</a> to find that IP address.</p><p>The query first asks the Internet root servers to locate the servers responsible for the <a href="https://www.cloudflare.com/learning/dns/top-level-domain/">top-level domain (TLD)</a>. In the case of .com, those servers are managed by the registry Verisign. Verisign's servers then refer the query to the authoritative nameservers for that particular domain, which are asked for the IP. If you use Cloudflare for your site’s DNS, Cloudflare manages those nameservers and we respond with an anycast IP for your site, which is ultimately returned to your visitor.</p><p>DNS assumes each request in that chain can be trusted, but the protocol does not actually verify the response. That presumption leaves the series of requests vulnerable to attack: an attacker can poison the responses for your site with directions to a malicious one. Instead of arriving at your webpage, your visitors are directed to a site that can be used for phishing or other malicious purposes. To solve that problem, a layer is needed to verify that each response can be trusted.</p><p><a href="https://www.cloudflare.com/dns/dnssec/how-dnssec-works/">DNSSEC</a> builds that trust by adding cryptographic signatures to each handoff in the relay. Those signatures establish a chain of trust from the authoritative nameservers, through the TLD server, and all the way to the root servers of the Internet. Your visitors’ DNS resolver can validate that the IP address returned for your domain name was provided by the authentic source.</p>
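The chain of lookups described above can be sketched as a toy walk over in-memory zone data. This is only an illustration of the referral structure (the zone names, server labels, and IP below are invented, and real DNS carries much more state):

```python
# Toy model of iterative DNS resolution: each "zone" either refers the
# query to a more specific nameserver or answers with an A record.
# All data here is illustrative, not real DNS.
ZONES = {
    "root": {"delegate": ("com.", "tld")},              # root refers .com queries to the TLD servers
    "tld": {"delegate": ("example.com.", "auth")},      # the .com registry refers to the authoritative NS
    "auth": {"answer": ("example.com.", "192.0.2.1")},  # authoritative nameserver holds the A record
}

def resolve(name, server="root"):
    """Follow referrals from the root until a server answers with an IP."""
    zone = ZONES[server]
    if "answer" in zone:
        owner, ip = zone["answer"]
        assert name == owner
        return ip
    suffix, next_server = zone["delegate"]
    assert name.endswith(suffix)  # the referral must cover the queried name
    return resolve(name, next_server)

print(resolve("example.com."))  # 192.0.2.1
```

Note that plain DNS, unlike this toy, never checks who answered; that missing check is exactly the gap DNSSEC fills.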
    <div>
      <h2>Expanding DNSSEC adoption with Cloudflare Registrar</h2>
      <a href="#expanding-dnssec-adoption-with-cloudflare-registrar">
        
      </a>
    </div>
    <p>We began advocating for DNSSEC in 2014 and launched beta support in 2015. We’re committed to expanding its adoption on the Internet. However, we’ve only been able to provide DNSSEC for your domain when you completed a series of manual actions. To make DNSSEC ubiquitous, we first have to make it easy to enable, as we did with one-click SSL.</p><p>Historically, enabling DNSSEC required you to generate a DS record from a service like Cloudflare, copy it down, and then save it to your <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">registrar</a> so they could send it to your registry. That’s tedious. We can now remove those steps for you. When Cloudflare is your registrar, we can automatically apply DNSSEC through our <a href="/automatically-provision-and-maintain-dnssec/">support</a> for CDS and CDNSKEY.</p><p>Instead of asking you to save the records yourself, Cloudflare Registrar automatically scans for and validates DS records for domains that use our nameservers. When we notice that you have DNSSEC enabled, we grab the details and send them to the registry for you.</p><p>To turn on DNSSEC, navigate to the DNS tab for your domain in the Cloudflare dashboard. In the DNSSEC card, select “Enable” and that’s it. We’ll handle the rest. Your records will be set in the next 24-36 hours. It’s free, it’s one-click, and it helps secure your site.</p><p>If you have started <a href="https://www.cloudflare.com/learning/dns/how-to-transfer-a-domain-name/">transferring your domain</a> to Cloudflare Registrar, you can use the one-click DNSSEC feature as soon as the transfer completes. If you already have DS records for your domain, the transfer will preserve them and make sure they are still current after the transfer.</p>
    <div>
      <h2>What's next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>While this feature removes some of the chore to enable DNSSEC, we’re committed to removing any hurdle to making the Internet safer. We’re working on supporting DNSSEC by default for sites on Cloudflare. We have some work to do to reach this goal, but we’re excited to help make DNSSEC the new normal.</p><p>Interested in helping us with that work? Visit the Cloudflare jobs page <a href="https://www.cloudflare.com/careers/">here</a> to join our team.</p> ]]></content:encoded>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Registrar]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">5OInFwji1NfLPv39NIuFtR</guid>
            <dc:creator>Sam Rhea</dc:creator>
        </item>
        <item>
            <title><![CDATA[Expanding DNSSEC Adoption]]></title>
            <link>https://blog.cloudflare.com/automatically-provision-and-maintain-dnssec/</link>
            <pubDate>Tue, 18 Sep 2018 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare first started talking about DNSSEC in 2014 and at the time, Nick Sullivan wrote: “DNSSEC is a valuable tool for improving the trust and integrity of DNS, the backbone of the modern Internet.” ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare first started talking about <a href="https://www.cloudflare.com/dns/dnssec/universal-dnssec/">DNSSEC</a> in <a href="/dnssec-an-introduction/">2014</a> and at the time, <a href="https://twitter.com/grittygrease">Nick Sullivan</a> wrote: “DNSSEC is a valuable tool for improving the trust and integrity of <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a>, the backbone of the modern Internet.”</p><p>Over the past four years, it has become an even more critical part of securing the Internet. While <a href="/chrome-not-secure-for-http/">HTTPS</a> has gone a long way in preventing user sessions from being hijacked and maliciously (or innocuously) redirected, not all Internet traffic is HTTPS. A safer Internet should secure every possible layer between a user and the origin they are intending to visit.</p><p>As a quick refresher, DNSSEC allows a user, application, or recursive resolver to trust that the answer to their DNS query is what the domain owner intends it to be. Put another way: DNSSEC proves authenticity and integrity (though not confidentiality) of a response from the authoritative nameserver. Doing so makes it much harder for a bad actor to inject malicious DNS records into the resolution path through <a href="/bgp-leaks-and-crypto-currencies/">BGP leaks</a> and cache poisoning. Trust in DNS matters even more when a domain is publishing <a href="/additional-record-types-available-with-cloudflare-dns/">record types</a> that are used to declare trust for other systems. As a specific example, DNSSEC is helpful for preventing malicious actors from obtaining fraudulent certificates for a domain. 
<a href="https://blog.powerdns.com/2018/09/10/spoofing-dns-with-fragments/">Research</a> has shown how DNS responses can be spoofed for domain validation.</p><p>This week we are announcing our full support for CDS and CDNSKEY from <a href="https://datatracker.ietf.org/doc/rfc8078/">RFC 8078</a>. Put plainly: this will allow DNSSEC to be set up without requiring the user to log in to their <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">registrar</a> to upload a DS record. Cloudflare customers on supported registries will be able to enable DNSSEC with the click of one button in the Cloudflare dashboard.</p>
    <div>
      <h3>Validation by Resolvers</h3>
      <a href="#validation-by-resolvers">
        
      </a>
    </div>
    <p>DNSSEC’s largest problem has been adoption. The number of DNS queries validated by recursive resolvers for DNSSEC has remained flat. Worldwide, less than 14% of DNS requests have DNSSEC validated by the resolver according to our friends at <a href="https://stats.labs.apnic.net/dnssec/XA?c=XA&amp;x=1&amp;g=1&amp;r=1&amp;w=7&amp;g=0">APNIC</a>. The blame here falls on the shoulders of the default DNS resolvers that most devices and users receive via DHCP from their ISP or network provider. Data shows that some countries do considerably better: Sweden, for example, has over <a href="https://stats.labs.apnic.net/dnssec/XE?o=cXAw7x1g1r1">80% of its requests validated</a>, showing that the default DNS resolvers in those countries validate the responses as they should. APNIC also has a fun <a href="https://stats.labs.apnic.net/dnssec">interactive map</a> so you can see how well your country does.</p><p>So what can we do? To ensure your resolver supports DNSSEC, visit <a href="http://brokendnssec.net/">brokendnssec.net</a> in your browser. If the page <b>loads</b>, you are not protected by a DNSSEC-validating resolver and should <a href="https://1.1.1.1/#setup-instructions">switch your resolver</a>. However, in order to really move the needle across the Internet, Cloudflare encourages network providers to either turn on the validation of DNSSEC in their software or switch to publicly available resolvers that validate DNSSEC by default. Of course we have <a href="https://one.one.one.one">a favourite</a>, but there are other fine choices as well.</p>
    <div>
      <h3>Signing of Zones</h3>
      <a href="#signing-of-zones">
        
      </a>
    </div>
    <p>Validation handles the user side, but another problem has been the signing of the zones themselves. Initially, there was concern about adoption at the <a href="https://www.cloudflare.com/learning/dns/top-level-domain/">TLD</a> level given that TLD support is a requirement for DNSSEC to work. This is now largely a non-issue with over 90% of TLDs signed with DS records in the root zone, as of <a href="http://stats.research.icann.org/dns/tld_report/">2018-08-27</a>.</p><p>It’s a different story when it comes to the individual domains themselves. Per <a href="https://usgv6-deploymon.antd.nist.gov/cgi-bin/generate-com">NIST data</a>, a woefully low 3% of the Fortune 1000 sign their primary domains. Some of this is due to apathy by the domain owners. However, some large DNS operators do not yet support the option at all, requiring domain owners who want to protect their users to move to another provider altogether. If you are on a service that does not support DNSSEC, we encourage you to switch to one that does and let them know that was the reason for the switch. Other large operators, such as GoDaddy, charge for DNSSEC. Our stance here is clear: DNSSEC should be available and included at all DNS operators for free.</p>
    <div>
      <h3>The DS Parent Issue</h3>
      <a href="#the-ds-parent-issue">
        
      </a>
    </div>
    <p>In December 2017, APNIC wrote about <a href="https://blog.apnic.net/2017/12/06/dnssec-deployment-remains-low/">why DNSSEC deployment remains so low</a> and that remains largely true today. One key point was that the number of domain owners who attempt DNSSEC activation but do not complete it is very high. Using Cloudflare as an example, APNIC measured that 40% of those who enabled DNSSEC in the Cloudflare dash (evidenced by the presence of a DNSKEY record) were actually successful in having a DS record served by the registry. Current data over a recent 90-day period is slightly better: just over 50% of all zones that attempted to enable DNSSEC were able to complete the process with the registry (note: these domains still resolve; they are just not secured). Of our most popular TLDs, .be and .nl have success rates of over 70%, but these numbers are still not where we would want them to be in an ideal world. The graph below shows the specific rates for the most popular TLDs (most popular from left to right).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/20XNgbCpC85y8X91Bpo7zS/2e85b5a9563fbe1d9e726141989d01df/Graph.png" />
            
            </figure><p>This end result is likely not surprising to anyone who has tried to add a DS record to their registrar. Locating the part of the registrar UI that houses DNSSEC can be problematic, as can the UI of adding the record itself. Additional factors such as varying degrees of technical knowledge amongst users and simply having to manage multiple logins and roles can also explain the lack of completion in the process. Finally, varying levels of DNSSEC compatibility amongst registrars may prevent even knowledgeable users from creating DS records in the parent.</p><p>As an example, at Cloudflare, we took a minimalist UX approach for adding DS records for delegated child domains. A novice user may not understand the fields and requirements for the DS record:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5uBtcWkLicobwesLsEJ1MF/68a4051c4b0746cfafb08fc93c4c0f0a/pasted-image-0.png" />
            
            </figure>
    <div>
      <h3>CDS and CDNSKEY</h3>
      <a href="#cds-and-cdnskey">
        
      </a>
    </div>
    <p>As mentioned in the APNIC blog above, Cloudflare is supportive of <a href="https://datatracker.ietf.org/doc/rfc8078/">RFC 8078</a> and the CDS and CDNSKEY records. This should come as no surprise given that our own <a href="https://twitter.com/OGudm">Olafur Gudmundsson</a> is a co-author of the RFC. CDS and CDNSKEY are records that mirror the DS and DNSKEY record types but are designed to signal the parent/registrar that the child domain wishes to enable DNSSEC and have a DS record presented by the registry. We have been pushing for automated solutions in this space for <a href="/updating-the-dns-registration-model-to-keep-pace-with-todays-internet/">years</a> and are encouraging the industry to move with us.</p><p>Today, we are announcing General Availability and full support for CDS and CDNSKEY records for all Cloudflare-managed domains that enable DNSSEC in the Cloudflare dash.</p>
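The relationship between DNSKEY and DS (and therefore between CDNSKEY and CDS, which mirror them) is compact enough to sketch: the DS record the parent publishes is essentially a key tag plus a digest over the child's owner name and DNSKEY RDATA. The sketch below follows the key tag algorithm from RFC 4034 Appendix B and the SHA-256 digest from RFC 4509; the key bytes are invented purely for illustration, not a real key:

```python
import hashlib
import struct

def key_tag(rdata):
    # RFC 4034 Appendix B: treat the DNSKEY RDATA as a sequence of
    # 16-bit words, sum them, and fold the carry back in.
    acc = 0
    for i, b in enumerate(rdata):
        acc += b << 8 if i % 2 == 0 else b
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF

def owner_wire(name):
    # DNS wire format for a name: each label length-prefixed,
    # terminated by a zero byte.
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.lower().encode()
    return out + b"\x00"

def ds_digest(name, flags, protocol, algorithm, pubkey):
    # RFC 4509: SHA-256 over owner name (wire format) + DNSKEY RDATA.
    rdata = struct.pack("!HBB", flags, protocol, algorithm) + pubkey
    digest = hashlib.sha256(owner_wire(name) + rdata).hexdigest().upper()
    return key_tag(rdata), digest

# Invented key material: flags 257 (KSK), protocol 3, algorithm 13.
tag, digest = ds_digest("example.com.", 257, 3, 13, bytes(range(32)))
print(tag, digest)
```

A parent that consumes a CDNSKEY record can derive the DS content this way itself; a CDS record simply ships the tag and digest pre-computed.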
    <div>
      <h3>How It Works</h3>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>Cloudflare will publish CDS and CDNSKEY records for all domains that enable DNSSEC. Parent registries should scan the nameservers of the domains under their purview and check for these rrsets. The presence of a CDS key for a domain delegated to Cloudflare indicates that a verified Cloudflare user has enabled DNSSEC within their dash and that the parent operator (a registrar or the registry itself) should take the CDS record content and create the requisite DS record to start signing the domain. TLDs .ch and .cz already support this automated method through Cloudflare and any other DNS operators that choose to support RFC 8078. The registrar <a href="https://www.gandi.net/">Gandi</a> and a number of TLDs have indicated that they will add support in the near future.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6NWwLVYnosedEpsDH0NNQS/36198c65294dbdcce40045262c5f61a9/Flow.png" />
            
            </figure><p>Cloudflare also supports CDS0 for the removal of the DS record in the case that the user <a href="https://www.cloudflare.com/learning/dns/how-to-transfer-a-domain-name/">transfers</a> their domain off Cloudflare or otherwise disables DNSSEC.</p>
    <div>
      <h3>Best Practices for Parent Operators</h3>
      <a href="#best-practices-for-parent-operators">
        
      </a>
    </div>
    <p>Below are a number of suggested procedures that parent operators may adopt to provide the best experience for our users:</p><ul><li><p><i>Scan Selection</i> - Parent Operators should only scan their child domains that have nameservers pointed at Cloudflare (or other DNS operators that adopt RFC 8078). Cloudflare nameservers follow the pattern *.ns.cloudflare.com.</p></li><li><p><i>Scan Regularly</i> - Parent Operators should scan at regular intervals for the presence and change of CDS records. A scan every 12 hours should be sufficient, though faster is better.</p></li><li><p><i>Notify Domain Contacts</i> - Parent Operators should notify the designated contacts for a given child domain through known channels (such as email and/or SMS) upon detection of a new CDS record and an impending change of their DS record. The Parent Operator may also wish to provide a standard delay (24 hours) before changing the DS record to allow the domain contact to cancel or otherwise change the operation.</p></li><li><p><i>Verify Success</i> - Parent Operators must ensure that the domain continues to resolve after being signed. Should the domain fail to resolve immediately after changing the DS record, the Parent Operator must fall back to the previous functional state and should notify designated contacts.</p></li></ul>
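Taken together, one scan pass over a single child domain following these suggestions might look roughly like the sketch below. Every callable here is a stand-in for a parent operator's own systems (nameserver lookup, contact notification, registry publication), not a real API:

```python
import time

def scan_child(domain, lookup_cds, current_cds, notify, publish_ds, still_resolves):
    """One scan pass for one child domain: detect a new CDS, notify the
    contacts, publish the DS, and roll back if the domain stops resolving.
    All callables are illustrative stand-ins."""
    cds = lookup_cds(domain)                # scan the child's nameservers for CDS
    if not cds or cds == current_cds(domain):
        return "no-change"
    notify(domain, cds)                     # tell the domain contacts a DS change is pending
    time.sleep(0)                           # stand-in for the suggested 24-hour cancellation window
    previous = current_cds(domain)
    publish_ds(domain, cds)                 # have the registry publish the new DS
    if not still_resolves(domain):          # verify success; fall back on breakage
        publish_ds(domain, previous)
        return "rolled-back"
    return "updated"

# Demo with in-memory stand-ins:
state = {"ds": None}
result = scan_child(
    "example.com",
    lookup_cds=lambda d: "digest-abc",
    current_cds=lambda d: state["ds"],
    notify=lambda d, r: None,
    publish_ds=lambda d, r: state.update(ds=r),
    still_resolves=lambda d: True,
)
print(result)  # updated
```

A second pass with unchanged CDS content would return "no-change", which is why regular rescans are cheap.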
    <div>
      <h3>What Does This All Mean and What’s Next?</h3>
      <a href="#what-does-this-all-mean-and-whats-next">
        
      </a>
    </div>
    <p>For Cloudflare customers, this means an easier implementation of DNSSEC once your registry/registrar supports CDS and CDNSKEY. Customers can also enable DNSSEC for free on Cloudflare and manually enter the DS record at the parent. To check your domain’s DNSSEC status, <a href="http://dnsviz.net/d/cloudflare.com/dnssec/">DNSViz</a> (the link shows cloudflare.com as an example) has one of the most standards-compliant tools online.</p><p>For registries and registrars, we are taking this step with the hope that more providers support RFC 8078 and help increase the global adoption of technology that helps end users be less vulnerable to DNS attacks on the Internet.</p><p>For other DNS operators, we encourage you to join us in supporting this method, as the more major DNS operators that publish CDS and CDNSKEY, the more likely it will be that the registries will start looking for and using them.</p><p>Cloudflare will continue pushing down this path and has plans to create and open source additional tools to help registries and operators push and consume records. If this sounds interesting to you, we are <a href="https://www.cloudflare.com/careers/">hiring</a>.</p><p><a href="/subscribe/"><i>Subscribe to the blog</i></a><i> for daily updates on our announcements.</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4SOBfg9SIxbV23r9vS1Vlt/072c7daa0d365194497c3c11f0d6c807/Crypto-Week-1-1-1.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">7FVga47DKN7yhAMwIAvQrV</guid>
            <dc:creator>Sergi Isasi</dc:creator>
            <dc:creator>Vicky Shrestha</dc:creator>
        </item>
        <item>
            <title><![CDATA[End-to-End Integrity with IPFS]]></title>
            <link>https://blog.cloudflare.com/e2e-integrity/</link>
            <pubDate>Mon, 17 Sep 2018 13:02:00 GMT</pubDate>
            <description><![CDATA[ Use Cloudflare’s IPFS gateway to set up a website which is end-to-end secure, while maintaining the performance and reliability benefits of being served from Cloudflare’s edge network. ]]></description>
            <content:encoded><![CDATA[ <p>This post describes how to use Cloudflare's IPFS gateway to set up a website which is end-to-end secure, while maintaining the performance and reliability benefits of being served from Cloudflare’s edge network. If you'd rather read an introduction to the concepts behind IPFS first, you can find that in <a href="/distributed-web-gateway/">our announcement</a>. Alternatively, you could skip straight to the <a href="https://developers.cloudflare.com/distributed-web/">developer docs</a> to learn how to set up your own website.</p><p>By 'end-to-end security', I mean that neither the site owner nor users have to trust Cloudflare to serve the correct documents, like they do now. This is similar to how using HTTPS means you don't have to trust your ISP to not modify or inspect traffic.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6TAh0shMDYcLioHSrr05YS/9df521abdbf0ddc64596066f864466a4/ipfs-blog-post-image-1-copy_3.5x--1-.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/75MIGMU5KJSYIZudpdJcNM/666111e79475ef39ca6701ad7e0cc27e/ipfs-blog-post-image-2-copy_3.5x--1-.png" />
            
            </figure>
    <div>
      <h3>CNAME Setup with Universal SSL</h3>
      <a href="#cname-setup-with-universal-ssl">
        
      </a>
    </div>
    <p>The first step is to choose a <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domain name</a> for your website. Websites should be given their own domain name, rather than served directly from the gateway by root hash, so that they are considered a distinct origin by the browser. This is primarily to prevent cache poisoning, but there are several functional advantages as well. It gives websites their own instance of localStorage and their own cookie jar which are sandboxed from inspection and manipulation by malicious third-party documents. It also lets them run Service Workers without conflict, and request special permissions of the user like access to the webcam or GPS. But most importantly, having a domain name makes a website easier to identify and remember.</p><p>Now that you've <a href="https://www.cloudflare.com/products/registrar/">chosen a domain</a>, rather than using it as-is, you’ll need to add "ipfs-sec" as the left-most subdomain. So for example, you'd use "ipfs-sec.example.com" instead of just "example.com". The ipfs-sec subdomain is special because it signals to the user and to their browser that your website is capable of being served with end-to-end integrity.</p><p>In addition to that, ipfs-sec domains require <a href="/dnssec-an-introduction/">DNSSEC</a> to be properly set up to prevent spoofing. Unlike with standard HTTPS, where DNS spoofing usually can't enable an on-path attack, with IPFS it does exactly that, because the root hash of the website is stored in DNS. 
Many <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">registrars</a> make enabling DNSSEC as easy as the push of a button, though some don't support it at all.</p><p>With the ipfs-sec domain, you can now follow the <a href="https://developers.cloudflare.com/distributed-web/ipfs-gateway/connecting-website/">developer documentation</a> on how to serve a generic static website from IPFS. Note that you'll need to use a CNAME setup and retain control of your <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a>, rather than the easier method of just signing up for Cloudflare. This helps maintain a proper separation between the party managing the DNSSEC signing keys and the party serving content to end-users. Keep in mind that CNAME setups tend to be problematic and get into cases that are difficult to debug, which is why we reserve them for technically sophisticated customers.</p><p>You should now be able to access your website over HTTP and HTTPS, backed by our gateway.</p>
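As an aside on how a root hash lives in DNS: in the broader IPFS ecosystem this is conventionally done with a DNSLink TXT record whose value looks like <code>dnslink=/ipfs/&lt;hash&gt;</code>. The sketch below extracts that path from a set of TXT values; the record set is invented, and this follows the public DNSLink convention rather than anything specific to the gateway described here:

```python
def dnslink_path(txt_values):
    """Pick the IPFS path out of a domain's TXT record values.
    Value format per the DNSLink convention: dnslink=/ipfs/<hash>."""
    for value in txt_values:
        if value.startswith("dnslink="):
            return value[len("dnslink="):]
    return None  # no DNSLink record published

# Invented TXT record set for illustration:
records = ["v=spf1 -all", "dnslink=/ipfs/QmExampleHash"]
print(dnslink_path(records))  # /ipfs/QmExampleHash
```

Because this TXT lookup is the root of trust for the whole site, it is exactly the record DNSSEC must protect.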
    <div>
      <h3>Verifying what the Gateway Serves</h3>
      <a href="#verifying-what-the-gateway-serves">
        
      </a>
    </div>
    <p>HTTPS helps make sure that nobody between the user and Cloudflare's edge network has tampered with the connection, but it does nothing to make sure that Cloudflare actually serves the content the user asked for. To solve this, we built two connected projects: a modified gateway service and a browser extension.</p><p>First, we <a href="https://github.com/cloudflare/go-ipfs">forked the go-ipfs repository</a> and gave it the ability to offer cryptographic proofs that it was serving content honestly, which it will do whenever it sees requests with the HTTP header <code>X-Ipfs-Secure-Gateway: 1</code>. The simplest case for this is when users request a single file from the gateway by its hash -- all the gateway has to do is serve the content and any metadata that might be necessary to re-compute the given hash.</p><p>A more complicated case is when users request a file from a directory. Luckily, directories in IPFS are just files containing a mapping from file name to the hash of the file, and very large directories can be transparently split up into several smaller files, structured into a search tree called a <a href="https://idea.popcount.org/2012-07-25-introduction-to-hamt/">Hash Array Mapped Trie (HAMT)</a>. To convince the client that the gateway is serving the contents of the correct file, the gateway first serves the file corresponding to the directory, or every node in the search path if the directory is a HAMT. The client can hash this file (or search tree node), check that it equals the hash of the directory they asked for, and look up the hash of the file they want from within the directory's contents. The gateway then serves the contents of the requested file, which the client can now verify because it knows the expected hash.</p><p>Finally, the most complicated case by far is when the client wants to access content by domain name. It's complicated because the protocol for authenticating DNS, called DNSSEC, has very few client-side implementations. 
DNSSEC is also not widely deployed, even though some registrars make it even easier than setting up HTTPS. In the end, we wrote our own simple DNSSEC-validating resolver that's capable of outputting a cryptographically convincing proof that it did some lookup correctly.</p><p>It works the same way as certificate validation in HTTPS: we start at the bottom, with a signature from some authority claiming to be example.com over the DNS records they want us to serve. We then look up a delegation (DS record) from an authority claiming to be .com, that says "example.com is the authority with these public keys," which is in turn signed by the .com authority's private key. And finally, we look up a delegation from the root authority, ICANN (whose public keys we already have), attesting to the public keys used by the .com authority. All of these lookups bundled together form an authenticated chain starting at ICANN and ending at the exact records we want to serve. These constitute the proof.</p>
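The shape of that bundled proof can be modelled as a walk from a trusted root key down through each delegation. To keep the sketch self-contained and checkable, the "signatures" below are simulated with HMACs; real DNSSEC uses public-key signatures over DNSKEY, DS, and RRSIG records, and all the key material here is invented:

```python
import hashlib
import hmac

def sign(key, data):
    # Stand-in for a real DNSSEC signature (RRSIG); HMAC is used only
    # so this sketch is self-contained, not as real DNSSEC crypto.
    return hmac.new(key, data, hashlib.sha256).digest()

# Each link in the proof: (zone, zone's key, record signed by the PARENT's key).
root_key = b"icann-trust-anchor"  # known in advance, like the root key
com_key = b"com-zone-key"
example_key = b"example-zone-key"

proof = [
    ("com.", com_key, sign(root_key, com_key)),           # root attests to .com's key (DS)
    ("example.com.", example_key, sign(com_key, example_key)),
]
final_record = b"example.com. A 192.0.2.1"
record_sig = sign(example_key, final_record)

def validate(proof, record, record_sig, trusted_key):
    """Walk the chain from the trust anchor down to the final record."""
    key = trusted_key
    for zone, zone_key, delegation_sig in proof:
        if not hmac.compare_digest(sign(key, zone_key), delegation_sig):
            return False  # broken link in the chain of trust
        key = zone_key
    return hmac.compare_digest(sign(key, record), record_sig)

print(validate(proof, final_record, record_sig, root_key))  # True
```

Tampering with the final record, or with any delegation in the middle, makes validation fail, which is the whole point of bundling the chain.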
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4qVP41QjM9flXihnLEj4CH/441ee5dd840fbac451e12854248da9cd/IPFS-tech-post-_3.5x.png" />
            
            </figure><p><i>Chain of trust in DNSSEC.</i></p><br /><p>The second project we built out was a browser extension that requests these proofs from IPFS gateways and ipfs-sec domains, and is capable of verifying them. The extension uses the <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/webRequest">webRequest API</a> to sit between the browser's network stack and its rendering engine, preventing any unexpected data from being shown to the user or unexpected code from being executed. The code for the browser extension is <a href="https://github.com/cloudflare/ipfs-ext">available on GitHub</a>, and can be installed through <a href="https://addons.mozilla.org/en-US/firefox/addon/cloudflare-ipfs-validator/">Firefox's add-on store</a>. We’re excited to add support for Chrome as well, but that can’t move forward until <a href="https://bugs.chromium.org/p/chromium/issues/detail?id=487422">this ticket</a> in their bug tracker is addressed.</p><p>On the other hand, if a user doesn't have the extension installed, the gateway won't see the <code>X-Ipfs-Secure-Gateway</code> header and will serve the page like a normal website, without any proofs. This provides a graceful upgrade path to using IPFS, either through our extension that uses a third-party gateway or perhaps another browser extension that runs a proper IPFS node in-browser.</p>
    <div>
      <h3>Example Application</h3>
      <a href="#example-application">
        
      </a>
    </div>
    <p>My favorite website on IPFS so far has been the <a href="https://cloudflare-ipfs.com/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/">mirror of English Wikipedia</a> put up by <a href="https://ipfs.io/blog/24-uncensorable-wikipedia/">the IPFS team at Protocol Labs</a>. It's fast, fun to play with, and above all has practical utility. One problem that stands out though, is that the mirror has no search feature so you either have to know the URL of the page you want to see or try to find it through Google. The <a href="https://ipfs.io/ipfs/QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX/wiki/Anasayfa.html">Turkish-language mirror</a> has in-app search but it requires a call to a dynamic API on the same host, and doesn't work through Cloudflare's gateway because we only serve static content.</p><p>I wanted to provide an example of the kinds of secure, performant applications that are possible with IPFS, and this made building a search engine seem like a prime candidate. Rather than steal Protocol Labs' idea of 'Wikipedia on IPFS', we decided to take the <a href="http://www.kiwix.org/">Kiwix</a> archives of all the different StackExchange websites and build a distributed search engine on top of that. You can play with the finished product here: <a href="https://ipfs-sec.stackexchange.cloudflare-ipfs.com">ipfs-sec.stackexchange.cloudflare-ipfs.com</a>.</p><p>The way it's built is actually really simple, at least as far as search engines go: We build an inverted index and publish it with the rest of each StackExchange, along with a JavaScript client that can read the index and quickly identify documents that are relevant to a user's query. Building the index takes two passes over the data:</p><ol><li><p>The first pass decides what words/tokens we want to allow users to search by. 
Tokens shouldn't be too popular (like the top 100 words in English), because then the list of all documents containing that token is going to be huge and it's not going to improve the search results anyway. They also shouldn't be too rare (like a timestamp with sub-second-precision), because then the index will be full of meaningless tokens that occur in only one document each. You can get a good estimate of the most frequent K tokens, using only constant space, with the really simple space-saving algorithm from <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.114.9563&amp;rep=rep1&amp;type=pdf">this paper</a>.</p></li><li><p>Now that the first pass has given us the tokens we want to track, the second pass through the data actually builds the inverted index. That is, it builds a map from every token to the list of documents that contain that token, called a postings list. When a client wants to find only documents that contain some set of words/tokens, they download the list for each individual token and intersect them. It sounds less efficient than it is -- in reality, the postings lists are unnoticeably small (&lt;30kb) even in the worst case. And the browser can 'pipeline' the requests for the postings lists (meaning, send them all off at once) which makes getting a response to several requests about as fast as getting a response to one.</p></li></ol><p>We also store some simple statistics in each postings list to help rank the results. Essentially, documents that contain a query token more often are ranked higher than those that don't. And among the tokens in a query, those tokens that occur in fewer documents have a stronger effect on ranking than tokens that occur in many documents. 
That's why, when I search for <a href="https://ipfs-sec.stackexchange.cloudflare-ipfs.com/crypto/search.html?q=AES+SIV">"AES SIV"</a>, the first result that comes back is:</p><ul><li><p><a href="https://ipfs-sec.stackexchange.cloudflare-ipfs.com/crypto/A/question/54413.html">"Why is SIV a thing if MAC-and-encrypt is not the most secure way to go?"</a></p></li></ul><p>while the following is the fourth result, even though it has a greater number of total hits than the first result:</p><ul><li><p><a href="https://ipfs-sec.stackexchange.cloudflare-ipfs.com/crypto/A/question/31835.html">"Why is AES-SIV not used, but AESKW, AKW1?"</a></p></li></ul><p>(AES is a very popular and frequently discussed encryption algorithm, while SIV is a lesser-known way of using AES.)</p><p>But this is what really makes it special: because the search index is stored in IPFS, the user can convince themselves that no results have been modified, re-arranged, or omitted without having to download the entire corpus. There's one small trick to making this statement hold true: All requests made by the search client must succeed, and if they don't, it outputs an error and no search results.</p><p>To understand why this is necessary, think about the search client when it first gets the user's query. It has to tokenize the query and decide which postings lists to download, since not all words in the user's query may be indexed. A naive solution is to try to download the postings list for every word unconditionally, and interpret a non-200 HTTP status code as "this postings list must not exist". In this case, a network adversary could block the search client from being able to access postings lists that lead to undesirable results, causing the client to output misleading search results either through omission or re-arranging.</p><p>What we do instead is store the dictionary of every indexed token in a file in the root of the index. 
The client can download the dictionary once, cache it, and use it for every search afterwards. This way, the search client can consult the dictionary to find out which requests should succeed and only send those.</p>
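<p>To make those two passes concrete, here is a minimal Python sketch (not the actual indexer; the real client is JavaScript) of the space-saving counter and the postings-list intersection. The token stream and <code>index</code> dict are illustrative stand-ins for the real data:</p>

```python
def space_saving_topk(tokens, k):
    """One-pass estimate of the k most frequent tokens in a stream,
    using space proportional to k (the space-saving algorithm cited above)."""
    counts = {}  # token -> estimated count
    for tok in tokens:
        if tok in counts:
            counts[tok] += 1
        elif len(counts) < k:
            counts[tok] = 1
        else:
            # Evict the least-counted token and inherit its counter (+1);
            # the new token's count is overestimated by at most that amount.
            victim = min(counts, key=counts.get)
            counts[tok] = counts.pop(victim) + 1
    return counts


def search(index, query_tokens):
    """Intersect the postings lists of every indexed query token,
    yielding only documents that contain all of them."""
    postings = [set(index[tok]) for tok in query_tokens if tok in index]
    if not postings:
        return set()
    return set.intersection(*postings)
```

<p>In the real client, each postings list arrives as a separate HTTP request to the gateway, and the dictionary check described above decides which of those requests to send at all.</p>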
    <div>
      <h3>From Here</h3>
      <a href="#from-here">
        
      </a>
    </div>
<p>We were incredibly excited when we realized the new avenues and types of applications that combining IPFS with Cloudflare opens up. Of course, our IPFS gateway and the browser extension we built will need time to mature into a secure and reliable platform. But the ability to securely deliver web pages through third-party hosting providers and CDNs is incredibly powerful, and it's something cryptographers and Internet security professionals have been working towards for a long time. As much fun as we had building it, we're even more excited to see what you build with it.</p><p><a href="/subscribe/"><i>Subscribe to the blog</i></a><i> for daily updates on our announcements.</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Zg5pDJxaCCTQXzqquORuu/1a2f514eff601ee0f88f245945a3ea54/CRYPTO-WEEK-banner-plus-logo_2x.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[IPFS]]></category>
            <category><![CDATA[Universal SSL]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">2ZSYs23n0hZhgRFnzpS5O1</guid>
            <dc:creator>Brendan McMillion</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare goes InterPlanetary - Introducing Cloudflare’s IPFS Gateway]]></title>
            <link>https://blog.cloudflare.com/distributed-web-gateway/</link>
            <pubDate>Mon, 17 Sep 2018 13:01:00 GMT</pubDate>
<description><![CDATA[ Today we’re excited to introduce Cloudflare’s IPFS Gateway, an easy way to access content from the InterPlanetary File System (IPFS) that doesn’t require installing and running any special software on your computer. ]]></description>
<content:encoded><![CDATA[ <p>Today we’re excited to introduce Cloudflare’s IPFS Gateway, an easy way to access content from the InterPlanetary File System (IPFS) that doesn’t require installing and running any special software on your computer. We hope that our gateway, hosted at <a href="https://cloudflare-ipfs.com">cloudflare-ipfs.com</a>, will serve as the platform for many new highly reliable and security-enhanced web applications. The IPFS Gateway is the first product to be released as part of our <a href="https://www.cloudflare.com/distributed-web-gateway">Distributed Web Gateway</a> project, which will eventually encompass all of our efforts to support new distributed web technologies.</p><p>This post will provide a brief introduction to IPFS. We’ve also written an accompanying blog post <a href="/e2e-integrity">describing what we’ve built</a> on top of our gateway, as well as <a href="https://developers.cloudflare.com/distributed-web/">documentation</a> on how to serve your own content through our gateway with your own custom hostname.</p>
    <div>
      <h3>Quick Primer on IPFS</h3>
      <a href="#quick-primer-on-ipfs">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3hS4Q3j1BBgCA4If6fFImz/d25f3b24017cb8dfcfa9208a68f8ed03/spaaaace-ipfs_3.5x-1.png" />
            
</figure><p>Usually, when you access a website from your browser, your browser tracks down the origin server (or servers) that are the ultimate, centralized repository for the website’s content. It then sends a request from your computer to that origin server, wherever it is in the world, and that server sends the content back to your computer. This system has served the Internet well for decades, but there’s a pretty big downside: centralization makes it impossible to keep content online any longer than the origin servers that host it. If that origin server is hacked or taken out by a natural disaster, the content is unavailable. If the site owner decides to take it down, the content is gone. In short, mirroring is not a first-class concept in most platforms (<a href="https://www.cloudflare.com/always-online/">Cloudflare’s Always Online</a> is a notable exception).</p><p>The InterPlanetary File System aims to change that. IPFS is a peer-to-peer file system composed of thousands of computers around the world, each of which stores files on behalf of the network. These files can be anything: cat pictures, 3D models, or even entire websites. Over 5,000,000,000 files have been uploaded to <a href="https://cloudflare-ipfs.com/ipfs/QmWimYyZHzChb35EYojGduWHBdhf9SD5NHqf8MjZ4n3Qrr/Filecoin-Primer.7-25.pdf">IPFS already</a>.</p>
    <div>
      <h3>IPFS vs. Traditional Web</h3>
      <a href="#ipfs-vs-traditional-web">
        
      </a>
    </div>
<p>There are two key differences between IPFS and the web as we think of it today.</p><p>The first is that with IPFS anyone can cache and serve any content—for free. Right now, with the traditional web, most people rely on big hosting providers in remote locations to store content and serve it to the rest of the web. If you want to set up a website, you have to pay one of these major services to do this for you. With IPFS, anyone can sign up their computer to be a node in the system and start serving data. It doesn’t matter if you’re working on a Raspberry Pi or running the world’s biggest server. You can still be a productive node in the system.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/66pmjeMDzYrBDczI8gesbH/8b9b8588bd3bf911aa71bc5e87bbd671/Decentralized-Network-1.png" />
            
            </figure><p>The second key difference is that data is content-addressed, rather than location-addressed. That’s a bit of a subtle difference, but the ramifications are substantial, so it’s worth breaking down.</p><p>Currently, when you open your browser and navigate to example.com, you’re telling the browser “fetch me the data stored at example.com’s IP address” (this happens to be 93.184.216.34). That IP address marks where the content you want is stored in the network. You then send a request to the server at that IP address for the “example.com” content and the server sends back the relevant information. So at the most basic level, you tell the network where to look and the network sends back what it found.</p><p>IPFS turns that on its head.</p><p>With IPFS, every single block of data stored in the system is addressed by a cryptographic hash of its contents, i.e., a long string of letters and numbers that is unique to that block. When you want a piece of data in IPFS, you request it by its hash. So rather than asking the network “get me the content stored at 93.184.216.34,” you ask “get me the content that has a hash value of <code>QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy</code>.” (<code>QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy</code> happens to be the hash of a .txt file containing the string “I’m trying out IPFS”).</p>
    <div>
      <h3>How is this so different?</h3>
      <a href="#how-is-this-so-different">
        
      </a>
    </div>
    <p>Remember that with IPFS, you tell the network what to look for, and the network figures out where to look.</p>
    <div>
      <h3>Why does this matter?</h3>
      <a href="#why-does-this-matter">
        
      </a>
    </div>
<p>First off, it makes the network more resilient. The content with a hash of <code>QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy</code> could be stored on dozens of nodes, so if one node that was caching that content goes down, the network will just look for the content on another node.</p><p>Second, it introduces an automatic level of security. Let’s say you know the hash value of a file you want. So you ask the network, “get me the file with hash <code>QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy</code>” (the example.txt file from above). The network responds and sends the data. When you receive all the data, you can rehash it. If the data was changed at all in transit, the hash value you get will be different from the hash you asked for. You can think of the hash as a unique fingerprint for the file. If you’re sent back a different file than you were expecting to receive, it’s going to have a different fingerprint. This means that the system has a built-in way of knowing whether or not content has been tampered with.</p>
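<p>In code, that built-in integrity check is just a digest comparison. A simplified sketch, where a bare SHA-256 hex digest stands in for the full multihash that IPFS actually compares:</p>

```python
import hashlib


def fetch_verified(data: bytes, expected_digest: str) -> bytes:
    """Rehash content received from an untrusted peer and compare it to the
    digest we asked for; reject the data on any mismatch.
    (Simplified: real IPFS compares the block's full multihash.)"""
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_digest:
        raise ValueError("content does not match the hash we asked for")
    return data
```

<p>Any tampering in transit changes the digest, so the altered data is rejected before it is ever used.</p>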
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/14TxfLu3bArvharLWoFCJY/64e2fadd810da7c0a51149d8f71a9f95/ipfs-blog-post-image-1-copy_3.5x.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4hO98vSSGsM8wh0r4w10GO/d4c1d501a571e241c1632a4645cfc8a3/ipfs-blog-post-image-2-copy_3.5x.png" />
            
            </figure>
    <div>
      <h3>A Note on IPFS Addresses and Cryptographic Hashes</h3>
      <a href="#a-note-on-ipfs-addresses-and-cryptographic-hashes">
        
      </a>
    </div>
    <p>Since we’ve spent some time going over why this content-addressed system is so special, it’s worth talking a little bit about how the IPFS addresses are built. Every address in IPFS is a <a href="https://github.com/multiformats/multihash">multihash</a>, which means that the address combines information about both the hashing algorithm used and the hash output into one string. IPFS multihashes have three distinct parts: the first byte of the multihash indicates which hashing algorithm has been used to produce the hash; the second byte indicates the length of the hash; and the remaining bytes are the value output by the hash function. By default, IPFS uses the <a href="https://en.wikipedia.org/wiki/SHA-2">SHA-256</a> algorithm, which produces a 32-byte hash. These two prefix bytes are represented by the string “Qm” in <a href="https://en.wikipedia.org/wiki/Base58">Base58</a> (the default encoding for IPFS addresses), which is why all the example IPFS addresses in this post are of the form “Qm…”.</p><p>While SHA-256 is the standard algorithm used today, this multihash format allows the IPFS protocol to support addresses produced by other hashing algorithms. This allows the IPFS network to move to a different algorithm, should the world discover flaws with SHA-256 sometime in the future. If someone hashed a file with another algorithm, the address of that file would start with characters other than “Qm”.</p><p>The good news is that, at least for now, SHA-256 is believed to have a number of qualities that make it a strong cryptographic hashing algorithm. The most important of these is that SHA-256 is collision resistant. A collision occurs when there are two different files that produce the same hash when run through the SHA-256 algorithm. To understand why it’s important to prevent collisions, consider this short scenario. 
Imagine some IPFS user, Alice, uploads a file with some hash, and another user, Bob, uploads a different file that happens to produce the exact same hash. If this happened, there would be two different files in the network with the exact same address. So if some third person, Carol, sent out an IPFS request for the content at that address, she wouldn't necessarily know whether she was going to receive Bob’s file or Alice’s file.</p><p>SHA-256 makes collisions extremely unlikely. Because SHA-256 computes a 256-bit hash, there are 2^256 possible IPFS addresses that the algorithm could produce. Hence, the chance that there are two files in IPFS that produce a collision is low. Very low. If you’re interested in more details, the <a href="https://en.wikipedia.org/wiki/Birthday_attack#Mathematics">birthday attack</a> Wikipedia page has a cool table showing exactly how unlikely collisions are, given a sufficiently strong hashing algorithm.</p>
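<p>As an illustration of the address format, here is a sketch that assembles a sha2-256 multihash by hand: the function code 0x12, the digest length 0x20, then the digest itself, all Base58-encoded. (This won't reproduce the address IPFS assigns to a real file, since IPFS hashes the serialized DAG node rather than the raw file bytes, but it shows why every default-settings address begins with “Qm”.)</p>

```python
import hashlib

# Base58 alphabet: no 0, O, I, or l, to avoid visual ambiguity.
BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"


def base58_encode(raw: bytes) -> str:
    """Encode bytes as Base58 (big-endian), preserving leading zero bytes."""
    n = int.from_bytes(raw, "big")
    digits = ""
    while n:
        n, rem = divmod(n, 58)
        digits = BASE58[rem] + digits
    pad = len(raw) - len(raw.lstrip(b"\x00"))  # each 0x00 byte becomes '1'
    return "1" * pad + digits


def sha256_multihash(data: bytes) -> str:
    """Build a multihash: 0x12 = sha2-256 function code, 0x20 = 32-byte length,
    followed by the 32-byte digest, then Base58-encode the whole thing."""
    digest = hashlib.sha256(data).digest()
    return base58_encode(bytes([0x12, 0x20]) + digest)
```

<p>Because the first two bytes are always 0x12 0x20, every such address is 46 characters long and starts with “Qm”, regardless of the input.</p>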
    <div>
      <h3>How exactly do you access content on IPFS?</h3>
      <a href="#how-exactly-do-you-access-content-on-ipfs">
        
      </a>
    </div>
<p>Now that we’ve walked through all the details of what IPFS is, you’re probably wondering how to use it. There are a number of ways to access content that’s been stored in the IPFS network, but we’re going to address two popular ones here. The first way is to download IPFS onto your computer. This turns your machine into a node of the IPFS network, and it’s the best way to interact with the network if you want to get down in the weeds. If you’re interested in playing around with IPFS, the Go implementation can be downloaded <a href="https://ipfs.io/docs/install/">here</a>.</p><p>But what if you want access to content that’s stored on IPFS without the hassle of operating a node locally on your machine? That’s where IPFS gateways come into play. IPFS gateways are third-party nodes that fetch content from the IPFS network and serve it to you over <a href="https://www.cloudflare.com/learning/ssl/what-is-https/">HTTPS</a>. To use a gateway, you don’t need to download any software or type any code. You simply open up a browser and type in the gateway’s name and the hash of the content you’re looking for, and the gateway will serve the content in your browser.</p><p>Say you know you want to access the example.txt file from before, which has the hash <code>QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy</code>, and there’s a public gateway that is accessible at <code>https://example-gateway.com</code>.</p><p>To access that content, all you need to do is open a browser and type</p>
            <pre><code>https://example-gateway.com/ipfs/QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy</code></pre>
            <p>and you’ll get back the data stored at that hash. The combination of the /ipfs/ prefix and the hash is referred to as the file path. You always need to provide a full file path to access content stored in IPFS.</p>
    <div>
      <h3>What can you do with Cloudflare’s Gateway?</h3>
      <a href="#what-can-you-do-with-cloudflares-gateway">
        
      </a>
    </div>
<p>At the most basic level, you can access any of the billions of files stored on IPFS from your browser. But that’s not the only cool thing you can do. Using Cloudflare’s gateway, you can also build a website that’s hosted entirely on IPFS, but still available to your users at a custom domain name. Plus, we’ll issue any website connected to our gateway a <a href="https://www.cloudflare.com/application-services/products/ssl/">free SSL certificate</a>, ensuring that each website connected to Cloudflare's gateway is secure from snooping and manipulation. For more on that, check out the <a href="https://developers.cloudflare.com/distributed-web/">Distributed Web Gateway developer docs</a>.</p><p>A fun example we’ve put together uses the <a href="http://www.kiwix.org/">Kiwix</a> archives of all the different StackExchange websites to build a distributed search engine on top of them, using only IPFS. Check it out <a href="https://ipfs-sec.stackexchange.cloudflare-ipfs.com/">here</a>.</p>
    <div>
      <h3>Dealing with abuse</h3>
      <a href="#dealing-with-abuse">
        
      </a>
    </div>
<p>IPFS is a peer-to-peer network, so there is the possibility of users sharing abusive content. This is not something we support or condone. However, just as with more traditional customers, Cloudflare’s IPFS gateway is simply a cache in front of IPFS. Cloudflare does not have the ability to modify or remove content from the IPFS network. If any abusive content is found being served through the Cloudflare IPFS gateway, you can use the standard abuse reporting mechanism described <a href="https://www.cloudflare.com/abuse/">here</a>.</p>
    <div>
      <h3>Embracing a distributed future</h3>
      <a href="#embracing-a-distributed-future">
        
      </a>
    </div>
<p>IPFS is only one of a family of technologies that are embracing a new, decentralized vision of the web. Cloudflare is excited about the possibilities introduced by these new technologies, and we see our gateway as a tool to help bridge the gap between the traditional web and the new generation of distributed web technologies headlined by IPFS. By enabling everyday people to explore IPFS content in their browser, we make the ecosystem stronger and support its growth. Just like when Cloudflare launched back in 2010 and changed the game for web properties by providing the <a href="https://www.cloudflare.com/security/">security</a>, <a href="https://www.cloudflare.com/performance/">performance</a>, and <a href="https://www.cloudflare.com/performance/ensure-application-availability/">availability</a> that was previously only available to the Internet giants, we think the IPFS gateway will provide the same boost to content on the distributed web.</p><p>Dieter Shirley, CTO of Dapper Labs and co-founder of CryptoKitties, said the following:</p><blockquote><p>We’ve wanted to store CryptoKitty art on IPFS since we launched, but the tech just wasn’t ready yet. Cloudflare’s announcement turns IPFS from a promising experiment into a robust tool for commercial deployment. Great stuff!</p></blockquote><p>The IPFS gateway is exciting, but it’s not the end of the road. There are other equally interesting distributed web technologies that could benefit from Cloudflare’s massive global network, and we’re currently exploring these possibilities. If you’re interested in helping build a better Internet with Cloudflare, <a href="https://www.cloudflare.com/careers/"><b>we’re hiring!</b></a></p><p><a href="/subscribe/"><i>Subscribe to the blog</i></a><i> for daily updates on our announcements.</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2kIF9JJRHMU2pmS0vA2xbc/4261d639ac630d4c0f55e676621ddd51/Crypto-Week-1-1.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[IPFS]]></category>
            <category><![CDATA[Universal SSL]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">3gBDsqNt0ufJh5O7aQBBxd</guid>
            <dc:creator>Andy Parker</dc:creator>
        </item>
        <item>
            <title><![CDATA[Additional Record Types Available with Cloudflare DNS]]></title>
            <link>https://blog.cloudflare.com/additional-record-types-available-with-cloudflare-dns/</link>
            <pubDate>Mon, 06 Aug 2018 16:45:17 GMT</pubDate>
            <description><![CDATA[ Cloudflare recently updated the authoritative DNS service to support nine new record types. Since these records are less commonly used than what we previously supported, we thought it would be a good idea to do a brief explanation of each record type and how it is used. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Photo by <a href="https://unsplash.com/@minkmingle?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Mink Mingle</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></p><p>Cloudflare recently updated the authoritative DNS service to support nine new record types. Since these records are less commonly used than what we previously supported, we thought it would be a good idea to do a brief explanation of each record type and how it is used.</p>
    <div>
      <h3>DNSKEY and DS</h3>
      <a href="#dnskey-and-ds">
        
      </a>
    </div>
<p>DNSKEY and DS work together to allow you to enable DNSSEC on a child zone (subdomain) that you have delegated to another nameserver. DS is useful if you are delegating DNS (through an NS record) for a child zone to a separate system and want to keep using DNSSEC for that child zone; without a DS entry in the parent, the child data will not be validated. We’ve blogged about the details of <a href="/introducing-universal-dnssec/">Cloudflare’s DNSSEC</a> <a href="/tag/dnssec/">implementation</a> and <a href="/bgp-leaks-and-crypto-currencies/">why it is important</a> in the past, and this new feature allows for more flexible adoption for customers who need to delegate subdomains.</p>
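<p>For the curious, the digest in a DS record is computed over the child zone's owner name in canonical wire format concatenated with the DNSKEY RDATA (per RFC 4034). A sketch with made-up key material; in practice a tool such as BIND's <code>dnssec-dsfromkey</code> does this for you:</p>

```python
import base64
import hashlib
import struct


def ds_sha256_digest(owner: str, flags: int, protocol: int,
                     algorithm: int, pubkey_b64: str) -> str:
    """Digest field of a DS record (digest type 2 = SHA-256): a hash of the
    canonical owner name in DNS wire format followed by the DNSKEY RDATA."""
    # Wire format: each label length-prefixed, lowercase, root label at the end.
    wire_name = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in owner.lower().rstrip(".").split(".")
    ) + b"\x00"
    # DNSKEY RDATA: 2-byte flags, 1-byte protocol, 1-byte algorithm, public key.
    rdata = struct.pack("!HBB", flags, protocol, algorithm) + base64.b64decode(pubkey_b64)
    return hashlib.sha256(wire_name + rdata).hexdigest().upper()
```

<p>The parent zone publishes this digest in its DS record, so validators can check that the child's published DNSKEY is the one the parent vouched for.</p>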
    <div>
      <h3>Certificate Related Record Types</h3>
      <a href="#certificate-related-record-types">
        
      </a>
    </div>
<p>Today, there is no way to restrict which <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS (SSL) certificates</a> are trusted to be served for a host. For example, if an attacker were able to maliciously generate an SSL certificate for a host, they could use an on-path attack to impersonate the original site. With SSHFP, TLSA, SMIMEA, and CERT, a website owner can configure the exact certificate public key that is allowed to be used on the domain, stored inside the DNS and secured with DNSSEC, reducing the risk of these kinds of attacks working.</p><p><b>It is critically important that, if you rely on these types of records, you enable and configure DNSSEC for your domain.</b></p>
    <div>
      <h4><a href="https://tools.ietf.org/html/rfc4255">SSHFP</a></h4>
      <a href="#">
        
      </a>
    </div>
<p>This type of record is an answer to the question “When I’m connecting via SSH to this remote machine, it’s authenticating me, but how do I authenticate it?” If you’re the only person connecting to this machine, your <a href="https://www.cloudflare.com/learning/access-management/what-is-ssh/">SSH client</a> will compare the fingerprint of the public host key to the one it kept in the known_hosts file during the first connection. However, across multiple machines or multiple users from an organization, you need to verify this information against a common source of trust. In essence, you need the equivalent of the authentication that a certificate authority provides by signing an HTTPS certificate, but for SSH. Although it’s possible to set up certificate authorities for SSH and to have them sign public host keys, another way is to publish the fingerprint of the keys in the domain via the SSHFP record type.</p><p><b>Again, for these fingerprints to be trustworthy it is important to enable DNSSEC on your zone.</b></p><p>The SSHFP record type is similar to the TLSA record. You specify the key algorithm, the fingerprint type, and then the fingerprint itself within the record for a given hostname.</p><p>If the domain and remote server have SSHFP set and you are running an SSH client (such as <a href="https://www.openbsd.org/openssh/txt/release-5.1">OpenSSH 5.1+</a>) that supports it, you can now verify the remote machine upon connection by adding the following parameters to your connection:</p><p><code>❯ ssh -o "VerifyHostKeyDNS=yes" -o "StrictHostKeyChecking=yes" [insertremoteserverhere]</code></p>
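<p>As a sketch of what goes into the record data (in practice, <code>ssh-keygen -r hostname</code> emits SSHFP records for you): the algorithm number, the fingerprint type (2 = SHA-256), and a hash of the raw public key blob, assuming a standard public key file line such as the one in <code>ssh_host_ed25519_key.pub</code>:</p>

```python
import base64
import hashlib

# Algorithm numbers from RFC 4255 (RSA, DSA), RFC 6594 (ECDSA), RFC 7479 (Ed25519)
ALGORITHM = {
    "ssh-rsa": 1,
    "ssh-dss": 2,
    "ecdsa-sha2-nistp256": 3,
    "ecdsa-sha2-nistp384": 3,
    "ecdsa-sha2-nistp521": 3,
    "ssh-ed25519": 4,
}


def sshfp_rdata(public_key_line: str) -> str:
    """Derive SSHFP record data from a public host key line:
    '<algorithm> 2 <SHA-256 fingerprint of the decoded key blob>'."""
    key_type, b64_blob = public_key_line.split()[:2]
    blob = base64.b64decode(b64_blob)
    fingerprint = hashlib.sha256(blob).hexdigest()  # fingerprint type 2 = SHA-256
    return f"{ALGORITHM[key_type]} 2 {fingerprint}"
```

<p>The SSH client then hashes the host key presented during the handshake and compares it to the fingerprint fetched (and DNSSEC-validated) from DNS.</p>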
    <div>
      <h4>TLSA and SMIMEA</h4>
      <a href="#tlsa-and-smimea">
        
      </a>
    </div>
<p>TLSA records were designed to specify which keys are allowed to be used for a given domain when connecting via TLS. They were introduced in the <a href="https://tools.ietf.org/html/rfc6698">DANE</a> specification and allow domain owners to announce which certificate can and should be used for specific purposes for the domain. While most major browsers do not support TLSA, it may still be valuable for non-browser-specific applications and services.</p><p>For example, I’ve set a TLSA record for the domain hasvickygoneonholiday.com for TCP traffic over port 443. There are a number of ways to generate the record, but the easiest is likely through <a href="https://www.huque.com/bin/gen_tlsa">Shumon Huque’s tool</a>.</p><p>For most of the examples in this post we will be using <a href="https://www.knot-dns.cz/docs/2.6/html/man_kdig.html">kdig</a> rather than the ubiquitous dig. Preinstalled dig versions are often old and may not handle newer record types well. If your queries do not quite match up, you should either upgrade your version of dig or install Knot DNS, which provides kdig.</p>
            <pre><code>;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY; status: NOERROR; id: 2218
;; Flags: qr rd ra ad; QUERY: 1; ANSWER: 2; AUTHORITY: 0; ADDITIONAL: 1

;; QUESTION SECTION:
;; _443._tcp.hasvickygoneonholiday.com. 	IN	TLSA

;; ANSWER SECTION:
_443._tcp.hasvickygoneonholiday.com. 300	IN	TLSA	3 1 1 4E48ED671DFCDF6CBF55E52DBC8B9C9CC21121BD149BC24849D1398DA56FB242
_443._tcp.hasvickygoneonholiday.com. 300	IN	RRSIG	TLSA 13 4 300 20180803233834 20180801213834 35273 hasvickygoneonholiday.com. JvC9mZLfuAyEHZUZdq4n8kyRbF09vwgx4c1fas24Ag925LILr1armjHbr7ZTp8ycS/Go3y3lgyYCuBeW/vT/3w==

;; Received 232 B
;; Time 2018-08-02 15:38:34 PDT
;; From 192.168.1.1@53(UDP) in 28.5 ms</code></pre>
<p>From the above request and response, we can see that a) the response for the zone is secured and signed with DNSSEC (Flag: <i><b>ad</b></i>) and b) that I should be verifying the certificate's public key (3 <i><b>1</b></i> 1) against the SHA-256 hash (3 1 <i><b>1</b></i>) of 4E48ED671DFCDF6CBF55E52DBC8B9C9CC21121BD149BC24849D1398DA56FB242. We can use openssl (v1.1.x or higher) to verify the results:</p>
            <pre><code>❯ openssl s_client  -connect hasvickygoneonholiday.com:443 -dane_tlsa_domain "hasvickygoneonholiday.com" -dane_tlsa_rrdata "
3 1 1 4e48ed671dfcdf6cbf55e52dbc8b9c9cc21121bd149bc24849d1398da56fb242"
CONNECTED(00000003)
depth=0 C = US, ST = CA, L = San Francisco, O = "CloudFlare, Inc.", CN = hasvickygoneonholiday.com
verify return:1
---
Certificate chain
 0 s:/C=US/ST=CA/L=San Francisco/O=CloudFlare, Inc./CN=hasvickygoneonholiday.com
   i:/C=US/ST=CA/L=San Francisco/O=CloudFlare, Inc./CN=CloudFlare Inc ECC CA-2
 1 s:/C=US/ST=CA/L=San Francisco/O=CloudFlare, Inc./CN=CloudFlare Inc ECC CA-2
   i:/C=IE/O=Baltimore/OU=CyberTrust/CN=Baltimore CyberTrust Root
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIE7jCCBJSgAwIBAgIQB9z9WxnovNf/lt2Lkrfq+DAKBggqhkjOPQQDAjBvMQsw
...
---
SSL handshake has read 2666 bytes and written 295 bytes
Verification: OK
Verified peername: hasvickygoneonholiday.com
DANE TLSA 3 1 1 ...149bc24849d1398da56fb242 matched EE certificate at depth 0</code></pre>
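<p>The record data itself is simple to compute once you have the certificate's DER-encoded SubjectPublicKeyInfo (which you can extract with, for example, <code>openssl x509 -in cert.pem -pubkey -noout | openssl pkey -pubin -outform DER</code>). A sketch for the 3 1 1 parameters used above:</p>

```python
import hashlib


def tlsa_3_1_1(spki_der: bytes) -> str:
    """TLSA record data for certificate usage 3 (DANE-EE), selector 1
    (SubjectPublicKeyInfo), matching type 1 (SHA-256): the record is just
    the SHA-256 hash of the certificate's DER-encoded public key."""
    return "3 1 1 " + hashlib.sha256(spki_der).hexdigest().upper()
```

<p>Different selector and matching-type values change what gets hashed (the full certificate vs. just the public key) and how (full data, SHA-256, or SHA-512), but the pattern is the same.</p>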
<p><a href="https://tools.ietf.org/html/rfc8162">SMIMEA</a> records function similarly to TLSA but are specific to email addresses. The domain for these records should be prefixed by “_smimecert.” and specific formatting is required to attach an SMIMEA record to an email address. The local-part (username) of the email address must be normalized and SHA-256 hashed as detailed in the RFC. From the RFC example: “For example, to request an SMIMEA resource record for a user whose email address is "hugh@example.com", an SMIMEA query would be placed for the following QNAME: <code>c93f1e400f26708f98cb19d936620da35eec8f72e57f9eec01c1afd6._smimecert.example.com</code>”</p>
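<p>A sketch of that QNAME construction, per RFC 8162: the local-part is hashed with SHA2-256 and the hash truncated to 28 octets (56 hex characters) before being prepended to the <code>_smimecert.</code> label:</p>

```python
import hashlib


def smimea_qname(email: str) -> str:
    """Build the DNS name to query for an email address's SMIMEA record:
    SHA-256 of the local-part, truncated to 28 octets, then '_smimecert.'
    plus the domain."""
    local_part, domain = email.split("@", 1)
    truncated = hashlib.sha256(local_part.encode("utf-8")).hexdigest()[:56]
    return f"{truncated}._smimecert.{domain}"
```

<p>For <code>hugh@example.com</code> this should produce the QNAME shown in the RFC example above.</p>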
    <div>
      <h4><a href="https://tools.ietf.org/html/rfc4398">CERT</a></h4>
      <a href="#">
        
      </a>
    </div>
<p>CERT records are used for generically storing certificates within DNS and are most commonly used by systems for email encryption. To create a CERT record, you must specify the certificate type, the key tag, the algorithm, and then the certificate data, which is either the certificate itself, a CRL, a URL of the certificate, or a fingerprint and a URL.</p>
    <div>
      <h3>Other Newly Supported Record Types</h3>
      <a href="#other-newly-supported-record-types">
        
      </a>
    </div>
    
    <div>
      <h4>PTR</h4>
      <a href="#ptr">
        
      </a>
    </div>
<p>PTR (Pointer) records are pointers to canonical names. They are similar to a CNAME in structure, meaning they only contain one FQDN (fully qualified domain name), but the RFC dictates that subsequent lookups are not done for PTR records; the value should just be returned to the requestor. This is different from a CNAME, where a recursive resolver would follow the target of the canonical name. The most common use of a PTR record is in reverse DNS, where you can look up which domains are meant to exist at a given IP address. These are useful for outbound mailservers as well as authoritative DNS servers.</p><p>It is only possible to delegate the authority for IP addresses that you own from your Regional Internet Registry (RIR). Creating reverse zones and PTR records for IPs that you cannot (or do not) delegate does not serve any practical purpose.</p><p>For example, looking up the A record for marek.ns.cloudflare.com gives us the IP of 173.245.59.202.</p>
            <pre><code>❯ kdig a marek.ns.cloudflare.com +short
173.245.59.202</code></pre>
<p>Now imagine we want to know if the owner of this IP ‘authorizes’ <code>marek.ns.cloudflare.com</code> to point to it. Reverse zones are specifically crafted child zones within <code>in-addr.arpa.</code> (for IPv4) and <code>ip6.arpa.</code> (for IPv6), which are delegated via the Regional Internet Registries to the owners of the IP address space. That is to say, if you own a /24 from ARIN, ARIN will delegate the reverse zone space for your /24 to you to control. The IPv4 address is represented, inverted, as the subdomain in in-addr.arpa. Since Cloudflare owns the IP, we’ve been delegated the reverse zone and have created a PTR there.</p>
            <pre><code>❯ kdig -x 173.245.59.202
;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY; status: NOERROR; id: 18658
;; Flags: qr rd ra; QUERY: 1; ANSWER: 1; AUTHORITY: 0; ADDITIONAL: 0

;; QUESTION SECTION:
;; 202.59.245.173.in-addr.arpa.		IN	PTR

;; ANSWER SECTION:
202.59.245.173.in-addr.arpa.	1222	IN	PTR	marek.ns.cloudflare.com.</code></pre>
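<p>The inverted name that <code>kdig -x</code> queried can be derived mechanically from the address. A minimal sketch using Python’s standard <code>ipaddress</code> module (which omits the trailing dot):</p>

```python
import ipaddress

def reverse_name(ip: str) -> str:
    """Build the reverse-DNS query name for an IP address:
    octets reversed under in-addr.arpa for IPv4, nibbles
    reversed under ip6.arpa for IPv6."""
    return ipaddress.ip_address(ip).reverse_pointer

print(reverse_name("173.245.59.202"))  # 202.59.245.173.in-addr.arpa
```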
            <p>For completeness, here is the +trace for the 202.59.245.173.in-addr.arpa zone. We can see that the /24 59.245.173.in-addr.arpa has been delegated to Cloudflare from ARIN:</p>
            <pre><code>❯ dig 202.59.245.173.in-addr.arpa +trace

; &lt;&lt;&gt;&gt; DiG 9.8.3-P1 &lt;&lt;&gt;&gt; 202.59.245.173.in-addr.arpa +trace
;; global options: +cmd
.			48419	IN	NS	a.root-servers.net.
.			48419	IN	NS	b.root-servers.net.
.			48419	IN	NS	c.root-servers.net.
.			48419	IN	NS	d.root-servers.net.
.			48419	IN	NS	e.root-servers.net.
.			48419	IN	NS	f.root-servers.net.
.			48419	IN	NS	g.root-servers.net.
.			48419	IN	NS	h.root-servers.net.
.			48419	IN	NS	i.root-servers.net.
.			48419	IN	NS	j.root-servers.net.
.			48419	IN	NS	k.root-servers.net.
.			48419	IN	NS	l.root-servers.net.
.			48419	IN	NS	m.root-servers.net.
;; Received 228 bytes from 2001:4860:4860::8888#53(2001:4860:4860::8888) in 25 ms

in-addr.arpa.		172800	IN	NS	e.in-addr-servers.arpa.
in-addr.arpa.		172800	IN	NS	d.in-addr-servers.arpa.
in-addr.arpa.		172800	IN	NS	b.in-addr-servers.arpa.
in-addr.arpa.		172800	IN	NS	f.in-addr-servers.arpa.
in-addr.arpa.		172800	IN	NS	c.in-addr-servers.arpa.
in-addr.arpa.		172800	IN	NS	a.in-addr-servers.arpa.
;; Received 421 bytes from 192.36.148.17#53(192.36.148.17) in 8 ms

173.in-addr.arpa.	86400	IN	NS	u.arin.net.
173.in-addr.arpa.	86400	IN	NS	arin.authdns.ripe.net.
173.in-addr.arpa.	86400	IN	NS	z.arin.net.
173.in-addr.arpa.	86400	IN	NS	r.arin.net.
173.in-addr.arpa.	86400	IN	NS	x.arin.net.
173.in-addr.arpa.	86400	IN	NS	y.arin.net.
;; Received 165 bytes from 199.180.182.53#53(199.180.182.53) in 300 ms

59.245.173.in-addr.arpa. 86400	IN	NS	ns1.cloudflare.com.
59.245.173.in-addr.arpa. 86400	IN	NS	ns2.cloudflare.com.
;; Received 95 bytes from 2001:500:13::63#53(2001:500:13::63) in 188 ms</code></pre>
            
    <div>
      <h4><a href="https://tools.ietf.org/html/rfc2915">NAPTR</a></h4>
      <a href="#">
        
      </a>
    </div>
    <p>Naming Authority Pointer Records are used in conjunction with SRV records, generally as a part of the SIP protocol. NAPTR records map domains to specific services, if available for that domain. <a href="http://twitter.com/anders94">Anders Brownworth</a> has an excellent, detailed description on his <a href="https://anders.com/cms/264/">blog</a>. The start of his example, with his permission:</p><blockquote><p>Let’s consider a call to <a>2125551212@example.com</a>. Given only this address though, we don't know what IP address, port or protocol to send this call to. We don't even know if example.com supports SIP or some other VoIP protocol like H.323 or IAX2. I'm implying that we're interested in placing a call to this URL but if no VoIP service is supported, we could just as easily fall back to emailing this user instead. To find out, we start with a NAPTR record lookup for the domain we were given:</p></blockquote>
            <pre><code>#host -t NAPTR example.com
example.com NAPTR 10 100 "S" "SIP+D2U" "" _sip._udp.example.com.
example.com NAPTR 20 100 "S" "SIP+D2T" "" _sip._tcp.example.com.
example.com NAPTR 30 100 "S" "E2U+email" "!^.*$!mailto:info@example.com!i" _sip._tcp.example.com.</code></pre>
            <blockquote><p>Here we find that example.com gives us three ways to contact example.com, the first of which is "SIP+D2U" which would imply SIP over UDP at _sip._udp.example.com.</p></blockquote>
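<p>The resolver-side selection among those three records can be sketched in Python. This is a simplification of RFC 3403 processing (lowest order wins, preference breaks ties, and only services the client speaks are accepted), using the records from the example above:</p>

```python
# Each NAPTR record: (order, preference, flags, service, regexp, replacement)
records = [
    (10, 100, "S", "SIP+D2U", "", "_sip._udp.example.com."),
    (20, 100, "S", "SIP+D2T", "", "_sip._tcp.example.com."),
    (30, 100, "S", "E2U+email", "!^.*$!mailto:info@example.com!i", "_sip._tcp.example.com."),
]

def pick_naptr(records, supported):
    # Lowest order wins; preference breaks ties within an order.
    for rec in sorted(records, key=lambda r: (r[0], r[1])):
        if rec[3] in supported:
            return rec
    return None

best = pick_naptr(records, supported={"SIP+D2U", "SIP+D2T"})
print(best[3], best[5])  # SIP+D2U _sip._udp.example.com.
```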
    <div>
      <h4><a href="https://tools.ietf.org/html/rfc7553">URI</a></h4>
      <a href="#">
        
      </a>
    </div>
    <p>Uniform Resource Identifier records are commonly used as a complement to NAPTR records and, per the RFC, can be used to replace SRV records. As such, they contain Weight and Priority fields as well as a Target, similar to SRV.</p><p>One use case, proposed by this <a href="https://tools.ietf.org/html/draft-mccallum-kitten-krb-service-discovery-03">draft RFC</a>, is to replace SRV records with URI records for discovering Kerberos key distribution centers (KDC). It requires fewer requests than SRV records and allows the domain owner to specify a preference for TCP or UDP.</p><p>The example below specifies that we should use a KDC on TCP at the default port, and UDP on port 89 should the primary connection fail.</p>
            <pre><code>❯ kdig URI _kerberos.hasvickygoneonholiday.com
;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY; status: NOERROR; id: 8450
;; Flags: qr rd ra; QUERY: 1; ANSWER: 2; AUTHORITY: 0; ADDITIONAL: 0

;; QUESTION SECTION:
;; _kerberos.hasvickygoneonholiday.com. 	IN	URI

;; ANSWER SECTION:
_kerberos.hasvickygoneonholiday.com. 283	IN	URI	1 10 "krb5srv:m:tcp:kdc.hasvickygoneonholiday.com"
_kerberos.hasvickygoneonholiday.com. 283	IN	URI	1 20 "krb5srv:m:udp:kdc.hasvickygoneonholiday.com:89"
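<p>Because URI records carry the same Priority and Weight semantics as SRV, a client picks among them the same way: only the lowest-priority group is eligible, and weight steers load within that group. A hedged sketch of that selection (standard SRV-style weighted choice, using the two records above):</p>

```python
import random

# URI RDATA from the example above: (priority, weight, target)
records = [
    (1, 10, "krb5srv:m:tcp:kdc.hasvickygoneonholiday.com"),
    (1, 20, "krb5srv:m:udp:kdc.hasvickygoneonholiday.com:89"),
]

def pick_uri(records, rng=random):
    """SRV-style selection: restrict to the lowest-priority group,
    then choose randomly in proportion to weight."""
    lowest = min(r[0] for r in records)
    group = [r for r in records if r[0] == lowest]
    return rng.choices(group, weights=[r[1] for r in group])[0]

print(pick_uri(records)[2])
```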
            
    <div>
      <h3>Summary</h3>
      <a href="#summary">
        
      </a>
    </div>
    <p>Cloudflare now supports CERT, DNSKEY, DS, NAPTR, PTR, SMIMEA, SSHFP, and TLSA in our authoritative DNS products. We would love to hear if you have any interesting example use cases for the new record types and what other record types we should support in the future.</p><p>Our DNS engineering teams in London and San Francisco are both hiring if you would like to contribute to the fastest authoritative and recursive DNS services in the world.</p><p><a href="https://boards.greenhouse.io/cloudflare/jobs/1213352?gh_jid=1213352">Software Engineer</a></p> ]]></content:encoded>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">4hEsGfxkUT0Q2b2B53HPel</guid>
            <dc:creator>Sergi Isasi</dc:creator>
            <dc:creator>Etienne Labaume</dc:creator>
        </item>
        <item>
            <title><![CDATA[It’s Hard To Change The Keys To The Internet And It Involves Destroying HSM’s]]></title>
            <link>https://blog.cloudflare.com/its-hard-to-change-the-keys-to-the-internet-and-it-involves-destroying-hsms/</link>
            <pubDate>Tue, 06 Feb 2018 22:33:19 GMT</pubDate>
            <description><![CDATA[ The root of the DNS tree has been using DNSSEC to protect the zone content since 2010. DNSSEC is simply a mechanism to provide cryptographic signatures alongside DNS records that can be validated, i.e. prove the answer is correct and has not been tampered with.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Photo by <a href="https://unsplash.com/@niko_photos?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Niko Soikkeli</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></p><p>The root of the DNS tree has been using DNSSEC to protect the zone content since 2010. DNSSEC is simply a mechanism to provide cryptographic signatures alongside DNS records that can be validated, i.e. prove the answer is correct and has not been tampered with. To learn more about why DNSSEC is important, you can read our <a href="/dnssec-an-introduction/">earlier blog post</a>.</p><p>Today, the root zone is signed with a 2048 bit RSA “Trust Anchor” key. This key is used to sign further keys and is used to establish the <a href="https://en.wikipedia.org/wiki/Chain_of_trust">Chain of trust</a> that exists in the public DNS at the moment.</p><p>With access to this root Trust Anchor, it would be possible to re-sign the DNS tree and tamper with the content of DNS records on any domain, implementing an on-path DNS attack… without causing recursors and resolvers to consider the data invalid.</p><p>As explained in this <a href="https://www.cloudflare.com/dns/dnssec/root-signing-ceremony/">blog</a> the key is very well protected with eye scanners and fingerprint readers and fire-breathing dragons patrolling the gate (okay, maybe not dragons). Operationally though, the root zone uses two different keys, the mentioned Trust Anchor key (that is called the Key Signing Key or KSK for short) and the Zone Signing Key (ZSK).</p><p>The ZSK (Zone Signing Key) is used to generate signatures for all of the Resource Records (RRs) in a zone.</p><p>You can query for the DNSSEC signature (the RRSIG record) of “<a href="http://www.cloudflare.com">www.cloudflare.com</a>” using your friendly dig command.</p>
            <pre><code>$ dig www.cloudflare.com +dnssec</code></pre>
            
            <pre><code>;; QUESTION SECTION:
;www.cloudflare.com.		IN	A
;; ANSWER SECTION:
www.cloudflare.com.	4	IN	A	198.41.215.162
www.cloudflare.com.	4	IN	A	198.41.214.162
www.cloudflare.com.	4	IN	RRSIG	A 13 3 5 20180207170906 20180205150906 35273 cloudflare.com. 4W4mJXJRnd/wHnDyNo5minGvZY6hVNSXITnUI+pO6fzhnkpsEp1ko8K7 1PQ6r0s9SwLgrgfneqXyPs4b5X0YDw==</code></pre>
            <p>The two A records shown here can be cryptographically verified using the RRSIG and ZSK in the zone. The ZSK can itself be verified using the KSK, and so on… this continues upwards following the “chain of trust” until the root KSK is found.</p><p>The <a href="http://dnsviz.net/">http://dnsviz.net/</a> tool can be used to help visualize how this verification can be done for any domain on the internet; for example, here is the trust chain for “<a href="http://www.cloudflare.com">www.cloudflare.com</a>”.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5XNseB8e3GOtwhK8uckjfV/7f8bc719d84fcdd0844152bc2b6dd56b/Screen-Shot-2018-02-06-at-1.58.42-PM.png" />
            
            </figure><p>To verify the RRSIG on “<a href="http://www.cloudflare.com">www.cloudflare.com</a>” we would need to cryptographically verify the signatures in reverse order on the diagram. First “cloudflare.com”, then “com”, and finally “.” – the root zone.</p><p>If you are able to access the secret key that’s used to sign the root, it’s possible to trick resolvers into verifying a "forged" answer.</p><p>While DNSSEC signing has been deployed on the root zone for over seven years, there is one operation that has never been attempted: rolling the Key Signing Key. This means generating a new key, updating every part of the DNS infrastructure on the internet that needs it, and retiring the old one completely.</p><p>The ZSK (Zone Signing Key) has been rolled religiously every quarter since 2010; rolling the Key Signing Key, however, is a much scarier operation. If it goes wrong, it could leave the root zone signing invalid, meaning a large part of the internet would not trust any of the content, effectively knocking DNS offline for validating resolvers. After DNSSEC was designed, a mechanism was devised for rolling out a new Key Signing Key in RFC5011; this operation is commonly known as the 5011 roll-over.</p>
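<p>Each parent-to-child step in this chain is anchored in a hash: the DS record in the parent zone is a digest over the child’s owner name plus its DNSKEY RDATA. A minimal sketch of the shape of that computation (SHA-256 digest type per RFC 4509; the RDATA bytes here are made up purely for illustration, and real validation also checks the RRSIG signatures, which requires a crypto library):</p>

```python
import hashlib

def owner_wire(name: str) -> bytes:
    """Canonical wire format of a domain name: lowercase,
    length-prefixed labels, terminated by a zero byte."""
    out = b""
    for label in name.rstrip(".").lower().split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def ds_digest(owner: str, dnskey_rdata: bytes) -> str:
    """SHA-256 DS digest (RFC 4509): hash over the owner name
    in wire format concatenated with the DNSKEY RDATA."""
    return hashlib.sha256(owner_wire(owner) + dnskey_rdata).hexdigest().upper()

# Made-up RDATA (flags 257, protocol 3, algorithm 13, dummy key bytes),
# purely to show the shape of the computation:
print(ds_digest("cloudflare.com", bytes([1, 1, 3, 13]) + b"\x00" * 32))
```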
    <div>
      <h3>What is a KEY rollover?</h3>
      <a href="#what-is-a-key-rollover">
        
      </a>
    </div>
    <p>All cryptographic keys have a life cycle that can be represented by states:</p><ul><li><p>Generated == the key is created, but only the “owner” knows of its properties.</p></li><li><p>Published == the key has been made public, either as a public key or a hash of it.</p></li><li><p>Active == the key is in use.</p></li><li><p>Retired == the key has been withdrawn from service but is still published.</p></li><li><p>Revoked == the key has been marked as not to be trusted ever again.</p></li><li><p>Removed == the key has been taken out of publication.</p></li></ul><p>Different keys move through the states in different ways depending on the usage. Some keys are never revoked; simply removing them is sufficient. The root ZSKs, for example, are never revoked. When rolled, the root KSK will pass through all states.</p>
    <div>
      <h3>Why is the Root KSK different ?</h3>
      <a href="#why-is-the-root-ksk-different">
        
      </a>
    </div>
    <p>For most keys used in DNS the trust is derived by a relationship between the parent zone and the child zone. The parent publishes a special record, the DS (Delegation Signer), that contains a cryptographically strong binding to the actual key: a hash. The child has a DNSKEY RRset at the top of its zone that has at least one key that matches one of the DS records in the parent. To complete the chain of trust the DNSKEY RRset MUST be signed by that key.</p><p>The root zone has no parent, thus trust cannot be derived in the same way. Instead, validating resolvers must be configured with the root Trust Anchor. This anchor must be refreshed during a key rollover or the validating resolver will not trust anything it sees in the root zone after the old KSK (from 2010) is retired from service. The Trust Anchors can be updated in a number of ways, such as a manual update, a software update, or an in-band update. The preferred update mechanism is the previously mentioned in-band update mechanism, the RFC5011 roll.</p><p>The process outlined in RFC 5011 relies on two factors: the new key is published in the DNSKEY RRset – which is signed by the old KSK – and it is kept there for at least a hold-down period of 30 days. Validating resolvers that follow the procedure will check frequently to see if there is a new KSK in the DNSKEY set. The new key can be trusted because it has been signed with a key that is already in service. When a new key appears, it is placed in the PendingAddition state. If at any point a key in PendingAddition is removed from the DNSKEY set, the resolver will forget about it. This means that if the key were to appear again, it would start a new 30-day hold-down period.</p><p>After the key has been in PendingAddition for 30 consecutive days it is accepted into the Active state and will be trusted to sign the DNSKEY set for the root. From this point onwards, the new key can be used to sign the Zone Signing Key, and in turn the root zone content itself.</p>
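<p>The hold-down logic can be sketched as a tiny state machine. This is a simplification for illustration (a real implementation tracks wall-clock time, the revoke bit, and signature validity), not the full RFC 5011 algorithm:</p>

```python
HOLD_DOWN_DAYS = 30

def next_state(state, days_in_state, key_present):
    """One daily observation of a new KSK, per the RFC 5011 add hold-down.
    A key seen in the (validly signed) DNSKEY RRset enters PendingAddition;
    staying there 30 consecutive days makes it Active, and disappearing
    from the RRset resets the hold-down timer."""
    if state == "Unknown":
        return ("PendingAddition", 1) if key_present else ("Unknown", 0)
    if state == "PendingAddition":
        if not key_present:
            return ("Unknown", 0)  # a fresh 30-day hold-down starts if it reappears
        if days_in_state + 1 >= HOLD_DOWN_DAYS:
            return ("Active", 0)
        return ("PendingAddition", days_in_state + 1)
    return (state, days_in_state)

# A key continuously present becomes Active after the hold-down:
state, days = "Unknown", 0
for _ in range(30):
    state, days = next_state(state, days, key_present=True)
print(state)  # Active
```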
    <div>
      <h3>Why are we rolling the root key trust anchor?</h3>
      <a href="#why-are-we-rolling-the-root-key-trust-anchor">
        
      </a>
    </div>
    <p>There are two main reasons:</p><ol><li><p>The community wants to be sure that the RFC5011 mechanism works in practice. Knowing this makes future rollovers possible, and less risky. Regular rollovers are something to be done as a matter of good key hygiene, like changing your password regularly.</p></li><li><p>It enables thinking about switching to different algorithms. RSA with a large key size is a strong algorithm, but using it causes DNS packets to be larger. Other algorithms, such as the elliptic curve ones Cloudflare uses, have smaller keys and increased safety per bit. Switching to a new algorithm would require a new key.</p></li></ol><p>Some people advocated rolling the key and changing the algorithm at the same time, but that was deemed too risky. The right time to start talking about that is after the current roll concludes successfully.</p>
    <div>
      <h3>What has happened so far?</h3>
      <a href="#what-has-happened-so-far">
        
      </a>
    </div>
    <p><a href="https://www.icann.org/en/system/files/files/ksk-rollover-operational-implementation-plan-22jul16-en.pdf">ICANN started the rollover process last year</a>. The new keys has been created and replicated to all the <a href="https://en.wikipedia.org/wiki/Hardware_security_module">HSM’s (Hardware Security Modules)</a> in the two facilities that ICANN operates. From now on we will use the terms <b>KSK2010 (the old key)</b> and <b>KSK2017 (the new key)</b>.</p><p>Before starting the roll-over process, testing of RFC5011 implementations took place and most implementations reported success.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4muimZ8Ndzr9qyosAIIIFt/b5fc9a1b736e33a15e6cf378377686f1/Screen-Shot-2018-02-05-at-7.46.40-PM.png" />
            
            </figure><p>The new key was published in DNS on July 11th 2017, so the DNSKEY set now contains two KSKs. At that point the new key, KSK2017, entered the Published state. It was scheduled to become Active on October 11th 2017. Any validating resolver that had been operating for at least 30 days during the July 11-October 11 window should have placed the new Trust Anchor in the Active state before October 11th. But sometimes things do not go according to plan.</p><p>One of the things that was put in place before the rollover was a way for <a href="https://tools.ietf.org/html/rfc8145">resolvers to signal to authoritative servers which trust anchors the resolver trusts (RFC8145)</a>. RFC8145 was only published in April 2017, so during the KSK2017 key publication phase only the latest version of Bind-9 supported it by default.</p><p>The mechanism works by resolvers periodically sending a query to the root nodes, with a query name formatted like “_ta-4a5c” or “_ta-4a5c-4f66”. The name contains hex-encoded versions of the Trust Anchor identifiers, 19036 and 20326 respectively. This at least allows root operators to estimate the percentage of resolvers that have implemented RFC8145 and are aware of each Trust Anchor.</p><p>On September 29 <a href="https://www.icann.org/news/announcement-2017-09-27-en">ICANN postponed the roll</a> based on evidence from the resolvers that sent in reports. It was concerning that even the latest and <a href="https://www.icann.org/news/blog/update-on-the-root-ksk-rollover-project">greatest version of Bind-9 in 4%</a> of cases did not pick up the new Trust Anchor; this was explained in more detail in a <a href="https://indico.dns-oarc.net/event/27/session/1/contribution/11/material/slides/0.pdf">DNS-OARC presentation</a>. But this still leaves us with the question: why?</p><p>It is also important to note that other implementations of RFC8145 did not enable it by default, which is why most of the reports came from Bind-9.</p><p>Rolling the KSK at this point would have resulted in the remaining resolvers not trusting the content of the root zone, ultimately breaking all DNS resolution through them.</p>
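<p>The signaling name construction is simple to reproduce. A sketch in Python, assuming the key tags are encoded as four lowercase hex digits in ascending order, as in the names above:</p>

```python
def ta_query_name(key_tags):
    """Build an RFC 8145 trust-anchor signal name: each key tag is
    encoded as four lowercase hex digits, joined in ascending order.
    Key tags 19036 (KSK2010) and 20326 (KSK2017) give _ta-4a5c-4f66."""
    return "_ta-" + "-".join(format(tag, "04x") for tag in sorted(key_tags))

print(ta_query_name([19036]))         # _ta-4a5c
print(ta_query_name([20326, 19036]))  # _ta-4a5c-4f66
```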
    <div>
      <h3>Operational reality vs the protocol design</h3>
      <a href="#operational-reality-vs-the-protocol-design">
        
      </a>
    </div>
    <p>At Cloudflare we operate validating resolvers in all of our &gt;120 data centers, and we monitored the adoption of trust anchors on a weekly basis, expecting everything to work correctly. After 6 weeks we noticed that things were not going right: some of the resolvers had picked up the new trust anchor, and others had not accepted it even though more than enough time had passed.</p><p>First, let’s look at the assumptions that RFC5011 makes.</p><ul><li><p>The resolver is a long-running process that understands time and can keep state.</p></li><li><p>The resolver has access to persistent writeable storage that will work across reboots.</p></li></ul><p>In the protocol community we had worried about the first one a lot; for the second one we had identified two failure cases: a machine configured from an old read-only medium, and a new machine taking over. Both were considered rare enough that operators would know to deal with those exceptions.</p><p>Turns out the second assumption in RFC5011 had more failure modes than the community expected.</p><p>For example, Bind-9 originally had a hardcoded list of “trusted-keys”. Later, when RFC5011 support was added, the configuration option “managed-keys” was added. It looks like some installations, while religiously updating the software, never changed from the fixed configuration to the RFC5011-managed one. In this case the only recovery is to change the configuration. In some cases the operator selected this operating mode assuming he/she would distribute a new configuration file during the rollover, but the person may have left or forgotten.</p><p>Software that uses managed-keys operations (Bind-9, Unbound, Knot-resolver) uses a file to maintain state between restarts. BUT it is possible that the file is read-only, and in that case managed-keys works just like trusted-keys. Why anyone would have a configuration like that is a good question. The interesting observation is that unless the implementation complains loudly about the read-only state, the operator is not likely to notice. The only recovery option here is to change the configuration so the trust anchor file can be written.</p><p>Software upgrades are another possible reason for not picking up the new trust anchor, but only if the file containing the Trust Anchor state is overwritten or lost. This can happen if the resolver machine has a disk replacement/reformat etc., but in this case the net effect is only a slowdown in the acceptance of the new trust anchor. This failure shows up as KSK2017 spending more than 30 days in the “PendingAddition” state, but that is only visible if someone is looking.</p><p>Modern operating practices use “containers” that are spun up and down; in those cases there is no “persistent” storage. To avoid validation errors in this case, the software installed must know about the new key or perform a key discovery upon startup, like the <a href="https://www.unbound.net/documentation/unbound-anchor.html">unbound-anchor</a> program performs for Unbound.</p><p>There are probably a few other operational reasons for the errors seen via the Trust Anchor signaling.</p><p>Back to what happened at Cloudflare: in our case the issue was a combination of upgrade and container issues. We were upgrading software on all our nodes and our resolver processes were allocated to different computers. Our fix was to quickly upgrade to a software version that knew about the new trust anchor, so future restarts/migrations would not cause loss of trust.</p>
    <div>
      <h3>What is next for the KSK rollover</h3>
      <a href="#what-is-next-for-the-ksk-rollover">
        
      </a>
    </div>
    <p>ICANN has just asked for comments on restarting the rollover process and <a href="https://www.icann.org/news/blog/announcing-draft-plan-for-continuing-with-the-ksk-roll">performing the roll on October 11th 2018</a>.</p><p>What can you do to prepare for the key rollover? If you operate a validating resolver, make sure you have the latest version of your vendor's software, audit the configuration files and file permissions, and check that your software supports both KSK2010 (key tag 19036) and KSK2017 (key tag <b>20326</b>).</p><p>If you are a concerned end user, right now there is nothing you can do, but the IETF is considering a proposal <a href="https://datatracker.ietf.org/doc/draft-ietf-dnsop-kskroll-sentinel/?include_text=1">to allow remote trust anchor checking via queries</a>. Hopefully this will be standardized soon and DNS resolver software vendors will add support, but until then there is no testing you can do.</p><p>If you speak a language other than English and worry that your local operators should know about the DNSSEC key rollover failure modes, feel free to republish this blog, or parts of it, in your language.</p>
    <div>
      <h3>HSM destruction at the next KSK ceremony Feb 7th 2018</h3>
      <a href="#hsm-destruction-at-the-next-ksk-ceremony-feb-7th-2018">
        
      </a>
    </div>
    <p>Every quarter there is a new KSK signing ceremony where signatures for 3 months of use of the KSK are generated. <a href="https://www.iana.org/dnssec/ceremonies/32">February 6th 2018 is the next one</a>, and it will sign a DNSKEY set containing both KSKs but signed only by KSK2010. You can see the script for the ceremony here and you can even watch it online. But the fun part of this particular ceremony is the destruction of the old HSMs (Hardware Security Modules) via some fancy contraption.</p><p>An HSM is a special kind of equipment that can store private keys and never leak them, and protects its secrets by erasing them when someone tries to access or tamper with the equipment. The secrets remain in the HSM as long as a non-replaceable battery lasts. The old KSK HSMs have a lifetime of 10 years and were made in late 2009 or early 2010, thus the battery is not designed to last much longer. Last year the private keys were safely and securely moved to newer models, and the new machines have been in use for about a year. The final step of retiring the old machines is to destroy them during the ceremony; tune in to see how that is done.</p><p>Excited by working on cutting edge stuff? Or building systems at a scale where once-in-a-decade problems can get triggered every day? <a href="https://www.cloudflare.com/careers/">Then join our team</a>.</p>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[Reliability]]></category>
            <guid isPermaLink="false">6oKXZMilsKD2I3Cq90yIo0</guid>
            <dc:creator>Ólafur Guðmundsson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Broken packets: IP fragmentation is flawed]]></title>
            <link>https://blog.cloudflare.com/ip-fragmentation-is-broken/</link>
            <pubDate>Fri, 18 Aug 2017 17:40:55 GMT</pubDate>
            <description><![CDATA[ As opposed to the public telephone network, the internet has a Packet Switched design. But just how big can these packets be? ]]></description>
            <content:encoded><![CDATA[ <p>As opposed to the <a href="https://en.wikipedia.org/wiki/Public_switched_telephone_network">public telephone network</a>, the internet has a <a href="https://en.wikipedia.org/wiki/Packet_switching">Packet Switched</a> design. But just how big can these packets be?</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/73Q2p7jy3NbOeARKhzz5Zm/f460ab3c98a3eebf5270e9cef093132f/8093997590_02d7ec3713_z.jpg" />
            
            </figure><p> CC BY 2.0 <a href="https://www.flickr.com/photos/ajmexico/8093997590">image</a> by <a href="https://www.flickr.com/photos/ajmexico/">ajmexico</a>, <a href="https://blog.apnic.net/2016/05/19/fragmenting-ipv6/">inspired by</a></p><p>This is an old question and the IPv4 RFCs answer it pretty clearly. The idea was to split the problem into two separate concerns:</p><ul><li><p>What is the maximum packet size that can be handled by operating systems on both ends?</p></li><li><p>What is the maximum permitted datagram size that can be safely pushed through the physical connections between the hosts?</p></li></ul><p>When a packet is too big for a physical link, an intermediate router might chop it into multiple smaller datagrams in order to make it fit. This process is called "forward" IP fragmentation and the smaller datagrams are called IP fragments<a href="#fn1">[1]</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5yOVFEXv5xc8DvPBehp1Zn/acec50ceab6a07ff6d31a50df3ab5287/fragments.jpg" />
            
            </figure><p>Image by <a href="https://blog.apnic.net/2016/01/28/evaluating-ipv4-and-ipv6-packet-frangmentation/">Geoff Huston</a>, reproduced with permission</p><p>The IPv4 specification defines the minimal requirements. From <a href="https://tools.ietf.org/html/rfc791">the RFC791</a>:</p>
            <pre><code>Every internet destination must be able to receive a datagram
of 576 octets either in one piece or in fragments to
be reassembled. [...]

Every internet module must be able to forward a datagram of 68
octets without further fragmentation. [...]</code></pre>
            <p>The first value - permitted reassembled packet size - is typically not problematic. IPv4 defines the minimum as 576 bytes, but popular operating systems can cope with very big packets, typically up to 65KiB.</p><p>The second one is more troublesome. All physical connections have inherent datagram size limits, depending on the specific medium they use. For example, Frame Relay can send datagrams between 46 and 4,470 bytes, ATM uses fixed 53-byte cells, and classical Ethernet can do between 64 and 1,500 bytes.</p><p>The spec defines the minimal requirement - each physical link must be able to transmit datagrams of at least 68 bytes. For IPv6 that minimal value has been bumped up to 1,280 bytes (see <a href="https://tools.ietf.org/html/rfc2460#section-5">RFC2460</a>).</p><p>On the other hand, the maximum datagram size that can be transmitted without fragmentation is not defined by any specification and varies by link type. This value is called the MTU (<a href="https://en.wikipedia.org/wiki/Maximum_transmission_unit">Maximum Transmission Unit</a>)<a href="#fn2">[2]</a>.</p><p>The MTU defines a maximum datagram size on a local physical link. The internet is created from non-homogeneous networks, and on the path between two hosts there might be links with shorter MTU values. The maximum packet size that can be transmitted without fragmentation between two remote hosts is called <a href="https://en.wikipedia.org/wiki/Path_MTU_Discovery">a Path MTU</a>, and can potentially be different for every connection.</p>
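<p>To make the MTU arithmetic concrete, here is a sketch of how a router would forward-fragment an oversized IPv4 payload: non-final fragments carry a multiple of 8 payload bytes (offsets are expressed in 8-byte units), and every fragment except the last sets the More Fragments flag. This assumes a 20-byte header with no IP options:</p>

```python
IP_HEADER = 20  # bytes, assuming no IP options

def fragment(payload_len, mtu):
    """Split an IPv4 payload into (offset_in_8_byte_units, length,
    more_fragments) tuples, the way a router would forward-fragment
    an oversized datagram for a smaller-MTU link."""
    max_chunk = (mtu - IP_HEADER) // 8 * 8  # non-final fragments: multiple of 8
    frags, offset = [], 0
    while payload_len - offset > max_chunk:
        frags.append((offset // 8, max_chunk, True))
        offset += max_chunk
    frags.append((offset // 8, payload_len - offset, False))
    return frags

# A 4,000-byte payload over a 1,500-byte MTU link:
print(fragment(4000, 1500))  # [(0, 1480, True), (185, 1480, True), (370, 1040, False)]
```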
    <div>
      <h3>Avoid fragmentation</h3>
      <a href="#avoid-fragmentation">
        
      </a>
    </div>
    <p>One might think that it's fine to build applications that transmit very big packets and rely on routers to perform the IP fragmentation. This is not a good idea. The problems with this approach were first discussed by <a href="http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-87-3.pdf">Kent and Mogul</a> in 1987. Here are a couple of highlights:</p><ul><li><p>To successfully reassemble a packet, all fragments must be delivered. No fragment can become corrupt or get lost in-flight. There simply is no way to notify the other party about missing fragments!</p></li><li><p>The last fragment will almost never have the optimal size. For large transfers this means a significant part of the traffic will be composed of suboptimal short datagrams - a waste of precious router resources.</p></li><li><p>Before reassembly, a host must hold the partial fragment datagrams in memory. This opens an opportunity for memory exhaustion attacks.</p></li><li><p>Subsequent fragments lack the higher-layer header: the TCP or UDP header is present only in the first fragment. This makes it impossible for firewalls to filter fragment datagrams based on criteria like source or destination ports.</p></li></ul><p>A more elaborate description of IP fragmentation problems can be found in these articles by Geoff Huston:</p><ul><li><p><a href="https://blog.apnic.net/2016/01/28/evaluating-ipv4-and-ipv6-packet-frangmentation/">Evaluating IPv4 and IPv6 packet fragmentation</a></p></li><li><p><a href="https://blog.apnic.net/2016/05/19/fragmenting-ipv6/">Fragmenting IPv6</a></p></li></ul>
    <div>
      <h3>Don't fragment - ICMP Packet too big</h3>
      <a href="#dont-fragment-icmp-packet-too-big">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5fsgW6ELxQUUQUY7fkrloE/7325c15f203b2058b95da50dac776113/df.jpg" />
            
            </figure><p>Image by <a href="https://blog.apnic.net/2016/01/28/evaluating-ipv4-and-ipv6-packet-frangmentation/">Geoff Huston</a>, reproduced with permission</p><p>A solution to these problems was included in the IPv4 protocol. A sender can set the DF (Don't Fragment) flag in the IP header, asking intermediate routers never to perform fragmentation of a packet. Instead, a router with a link having a smaller MTU will send an ICMP message "backward" and inform the sender to reduce the MTU for this connection.</p><p>The TCP protocol always sets the DF flag. The network stack looks carefully for incoming "Packet too big"<a href="#fn3">[3]</a> ICMP messages and keeps track of the "path MTU" characteristic for every connection<a href="#fn4">[4]</a>. This technique is called "path MTU discovery", and it is most commonly used for TCP, although it can also be applied to other IP-based protocols. Being able to deliver the ICMP "Packet too big" messages is critical in keeping the TCP stack working optimally.</p>
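<p>The discovery loop itself is simple: send with DF set, and whenever a hop reports "Packet too big", lower the estimate to the MTU carried in the ICMP message and retry. A toy simulation of that convergence (the link MTUs below are made-up values):</p>

```python
def discover_path_mtu(link_mtus, initial=1500):
    """Simulate path MTU discovery with the DF bit set: send at the
    current estimate; the first hop whose link is smaller answers
    ICMP 'Packet too big' carrying its MTU, and the sender retries
    with the reduced size until the packet fits end-to-end."""
    mtu = initial
    while True:
        for link in link_mtus:
            if mtu > link:   # would need fragmentation; DF forbids it
                mtu = link   # the ICMP message tells us this hop's MTU
                break
        else:
            return mtu       # made it through every hop

print(discover_path_mtu([1500, 1400, 9000, 1280]))  # 1280
```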
    <div>
      <h3>How the internet actually works</h3>
      <a href="#how-the-internet-actually-works">
        
      </a>
    </div>
    <p>In a perfect world, internet-connected devices would cooperate and correctly handle fragment datagrams and the associated ICMP packets. In reality though, IP fragments and ICMP packets are very often filtered out.</p><p>This is because the modern internet is much more complex than anticipated 36 years ago. Today, basically nobody is plugged directly into the public internet.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1pgqc0JOIId0GpmtbKoTc7/dbf0e2a316b29cbbdee57a678c1cc495/thecloud2.png" />
            
            </figure><p>Customer devices connect through home routers which do NAT (<a href="https://en.wikipedia.org/wiki/Network_address_translation">Network Address Translation</a>) and usually enforce firewall rules. Increasingly often there is more than one NAT installation on the packet path (e.g. <a href="https://en.wikipedia.org/wiki/Carrier-grade_NAT">carrier-grade NAT</a>). Then, the packets hit the ISP infrastructure where there are ISP "middle boxes". They perform all manner of weird things on the traffic: enforce plan caps, throttle connections, perform logging, <a href="https://www.cloudflare.com/learning/security/global-dns-hijacking-threat/">hijack DNS requests</a>, implement government-mandated web site bans, force transparent caching or arguably "optimize" the traffic in some other magical way. The middle boxes are used especially by mobile telcos.</p><p>Similarly, there are often multiple layers between a server and the public internet. Service providers sometimes use <a href="https://en.wikipedia.org/wiki/Anycast#Internet_Protocol_Version_4">Anycast BGP routing</a>. That is: they handle the same IP ranges from multiple physical locations around the world. Within a datacenter on the other hand it's increasingly popular to use ECMP <a href="https://en.wikipedia.org/wiki/Equal-cost_multi-path_routing">Equal Cost Multi Path</a> for load balancing.</p><p>Each of these layers between a client and server can cause a Path MTU problem. Allow me to illustrate this with four scenarios.</p>
    <div>
      <h4>1. Client -&gt; Server DF+ / ICMP</h4>
      <a href="#1-client-server-df-icmp">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Cz9EAKTmEqN1aqJzKXJSF/2ee1ffefb8509d5c8acc92ac5ed4944c/client-dfp.png" />
            
            </figure><p>In the first scenario, a client uploads some data to the server using TCP so the DF flag is set on all of the packets. If the client fails to predict an appropriate MTU, an intermediate router will drop the big packets and send an ICMP “Packet too big” notification back to the client. These ICMP packets might get dropped by misconfigured customer NAT devices or ISP middle boxes.</p><p>According to the <a href="https://www.nlnetlabs.nl/downloads/publications/pmtu-black-holes-msc-thesis.pdf">paper by Maikel de Boer and Jeffrey Bosma</a> from 2012, around 5% of IPv4 and 1% of IPv6 hosts block inbound ICMP packets.</p><p>My experience confirms this. ICMP messages are indeed often dropped for perceived security advantages, but this is relatively easy to fix. A bigger issue is with certain mobile ISPs with weird middle boxes. These often completely ignore ICMP and perform very aggressive connection rewriting. For example, Orange Polska not only ignores inbound "Packet too big" ICMP messages, but also rewrites the connection state and <a href="http://lartc.org/howto/lartc.cookbook.mtu-mss.html">clamps the MSS</a> to a non-negotiable 1344 bytes.</p>
    <div>
      <h4>2. Client -&gt; Server DF- / fragmentation</h4>
      <a href="#2-client-server-df-fragmentation">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4cc7LrXd6JM9lt0KWGe0m1/0a045a4d544e9f584f7a78e5a8da827b/client-dfm.png" />
            
            </figure><p>In the next scenario, a client uploads some data with a protocol other than TCP, which has the DF flag cleared. For example, this might be a user playing a game using UDP, or having a voice call. The big outbound packets might get fragmented at some point in the path.</p><p>We can emulate this by launching <code>ping</code> with a large payload size:</p>
            <pre><code>$ ping -s 2048 facebook.com</code></pre>
            <p>This particular <code>ping</code> will fail with payloads bigger than 1472 bytes. Any larger size will get fragmented and won’t get delivered properly. There are multiple reasons why servers might mishandle fragments, but one popular problem is the use of ECMP load balancing. Due to the ECMP hashing, the first datagram containing a protocol header is likely to be load-balanced to a different server than the rest of the fragments, preventing reassembly.</p><p>For a more detailed discussion of this issue, see:</p><ul><li><p>Our <a href="/path-mtu-discovery-in-practice/">previous write-up on ECMP</a>.</p></li><li><p>How Google attempts to solve ECMP fragmentation issues with <a href="https://research.google.com/pubs/archive/44824.pdf">Maglev L4 Load balancer</a>.</p></li></ul><p>Furthermore, server and router misconfiguration is a significant issue. According to <a href="https://tools.ietf.org/html/rfc7872">RFC7872</a>, between 30% and 55% of servers drop IPv6 datagrams containing a fragmentation header.</p>
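<p>A toy model of why ECMP breaks reassembly (the hash below is illustrative, not any real router's): only the first fragment carries the UDP header, so subsequent fragments can only be hashed on fewer fields and may land on a different server:</p>

```python
# Toy illustration (not a real router hash) of ECMP vs. fragments:
# only the first fragment carries the UDP header, so later fragments
# hash on the 3-tuple alone and can pick a different backend.

import zlib

SERVERS = 4

def ecmp_pick(src, dst, proto, sport=None, dport=None):
    """Pick a backend by hashing whatever header fields are visible."""
    key = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
    return zlib.crc32(key) % SERVERS

# First fragment: UDP header visible, full 5-tuple is hashed.
first = ecmp_pick("192.0.2.1", "198.51.100.7", 17, sport=40000, dport=53)
# Later fragments: no UDP header, only the 3-tuple is visible.
later = ecmp_pick("192.0.2.1", "198.51.100.7", 17)

# The two hash keys differ, so the picks frequently disagree and the
# server holding the first fragment never sees the rest.
print(first, later)
```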
    <div>
      <h4>3. Server -&gt; Client DF+ / ICMP</h4>
      <a href="#3-server-client-df-icmp">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/UtBkyENFMejLCiBYfwpNI/be2227af40a62f1b8764106d4eecaf64/server-dfp.png" />
            
            </figure><p>The next scenario is about a client downloading some data over TCP. When the server fails to predict the correct MTU, it should receive an ICMP “Packet too big” message. Easy, right?</p><p>Sadly, it's not, again due to ECMP routing. The ICMP message will most likely get delivered to the wrong server - the 5-tuple hash of the ICMP packet will not match the 5-tuple hash of the problematic connection. We <a href="/path-mtu-discovery-in-practice/">wrote about this in the past</a>, and developed a simple userspace daemon to solve it. It works by broadcasting the inbound ICMP “Packet too big” notification to all the ECMP servers, hoping that the one with the problematic connection will see it.</p><p>Additionally, due to Anycast routing, the ICMP might be delivered to the wrong datacenter altogether! Internet routing is often asymmetric and the best path from an intermediate router might direct the ICMP packets to the wrong place.</p><p>Missing ICMP “Packet too big” notifications can result in connections stalling and timing out. This is often called a <a href="https://en.wikipedia.org/wiki/Path_MTU_Discovery#Problems_with_PMTUD">PMTU blackhole</a>. To cope with this pessimistic case, Linux implements a workaround: MTU Probing (<a href="http://www.ietf.org/rfc/rfc4821.txt">RFC4821</a>). MTU Probing tries to automatically identify packets dropped due to the wrong MTU, and uses heuristics to tune it. This feature is controlled via a sysctl:</p>
            <pre><code>$ echo 1 &gt; /proc/sys/net/ipv4/tcp_mtu_probing</code></pre>
            <p>But MTU probing is not without its own issues. First, it tends to miscategorize congestion-related packet loss as MTU issues. Long-running connections tend to end up with a reduced MTU. Second, Linux does not implement MTU Probing for IPv6.</p>
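<p>For a feel of what the broadcast daemon mentioned above has to handle, here is a hedged sketch (not Cloudflare's actual implementation) of pulling the next-hop MTU out of an IPv4 ICMP "Fragmentation Needed" message, whose layout is defined by RFC 1191:</p>

```python
# Sketch: extracting the next-hop MTU from an IPv4 ICMP message. For
# Destination Unreachable / Fragmentation Needed (type 3, code 4),
# RFC 1191 places the next-hop MTU in bytes 6-7 of the ICMP header.

import struct

def parse_frag_needed(icmp):
    """Return the next-hop MTU if this is a Fragmentation Needed message."""
    icmp_type, code, _checksum, _unused, mtu = struct.unpack("!BBHHH", icmp[:8])
    if icmp_type == 3 and code == 4:
        return mtu
    return None

# A hand-crafted header claiming the next hop only takes 1400-byte
# packets (checksum left as zero for the illustration).
sample = struct.pack("!BBHHH", 3, 4, 0, 0, 1400)
print(parse_frag_needed(sample))  # → 1400
```

A real handler would also parse the embedded original IP header that follows, to learn which connection the message is about.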
    <div>
      <h4>4. Server -&gt; Client DF- / fragmentation</h4>
      <a href="#4-server-client-df-fragmentation">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5FLqYyctRkLrYYTyDGXcbP/2b79de43fe7add8866faf603b872f975/server-dfm.png" />
            
            </figure><p>Finally, there is a situation where the server sends big packets using a non-TCP protocol with the DF bit clear. In this scenario, the big packets will get fragmented on the path to the client. This situation is best illustrated with big DNS responses. Here are two DNS requests that will generate large responses and be delivered to the client as multiple IP fragments:</p>
            <pre><code>$ dig +notcp +dnssec DNSKEY org @199.19.56.1
$ dig +notcp +dnssec DNSKEY org @2001:500:f::1</code></pre>
            <p>These requests might fail due to the already mentioned misconfigured <a href="https://en.wikipedia.org/wiki/Customer-premises_equipment">home routers</a>, broken NAT, broken ISP installations, or too restrictive firewall settings.</p><p>According to <a href="https://www.nlnetlabs.nl/downloads/publications/pmtu-black-holes-msc-thesis.pdf">Boer and Bosma</a>, around 6% of IPv4 and 10% of IPv6 hosts block inbound fragment datagrams.</p><p>Here are some links with more information about the specific fragmentation issues affecting <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a>:</p><ul><li><p><a href="https://www.dns-oarc.net/oarc/services/replysizetest">DNS-OARC Reply Size Test</a></p></li><li><p><a href="http://www.potaroo.net/ispcol/2017-08/xtn-hdrs.html">IPv6, Large UDP Packets and the DNS</a></p></li></ul>
    <div>
      <h4>Yet the internet still works!</h4>
      <a href="#yet-the-internet-still-works">
        
      </a>
    </div>
    <p>With all these things going wrong, how does the internet still manage to work?</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6cXkQdo8Tuq03pgrmUGLoZ/6c496c142b61f7708631f8cc0973c3bf/nic.jpg" />
            
            </figure><p>CC BY-SA 3.0, <a href="https://commons.wikimedia.org/w/index.php?curid=122201">source: Wikipedia</a></p><p>This is mainly due to the success of <a href="https://en.wikipedia.org/wiki/Ethernet_frame">Ethernet</a>. The great majority of the links in the public internet are Ethernet (or derived from it) and support the MTU of 1500 bytes.</p><p>If you blindly assume the MTU of 1500<a href="#fn5">[5]</a>, you will be surprised how often it will work just fine. The internet keeps on working mostly because we are all using an MTU of 1500 and rarely need to do IP fragmentation and send ICMP messages.</p><p>This stops working in unusual setups with links that have a non-standard MTU. VPNs and other network tunnel software must be careful to ensure that the fragments and ICMP messages are working fine.</p><p>This is especially visible in the IPv6 world, where many users connect through tunnels. Having a healthy passage of ICMP in both directions is very important, especially since fragmentation in IPv6 basically doesn't work (we cited two sources claiming that between 10% and 55% of IPv6 hosts drop datagrams containing the IPv6 Fragment header).</p><p>Since the Path MTU issues in IPv6 are so common, many IPv6 servers clamp the Path MTU down to the protocol-mandated minimum of 1280 bytes. This approach trades a bit of performance for reliability.</p>
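<p>The byte budget behind the 1280-byte clamp works out as follows (a sketch; the 1232-byte figure is the UDP payload room that DNS software commonly advertises as its EDNS buffer size, which is not a number from this post):</p>

```python
# Sketch of the byte budget when clamping to the IPv6 minimum MTU:
# what's left for TCP payload, and for UDP payload.

IPV6_MIN_MTU = 1280
IPV6_HEADER = 40
TCP_HEADER = 20
UDP_HEADER = 8

print(IPV6_MIN_MTU - IPV6_HEADER - TCP_HEADER)  # TCP MSS → 1220
print(IPV6_MIN_MTU - IPV6_HEADER - UDP_HEADER)  # max UDP payload → 1232
```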
    <div>
      <h3>Online ICMP blackhole checker</h3>
      <a href="#online-icmp-blackhole-checker">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6C5lYgpMbYOFtDd96gWWra/f3650ccf76ba161d577d29e6e40ea4c5/Screen-Shot-2017-08-16-at-2.11.05-AM-1.png" />
            
            </figure><p>To help explore and debug these issues, we built an online checker. You can find two versions of the test:</p><ul><li><p>IPv4 version: <a href="http://icmpcheck.popcount.org">http://icmpcheck.popcount.org</a></p></li><li><p>IPv6 version: <a href="http://icmpcheckv6.popcount.org">http://icmpcheckv6.popcount.org</a></p></li></ul><p>These sites launch two tests:</p><ul><li><p>The first test will deliver ICMP messages to your computer, with the intention of reducing the Path MTU to a laughably small value.</p></li><li><p>The second test will send fragment datagrams back to you.</p></li></ul><p>Receiving a "pass" in both these tests should give you a reasonable assurance that the internet on your side of the cable is behaving well.</p><p>It's also easy to run the tests from the command line, in case you want to run them on a server:</p>
            <pre><code>perl -e "print 'packettoolongyaithuji6reeNab4XahChaeRah1diej4' x 180" &gt; payload.bin
curl -v -s http://icmpcheck.popcount.org/icmp --data @payload.bin
curl -v -s http://icmpcheckv6.popcount.org/icmp --data @payload.bin</code></pre>
            <p>This should reduce the path MTU to our server to 905 bytes. You can verify this by looking into the routing cache table. On Linux you do this with:</p>
            <pre><code>ip route get `dig +short icmpcheck.popcount.org`</code></pre>
            <p>It's possible to clear the routing cache on Linux:</p>
            <pre><code>ip route flush cache to `dig +short icmpcheck.popcount.org`</code></pre>
            <p>The second test verifies if fragments are properly delivered to the client:</p>
            <pre><code>curl -v -s http://icmpcheck.popcount.org/frag -o /dev/null
curl -v -s http://icmpcheckv6.popcount.org/frag -o /dev/null</code></pre>
            
    <div>
      <h3>Summary</h3>
      <a href="#summary">
        
      </a>
    </div>
    <p>In this blog post we described the problems with detecting Path MTU values in the internet. ICMP and fragment datagrams are often blocked on both sides of the connection. Clients can encounter misconfigured firewalls and NAT devices, or use ISPs which aggressively intercept connections. Clients also often use VPNs or IPv6 tunnels which, if misconfigured, can cause path MTU issues.</p><p>Servers on the other hand increasingly often rely on Anycast or ECMP. Both of these things, as well as router and firewall misconfiguration, are often a cause of ICMP and fragment datagrams being dropped.</p><p>Finally, we hope the online test is useful and can give you more insight into the inner workings of your networks. The test pages include examples of tcpdump syntax that can help you dig deeper. Happy network debugging!</p><p><i>Is fixing fragmentation issues for 10% of the internet exciting? We are hiring system engineers of all stripes, Golang programmers, C wranglers, and interns in multiple locations! </i><a href="https://www.cloudflare.com/careers"><i>Join us</i></a><i> in San Francisco, London, Austin, Champaign and Warsaw.</i></p><hr /><ol><li><p>In IPv6 the "forward" fragmentation works slightly differently than in IPv4. The intermediate routers are prohibited from fragmenting the packets, but the source can still do it. This is often confusing - a host might be asked to fragment a packet that it transmitted in the past. This makes little sense for stateless protocols like DNS. <a href="#fnref1">↩︎</a></p></li><li><p>On a side note, there also exists a "minimum transmission unit"! In commonly used Ethernet framing, each transmitted datagram must have at least 64 bytes on Layer 2. This translates to 22 bytes of payload at the UDP layer and 10 bytes at the TCP layer. Multiple implementations used to leak uninitialized memory on shorter packets!
<a href="#fnref2">↩︎</a></p></li><li><p>Strictly speaking, in IPv4 the ICMP packet is named "Destination Unreachable, Fragmentation Needed and Don't Fragment was Set". But I find the IPv6 ICMP error description "Packet too big" much clearer. <a href="#fnref3">↩︎</a></p></li><li><p>As a hint, the TCP stack also includes a maximum allowed "MSS" value in SYN packets (the MSS is basically the MTU value reduced by the size of the IP and TCP headers). This allows the hosts to know the MTU on <i>their</i> links. Notice: this doesn't say what the MTU is on the dozens of internet links between the two hosts! <a href="#fnref4">↩︎</a></p></li><li><p>Let's err on the safe side. A better MTU is 1492, to accommodate DSL and PPPoE connections. <a href="#fnref5">↩︎</a></p></li></ol> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[TCP]]></category>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[DNS]]></category>
            <guid isPermaLink="false">46VhwoWUtsUhF1ekyKZhYo</guid>
            <dc:creator>Marek Majkowski</dc:creator>
        </item>
        <item>
            <title><![CDATA[Changing Internet Standards to Build A Secure Internet]]></title>
            <link>https://blog.cloudflare.com/dk-dnssec/</link>
            <pubDate>Wed, 12 Apr 2017 15:06:07 GMT</pubDate>
            <description><![CDATA[ We’ve been working with registrars and registries in the IETF on making DNSSEC easier for domain owners, and over the next two weeks we’ll be starting out by enabling DNSSEC automatically for .dk domains. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We’ve been working with <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">registrars</a> and registries in the IETF on making DNSSEC easier for domain owners, and over the next two weeks we’ll be starting out by enabling DNSSEC automatically for .dk domains.</p>
    <div>
      <h3>DNSSEC: A Primer</h3>
      <a href="#dnssec-a-primer">
        
      </a>
    </div>
    <p>Before we get into the details of how we've improved the DNSSEC experience, we should explain why DNSSEC is important and the function it plays in keeping the web safe.</p><p>DNSSEC’s role is to verify the integrity of DNS answers. When DNS was written in the early 1980s, only a few researchers and academics were on the internet. They all knew and trusted each other, and couldn’t imagine a world in which someone malicious would try to operate online. As a result, DNS relies on trust to operate. When a client asks for the address of a hostname like <a href="http://www.cloudflare.com">www.cloudflare.com</a>, without DNSSEC it will trust basically any server that returns the response, even if it wasn’t the same server it originally asked. With DNSSEC, every DNS answer is signed so clients can verify answers haven’t been manipulated in transit.</p>
    <div>
      <h3>The Trouble With DNSSEC</h3>
      <a href="#the-trouble-with-dnssec">
        
      </a>
    </div>
    <p>If DNSSEC is so important, why do so few domains support it? First, for a domain to have the opportunity to enable DNSSEC, not only do its DNS provider, its registrar and its registry all have to support DNSSEC, all three of those parties have to also support the same encryption algorithms.</p><p>For domains that do have the ability to enable DNSSEC, DNSSEC is just not easy enough -- domain owners need to first enable DNSSEC with their DNS provider, and then copy and paste some values (called a DS record) from their DNS provider’s dashboard to their registrar’s dashboard, making sure not to miss any characters when copying and pasting, because that would cut off traffic to their whole domain. What we need here is automation.</p>
    <div>
      <h3>Changing an outdated model</h3>
      <a href="#changing-an-outdated-model">
        
      </a>
    </div>
    <p>It's been Cloudflare's long-standing statement that as the DNS operator, we would like to update the DS automatically for a user, but <a href="/updating-the-dns-registration-model-to-keep-pace-with-todays-internet/">DNS operates on a legacy model</a> where the registrar is able to talk directly to the registry, but the DNS operator (Cloudflare) is left completely out of that model.</p><p>Here at Cloudflare, we’re determined it’s time to change that outdated system. We have <a href="https://tools.ietf.org/html/draft-ietf-regext-dnsoperator-to-rrr-protocol">published an Internet Draft</a> to propose a new model for how DNS operators, registries and registrars could operate and communicate to make specific user-authorized changes to domains. It’s important to point out that the IETF works on the principle of rough consensus and running code. Cloudflare, in conjunction with the .dk registry, has produced running code, and we’re very close to getting consensus. That Internet Draft is now making its way through the Standards Track within the IETF and on its way to becoming a fully-fledged RFC.</p>
    <div>
      <h3>How .dk and Cloudflare are working together</h3>
      <a href="#how-dk-and-cloudflare-are-working-together">
        
      </a>
    </div>
    <p>The ccTLD operator for Denmark (i.e. the .dk domains) has also realized that the model is outdated. They provide their users (and the operators of nameservers associated with .dk domains) a programmatic way of installing and updating DS records. This is exactly what operators like Cloudflare need.</p><p>Cloudflare has been testing their API and is now ready to kick off an automated, clean, safe and reliable way of updating DS records for our .dk customers. Over the next two weeks we will enable DNSSEC for .dk domains that started enabling it in the past but haven’t finished the process.</p><p>Of course, for Cloudflare, there’s no surprise that Denmark is the home to forward thinkers like this!</p>
    <div>
      <h3>Onwards!</h3>
      <a href="#onwards">
        
      </a>
    </div>
    <p>If you have a .dk domain on Cloudflare, you really don’t need to do anything except flip the switch enabling DNSSEC within the Cloudflare login console before we do the migration on Tuesday, April 18, 2017.</p><p>We are excited to work with the .dk registry on this first step to making DNSSEC automatic, and are looking for other TLDs that want to make DNSSEC easy to use.</p>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">5laIvCdz888qNvdBTd86rl</guid>
            <dc:creator>Dani Grant</dc:creator>
        </item>
        <item>
            <title><![CDATA[Economical With The Truth: Making DNSSEC Answers Cheap]]></title>
            <link>https://blog.cloudflare.com/black-lies/</link>
            <pubDate>Fri, 24 Jun 2016 16:31:10 GMT</pubDate>
            <description><![CDATA[ We launched DNSSEC late last year and are already signing 56.9 billion DNS record sets per day. At this scale, we care a great deal about compute cost. ]]></description>
            <content:encoded><![CDATA[ <p>We launched DNSSEC late last year and are already signing 56.9 billion DNS record sets per day. At this scale, we care a great deal about compute cost. One of the ways we save CPU cycles is our unique implementation of negative answers in DNSSEC.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/39pYW2SBON2GibSMDXYXr9/1f9e5f863b435eaa642f67b810acbfbf/217591669_c31a16e301_o.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by-sa/2.0/">CC BY-SA 2.0</a> <a href="https://www.flickr.com/photos/88478656@N00/217591669">image</a> by <a href="https://www.flickr.com/photos/chris-short/">Chris Short</a></p><p>I will briefly explain a few concepts you need to know about <a href="https://www.cloudflare.com/learning/dns/dnssec/ecdsa-and-dnssec/">DNSSEC</a> and negative answers, and then we will dive into how CloudFlare saves on compute when asked for names that don’t exist.</p>
    <div>
      <h3>What You Need To Know: DNSSEC Edition</h3>
      <a href="#what-you-need-to-know-dnssec-edition">
        
      </a>
    </div>
    <p>Here’s a quick summary of DNSSEC:</p><p>This is an unsigned DNS answer (unsigned == no DNSSEC):</p>
            <pre><code>cloudflare.com.		299	IN	A	198.41.214.162
cloudflare.com.		299	IN	A	198.41.215.162</code></pre>
            <p>This is an answer with DNSSEC:</p>
            <pre><code>cloudflare.com.		299	IN	A	198.41.214.162
cloudflare.com.		299	IN	A	198.41.215.162
cloudflare.com.		299	IN	RRSIG	A 13 2 300 20160311145051 20160309125051 35273     cloudflare.com. RqRna0qkih8cuki++YbFOkJi0DGeNpCMYDzlBuG88LWqx+Aaq8x3kQZX TzMTpFRs6K0na9NCUg412bOD4LH3EQ==</code></pre>
            <p>Answers with DNSSEC contain a signature for every record type that is returned. (In this example, only A records are returned so there is only one signature.) The signatures allow DNS resolvers to validate the records returned and prevent on-path attackers from intercepting and changing the answers.</p>
    <div>
      <h3>What You Need To Know: Negative Answer Edition</h3>
      <a href="#what-you-need-to-know-negative-answer-edition">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/EDDHJnSfGNBWos6pw07VT/cbc431245dbf7cb97dc788ea6a909fcb/250521158_0c5de0ef97_z.jpg" />
            
            </figure><p>There are two types of negative answers. The first is <code>NXDOMAIN</code>, which means that the name asked for does not exist. An example of this is a query asking for <code>missing.cloudflare.com</code>. <code>missing.cloudflare.com</code> doesn’t exist at all.</p><p>The second type is <code>NODATA</code>, which means that the name does exist, just not in the requested type. An example of this would be asking for the <code>MX</code> record of <code>blog.cloudflare.com</code>. There are <code>A</code> records for <code>blog.cloudflare.com</code> but no <code>MX</code> records so the appropriate response is <code>NODATA</code>.</p>
    <div>
      <h3>What Goes Into An <code>NXDOMAIN</code> With DNSSEC</h3>
      <a href="#what-goes-into-an-nxdomain-with-dnssec">
        
      </a>
    </div>
    <p>To see what gets returned in a negative <code>NXDOMAIN</code> answer, let’s look at the response for a query for <code>bogus.ietf.org</code>.</p><p>The first record that has to be returned in a negative answer with DNSSEC is an SOA, just like in an unsigned negative answer. The SOA contains some metadata about the zone and lets the recursor know how long to cache the negative answer for.</p>
            <pre><code>ietf.org.	1179	IN	SOA	ns0.amsl.com. glen.amsl.com. 1200000325 1800 1800 604800 1800</code></pre>
            <p>Because the domain is signed with DNSSEC, the signature for the <code>SOA</code> is also returned:</p>
            <pre><code>ietf.org.	1179	IN	RRSIG	SOA 5 2 1800 20170308083354 20160308073501 40452 ietf.org. S0gIjTnQGA6TyIBjCeBXL4ip8aEQEgg2y+kCQ3sLtFa3oNy9vj9kj4aP 8EVu4oIexr8X/i9L8Oj5ec4HOrQoYsMGObRUG0FGT0MEbxepi+wWrfed vD/3mq8KZg/pj6TQAKebeSQGkmb8y9eP0PdWdUi6EatH9ZY/tsoiKyqg U4vtq9sWZ/4mH3xfhK9RBI4M7XIXsPX+biZoik6aOt4zSWR5WDq27pXI 0l+BLzZb72C7McT4PlBiF+U86OngBlGxVBnILyW2aUisi2LY6KeO5AmK WNT0xHWe5+JtPD5PgmSm46YZ8jMP5mH4hSYr76jqwvlCtXvq8XgYQU/P QyuCpQ==</code></pre>
            <p>The next part of the negative answer in DNSSEC is a record type called <code>NSEC</code>. The <code>NSEC</code> record returns the previous and next name in the zone, which proves to the recursor that the queried name cannot possibly exist, because nothing exists between the two names listed in the NSEC record.</p>
            <pre><code>www.apps.ietf.org.	1062	IN	NSEC	cloudflare-verify.ietf.org. A RRSIG NSEC</code></pre>
            <p>This <code>NSEC</code> record above tells you that <code>bogus.ietf.org</code> does not exist because no names exist canonically between <code>www.apps.ietf.org</code> and <code>cloudflare-verify.ietf.org</code>. Of course, this record also has a signature contained in the answer:</p>
            <pre><code>www.apps.ietf.org.	1062	IN	RRSIG	NSEC 5 4 1800 20170308083322 20160308073501 40452 ietf.org. NxmjhCkTtoiolJUow/OreeBRxTtf2AnIPM/r2p7oS/hNeOdFI9tpgGQY g0lTOYjcNNoIoDB/r56Kd+5wtuaKT+xsYiZ4K413I+cmrNQ+6oLT+Mz6 Kfzvo/TcrJD99PVAYIN1MwzO42od/vi/juGkuKJVcCzrBKNHCZqu7clu mU3DEqbQQT2O8dYIUjLlfom1iYtZZrfuhB6FCYFTRd3h8OLfMhXtt8f5 8Q/XvjakiLqov1blZAK229I2qgUYEhd77n2pXV6SJuOKcSjZiQsGJeaM wIotSKa8EttJELkpNAUkN9uXfhU+WjouS1qzgyWwbf2hdgsBntKP9his 9MfJNA==</code></pre>
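<p>The gap check a validating resolver performs can be sketched as follows, using a simplified version of the RFC 4034 canonical ordering (compare labels right to left) and ignoring the wrap-around case at the end of the zone:</p>

```python
# Sketch of an NSEC coverage check: does the queried name sort between
# the NSEC owner name and its "next" name? Uses simplified canonical
# ordering (labels compared right to left, case-insensitively).

def canonical_key(name):
    return tuple(reversed(name.lower().rstrip(".").split(".")))

def nsec_covers(owner, next_name, qname):
    """True if qname falls strictly inside the gap this NSEC spans."""
    return canonical_key(owner) < canonical_key(qname) < canonical_key(next_name)

# The NSEC record above proves bogus.ietf.org does not exist:
print(nsec_covers("www.apps.ietf.org", "cloudflare-verify.ietf.org",
                  "bogus.ietf.org"))  # → True
```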
            <p>A second NSEC record is also returned to prove that there is no wildcard that would have covered <code>bogus.ietf.org</code>:</p>
            <pre><code>ietf.org.	1062	IN	NSEC	ietf1._domainkey.ietf.org. A NS SOA MX TXT AAAA RRSIG NSEC DNSKEY SPF</code></pre>
            <p>This record above tells you that a wildcard (<code>*.ietf.org</code>), had it existed, would have sorted between those two names. Because there is no wildcard record at <code>*.ietf.org</code>, as proven by this NSEC record, the DNS resolver knows that really nothing should have been returned for <code>bogus.ietf.org</code>. This <code>NSEC</code> record also has a signature:</p>
            <pre><code>ietf.org.	1062	IN	RRSIG	NSEC 5 2 1800 20170308083303 20160308073501 40452 ietf.org. homg5NrZIKo0tR+aEp0MVYYjT7J/KGTKP46bJ8eeetbq4KqNvLKJ5Yig ve4RSWFYrSARAmbi3GIFW00P/dFCzDNVlMWYRbcFUt5NfYRJxg25jy95 yHNmInwDUnttmzKuBezdVVvRLJY3qSM7S3VfI/b7n6++ODUFcsL88uNB V6bRO6FOksgE1/jUrtz6/lEKmodWWI2goFPGgmgihqLR8ldv0Dv7k9vy Ao1uunP6kDQEj+omkICFHaT/DBSSYq59DVeMAAcfDq2ssbr4p8hUoXiB tNlJWEubMnHi7YmLSgby+m8b97+8b6qPe8W478gAiggsNjc2gQSKOOXH EejOSA==</code></pre>
            <p>All in all, the negative answer for <code>bogus.ietf.org</code> contains an <code>SOA + SOA RRSIG + (2) NSEC + (2) NSEC RRSIG</code>. It is 6 records in total, returning an answer that is 1095 bytes (this is a large DNS answer).</p>
    <div>
      <h3>Zone Walking</h3>
      <a href="#zone-walking">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/IvO8IS03cvULjAZJGiGNh/493fdb6facdde177c88222bd3226f535/4950628049_030625b5e1_z.jpg" />
            
            </figure><p>What you may have noticed is that because the negative answer returns the previous and next name, you can keep asking for next names and essentially “walk” the zone until you learn every single name contained in it.</p><p>For example, if you ask for the <code>NSEC</code> on <code>ietf.org</code>, you will get back the first name in the zone, <code>ietf1._domainkey.ietf.org</code>:</p>
            <pre><code>ietf.org.		1799	IN	NSEC 	ietf1._domainkey.ietf.org.  A NS SOA MX TXT AAAA RRSIG NSEC DNSKEY SPF</code></pre>
            <p>Then if you ask for the <code>NSEC</code> on <code>ietf1._domainkey.ietf.org</code> you will get the next name in the zone:</p>
            <pre><code>ietf1._domainkey.ietf.org. 1799	IN	NSEC 	apps.ietf.org. TXT RRSIG NSEC</code></pre>
            <p>And you can keep going until you get every name in the zone:</p>
            <pre><code>apps.ietf.org.		1799	IN	NSEC 	mail.apps.ietf.org. MX RRSIG NSEC</code></pre>
            <p>The root zone uses <code>NSEC</code> as well, so you can walk the root to see every TLD:</p><p>The root NSEC:</p>
            <pre><code>.			21599	IN	NSEC 	aaa. NS SOA RRSIG NSEC DNSKEY</code></pre>
            <p><code>.aaa NSEC</code>:</p>
            <pre><code>aaa.			21599	IN	NSEC 	aarp. NS DS RRSIG NSEC</code></pre>
            <p><code>.aarp NSEC</code>:</p>
            <pre><code>aarp.			21599	IN	NSEC	 abb. NS DS RRSIG NSEC</code></pre>
            <p>Zone walking was actually considered a feature of the original design:</p><blockquote><p>The complete <code>NXT</code> chains specified in this document enable a resolver to obtain, by successive queries chaining through <code>NXT</code>s, all of the names in a zone. - <a href="https://www.ietf.org/rfc/rfc2535.txt">RFC2535</a></p></blockquote><p>(<a href="https://www.ietf.org/rfc/rfc2535.txt"><code>NXT</code></a> is the original DNS record type that <code>NSEC</code> was based on)</p><p>However, as you can imagine, this is a terrible idea for some zones. If you could walk the <code>.gov</code> zone, you could learn every US government agency and government agency portal. If you owned a real estate company where every realtor got their own subdomain, a competitor could walk through your zone and find out who all of your realtors are.</p>
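<p>The walk itself can be simulated: each NSEC answer hands you the next owner name, and following the chain until it wraps back to the apex enumerates the zone. The chain below mirrors the ietf.org examples above but is truncated and hypothetical beyond them:</p>

```python
# Simulating a zone walk: follow NSEC "next name" pointers until the
# chain loops back to the zone apex. The zone content is hypothetical.

def walk_zone(apex, next_name):
    """Enumerate every owner name by chaining NSEC next-name pointers."""
    names, current = [apex], next_name[apex]
    while current != apex:
        names.append(current)
        current = next_name[current]
    return names

chain = {
    "ietf.org.": "ietf1._domainkey.ietf.org.",
    "ietf1._domainkey.ietf.org.": "apps.ietf.org.",
    "apps.ietf.org.": "mail.apps.ietf.org.",
    "mail.apps.ietf.org.": "ietf.org.",  # last NSEC wraps back to the apex
}
# → ['ietf.org.', 'ietf1._domainkey.ietf.org.', 'apps.ietf.org.',
#    'mail.apps.ietf.org.']
print(walk_zone("ietf.org.", chain))
```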
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3r4Sa7oIA4l8dHp0Vfb8ZE/600b78872744b8a3af69169d1c3b2dfc/3694621030_76a7e356e8_z.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/kiui/3694621030/in/photolist-6CtU7A-asJTAM-7CPbHo-8EwW4-3eVmjc-hJRKMq-6yqn7i-fHud6f-vkbXQ-dYdajL-Tvem-hhyi-yCJFx-faZG48-4NqD5n-4AznYu-agJiCs-7rHcZi-uctPy-86shBZ-5i6jk8-2FM8ku-4r6pDg-nKT4AD-89gnn7-4raAGu-5GZFu6-ediZYA-4ALrZC-47jTd2-dDUiSp-2UDPN5-dDU83M-89d89Z-dDUhX6-52rBrz-hkLCcd-dDTPvR-9yZg4K-67tzSW-8YUTHK-4YeciD-pdczj5-otQve1-72reNd-dDUoZV-F5Pg-4qFHBC-6yuv4A-54xhoz">image</a> by <a href="https://www.flickr.com/photos/kiui/">KIUI</a></p><p>So the DNS community rallied together and found a solution. They would continue to return previous and next names, but they would hash the outputs. This was defined in an upgrade to <code>NSEC</code> called <a href="https://tools.ietf.org/html/rfc5155"><code>NSEC3</code></a>.</p>
            <pre><code>6rmo7l6664ki2heho7jtih1lea9k6los.icann.org. 3599 IN NSEC3 1 0 5 2C21FAE313005174 6S2J9F2OI56GPVEIH3KBKJGGCL21SKKL A RRSIG</code></pre>
            <p><code>NSEC3</code> was a “close but no cigar” solution to the problem. While it made zone walking harder, it did not make it impossible. Zone walking with <code>NSEC3</code> is still possible with a dictionary attack: an attacker takes a list of the most common hostnames, hashes each of them with the hashing algorithm and parameters listed in the <code>NSEC3</code> record itself, and checks for matches. Even salting the hash does not help, because the salt is also published in the <code>NSEC3</code> record (it is a visible field, along with its length), so the attacker simply includes it in every guess.</p><blockquote><p>The Salt Length field defines the length of the salt in octets, ranging in value from 0 to 255.</p></blockquote><ul><li><p><a href="https://tools.ietf.org/html/rfc5155">RFC5155</a></p></li></ul>
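<p>To make the dictionary attack concrete, here is a sketch (purely illustrative, not production code) of the RFC5155 hash computation in Python, using SHA-1, the only algorithm the RFC defines. Everything the attacker needs (algorithm, iteration count, and salt) is read straight out of the published <code>NSEC3</code> record:</p>

```python
import base64
import hashlib

def nsec3_hash(name, salt_hex, iterations):
    """Compute an RFC 5155 NSEC3 hash (hash algorithm 1, SHA-1)."""
    # DNS wire format: lowercased, length-prefixed labels, terminated
    # by a zero byte for the root label.
    wire = b"".join(
        bytes([len(label)]) + label.lower().encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    salt = bytes.fromhex(salt_hex)
    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):  # re-hash "iterations" additional times
        digest = hashlib.sha1(digest + salt).digest()
    # NSEC3 owner names use base32 with the "extended hex" alphabet
    # (0-9, A-V), conventionally written in lowercase.
    b32 = base64.b32encode(digest).decode("ascii")
    return b32.translate(str.maketrans(
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567",
        "0123456789ABCDEFGHIJKLMNOPQRSTUV",
    )).lower()

def dictionary_attack(candidates, hashed_owners, salt_hex, iterations):
    """Return the candidate names whose NSEC3 hashes appear in the zone."""
    return {
        name for name in candidates
        if nsec3_hash(name, salt_hex, iterations) in hashed_owners
    }
```

<p>Hashing a candidate list with the zone’s advertised parameters and intersecting it with the hashed owner names seen in <code>NSEC3</code> responses recovers the matching plaintext names. As a sanity check, the RFC5155 example parameters (salt <code>aabbccdd</code>, 12 iterations) hash the name <code>example</code> to <code>0p9mhaveqvm6t7vbl5lop2u3t2rp3tom</code>.</p>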
    <div>
      <h3><code>NODATA</code> Responses</h3>
      <a href="#nodata-responses">
        
      </a>
    </div>
    <p>If you recall from above, <code>NODATA</code> is the response from a server when it is asked for a name that exists, but not with the requested type (like an <code>MX</code> record for <code>blog.cloudflare.com</code>). <code>NODATA</code> is similar in output to <code>NXDOMAIN</code>. It still requires the <code>SOA</code>, but it takes only one <code>NSEC</code> record, placed on the queried name itself, whose type bitmap specifies which types do exist on that name.</p><p>For example, if you look for a <code>TXT</code> record on <code>apps.ietf.org</code>, the <code>NSEC</code> record will tell you that while there is no <code>TXT</code> record on <code>apps.ietf.org</code>, there are <code>MX</code>, <code>RRSIG</code> and <code>NSEC</code> records.</p>
            <pre><code>apps.ietf.org.		1799	IN	NSEC 	mail.apps.ietf.org. MX RRSIG NSEC</code></pre>
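<p>A validator’s check of such a <code>NODATA</code> proof is simple. This sketch is simplified (a real validator must also verify the covering <code>RRSIG</code>): it accepts a single <code>NSEC</code> record as proof when the record sits on the queried name and its type bitmap lists neither the queried type nor <code>CNAME</code>, which would have answered the query instead.</p>

```python
def is_valid_nodata_proof(qname, qtype, nsec_owner, nsec_types):
    """Check whether a single NSEC record proves a NODATA answer.

    The NSEC must be owned by the queried name itself, and its type
    bitmap must include neither the queried type nor CNAME.
    """
    return (
        nsec_owner == qname
        and qtype not in nsec_types
        and "CNAME" not in nsec_types
    )
```
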
            
    <div>
      <h3>Problems With Negative Answers</h3>
      <a href="#problems-with-negative-answers">
        
      </a>
    </div>
    <p>There are two problems with negative answers:</p><p>The first is that the authoritative server needs to return the previous and next name. As you’ll see, this is computationally expensive for CloudFlare, and as you’ve already seen, it can leak information about a zone.</p><p>The second is that negative answers require two <code>NSEC</code> records and the two signatures that cover them (or three <code>NSEC3</code> records and three <code>NSEC3</code> signatures) to authenticate the nonexistence of a single name. This means that answers are bigger than they need to be.</p>
    <div>
      <h3>The Trouble with Previous and Next Names</h3>
      <a href="#the-trouble-with-previous-and-next-names">
        
      </a>
    </div>
    <p>CloudFlare has a custom, in-house DNS server written in Go called <a href="/what-weve-been-doing-with-go/">RRDNS</a>. What's unique about RRDNS is that, unlike standard DNS servers, it does not have the concept of a zone file. Instead, it has a <a href="/kyoto-tycoon-secure-replication/">key value store</a> that holds all of the DNS records of all of the domains. When it gets a query for a record, it can just pick out the record that it needs.</p><p>Another unique aspect of CloudFlare's DNS is that a lot of our business logic is handled in the DNS. We often generate DNS answers dynamically, on the fly, so we don't always know what we will respond with before we are asked.</p><p>Traditional negative answers require the authoritative server to return the previous and next name of a missing name. Because CloudFlare does not have a full view of the zone file, we'd have to ask the database to do a sorted search just to figure out the previous and next names. Beyond that, because we generate answers on the fly, we don’t have a reliable way to know what the previous and next name might be, unless we were to precompute every possible option ahead of time.</p><p>One proposed solution to both the previous/next-name problem and the secrecy problem is <a href="https://www.ietf.org/rfc/rfc4470.txt">RFC4470</a>, dubbed 'White Lies'. This RFC proposes that DNS operators make up the previous and next names by randomly generating names that fall canonically just before and just after the requested name.</p><p>White lies is a great solution for blocking zone walking (and it helps us avoid unnecessary database lookups), but it still requires two <code>NSEC</code> records (one for the previous and next name and another for the wildcard) to say one thing, so the answer is still bigger than it needs to be.</p>
    <div>
      <h3>When CloudFlare Lies</h3>
      <a href="#when-cloudflare-lies">
        
      </a>
    </div>
    <p>We decided to take lying in negative answers to its fullest extent. Instead of white lies, we tell black lies.</p><p>For an <code>NXDOMAIN</code>, we always return <code>\000.</code> followed by the missing name as the next name, and because we return an <code>NSEC</code> directly on the missing name, we do not have to return an additional <code>NSEC</code> for the wildcard. This way we only have to return the <code>SOA</code>, <code>SOA RRSIG</code>, <code>NSEC</code> and <code>NSEC RRSIG</code>, and we do not need to search the database or precompute dynamic answers.</p><p>Our negative answers are usually around 300 bytes. For comparison, negative answers for <code>ietf.org</code> (which uses <code>NSEC</code>) and <code>icann.org</code> (which uses <code>NSEC3</code>) are both slightly over 1,000 bytes: three times the size. The reason this matters so much is that the maximum size of a DNS message over UDP without EDNS0 is 512 octets. DNSSEC requires support for messages of at least 1220 octets over UDP, but beyond the advertised limit the client may need to retry over TCP. A good practice is to keep enough headroom that response sizes stay below the fragmentation threshold, even during zone signing key rollover periods when answers carry extra signatures.</p><p><code>NSEC</code>: 1096 bytes</p>
            <pre><code>ietf.org.		1799	IN	SOA	ns0.amsl.com. glen.amsl.com. 1200000317 1800 1800 604800 1800
ietf.org.		1799	IN	RRSIG	SOA 5 2 1800 20170213210533 20160214200831 40452 ietf.org. P8XoJx+SK5nUZAV/IqiJrsoKtP1c+GXmp3FvEOUZPFn1VwW33242LVrJ GMI5HHjMEX07EzOXZyLnQeEvlf2QLxRIQm1wAnE6W4SUp7TgKUZ7NJHP dgLr2gqKYim4CI7ikYj3vK7NgcaSE5jqIZUm7oFxxYO9/YPz4Mx7COw6 XBOMYS2v8VY3DICeJdZsHJnVKlgl8L7/yqrL8qhkSW1yDo3YtB9cZEjB OVk8uRDxK7aHkEnMRz0LODOJ10AngJpg9LrkZ1CO444RhZGgTbwzN9Vq rDyH47Cn3h8ofEOJtYCJvuX5CCzaZDInBsjq9wNAiNBgIQatPkNriR77 hCEHhQ==
ietf.org.		1799	IN	NSEC	ietf1._domainkey.ietf.org. A NS SOA MX TXT AAAA RRSIG NSEC DNSKEY SPF
ietf.org.		1799	IN	RRSIG	NSEC 5 2 1800 20170213210816 20160214200831 40452 ietf.org. B9z/JJs30tkn0DyxVz0zaRlm4HkeNY1TqYmr9rx8rH7kC32PWZ1Fooy6 16qmB33/cvD2wtOCKMnNQPdTG2qUs/RuVxqRPZaQojIVZsy/GYONmlap BptzgOJLP7/HOxgYFgMt5q/91JHfp6Mn0sd218/H86Aa98RCXwUOzZnW bdttjsmbAqONuPQURaGz8ZgGztFmQt5dNeNRaq5Uqdzw738vQjYwppfU 9GSLkT7RCh3kgbNcSaXeuWfFnxG1R2SdlRoDICos+RqdDM+23BHGYkYc /NEBLtjYGxPqYCMe/7lOtWQjtQOkqylAr1r7pSI2NOA9mexa7yTuXH+x o/rzRA==
www.apps.ietf.org.	1799	IN	NSEC	cloudflare-verify.ietf.org. A RRSIG NSEC
www.apps.ietf.org.	1799	IN	RRSIG	NSEC 5 4 1800 20170213210614 20160214200831 40452 ietf.org. U+hEHcTps2IC8VKS61rU3MDZq+U0KG4/oJjIHVYbrWufQ7NdMdnY6hCL OmQtsvuZVRQjWHmowRhMj83JMUagxoZuWTg6GuLPin3c7PkRimfBx7jI wjqORwcuvpBh92A/s/2HXBma3PtDZl2UDLy4z7wdO62rbxGU/LX1jTqY FoJJLJfJ/C+ngVMIE/QVneXSJkAjHV96FSEnreF81V62x9azv3AHo4tl qnoYvRDtK+cR072A5smtWMKDfcIr2fI11TAGIyhR55yAiollPDEz5koj BfMstC/JXVURJMM+1vCPjxvwYzTZN8iICf1AupyyR8BNWxgic5yh1ljH 1AuAVQ==</code></pre>
            <p>Black Lies: 357 bytes</p>
            <pre><code>cloudflare.com.		1799	IN	SOA	ns3.cloudflare.com. dns.cloudflare.com. 2020742566 10000 2400 604800 3600
blog.cloudflare.com.	3599	IN	NSEC	\000.blog.cloudflare.com. RRSIG NSEC
cloudflare.com.		1799	IN	RRSIG	SOA 13 2 86400 20160220230013 20160218210013 35273 cloudflare.com. kgjtJDuuNC/yX8yWQpol4ZUUr8s8yAXZi26KWBI6S3HDtry2t6LnP1ou QK10Ut7DXO/XhyZddRBVj3pIpWYdBQ==
blog.cloudflare.com.	3599	IN	RRSIG	NSEC 13 3 3600 20160220230013 20160218210013 35273 cloudflare.com. 8BKAAS8EXNJbm8DxEI1OOBba8KaiimIuB47mPlteiZf3sVLGN1edsrXE +q+pHaSHEfYG5mHfCBJrbi6b3EoXOw==</code></pre>
            
    <div>
      <h3>DNS Shotgun</h3>
      <a href="#dns-shotgun">
        
      </a>
    </div>
    <p>Our take on <code>NODATA</code> responses is also unique. Traditionally, <code>NODATA</code> responses contain one <code>NSEC</code> record to tell the resolver which types exist on the requested name. This is highly inefficient: we’d have to search the database for all the types that do exist, just to answer that the requested type does not. Remember, that’s not even always possible, because we have dynamic answers that are generated on the fly.</p><p>What we realized was that <code>NSEC</code> is a denial of existence, so what matters in an <code>NSEC</code> record are the missing types, not the present ones. So we set all the types. We say: this name does exist, just not with the one type you asked for.</p><p>For example, if you asked for a <code>TXT</code> record of <code>blog.cloudflare.com</code>, we would say that all the types exist, just not <code>TXT</code>.</p>
            <pre><code>blog.cloudflare.com.	3599	IN	NSEC	\000.blog.cloudflare.com. A WKS HINFO MX AAAA LOC SRV CERT SSHFP IPSECKEY RRSIG NSEC TLSA HIP OPENPGPKEY SPF</code></pre>
            <p>And then if you queried for an <code>MX</code> on <code>blog.cloudflare.com</code>, we would return an answer saying we have every record type, even <code>TXT</code>, just not <code>MX</code>.</p>
            <pre><code>blog.cloudflare.com.	3599	IN	NSEC	\000.blog.cloudflare.com. A WKS HINFO TXT AAAA LOC SRV CERT SSHFP IPSECKEY RRSIG NSEC TLSA HIP OPENPGPKEY SPF</code></pre>
            <p>This saves us a database lookup and avoids leaking any zone information in negative answers. We call this the DNS Shotgun.</p>
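<p>Putting the two tricks together, synthesizing a black-lies <code>NODATA</code> answer needs nothing but the query itself. The sketch below is illustrative, not our production RRDNS code, and its type list is just the subset that appears in the answers shown above:</p>

```python
# Types we claim exist; in practice this would be whatever the server
# supports (this subset mirrors the example answers above).
ALL_TYPES = {
    "A", "WKS", "HINFO", "MX", "TXT", "AAAA", "LOC", "SRV", "CERT",
    "SSHFP", "IPSECKEY", "TLSA", "HIP", "OPENPGPKEY", "SPF",
}

def black_lie_nsec(qname, qtype):
    """Synthesize the NSEC record for a black-lies NODATA answer.

    The owner is the queried name itself, the "next name" is the
    canonically smallest possible successor (a \\000 label prepended),
    and the type bitmap claims every type except the one asked for.
    No database lookup or zone file is required.
    """
    next_name = "\\000." + qname
    types = sorted((ALL_TYPES - {qtype}) | {"RRSIG", "NSEC"})
    return qname, next_name, types
```

<p>For a query for <code>MX</code> on <code>blog.cloudflare.com</code>, this yields an <code>NSEC</code> whose next name is <code>\000.blog.cloudflare.com.</code> and whose bitmap lists every type except <code>MX</code>, matching the shape of the answer above.</p>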
    <div>
      <h3>How Are Black Lies and DNS Shotgun Standards-Compliant?</h3>
      <a href="#how-are-black-lies-and-dns-shotgun-standards-compliant">
        
      </a>
    </div>
    <p>We took a lot of care to ensure CloudFlare’s negative answers are standards compliant. We’re even pushing for them to become an Internet Standard, having <a href="https://tools.ietf.org/html/draft-valsorda-dnsop-black-lies">published an Internet Draft</a> earlier this year.</p><p><a href="https://www.ietf.org/rfc/rfc4470.txt">RFC4470</a>, White Lies, allows us to randomly generate next names in <code>NSEC</code>. Not setting the second <code>NSEC</code> for the wildcard subdomain is also allowed, so long as there exists an <code>NSEC</code> record on the actual queried name. And lastly, our lie of setting every record type in <code>NSEC</code> records for <code>NODATA</code> is okay too: after all, domains are constantly changing, so it’s feasible that the zone file changed between the time the <code>NSEC</code> record indicated there was no <code>MX</code> record on <code>blog.cloudflare.com</code> and the time you queried successfully for <code>MX</code> on <code>blog.cloudflare.com</code>.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We’re proud of our negative answers. They help us keep packet size small, and CPU consumption low enough for us to provide <a href="https://www.cloudflare.com/dnssec/">DNSSEC for free</a> for any domain. Let us know what you think, we’re looking forward to hearing from you.</p> ]]></content:encoded>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[RRDNS]]></category>
            <category><![CDATA[Salt]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">2gk2SvhQNI2AbzoJivm97n</guid>
            <dc:creator>Dani Grant</dc:creator>
        </item>
        <item>
            <title><![CDATA[python-cloudflare]]></title>
            <link>https://blog.cloudflare.com/python-cloudflare/</link>
            <pubDate>Mon, 09 May 2016 22:47:07 GMT</pubDate>
            <description><![CDATA[ Very early on in the company’s history we decided that everything that CloudFlare does on behalf of its customer-base should be controllable via an API. In fact, when you login to the CloudFlare control panel, you’re really just making API calls to our backend services. ]]></description>
            <content:encoded><![CDATA[ 
    <div>
      <h3>Using the CloudFlare API via Python</h3>
      <a href="#using-the-cloudflare-api-via-python">
        
      </a>
    </div>
    <p>Very early on in the company’s history we decided that everything that CloudFlare does on behalf of its customer-base should be controllable via an <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">API</a>. In fact, when you log in to the CloudFlare control panel, you’re really just making API calls to our backend services. Over time that API has matured and improved. We are now on v4 of that API.</p><p>The current CloudFlare API is documented <a href="https://api.cloudflare.com">here</a> and it’s used by both the CloudFlare control panel and directly by umpteen customers every minute of every day. The new API is designed with a clean naming structure and a consistent data representation. It’s also extensible.</p><p>This blog entry introduces <a href="https://github.com/cloudflare/python-cloudflare">python-cloudflare</a>, a Python wrapper providing full access to the CloudFlare v4 API.</p>
    <div>
      <h3>An example</h3>
      <a href="#an-example">
        
      </a>
    </div>
    <p>Let’s get right into the thick of it with the simplest coding example available to show python-cloudflare in action. This example lists all your domains (zones) and also checks some basic features for each zone.</p>
            <pre><code>#!/usr/bin/env python
import CloudFlare

def main():
    cf = CloudFlare.CloudFlare()
    zones = cf.zones.get(params={'per_page': 50})
    for zone in zones:
        zone_name = zone['name']
        zone_id = zone['id']
        settings_ipv6 = cf.zones.settings.ipv6.get(zone_id)
        ipv6_on = settings_ipv6['value']
        print(zone_id, ipv6_on, zone_name)
    exit(0)

if __name__ == '__main__':
    main()</code></pre>
            <p>The structure of the CloudFlare class matches the API documentation. The <code>CloudFlare.zones.get()</code> method returns information about the zones as per the <a href="https://api.cloudflare.com/#zone-list-zones">list zones</a> documentation. The same goes for <code>CloudFlare.zones.settings.ipv6.get()</code> and its <a href="https://api.cloudflare.com/#zone-settings-get-ipv6-setting">documentation</a>.</p><p>Data is passed into the methods via standard Python structures, and results are returned as Python structures that match the API documentation. That means if you see an API call in the documentation, you can translate it into Python code in a one-to-one manner.</p><p>For example, take a look at the <a href="https://api.cloudflare.com/#waf-rules-list-rules">WAF list rules</a> API call.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/58QXJkTf6PTJW4n69AsL1R/00031b7cac29a93d4da4d3816fd1171d/1-list_rules.png" />
            
            </figure><p>This maps to the <code>CloudFlare.zones.firewall.waf.packages.rules.get()</code> method, with the <code>zone_id</code> as the first argument, the <code>package_id</code> as the second argument, and the optional parameters passed last (or, if there aren’t any, just drop the third argument from the call). Because this is a <code>GET</code> call, there’s a <code>.get()</code> as part of the method name.</p>
            <pre><code>    r = cf.zones.firewall.waf.packages.rules.get(zone_id, package_id, params=params)</code></pre>
            <p>Here’s the much simpler <a href="https://api.cloudflare.com/#dns-records-for-a-zone-create-dns-record">Create DNS record</a> API call.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/giA1qSsZ55yOwwczVcXBk/5f7b3b2a9e56856ccd75af6e054791f2/2-create_dns_record.png" />
            
            </figure><p>This would be coded into the Python method <code>CloudFlare.zones.dns_records.post()</code> with the <code>zone_id</code> as the first argument and the required parameters passed as data. Because this is a <code>POST</code> call there’s a <code>.post()</code> as part of the method name.</p>
            <pre><code>    r = cf.zones.dns_records.post(zone_id, data=dns_record)</code></pre>
            <p>Here’s an example of that Create DNS record call in action. In this code, we add two records to an existing zone. We also show how the error is handled (in classic Python style).</p>
            <pre><code>    zone_name = 'example.com'
    try:
        r = cf.zones.get(params={'name': zone_name})
    except CloudFlare.CloudFlareAPIError as e:
        exit('/zones.get %s - %d %s' % (zone_name, e, e))
    except Exception as e:
        exit('/zones.get %s - %s' % (zone_name, e))
    zone_id = r[0]['id']

    # DNS records to create
    dns_records = [
        {'name':'foo', 'type':'A',    'content':'192.168.100.100'},
        {'name':'foo', 'type':'AAAA', 'content':'2001:d8b::100:100'},
    ]
    for dns_record in dns_records:
        try:
            r = cf.zones.dns_records.post(zone_id, data=dns_record)
        except CloudFlare.CloudFlareAPIError as e:
            exit('/zones.dns_records.post %s - %d %s' % (dns_record['name'], e, e))</code></pre>
            <p>There’s a whole folder of example code available on the GitHub repository.</p>
    <div>
      <h2>All on GitHub for anyone to use</h2>
      <a href="#all-on-github-for-anyone-to-use">
        
      </a>
    </div>
    <p>As we stated above (and as CloudFlare has done many times before) we have placed this code up on <a href="https://github.com/cloudflare/python-cloudflare">GitHub</a> for anyone to download and use. We welcome contributions and will review any pull requests. To install it, just clone it and follow the README instructions. For those that just want to get going right now; here’s the tl;dr install:</p>
            <pre><code>$ git clone https://github.com/cloudflare/python-cloudflare
$ cd python-cloudflare
$ ./setup.py build
$ sudo ./setup.py install</code></pre>
            
    <div>
      <h2>But wait; there’s more!</h2>
      <a href="#but-wait-theres-more">
        
      </a>
    </div>
    <p>Not only do you get the Python API calls, you also get a fully functioning CLI (command line interface) that allows quick creation of scripts that interface with CloudFlare.</p><p>From the CLI command you can call any of the CloudFlare API calls. The command responds with the returned JSON data. If you want to filter the results, you may also want to install the highly versatile <a href="https://stedolan.github.io/jq/">jq</a> command. Here’s a command to check the nameservers for a specific domain hosted on CloudFlare and then process the result via jq.</p>
            <pre><code>$ cli4 name=example.com /zones | jq -c '.[]|{"name_servers":.name_servers}'
{
  "name_servers":[
    "alice.ns.cloudflare.com",
    "bob.ns.cloudflare.com"
  ]
}
$</code></pre>
            <p>The CLI command will convert zone names into zone identifiers on the fly. For example, if you want to check the <a href="/dnssec-an-introduction/">DNSSEC</a> status of a zone you operate on CloudFlare, use this command.</p>
            <pre><code>$ cli4 /zones/:example.com/dnssec | jq '{"status":.status,"ds":.ds}'
{
  "status": "active",
  "ds": "example.com. 3600 IN DS 2371 13 2 00000000000000000000000000000 ..."
}
$</code></pre>
            <p>You can issue <code>GET</code>, <code>PUT</code>, <code>POST</code>, <code>PATCH</code>, or <code>DELETE</code> calls against the API, and you can pass data into a CloudFlare API call with the CLI command. All of this is documented in the <code>README.md</code> and the wiki examples on GitHub.</p><p>Here’s a useful command for customers that need to flush their cache files.</p>
            <pre><code>$ cli4 --delete purge_everything=true /zones/:example.com/purge_cache
{
  "id":"d8afaec3dd2b7f8c1b470e594a21a01d"
}
$</code></pre>
            <p>See how the command’s arguments match the <a href="https://api.cloudflare.com/#zone-purge-all-files">purge_cache</a> API documentation.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3E5z7ixk8yXPm3sg4hYmGT/549732f7675c9a925056e141571ee9e6/3-purge.png" />
            
            </figure><p>Finally, here’s an example of turning on DNSSEC via the API.</p>
            <pre><code>$ cli4 --patch status=active /zones/:example.com/dnssec | jq -c '{"status":.status}'
{"status":"pending"}
$</code></pre>
            <p>There are plenty more examples available within the GitHub repo.</p>
    <div>
      <h3>CloudFlare API via other languages also available</h3>
      <a href="#cloudflare-api-via-other-languages-also-available">
        
      </a>
    </div>
    <p>Python isn’t the only language you can use to interact with CloudFlare’s API. If you’re a <code>Go</code> or <code>Node.js</code> user, we also have client libraries on our <a href="https://github.com/cloudflare">GitHub</a>: the <a href="https://github.com/cloudflare/cloudflare-go">Go client</a> and the <a href="https://github.com/cloudflare/node-cloudflare">Node.js client</a>. Want to write something in a different language? Feel free to do that. The <a href="https://api.cloudflare.com/">API</a> spec is online and ready for you to code against.</p><p>If you like what you read here today and are interested in joining one of CloudFlare’s software teams, then check out our <a href="http://www.cloudflare.com/join-our-team">Join Our Team</a> page.</p>
            <category><![CDATA[API]]></category>
            <category><![CDATA[WAF Rules]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Python]]></category>
            <category><![CDATA[Programming]]></category>
            <guid isPermaLink="false">6BqiD9VfbJKeYp2SbfqE5w</guid>
            <dc:creator>Martin J Levy</dc:creator>
        </item>
        <item>
            <title><![CDATA[What happened next: the deprecation of ANY]]></title>
            <link>https://blog.cloudflare.com/what-happened-next-the-deprecation-of-any/</link>
            <pubDate>Wed, 13 Apr 2016 12:39:32 GMT</pubDate>
            <description><![CDATA[ Almost a year ago, we announced that we were going to stop answering DNS ANY queries. We were prompted by a number of factors: The lack of legitimate ANY use. The abundance of malicious ANY use. The constant use of ANY queries in large DNS amplification DDoS attacks. ]]></description>
            <content:encoded><![CDATA[ <p>Almost a year ago, we announced that we were going to <a href="/deprecating-dns-any-meta-query-type/">stop answering DNS ANY</a> queries. We were prompted by a number of factors:</p><ol><li><p>The lack of legitimate ANY use.</p></li><li><p>The abundance of malicious ANY use.</p></li><li><p>The constant use of ANY queries in large DNS amplification DDoS attacks.</p></li></ol><p>Additionally, we were about to launch <a href="https://www.cloudflare.com/dnssec/universal-dnssec/">Universal DNSSEC</a>, and we could foresee the high cost of assembling ANY answers and providing DNSSEC-on-the-fly for those answers, especially when most of the time, those ANY answers were for malicious, illegitimate, clients.</p>
            <figure>
            <a href="https://dnsreactions.tumblr.com/post/115111883802/cloudflare-responds-to-qtype-any">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/76VgmR1nSYbfIkKmmZ7tFC/3dfbf14246951cb4c2bd2d914e24b603/Screen-Shot-2016-04-13-at-9-49-31-AM.png" />
            </a>
            </figure><p>Although we usually make a tremendous effort to maintain backwards compatibility across Internet protocols (recently, for example, continuing to support <a href="/sha-1-deprecation-no-browser-left-behind/">SHA-1-based SSL certificates</a>), it was clear to us that the DNS ANY query was something that was better removed from the Internet than maintained for general use.</p><p>Our proposal at the time was to return an ERROR code to the querier telling them that ANY was not supported, and this sparked a robust discussion in the DNS protocol community. In this blog post, we’ll cover what has happened and what our final plan is.</p><p>Just before we published our blog post, a popular piece of software started using ANY queries to get all address records for a name: something that ANY isn’t actually designed to do. As a result, our steady ANY query load grew from a few hundred queries per second to tens of thousands in a matter of days. Luckily, the software in question shipped a revised version that did not use ANY, and our steady ANY query load returned to its old level.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2hiSIjgf7eL3dVpclsjVMK/457b1fed76df1399beb02ef158542019/5719368307_7475b5594c_z.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by-sa/2.0/">CC BY-SA 2.0</a> <a href="https://www.flickr.com/photos/liamq/5719368307/in/photolist-9HpfWp-7vpC9Y-7nQmx5-q1mALK-2NP1Dj-9tSNAV-fs6aGK-9YGHL4-a2MX7m-pdavta-9YG3s2-9HMRfG-vyBqdd-57eEb7-nVMdGs-9XmtSF-9LzUNq-9Hsn2o-ijY6d-stqR-9s2yQP-9UQ1he-9HsbT7-9USt79-8rJpfT-zRD2us-q1fiy4-vReb9v-87zVLF-dix1Mc-bkvWid-9YGk6z-8rVUX-ibFyKE-JNes5-4ZTY63-9Hpsuk-ak22cF-8AYnAR-9rCCPK-7wLkFs-9LzY73-oG2GwQ-3YN2wN-9UQ1Zx-2VDgHs-rgq9Tv-9pTXeC-vQB1HW-pkU3qT">image</a> by <a href="https://www.flickr.com/photos/liamq/">Liam Quinn</a></p>
    <div>
      <h3>The Conversation in the DNS Community</h3>
      <a href="#the-conversation-in-the-dns-community">
        
      </a>
    </div>
    <p>As that was happening, a lively discussion started to form among those in the DNS community. The first fundamental question that had to be answered was: “Does ANY mean ALL?” That is, was an ANY query meant to be a way to receive all of the records in a zone for the query name?</p><p>Different people had different interpretations of ANY depending on the kind of DNS service they were providing. For example, as the operator of a DNS resolver, it is nice to be able to ask your own resolver, “what is stored in the resolver memory/cache for a particular name?” An ANY query with the right query flags can help answer that. On the other hand, an operator of an authoritative server does not need that functionality, as AXFR provides another reliable way to get that information. In short, the ANY query is a nice tool for people who are trying to understand what is going on in DNS resolution. Some community members argued that ANY should be a query restricted to a privileged few with a need to know.</p><p>While ANY can be a nice tool to debug and expose inconsistencies in resolver caches, it’s still not a great one: with more and more Anycast resolver clusters, there is no guarantee that two subsequent queries will hit the same resolver.</p>
    <div>
      <h3>Why is answering ANY expensive for some DNS providers like CloudFlare?</h3>
      <a href="#why-is-answering-any-expensive-for-some-dns-providers-like-cloudflare">
        
      </a>
    </div>
    <p>Our in-house DNS server is optimized to provide dynamic answers to questions. For example, depending on how a CNAME is configured, our server may return the CNAME, fetch the real answer from the target of the CNAME (what we call <a href="/introducing-cname-flattening-rfc-compliant-cnames-at-a-domains-root/">CNAME Flattening</a>), or provide CloudFlare addresses as answers. So for us to answer an ANY query, we need to compute all of the possible combinations just to know what to return.</p><p>Beyond that, our DNSSEC implementation signs answers on the fly, so returning an answer containing many different types of DNS records requires signing all of them at the edge. Providing support for ANY in the “traditional” sense therefore had serious computational and response-time implications. This is not unique to CloudFlare; we are not the only DNS implementation with this high cost factor in answering ANY queries.</p>
    <div>
      <h3>The use of ANY queries in DDoS attacks</h3>
      <a href="#the-use-of-any-queries-in-ddos-attacks">
        
      </a>
    </div>
    <p>In a <a href="https://www.stateoftheinternet.com/downloads/pdfs/2016-state-of-the-internet-threat-advisory-dnssec-ddos-amplification-attacks.pdf">recent paper</a> by Akamai, the authors draw the conclusion that DNSSEC is the main cause of the large answers used in DDoS attacks. But looking at the packet capture included in the paper, it’s clear the real cause of the large answers is that the attackers use ANY queries to maximize the amplification factor.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/67n5OGO94a4jGEf93Gq5ce/380e90087e4e1eb00225458c77792529/542729697_fee708a68f_z.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by-sa/2.0/">CC BY-SA 2.0</a> <a href="https://www.flickr.com/photos/vshioshvili/542729697/in/photolist-PXCCz-k6NoRc-cDtNZd-edwgy1-6jQXRQ-dXy1Jr-y7nBK-bc1yiX-3VM5qq-gwmAC2-f9yVtM-oRrjRG-941Srq-4S5XGd-iWWst8-osFsjb-bwNo4s-fmBA2h-gMUKvN-bSYVZ-qyv52o-j5SCMq-hbeFS5-c4YDmC-pBuq2q-7dLGEn-6yMZwa-pQBJaX-8rtb76-8G1gSm-4S6XpX-oNYyuF-8jiMY-oTjWxG-7rMR8G-4TXTS2-pBv71k-9Ti743-pugFk8-8v4Sze-pSheSw-hSNyda-qtj2GB-yvd8-9dDz1m-nMnq18-7Ky43P-tJa4Bm-8UEDst-7R5MQW">image</a> by <a href="https://www.flickr.com/photos/vshioshvili/">Vladimer Shioshvili</a></p><p>We regularly see attacks that attempt to use our powerful DNS system as a source of reflection. In response, we have created sophisticated systems to detect such attacks and mitigate them responsibly. Our deprecation of ANY is a key part of those protections. One of our main mantras is “<i>do not return larger than needed answers</i>”, exactly to help protect others on the Internet from amplification attacks.</p>
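<p>The arithmetic behind that mantra is straightforward: the amplification factor of a reflection attack is simply the response size divided by the query size, so every byte shaved off an answer directly reduces an attacker’s return. The sizes below are illustrative assumptions, not measurements:</p>

```python
def amplification_factor(query_bytes, response_bytes):
    """Bandwidth multiplier an attacker gets by reflecting a query."""
    return response_bytes / query_bytes

# A ~50-byte ANY query that elicits a ~3,000-byte signed answer turns
# each spoofed packet into 60x attack bandwidth; an answer close to the
# query's own size gives the attacker almost nothing.
large_answer = amplification_factor(50, 3000)   # 60.0
small_answer = amplification_factor(50, 60)     # 1.2
```
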
    <div>
      <h3>Evolution of “Suppress ANY” in the DNS protocol community</h3>
      <a href="#evolution-of-suppress-any-in-the-dns-protocol-community">
        
      </a>
    </div>
    <p>Soon after we announced our planned deprecation of ANY, we <a href="https://tools.ietf.org/html/draft-ogud-dnsop-acl-metaqueries-00">submitted an Internet Draft</a> to the IETF proposing to restrict ANY queries to authorized parties only. In the resulting discussion, it became clear that what mattered here was how we deprecated ANY. To make sure no system was adversely affected, we had to take into account how various applications and DNS implementations were using ANY.</p><p>In short, there are two main uses of ANY queries:</p><ol><li><p>Some programs use ANY as a probabilistic optimization attempt to get the answers they need.</p></li><li><p>Some use ANY to debug DNS resolvers when things go wrong.</p></li></ol><p>However, neither of those use cases is satisfied by the current ANY landscape. There is a lack of common behavior among resolvers as to how they treat answers that are cached as a result of an ANY query. Some will return this data when a more specific query matches, while other resolvers will fetch the exact requested data, even though they already have it in cache from an ANY query. This is because resolvers interpret <a href="https://tools.ietf.org/html/rfc2181">RFC2181</a> section 5.4.1 (Ranking Data) in differing ways, and some resolvers were written without applying the rules from RFC2181 at all. Some resolvers will not forward ANY queries to authoritative servers if there exists a single RRset for that name in the resolver’s cache, and others will forward it unless they have a prior ANY answer in the cache. Furthermore, some resolvers will only reuse the results of an ANY query to answer other ANY queries; queries for all other types result in a direct query for the type requested.</p><p>In many cases, the ANY query results in an answer that is too large to fit in the UDP packet size requested, resulting in a truncated answer, leading to a follow-up query over TCP. 
For a while, the DNS community believed that returning truncated answers would stop attacks, but in reality, that will only mitigate simple attacks using forged packets. In attacks that are reflected via open resolvers, returning truncated packets will not work because the open resolvers are happy to fall back to TCP if the UDP answer sets the truncation (TC) bit.</p><p>So, there is no common understanding of how the ANY query should be treated to minimize its amplification potential. To be fair, the ANY query is a special type of query called a meta-query, i.e. it is not an actual record type. Nevertheless, the community was divided into two camps: “ANY == ALL” and “ANY != ALL”.</p><p>The community was further divided into groups of “ANY is ok for everyone” and “ANY should be restricted to ‘good’ clients”. Our position from the beginning was that “ANY != ALL”, and we were looking for a way to help curb the number of large amplified attacks on the Internet that used ANY.</p><p>Over a few months, we engaged in a number of experiments to see how different DNS systems reacted to different non-ANY responses to the ANY query. After a fair amount of experimentation and discussions with colleagues around the world, we decided on an approach that is <i>recursive resolver centric</i>. What we wanted to do was to give answers that are friendly to recursive resolvers, i.e. we give them something small that they can cache and return to repeated ANY queries. Returning an error to a recursive resolver was not a good option, as the resolver will just ask the next authoritative server, visiting all the authoritative servers before giving up.</p><p>We also wanted to avoid guessing the intention of the originator of the query, which is why we did not follow one proposal to give out the A+AAAA+MX records or a CNAME if one existed. 
We do not like that approach, as the answer is bigger than it has to be and contains more data than the originator wanted.</p><p>For example, consider an email server that wants an MX record if one exists, but will fall back to an IP address if the MX does not exist; there is no way to tell from the query which of those records it actually needs. Instead, we decided to return what we call a “harmless” answer: an answer that is not useful to any application on the Internet. We selected an old DNS record type that is not used much anymore, but has the nice property that all test tools display it as text: <i>HINFO</i>. This approach is documented in the <a href="https://datatracker.ietf.org/doc/draft-ietf-dnsop-refuse-any/">Refuse ANY</a> Internet Draft, which was adopted by the <a href="https://datatracker.ietf.org/wg/dnsop/charter/">DNSOP working group</a>, the IETF working group that handles DNS protocol issues.</p><p>As you can see below, when asked for ANY, we return only one HINFO record, plus an RRSIG that is only needed when the zone is signed. This record can be cached, and has the added benefit of being small, reducing the amplification factor the attacker expected.</p>
            <pre><code>; &lt;&lt;&gt;&gt; DiG 9.8.3-P1 &lt;&lt;&gt;&gt; @ns2.p31.dynect.net. amazon.com. any +dnssec +norec
; (1 server found)
;; global options: +cmd
;; Got answer:
;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NOERROR, id: 29671
;; flags: qr aa; QUERY: 1, ANSWER: 16, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 4096
;; QUESTION SECTION:
;amazon.com.			IN	ANY

;; ANSWER SECTION:
amazon.com.		900	IN	SOA	dns-external-master.amazon.com. root.amazon.com. 2010113317 180 60 3024000 60
amazon.com.		3600	IN	NS	pdns6.ultradns.co.uk.
amazon.com.		3600	IN	NS	pdns1.ultradns.net.
amazon.com.		3600	IN	NS	ns2.p31.dynect.net.
amazon.com.		3600	IN	NS	ns3.p31.dynect.net.
amazon.com.		3600	IN	NS	ns1.p31.dynect.net.
amazon.com.		3600	IN	NS	ns4.p31.dynect.net.
amazon.com.		60	IN	A	54.239.26.128
amazon.com.		60	IN	A	54.239.25.208
amazon.com.		60	IN	A	54.239.25.200
amazon.com.		60	IN	A	54.239.17.7
amazon.com.		60	IN	A	54.239.17.6
amazon.com.		60	IN	A	54.239.25.192
amazon.com.		900	IN	MX	5 amazon-smtp.amazon.com.
amazon.com.		900	IN	TXT	"spf2.0/pra include:spf1.amazon.com include:spf2.amazon.com include:amazonses.com -all"
amazon.com.		900	IN	TXT	"v=spf1 include:spf1.amazon.com include:spf2.amazon.com include:amazonses.com -all"

;; Query time: 30 msec
;; SERVER: 204.13.250.31#53(204.13.250.31)
;; WHEN: Wed Apr 13 09:57:59 2016
;; MSG SIZE  rcvd: 565</code></pre>
            <p>The unsigned answer from this large Internet company is 565 bytes long.</p><p>In contrast, a signed answer from CloudFlare.com is only 224 bytes long, less than half the size of the unsigned answer above.</p>
            <pre><code>; &lt;&lt;&gt;&gt; DiG 9.8.3-P1 &lt;&lt;&gt;&gt; cloudflare.com any @ns3.cloudflare.com +dnssec +norec
;; global options: +cmd
;; Got answer:
;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NOERROR, id: 36238
;; flags: qr aa; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 512
;; QUESTION SECTION:
;cloudflare.com.			IN	ANY

;; ANSWER SECTION:
cloudflare.com.		3789	IN	HINFO	"Please stop asking for ANY" "See draft-ietf-dnsop-refuse-any"
cloudflare.com.		3789	IN	RRSIG	HINFO 13 2 3789 20160414100147 20160412080147 35273 cloudflare.com. lGyCY7IC5sgHBfE95IJXDUS4diFjE5kq4vNMhhqP6+2+NyTQh1zAh1qw 3C710mFvvuCWe4VyRiqlu1jUzMnuLg==

;; Query time: 80 msec
;; SERVER: 2400:cb00:2049:1::a29f:21#53(2400:cb00:2049:1::a29f:21)
;; WHEN: Wed Apr 13 10:01:47 2016
;; MSG SIZE  rcvd: 224</code></pre>
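<p>To put the two answers in perspective, here is a back-of-the-envelope amplification comparison. The query size below is an assumption (the exact size depends on the query name and EDNS options), but the response sizes are the <code>MSG SIZE rcvd</code> values from the dig runs above:</p>

```python
# Rough amplification factors for the two ANY responses shown above.
# QUERY_SIZE is an assumed figure: an ANY query with an EDNS OPT record
# is on the order of 50 bytes on the wire.
QUERY_SIZE = 50

responses = {
    "amazon.com (unsigned, full ANY answer)": 565,
    "cloudflare.com (signed, HINFO answer)": 224,
}

for name, size in responses.items():
    print(f"{name}: {size} bytes, ~{size / QUERY_SIZE:.1f}x amplification")
```

<p>Even with a DNSSEC signature included, the HINFO answer offers an attacker far less amplification than the full unsigned RRset dump.</p>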
            <p>We have been returning the HINFO answer for ANY queries since October 2015, with very few reports of problems in the field, besides a single Twitter rant about us not understanding DNS.</p><p>A few other DNS server vendors and DNS operators have followed our lead or adopted a similar line of defense. The latest example is the <a href="http://fanf.livejournal.com/140566.html">University of Cambridge</a> which modified their BIND implementation to only return a single RRset for an <a href="https://git.csx.cam.ac.uk/x/ucs/ipreg/bind9.git/commitdiff/f8c420dd8e">ANY query</a>. A common DNS server (NSD) has also been limiting ANY responses to only A+AAAA+MX types and/or CNAME, and a <a href="https://gist.github.com/hdais/25cb3fc86335026d40f0">patch to return an even smaller answer</a> has been proposed.</p>
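<p>The server-side behavior described above amounts to a simple rule: answer ANY with a single cacheable HINFO RRset, and answer every concrete type normally. The following is a hypothetical illustration, not our actual server code; the <code>answer</code> function and its flat record representation are invented for this sketch, with the HINFO strings taken from the dig output above:</p>

```python
# Hypothetical sketch of the refuse-ANY behavior: return a small,
# "harmless" HINFO record for ANY queries instead of every RRset.
def answer(zone_records, qname, qtype):
    if qtype == "ANY":
        # One cacheable record that no application will mistake for real data.
        return [(qname, "HINFO",
                 '"Please stop asking for ANY" "See draft-ietf-dnsop-refuse-any"')]
    # For a concrete type, return exactly the RRset that was requested.
    return [(name, rtype, rdata) for (name, rtype, rdata) in zone_records
            if name == qname and rtype == qtype]

zone = [("example.com.", "A", "192.0.2.1"),
        ("example.com.", "MX", "10 mail.example.com.")]

print(answer(zone, "example.com.", "ANY"))
print(answer(zone, "example.com.", "A"))
```

<p>The key property is that the HINFO answer is tiny and cacheable, so a resolver that asks ANY again can be served from its own cache.</p>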
    <div>
      <h3>Summary</h3>
      <a href="#summary">
        
      </a>
    </div>
    <p>The moral is that the ANY query is not a useful tool for most DNS operators, but it is a wonderful tool if one is in the business of generating attacks against anyone else on the Internet. Answering ANY queries with a giant answer does nothing to mitigate the plague of volumetric DoS attacks.</p><p>CloudFlare has taken a step to make the Internet a less hostile place and is leading by example. We strongly urge others to follow in our footsteps and neuter the amplification factor that a single DNS query can achieve. We all want to build a safer Internet, and neutering the ANY query is one small step that everyone can take.</p> ]]></content:encoded>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Attacks]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">2FlSGPeXk3rPYpnd9NtCZ6</guid>
            <dc:creator>Ólafur Guðmundsson</dc:creator>
        </item>
        <item>
            <title><![CDATA[A Deep Dive Into DNS Packet Sizes: Why Smaller Packet Sizes Keep The Internet Safe]]></title>
            <link>https://blog.cloudflare.com/a-deep-dive-into-dns-packet-sizes-why-smaller-packet-sizes-keep-the-internet-safe/</link>
            <pubDate>Fri, 04 Mar 2016 18:02:35 GMT</pubDate>
            <description><![CDATA[ One way that attackers DDoS websites is by repeatedly doing DNS lookups that have small queries but large answers. The attackers spoof their IP address so that the DNS answers are sent to the server they are attacking; this is called a reflection attack. ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/29233640@N07/7654121138/in/photolist-e9kv19-9j6qxa-cEnnxQ-4fU67j-9GNLLu-4sbEbM-9GNLLo-9Gt7pW-8eWNET-v493-4bjmAN-32Gptn-fEBKM-87B9g9">image</a> by <a href="https://www.flickr.com/photos/29233640@N07/">Robert Couse-Baker</a></p><p>Yesterday we wrote about the <a href="/a-winter-of-400gbps-weekend-ddos-attacks/">400 gigabit per second</a> attacks we see on our network.</p><p>One way that attackers DDoS websites is by repeatedly doing DNS lookups that have small queries but large answers. The attackers spoof their IP address so that the DNS answers are sent to the server they are attacking; this is called a <a href="/deep-inside-a-dns-amplification-ddos-attack/">reflection attack</a>.</p><p>Domains with DNSSEC, because of the size of some responses, are usually ripe for this type of abuse, and many DNS providers struggle to combat DNSSEC-based DDoS attacks. Just last month, <a href="https://www.akamai.com/uk/en/multimedia/documents/state-of-the-internet/dnssec-amplification-ddos-security-bulletin.pdf">Akamai published a report</a> on attacks using DNS lookups against their DNSSEC-signed .gov domains to DDoS other domains. They say they have seen 400 of these attacks since November.</p><p>To <a href="https://www.cloudflare.com/learning/ddos/how-to-prevent-ddos-attacks/">prevent</a> any domain on CloudFlare from being abused for a DNS amplification attack in this way, we took precautions to make sure most DNS answers we send fit in a 512-byte UDP packet, even when the zone is signed with DNSSEC. To do this, we had to be creative in our DNSSEC implementation. We chose a rarely-used-for-DNSSEC signature algorithm and even deprecated a DNS record type along the way.</p>
    <div>
      <h3>Elliptic Curves: Keeping It Tight</h3>
      <a href="#elliptic-curves-keeping-it-tight">
        
      </a>
    </div>
    <p>Dutch mathematician Arjen Lenstra famously talks about cryptography in terms of energy. (We’ve covered him once <a href="/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/">before on our blog</a>.) He takes the amount of energy required to break a cryptographic algorithm and compares that with how much water that energy could boil. To break a 228-bit RSA key requires less energy than it takes to boil a teaspoon of water. On the other hand, to break a 228-bit elliptic curve key requires the amount of energy needed to boil all the water on the earth.</p><p>With elliptic curve cryptography in the ECDSA signature algorithm, we can use smaller keys with the same level of security as a larger RSA key. Our elliptic curve keys are 256 bits long, equivalent in strength to a 3100-bit RSA key (most RSA keys are only 1024 or 2048 bits). Below, you can compare two signed DNSKEY sets: an RSA implementation against our ECDSA one. Ours is one quarter of the size of the matching RSA keys and signature.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5XxfFSJnCzBJdOyaEz0kxs/0f045ffcf4aed84d1fda230b72a19037/dnskey.png" />
            
            </figure><p>As a side benefit, ECDSA is lightning fast, and our engineer Vlad Krasnov actually helped make it even faster. By implementing ECDSA natively in assembler, he was able to <a href="/go-crypto-bridging-the-performance-gap/">speed up signing</a> by 21x. His optimizations are <a href="https://go-review.googlesource.com/#/c/8968">now part of the standard Go crypto library</a> as of Go version 1.6. It now takes us only a fraction of a second, about 0.0001 seconds, to sign records for a DNS answer.</p>
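<p>The size difference is easy to see with rough numbers. The byte counts below are approximate, assumed figures for the key and signature material alone (not full DNS answers), but they illustrate why the ECDSA answer comes out at about a quarter of the size of the RSA one:</p>

```python
# Approximate wire sizes, in bytes, of the public key (DNSKEY) and
# signature (RRSIG) material for each algorithm. Assumed figures for
# illustration: an RSA-2048 key is a 256-byte modulus plus a small
# exponent; a P-256 ECDSA key is a 64-byte point, and its signature
# is the 64-byte pair (r, s).
rsa_2048 = {"pubkey": 260, "signature": 256}
ecdsa_p256 = {"pubkey": 64, "signature": 64}

rsa_total = sum(rsa_2048.values())       # key + signature material
ecdsa_total = sum(ecdsa_p256.values())

print(f"RSA-2048:    ~{rsa_total} bytes of key and signature material")
print(f"ECDSA P-256: ~{ecdsa_total} bytes, roughly 1/{round(rsa_total / ecdsa_total)}")
```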
    <div>
      <h3>Deprecating ANY: The Obituary Of A DNS Record Type</h3>
      <a href="#deprecating-any-the-obituary-of-a-dns-record-type">
        
      </a>
    </div>
    <p>In Akamai’s security report, the authors conclude that DNSSEC is the only cause of the large answers used in DDoS attacks, but there is another cause: attackers use ANY queries to maximize the amplification factor. ANY queries are a built-in debugging tool, meant to return every DNS record that exists for a name. Unfortunately, they are instead more often used for launching large DDoS attacks.</p><p>In September, we stopped answering ANY queries and <a href="https://tools.ietf.org/html/draft-jabley-dnsop-refuse-any-00">published an Internet Draft</a> to begin the process of making ANY deprecation an Internet standard. We did this carefully, and worked closely with the few remaining software vendors who use ANY to ensure that we wouldn’t affect their production systems.</p><p>An ANY query for DNSSEC-enabled cloudflare.com returns an answer that is 231 bytes. The alleged domain in Akamai’s paper, for comparison, returns an ANY answer almost 18 times larger, at a whopping 4016 bytes.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Zz0xBeAhnXnLUl8gm7LwP/9649ded69ee63e23918e94847b0aaec9/any-1.png" />
            
            </figure>
    <div>
      <h3>ECDSA + ANY</h3>
      <a href="#ecdsa-any">
        
      </a>
    </div>
    <p>By keeping our answers small enough to fit in a 512-byte UDP packet, we keep the domains on our network safe from being used as the amplification vector of a DDoS attack. If you are interested in using DNSSEC with CloudFlare, <a href="https://www.cloudflare.com/dnssec/universal-dnssec/#cloudflare-makes-dnssec-easy">here are some easy steps</a> to get you set up. If you are interested in working on technical challenges like these, we’d love to <a href="https://www.cloudflare.com/join-our-team/">hear from you</a>.</p>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[Elliptic Curves]]></category>
            <category><![CDATA[Attacks]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[DDoS]]></category>
            <guid isPermaLink="false">2GS95uvriRHCaXzQaRU0aR</guid>
            <dc:creator>Dani Grant</dc:creator>
        </item>
        <item>
            <title><![CDATA[Flexible, secure SSH with DNSSEC]]></title>
            <link>https://blog.cloudflare.com/flexible-secure-ssh-with-dnssec/</link>
            <pubDate>Wed, 13 Jan 2016 11:44:21 GMT</pubDate>
            <description><![CDATA[ If you read this blog on a regular basis, you probably use the little tool called SSH, especially its ubiquitous and most popular implementation OpenSSH. ]]></description>
            <content:encoded><![CDATA[ <p><b>UPDATE</b>: Corrected the paragraph about the permissions of the AuthorizedKeys file.</p><hr /><p>If you read this blog on a regular basis, you probably use the little tool called SSH, especially its ubiquitous and most popular implementation <a href="http://www.openssh.com/">OpenSSH</a>.</p><p>Maybe you’re savvy enough to only use it with public/private keys, and therefore protect yourself from dictionary attacks. If you do then you know that in order to configure access to a new host, you need to make a copy of a public key available to that host (usually by writing it to its disk). Managing keys can be painful if you have many hosts, especially when you need to renew one of the keys. What if DNSSEC could help?</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Ttkx9Cc5tKfk21Xl0kY5j/4fe2e99e246e69b65e01e5462a2f539a/3923470620_d64bde94dd_z_d-1.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/wneuheisel/3923470620">image</a> by <a href="https://www.flickr.com/photos/wneuheisel">William Neuheisel</a></p><p>With <a href="http://www.openssh.com/txt/release-6.2">version 6.2</a> of OpenSSH came a feature that allows the remote host to retrieve a public key in a customised way, instead of the typical <code>authorized_keys</code> file in the <code>~/.ssh/</code> directory. For example, you can gather the keys of a group of users that require access to a number of machines on a single server (for example, an <a href="http://serverfault.com/questions/653792/ssh-key-authentication-using-ldap">LDAP server</a>), and have all the hosts query that server when they need the public key of the user attempting to log in. This saves a lot of editing of <code>authorized_keys</code> files on each and every host. The downside is that it's necessary to trust the source these hosts retrieve public keys from. An LDAP server on a private network is probably trustworthy (when looked after properly) but for hosts running in the cloud, that’s not really practical.</p><p>DNSSEC is helpful here. That's right: now that we can verify responses from a DNS server, we can safely store public keys in DNS records!</p><p>So let's say we administer <code>example.com</code> and want to give Alice and Bob access to machines <code>foo</code>, <code>bar</code> and <code>baz</code> in that domain. We'll store their respective public keys in TXT<a href="#fn1">[1]</a> records named <code>alice_pubkey.example.com</code> and <code>bob_pubkey.example.com</code>. To be entirely accurate, it doesn’t really matter which zone these records belong to, but I’ll consider here that we only have one domain. 
The requirements are:</p><ul><li><p>the machines need to run OpenSSH server version 6.2 or later</p></li><li><p>they also need a DNSSEC validating resolver (we'll use <code>unbound-host</code>)</p></li><li><p>Alice and Bob's keys need to be less than 256 characters long (ECDSA or Ed25519 keys will work)</p></li><li><p>DNSSEC needs to be correctly <a href="https://support.cloudflare.com/hc/en-us/articles/209114378">set up</a> on the domain <code>example.com</code> (surprise!)</p></li></ul><p>Alice and Bob generate keys like this:</p>
            <pre><code>foo:~$ ssh-keygen -t ecdsa</code></pre>
            <p>or like this:</p>
            <pre><code>foo:~$ ssh-keygen -t ed25519</code></pre>
            <p>and then follow the instructions. They will of course <i>provide a non-empty passphrase</i>. Then they send us (or whoever administers the zone file for <code>example.com</code>) the public key file, which may look like this:</p>
            <pre><code>ssh-ed25519 AAAAC3N...VY4A= alice@foo</code></pre>
            <p>We can strip the comment <code>alice@foo</code> out of that file, and use the rest as the value to create a TXT record with the name <code>alice_pubkey</code> in the domain <code>example.com</code>. Then, retrieving the key is as easy as this:</p>
            <pre><code>foo:~$ unbound-host -t TXT alice_pubkey.example.com
alice_pubkey.example.com has TXT record "ssh-ed25519 AAAAC3N…"</code></pre>
            <p>With <code>-v</code>, unbound-host will show us whether the signature has been verified:</p>
            <pre><code>foo:~$ unbound-host -v -t TXT alice_pubkey.example.com
alice_pubkey.example.com has TXT record "ssh-ed25519 AAAAC…" (insecure)</code></pre>
            <p>With <code>-D</code>, it will actually check the signature:</p>
            <pre><code>foo:~$ unbound-host -D -v -t TXT alice_pubkey.example.com
alice_pubkey.example.com has TXT record "ssh-ed25519 AAAAC3N…" (secure)</code></pre>
            <p>If no record exists, it will show this:</p>
            <pre><code>foo:~$ unbound-host -D -v -t TXT charlie_pubkey.example.com
charlie_pubkey.example.com has no TXT record (secure)</code></pre>
            <p>Note that the absence of record is also labelled “secure”, thanks to <a href="https://www.dnssec-tools.org/wiki/index.php/NSEC">NSEC</a>.</p><p>Let’s prepare to parse that output. The <a href="http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man5/sshd_config.5?query=sshd_config">sshd_config man page</a> shows that sshd needs a specific user to run the program that will retrieve public keys. This is following the best practices of privilege separation. Let's call that user <code>pubkeygrab</code> and create an account on <code>foo</code>, <code>bar</code> and <code>baz</code>, giving it just the permissions it needs to work and <i>nothing more</i>:</p>
            <pre><code>foo:~$ useradd -m -d /var/empty -s /sbin/nologin pubkeygrab</code></pre>
            <p>Then create the script <code>pubkeygrab.sh</code>, and store it on each of the machines. Obviously, we'll make sure only root can edit it:</p>
            <pre><code>foo:~$ cat /usr/local/bin/pubkeygrab.sh
#!/bin/sh

USER=$1

/usr/sbin/unbound-host -v -D -t TXT ${USER}_pubkey.example.com \
     | /usr/bin/grep -v "no TXT record" \
     | /usr/bin/grep ' (secure)$' \
     | /usr/bin/sed 's/.* "\(.*\)" (secure)$/\1/'</code></pre>
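<p>As an aside, the same filtering can be sketched in Python. This is a hypothetical reimplementation for illustration only; it parses the <code>unbound-host</code> output format shown earlier and, like the shell version, returns nothing unless the answer validated as <code>(secure)</code>:</p>

```python
import re

# Hypothetical Python equivalent of pubkeygrab.sh. Given unbound-host
# output, keep only keys from answers that validated as (secure).
def extract_keys(unbound_output):
    keys = []
    for line in unbound_output.splitlines():
        if "no TXT record" in line:
            continue  # the user has no key published
        # Accept only DNSSEC-validated answers; capture the quoted key.
        m = re.match(r'.* "(.*)" \(secure\)$', line)
        if m:
            keys.append(m.group(1))
    return keys

# Sample line in the format unbound-host prints (key truncated here).
sample = ('alice_pubkey.example.com has TXT record '
          '"ssh-ed25519 AAAAC3N" (secure)')
print(extract_keys(sample))
```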
            <p>Now I'm certain that a lot of readers will have something to say about the style or the efficiency of this shell script; I just wrote it that way to highlight the steps that need to be taken:</p><ul><li><p>it retrieves a TXT record, and doesn't output anything if the record doesn't exist</p></li><li><p>if <code>unbound-host</code> has not confirmed that the record was correctly DNSSEC signed, it doesn't output anything</p></li><li><p>if the above is successful, it strips the surrounding text to return only the public key</p></li><li><p>it doesn't try to do anything complex, because complexity is the enemy of security (or at least, that’s a point of view that I share with a few people)</p></li><li><p>it works with multiple records</p></li></ul><p>I'm sure you will write your own program to do the above. Just make sure it works only when you want it to. It is critical to ensure that it <i>doesn't return anything</i> at least when:</p><ul><li><p>a record for the corresponding user doesn't exist</p></li><li><p>the records are not signed or not properly signed</p></li><li><p>the local copy of the root key (<code>/var/unbound/root.key</code>, here) is corrupted.</p></li></ul><p>Bonus points to you if you find more cases.</p><p>Now that you have read the warning above, add the following to <code>/etc/ssh/sshd_config</code> on <code>foo</code>, <code>bar</code> and <code>baz</code>:</p>
            <pre><code>AuthorizedKeysCommand /usr/local/bin/pubkeygrab.sh
AuthorizedKeysCommandUser pubkeygrab</code></pre>
            <p>and restart <code>sshd</code>. Check that the users <code>alice</code> and <code>bob</code> exist on each machine too. Note that the above change will also apply to all existing users. Now you can go to your CloudFlare account, select the domain <code>example.com</code>, and create the TXT records <code>alice_pubkey</code> and <code>bob_pubkey</code>. Paste their respective public keys in the value field. Soon after, Alice and Bob can log in. Ask Charlie to try too. If the above works for Alice and Bob but fails for Charlie, congratulations: you have turned CloudFlare into a PKI for SSH.</p><p>If you remove the TXT records, Alice and Bob’s access should be revoked, and they should be unable to log in once the TTL of the TXT record has expired. However, note that when the output of <code>pubkeygrab.sh</code> is empty, <code>sshd</code> reverts to the usual <code>AuthorizedKeysFile</code> parameter to find a public key. If Alice and Bob are cheeky and want to keep their access after you removed their TXT records, they just need to copy their public key into that file any time before you ban them. If you don't want that, make sure the <code>AuthorizedKeysFile</code> parameter points to a place Alice and Bob can't write to.</p><p>I hope this shows how interesting DNSSEC can be, and that we will have more news on this topic soon.</p><hr /><ol><li><p>Yes, it would be better to have a dedicated record, instead of overloading TXT records. <a href="#fnref1">↩︎</a></p></li></ol> ]]></content:encoded>
            <category><![CDATA[DNSSEC]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Reliability]]></category>
            <guid isPermaLink="false">57Me2vjBS226E4c3ylZhGL</guid>
            <dc:creator>Etienne Labaume</dc:creator>
        </item>
    </channel>
</rss>