
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 08:51:52 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Moving k8s communication to gRPC]]></title>
            <link>https://blog.cloudflare.com/moving-k8s-communication-to-grpc/</link>
            <pubDate>Sat, 20 Mar 2021 14:00:00 GMT</pubDate>
            <description><![CDATA[ How we use gRPC in combination with Kubernetes to improve the performance and usability of internal APIs. ]]></description>
            <content:encoded><![CDATA[ <p>Over the past year and a half, Cloudflare has been hard at work moving our back-end services running in our non-edge locations from bare metal solutions and Mesos Marathon to a more unified approach using <a href="https://kubernetes.io/">Kubernetes (K8s)</a>. We chose Kubernetes because it allowed us to split up our monolithic application into many different microservices with granular control of communication.</p><p>For example, a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/">ReplicaSet</a> in Kubernetes can provide high availability by ensuring that the correct number of pods is always available. A <a href="https://kubernetes.io/docs/concepts/workloads/pods/">Pod</a> in Kubernetes is similar to a container in <a href="https://www.docker.com/">Docker</a>. Both are responsible for running the actual application. These pods can then be exposed through a Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/">Service</a> to abstract away the number of replicas by providing a single endpoint that load balances to the pods behind it. The services can then be exposed to the Internet via an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/">Ingress</a>. Lastly, a network policy can protect against unwanted communication by ensuring the correct policies are applied to the application. These policies can include L3 or L4 rules.</p><p>The diagram below shows a simple example of this setup.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/33zWCqFZw2iuXfhllsHgdk/e290742e17a975a98a195bccc283297f/2-3.png" />
            
            </figure><p>Though Kubernetes does an excellent job of providing the tools for communication and traffic management, it does not help the developer decide the best way to communicate between the applications running on the pods. Throughout this blog post, we will look at some of the decisions we made, why we made them, and the pros and cons of two commonly used API architectures: REST and gRPC.</p>
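As an illustration of the Service abstraction described above, a minimal manifest might look like the sketch below (all names and ports are hypothetical, not taken from our deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  selector:
    app: service-a        # matches the label on the pods the ReplicaSet manages
  ports:
  - protocol: TCP
    port: 80              # single stable port exposed by the Service
    targetPort: 8080      # port the application listens on inside each pod
```

Clients inside the cluster talk to `service-a:80` and the Service load balances across however many replicas currently exist.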
    <div>
      <h3>Out with the old, in with the new</h3>
      <a href="#out-with-the-old-in-with-the-new">
        
      </a>
    </div>
    <p>When the DNS team first moved to Kubernetes, all of our pod-to-pod communication was done through REST APIs and in many cases also included Kafka. The general communication flow was as follows:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3GQIkngkqEYcBJNFzv1xNU/fe13bc8911a11a9c26dc5925b4fe6a19/1-5.png" />
            
            </figure><p>We use Kafka because it allows us to handle large spikes in volume without losing information. For example, during a Secondary DNS zone transfer, Service A tells Service B that the zone is ready to be published to the edge. Service B then calls Service A’s REST API, generates the zone, and pushes it to the edge. If you want more information about how this works, I wrote an entire blog post about the <a href="/secondary-dns-deep-dive/">Secondary DNS pipeline</a> at Cloudflare.</p><p>HTTP worked well for most communication between these two services. However, as we scaled up and added new endpoints, we realized that as long as we control both ends of the communication, we could improve its usability and performance. In addition, sending large DNS zones over the network using HTTP often caused issues with sizing constraints and compression.</p><p>In contrast, gRPC can easily stream data between client and server and is commonly used in microservice architectures. These qualities made gRPC the obvious replacement for our REST APIs.</p>
    <div>
      <h3>gRPC Usability</h3>
      <a href="#grpc-usability">
        
      </a>
    </div>
    <p>Often overlooked from a developer’s perspective, HTTP client libraries are clunky and require code that defines paths, handles parameters, and deals with responses in bytes. gRPC abstracts all of this away and makes network calls feel like any other function calls defined for a struct.</p><p>The example below shows a very basic schema to set up a gRPC client/server system. Because gRPC uses <a href="https://developers.google.com/protocol-buffers">protobuf</a> for serialization, it is largely language agnostic. Once a schema is defined, the <i>protoc</i> command can be used to generate code for <a href="https://grpc.io/docs/languages/">many languages</a>.</p><p>Protocol Buffer data is structured as <i>messages</i>, with each <i>message</i> containing information stored in the form of fields. The fields are strongly typed, providing type safety, unlike JSON or XML. Two messages have been defined, <i>Hello</i> and <i>HelloResponse</i>. Next, we define a service called <i>HelloWorldHandler</i>, which contains one RPC function, <i>SayHello</i>, that must be implemented by any object that wants to call itself a <i>HelloWorldHandler</i>.</p><p>Simple Proto:</p>
            <pre><code>message Hello{
   string Name = 1;
}

message HelloResponse{}

service HelloWorldHandler {
   rpc SayHello(Hello) returns (HelloResponse){}
}</code></pre>
            <p>Once we run our <i>protoc</i> command, we are ready to write the server-side code. In order to implement the <i>HelloWorldHandler</i>, we must define a struct that implements all of the RPC functions specified in the protobuf schema above. In this case, the struct <i>Server</i> defines a function <i>SayHello</i> that takes two parameters, a context and a <i>*pb.Hello</i>. <i>*pb.Hello</i> was previously specified in the schema and contains one field, <i>Name</i>. <i>SayHello</i> must also return a <i>*pb.HelloResponse</i>, which has been defined without fields for simplicity.</p><p>Inside the main function, we create a TCP listener, create a new gRPC server, and then register our handler as a <i>HelloWorldHandlerServer</i>. After calling <i>Serve</i> on our gRPC server, clients will be able to communicate with the server through the function <i>SayHello</i>.</p><p>Simple Server:</p>
            <pre><code>type Server struct{}

func (s *Server) SayHello(ctx context.Context, in *pb.Hello) (*pb.HelloResponse, error) {
    fmt.Printf("%s says hello\n", in.Name)
    return &amp;pb.HelloResponse{}, nil
}

func main() {
    lis, err := net.Listen("tcp", ":8080")
    if err != nil {
        panic(err)
    }
    grpcServer := grpc.NewServer()
    handler := Server{}
    pb.RegisterHelloWorldHandlerServer(grpcServer, &amp;handler)
    if err := grpcServer.Serve(lis); err != nil {
        panic(err)
    }
}</code></pre>
            <p>Finally, we need to implement the gRPC client. First, we establish a TCP connection with the server. Then, we create a new <i>pb.HelloWorldHandlerClient</i>. The client is able to call the server's <i>SayHello</i> function by passing in a <i>*pb.Hello</i> object.</p><p>Simple Client:</p>
            <pre><code>conn, err := grpc.Dial("127.0.0.1:8080", grpc.WithInsecure())
if err != nil {
    panic(err)
}
client := pb.NewHelloWorldHandlerClient(conn)
client.SayHello(context.Background(), &amp;pb.Hello{Name: "alex"})</code></pre>
            <p>Though I have removed some code for simplicity, these <i>services</i> and <i>messages</i> can become quite complex if needed. The most important thing to understand is that when a server attempts to announce itself as a <i>HelloWorldHandlerServer</i>, it is required to implement the RPC functions as specified within the protobuf schema. This agreement between the client and server makes cross-language network calls feel like regular function calls.</p><p>In addition to the basic Unary server described above, gRPC lets you decide between four types of service methods:</p><ul><li><p><b>Unary</b> (example above): client sends a single request to the server and gets a single response back, just like a normal function call.</p></li><li><p><b>Server Streaming:</b> server returns a stream of messages in response to a client's request.</p></li><li><p><b>Client Streaming:</b> client sends a stream of messages to the server and the server replies in a single message, usually once the client has finished streaming.</p></li><li><p><b>Bi-directional Streaming:</b> the client and server can both send streams of messages to each other asynchronously.</p></li></ul>
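In schema terms, the difference between these four method types comes down to where the <i>stream</i> keyword appears. A sketch extending the earlier schema (the three streaming RPC names here are invented for illustration):

```protobuf
service HelloWorldHandler {
   // Unary: one request, one response
   rpc SayHello(Hello) returns (HelloResponse){}
   // Server streaming: one request, a stream of responses
   rpc StreamGreetings(Hello) returns (stream HelloResponse){}
   // Client streaming: a stream of requests, one response
   rpc CollectHellos(stream Hello) returns (HelloResponse){}
   // Bi-directional streaming: both sides stream independently
   rpc Chat(stream Hello) returns (stream HelloResponse){}
}
```

Running <i>protoc</i> over this schema generates the corresponding stream-aware client and server interfaces for each language.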
    <div>
      <h3>gRPC Performance</h3>
      <a href="#grpc-performance">
        
      </a>
    </div>
    <p>Not all HTTP connections are created equal. Though Golang natively supports HTTP/2, the HTTP/2 transport must be set by the client, and the server must also support HTTP/2. Before moving to gRPC, we were still using HTTP/1.1 for client connections. We could have switched to HTTP/2 for performance gains alone, but we would have missed out on the benefits of gRPC's native protobuf serialization and its usability improvements.</p><p>The best option available in HTTP/1.1 is pipelining. Pipelining means that although requests can share a connection, they must queue up one after the other until the request in front completes. HTTP/2 improves on pipelining with connection multiplexing, which allows multiple requests to be sent on the same connection at the same time.</p><p>HTTP REST APIs generally use JSON for their request and response format. Protobuf is the native request/response format of gRPC because it has a standard schema agreed upon by the client and server during registration. In addition, protobuf is known to be significantly faster than JSON due to its serialization speeds. I’ve run some benchmarks on my laptop; the source code can be found <a href="https://github.com/Fattouche/protobuf-benchmark">here</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/23TkQ80Iruo8IYflqIuobN/f7323dd5817ca47f9b203b8b63a3a980/image1-26.png" />
            
            </figure><p>As you can see, protobuf performs better in small, medium, and large data sizes. It is faster per operation, smaller after marshaling, and scales well with input size. This becomes even more noticeable when unmarshaling very large data sets: protobuf takes 96.4 ns/op while JSON takes 22647 ns/op, a 235X reduction in time! For large DNS zones, this efficiency makes a massive difference in the time it takes us to go from a record change in our API to serving it at the edge.</p><p>Combining the benefits of HTTP/2 and protobuf showed almost no performance change from our application’s point of view. This is likely due to the fact that our pods were already so close together that our connection times were already very low. In addition, most of our gRPC calls are done with small amounts of data where the difference is negligible. One thing that we did notice, likely related to the multiplexing of HTTP/2, was greater efficiency when writing newly created/edited/deleted records to the edge. Our latency spikes dropped in both amplitude and frequency.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6PDR6bZkh8zzVVv2j6tmE5/9ce516e00daa832135964246eaf7b95c/image2-19.png" />
            
            </figure>
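As noted above, Go supports HTTP/2 natively but both ends must opt in. A small stdlib-only sketch (not our production code) that starts a TLS test server with HTTP/2 enabled and checks which protocol version the client ends up speaking:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// negotiatedProto spins up a TLS test server with HTTP/2 enabled and
// reports which protocol version the client negotiated via ALPN.
func negotiatedProto() string {
	srv := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	}))
	srv.EnableHTTP2 = true // advertise h2 during the TLS handshake
	srv.StartTLS()
	defer srv.Close()

	// srv.Client() trusts the test certificate and has HTTP/2
	// enabled on its transport because EnableHTTP2 is set.
	resp, err := srv.Client().Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	return resp.Proto
}

func main() {
	fmt.Println(negotiatedProto())
}
```

Without `EnableHTTP2`, the same request would report `HTTP/1.1`: the server must support h2 and the client transport must ask for it.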
    <div>
      <h3>gRPC Security</h3>
      <a href="#grpc-security">
        
      </a>
    </div>
    <p>One of the best features in Kubernetes is the NetworkPolicy. It allows developers to control, at the network level, which traffic is allowed into and out of each pod.</p>
            <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978</code></pre>
            <p>In this example, taken from the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/">Kubernetes docs</a>, we can see that this will create a network policy called test-network-policy. This policy controls both ingress and egress communication to or from any pod that matches the role <i>db</i> and enforces the following rules:</p><p>Ingress connections allowed:</p><ul><li><p>Any pod in the default namespace with the label “role=frontend”</p></li><li><p>Any pod in any namespace that has the label “project=myproject”</p></li><li><p>Any source IP address in 172.17.0.0/16 except for 172.17.1.0/24</p></li></ul><p>Egress connections allowed:</p><ul><li><p>Any destination IP address in 10.0.0.0/24</p></li></ul><p>NetworkPolicies do a fantastic job of protecting APIs at the network level. However, they do nothing to protect APIs at the application level. If you wanted to control which endpoints can be accessed within the API, you would need Kubernetes to distinguish not only between pods, but also between endpoints within those pods. These concerns led us to <a href="https://grpc.io/docs/guides/auth/">per-RPC credentials</a>. Per-RPC credentials are easy to set up on top of pre-existing gRPC code. All you need to do is add interceptors to both your stream and unary handlers.</p>
            <pre><code>func (s *Server) UnaryAuthInterceptor(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
    // Get the targeted function
    functionInfo := strings.Split(info.FullMethod, "/")
    function := functionInfo[len(functionInfo)-1]
    md, ok := metadata.FromIncomingContext(ctx)
    if !ok || len(md.Get("username")) == 0 || len(md.Get("password")) == 0 {
        // Reject requests that arrive without credentials in their metadata.
        return nil, errors.New("missing credentials")
    }

    // Authenticate
    err := authenticateClient(md.Get("username")[0], md.Get("password")[0], function)
    // Blocked
    if err != nil {
        return nil, err
    }
    // Verified
    return handler(ctx, req)
}</code></pre>
            <p>In this example code snippet, we grab the username and password from the request metadata and the requested function from the info object. We then authenticate the client to make sure that it has the correct rights to call that function. This interceptor will run before any of the other functions get called, which means one implementation protects all functions. The client would initialize its secure connection and send credentials like so:</p>
            <pre><code>transportCreds, err := credentials.NewClientTLSFromFile(certFile, "")
if err != nil {
    return nil, err
}
perRPCCreds := Creds{Password: grpcPassword, User: user}
conn, err := grpc.Dial(endpoint, grpc.WithTransportCredentials(transportCreds), grpc.WithPerRPCCredentials(perRPCCreds))
if err != nil {
    return nil, err
}
client := pb.NewRecordHandlerClient(conn)
// Can now start using the client</code></pre>
            <p>Here the client first verifies that the server matches the certFile. This step ensures that the client does not accidentally send its password to a bad actor. Next, the client initializes the <i>perRPCCreds</i> struct with its username and password and dials the server with that information. Any time the client makes a call to an RPC-defined function, its credentials will be verified by the server.</p>
    <div>
      <h3>Next Steps</h3>
      <a href="#next-steps">
        
      </a>
    </div>
    <p>Our next step is to remove the need for many applications to access the database and ultimately DRY up our codebase by pulling all DNS-related code into a single API, accessed from one gRPC interface. This removes the potential for mistakes in individual applications and makes updating our database schema easier. It also gives us more granular control over which functions can be accessed rather than which tables can be accessed.</p><p>So far, the DNS team is very happy with the results of our gRPC migration. However, we still have a long way to go before we can move entirely away from REST. We are also patiently waiting for <a href="https://github.com/grpc/grpc/issues/19126">HTTP/3 support</a> for gRPC so that we can take advantage of those super <a href="https://en.wikipedia.org/wiki/QUIC">quic</a> speeds!</p> ]]></content:encoded>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[gRPC]]></category>
            <category><![CDATA[Kubernetes]]></category>
            <category><![CDATA[HTTP2]]></category>
            <category><![CDATA[QUIC]]></category>
            <guid isPermaLink="false">6oeh7RRYqhqnS7vtJ2BpwP</guid>
            <dc:creator>Alex Fattouche</dc:creator>
        </item>
        <item>
            <title><![CDATA[Road to gRPC]]></title>
            <link>https://blog.cloudflare.com/road-to-grpc/</link>
            <pubDate>Mon, 26 Oct 2020 16:40:02 GMT</pubDate>
            <description><![CDATA[ Cloudflare launched support for gRPC during our 2020 Birthday Week. In this post, we’ll do a deep-dive into the technical details of how we implemented support. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/8yumfi3N8b2OaUtMbTeLj/fa69604b79cf96671d6f1d798fba8621/image1-38.png" />
            
            </figure><p>Cloudflare launched support for <a href="/announcing-grpc/">gRPC</a>® during our 2020 Birthday Week. We’ve been humbled by the immense interest in the beta, and we’d like to thank everyone that has applied and tried out gRPC! In this post we’ll do a deep-dive into the technical details on how we implemented support.</p>
    <div>
      <h3>What is gRPC?</h3>
      <a href="#what-is-grpc">
        
      </a>
    </div>
    <p><a href="https://grpc.io/">gRPC</a> is an open source RPC framework running over HTTP/2. RPC (remote procedure call) is a way for one machine to tell another machine to do something, rather than calling a local function in a library. RPC has a long history in distributed computing, with different implementations focusing on different areas. What makes gRPC unique are the following characteristics:</p><ul><li><p>It requires the modern HTTP/2 protocol for transport, which is now widely available.</p></li><li><p>A full client/server reference implementation, demo, and test suites are available as <a href="https://github.com/grpc">open source</a>.</p></li><li><p>It does not specify a message format, although <a href="https://developers.google.com/protocol-buffers">Protocol Buffers</a> are the preferred serialization mechanism.</p></li><li><p>Both clients and servers can stream data, which avoids having to poll for new data or create new connections.</p></li></ul><p>In terms of the protocol, <a href="https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md">gRPC uses HTTP/2</a> frames extensively: requests and responses look very similar to a normal HTTP/2 request.</p><p>What’s unusual, however, is gRPC’s use of the HTTP trailer. While it’s not widely used in the wild, <a href="https://tools.ietf.org/html/rfc2616#section-3.6.1">HTTP trailers have been around since 1999, as defined in the original HTTP/1.1 RFC 2616</a>. HTTP message headers are defined to come before the HTTP message body, but an HTTP trailer is a set of headers that can be appended <i>after</i> the message body. Because there are not many use cases for trailers, many server and client implementations don't fully support them. While HTTP/1.1 needs to use chunked transfer encoding for its body to send an HTTP trailer, in HTTP/2 the trailer is sent in a HEADERS frame after the DATA frames of the body.</p><p>There are some cases where an HTTP trailer is useful. For example, an HTTP response code indicates the status of a request, but it is the very first line of the HTTP response, so the response code has to be decided very early. A trailer makes it possible to send some metadata after the body. For example, let’s say your web server sends a stream of large data (which is not a fixed size), and in the end you want to send a SHA256 checksum of the data so that the client can verify the contents. Normally, this is not possible with an HTTP status code or a response header, which must be sent at the beginning of the response. Using an HTTP trailer, you can send another header (e.g. <a href="https://tools.ietf.org/html/draft-ietf-httpbis-digest-headers-04#section-10.11">Digest</a>) after having sent all the data.</p><p>gRPC uses HTTP trailers for two purposes. First, it sends its final status (grpc-status) as a trailer after the content has been sent. Second, it uses trailers to support streaming use cases, which last much longer than normal HTTP requests: the trailer carries the post-processing result of the request or the response. For example, if there is an error during streaming data processing, you can send an error code using the trailer, which is not possible with the headers that precede the message body.</p><p>Here is a simple example of a gRPC request and response in HTTP/2 frames:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1MTBNtDhBqIvqND3l0Fv1w/95fdfb416852a592ce0b05d66dec5865/image3-24.png" />
            
            </figure>
    <div>
      <h3>Adding gRPC support to the Cloudflare Edge</h3>
      <a href="#adding-grpc-support-to-the-cloudflare-edge">
        
      </a>
    </div>
    <p>Since gRPC uses HTTP/2, it may sound easy to support gRPC natively, because Cloudflare already supports <a href="/introducing-http2/">HTTP/2</a>. However, we had a couple of issues:</p><ul><li><p>The HTTP request/response trailer headers were not fully supported by our edge proxy: Cloudflare uses NGINX to accept traffic from eyeballs, and it has limited support for trailers. Further complicating things, requests and responses flowing through Cloudflare go through a set of other proxies.</p></li><li><p>HTTP/2 to origin: our edge proxy uses HTTP/1.1 to fetch objects (whether dynamic or static) from origin. To proxy gRPC traffic, we needed to support connections to customer gRPC origins over HTTP/2.</p></li><li><p>gRPC streaming needs to allow bidirectional request/response flow: gRPC has two types of protocol flow; one is unary, which is a simple request and response, and the other is streaming, which allows non-stop data flow in each direction. To fully support streaming, the HTTP message body needs to be sent after receiving the response header on the other end. For example, <a href="https://grpc.io/docs/what-is-grpc/core-concepts/#client-streaming-rpc">client streaming</a> will keep sending a request body after receiving a response header.</p></li></ul><p>For these reasons, gRPC requests would break when proxied through our network. To overcome these limitations, we looked at various solutions. For example, NGINX has <a href="https://www.nginx.com/blog/nginx-1-13-10-grpc/">a gRPC upstream module</a> to support HTTP/2 gRPC origins, but it’s a separate module, and it also requires HTTP/2 downstream, which cannot be used for our service, as requests cascade through multiple HTTP proxies in some cases. Using HTTP/2 everywhere in the pipeline is not realistic, because of the characteristics of <a href="/keepalives-considered-harmful/">our internal load balancing architecture</a>, and because it would have taken too much effort to make sure all internal traffic uses HTTP/2.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3p9teukTmZfs4wcyWggUZr/07ab8bf40f0bfe57338d3ae770a8bb93/image2-25.png" />
            
            </figure>
    <div>
      <h3>Converting to HTTP/1.1?</h3>
      <a href="#converting-to-http-1-1">
        
      </a>
    </div>
    <p>Ultimately, we discovered a better way: convert gRPC messages to HTTP/1.1 messages without a trailer <i>inside our network,</i> and then convert them back to HTTP/2 before sending the request off to origin. This would work with most HTTP proxies inside Cloudflare that don't support HTTP trailers, and we would need minimal changes.</p><p>Rather than invent our own format, we turned to one the gRPC community had already come up with: <a href="https://github.com/grpc/grpc-web">gRPC-web</a>, an HTTP/1.1-compatible modification of the original HTTP/2 based gRPC specification. It was originally designed for web browsers, which lack direct access to HTTP/2 frames. With gRPC-web, the HTTP trailer is moved into the body, so we don’t need to worry about HTTP trailer support inside the proxy. It also comes with streaming support. The resulting HTTP/1.1 message can still be inspected by our security products, such as WAF and Bot Management, to provide the same level of security that Cloudflare brings to other HTTP traffic.</p><p>When an HTTP/2 gRPC message is received at Cloudflare’s edge proxy, the message is “converted” to the HTTP/1.1 gRPC-web format. Once the gRPC message is converted, it goes through our pipeline, with services such as WAF, Cache, and Argo applied the same way as for any normal HTTP request.</p><p>Right before a gRPC-web message leaves the Cloudflare network, it is “converted back” to HTTP/2 gRPC. Requests that are converted by our system are marked so that our system won’t accidentally convert gRPC-web traffic originating from clients.</p>
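The key to gRPC-web's proxy friendliness is its body framing: per the gRPC-web specification, each message in the body is prefixed by one flag byte (most significant bit set marks a trailer frame) and a 4-byte big-endian length, so the trailers travel inside the body rather than as HTTP trailers. A minimal sketch of that framing (the conversion logic in our proxies is, of course, more involved):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// trailerFlag marks a frame that carries trailers instead of message data.
const trailerFlag = 0x80

// encodeFrame wraps a payload in gRPC-web's length-prefixed framing:
// one flag byte followed by a 4-byte big-endian payload length.
func encodeFrame(flag byte, payload []byte) []byte {
	buf := make([]byte, 5+len(payload))
	buf[0] = flag
	binary.BigEndian.PutUint32(buf[1:5], uint32(len(payload)))
	copy(buf[5:], payload)
	return buf
}

// decodeFrames splits a body back into (flag, payload) pairs.
func decodeFrames(stream []byte) (flags []byte, payloads [][]byte) {
	for len(stream) >= 5 {
		n := binary.BigEndian.Uint32(stream[1:5])
		flags = append(flags, stream[0])
		payloads = append(payloads, stream[5:5+n])
		stream = stream[5+n:]
	}
	return
}

func main() {
	// A data frame followed by a trailer frame, as a gRPC-web
	// response body would carry them.
	var body bytes.Buffer
	body.Write(encodeFrame(0x00, []byte("message bytes")))
	body.Write(encodeFrame(trailerFlag, []byte("grpc-status: 0\r\n")))

	flags, payloads := decodeFrames(body.Bytes())
	for i := range flags {
		fmt.Printf("flag=0x%02x payload=%q\n", flags[i], payloads[i])
	}
}
```

Because both frames are ordinary body bytes, any HTTP/1.1 proxy in the path can forward them without understanding trailers at all.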
    <div>
      <h3>HTTP/2 Origin Support</h3>
      <a href="#http-2-origin-support">
        
      </a>
    </div>
    <p>One of the engineering challenges was supporting HTTP/2 connections to origins. Before this project, Cloudflare didn't have the ability to connect to origins via HTTP/2.</p><p>Therefore, we decided to build HTTP/2 origin support in-house: a standalone origin proxy that is able to connect to origins via HTTP/2. On top of this new platform, we implemented the conversion logic for gRPC. gRPC support is the first feature that takes advantage of this new platform, and broader support for HTTP/2 connections to origin servers is on the roadmap.</p>
    <div>
      <h3>gRPC Streaming Support</h3>
      <a href="#grpc-streaming-support">
        
      </a>
    </div>
    <p>As explained above, gRPC has a streaming mode in which the request body or response body can be sent as a stream; over the lifetime of a gRPC request, message blocks can be sent at any time. At the end of the stream, there will be a HEADERS frame indicating the end of the stream. When it’s converted to gRPC-web, we send the body using chunked encoding and keep the connection open, accepting body data in both directions until we get the gRPC message block that indicates the end of the stream. This requires our proxy to support bidirectional transfer.</p><p>For example, client streaming is an interesting mode where the server has already responded with a response code and its header, but the client is still able to send the request body.</p>
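Keeping an HTTP/1.1 connection open and forwarding message blocks as they are produced can be sketched with Go's http.Flusher: each flush emits its own chunk while the response stays open. A toy example (not our proxy code):

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// streamChunks serves three message blocks over a single chunked
// response, flushing each one immediately, and returns what the
// client read line by line.
func streamChunks() []string {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		flusher, ok := w.(http.Flusher)
		if !ok {
			http.Error(w, "streaming unsupported", http.StatusInternalServerError)
			return
		}
		for i := 1; i <= 3; i++ {
			// Each Write+Flush goes out as its own chunk while
			// the connection stays open.
			fmt.Fprintf(w, "block %d\n", i)
			flusher.Flush()
		}
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var blocks []string
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		blocks = append(blocks, scanner.Text())
	}
	return blocks
}

func main() {
	fmt.Println(streamChunks())
}
```

In a real bidirectional proxy, the same trick runs in both directions at once, which is exactly the capability we had to add.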
    <div>
      <h3>Interoperability Testing</h3>
      <a href="#interoperability-testing">
        
      </a>
    </div>
    <p>Every new feature at Cloudflare needs proper testing before release. During initial development, we used the <a href="https://www.envoyproxy.io/">Envoy</a> proxy with its <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/grpc_web_filter">gRPC-web filter</a> feature and official examples of gRPC. We prepared a test environment with Envoy and a gRPC test origin to make sure that the edge proxy worked properly with gRPC requests. Requests from the gRPC test client are sent to the edge proxy, converted to gRPC-web, and forwarded to the Envoy proxy; Envoy then converts them back to gRPC requests and sends them to the gRPC test origin. We were able to verify the basic behavior in this way.</p><p>Once we had basic functionality ready, we also needed to make sure both ends’ conversion functionality worked properly. To do that, we built deeper interoperability testing.</p><p>We referenced the existing <a href="https://github.com/grpc/grpc/blob/master/doc/interop-test-descriptions.md">gRPC interoperability test cases</a> for our test suite and ran the first iteration of tests between the edge proxy and the new origin proxy locally.</p><p>For the second iteration of tests we used different gRPC implementations. For example, some servers sent their final status (grpc-status) in a <a href="https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#responses">trailers-only</a> response when there was an immediate error. This response would contain the HTTP/2 response headers and trailer in a single HEADERS frame block with both the END_STREAM and END_HEADERS flags set. Other implementations sent the final status as a trailer in a separate HEADERS frame.</p><p>After verifying interoperability locally, we ran the test harness against a development environment that supports all the services we have in production. We were then able to ensure no unintended side effects were impacting gRPC requests.</p><p>We love dogfooding! One of the first services we successfully deployed edge gRPC support to is the <a href="/inside-the-entropy/">Cloudflare drand randomness beacon</a>. Onboarding was easy, and we’ve been running the beacon in production for the last few weeks without a hitch.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Supporting a new protocol is exciting work! Implementing support for new technologies in existing systems is exciting <i>and</i> intricate, often involving tradeoffs between speed of implementation and overall system complexity. In the case of gRPC, we were able to build support quickly and in a way that did not require significant changes to the Cloudflare edge. This was accomplished by carefully considering implementation options before settling on the idea of converting between the HTTP/2 gRPC and HTTP/1.1 gRPC-web formats. This design choice made service integration quicker and easier while still satisfying our users’ expectations and constraints.</p><p>If you are interested in using Cloudflare to secure and accelerate your gRPC service, you can read more <a href="/announcing-grpc/">here</a>. And if you want to work on interesting engineering challenges like the one described in this post, <a href="https://www.cloudflare.com/careers/">apply</a>!</p><p><i>gRPC® is a registered trademark of The Linux Foundation.</i></p>
            <category><![CDATA[gRPC]]></category>
            <guid isPermaLink="false">2aHUwBYwekbmiMXHlzwUDf</guid>
            <dc:creator>Junho Choi</dc:creator>
            <dc:creator>Yuchen Wu</dc:creator>
            <dc:creator>Sangjo Lee</dc:creator>
            <dc:creator>Andrew Hauck</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing support for gRPC]]></title>
            <link>https://blog.cloudflare.com/announcing-grpc/</link>
            <pubDate>Thu, 01 Oct 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today we're excited to announce beta support for proxying gRPC, a next-generation protocol that allows you to build APIs at scale. With gRPC on Cloudflare, you get access to the security, reliability and performance features that you're used to having at your fingertips for traditional APIs. ]]></description>
            <content:encoded><![CDATA[ <p>Today we're excited to announce beta support for proxying <a href="https://grpc.io/">gRPC</a>, a next-generation protocol that allows you to build <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">APIs</a> at scale. With gRPC on Cloudflare, you get access to the security, reliability and performance features that you're used to having at your fingertips for traditional APIs. Sign up for the beta today in the Network tab of the Cloudflare dashboard.</p><p>gRPC has proven itself to be a popular new protocol for building APIs at scale: it’s more efficient and built to offer superior bi-directional streaming capabilities. However, because gRPC uses newer technology, like HTTP/2, under the covers, existing security and performance tools did not support gRPC traffic out of the box. This meant that customers adopting gRPC to power their APIs had to pick between modernity on one hand, and things like security, performance, and reliability on the other. Because supporting modern protocols and making sure people can operate them safely and performantly is in our DNA, we set out to fix this.</p><p>When you put your gRPC APIs on Cloudflare, you immediately gain the benefits that come with Cloudflare. Apprehensive of exposing your APIs to bad actors? Add security features such as WAF and Bot Management. Need more performance? Turn on Argo Smart Routing to decrease time to first byte. Or increase reliability by adding a Load Balancer.</p><p>And naturally, gRPC plugs in to <a href="/introducing-api-shield">API Shield</a>, allowing you to add more security by enforcing client authentication and schema validation at the edge.</p>
    <div>
      <h3>What is gRPC?</h3>
      <a href="#what-is-grpc">
        
      </a>
    </div>
    <p>Protocols like JSON-REST have been the bread and butter of Internet-facing APIs for several years. They're great in that they operate over HTTP, their payloads are human readable, and a large body of tooling exists to quickly set up an API for another machine to talk to. However, the same things that make these protocols popular are also weaknesses: JSON, for example, is inefficient to store and transmit, and expensive for computers to parse.</p><p>In 2015, Google introduced <a href="https://grpc.io/">gRPC</a>, a protocol designed to be fast and efficient, relying on binary protocol buffers to serialize messages before they are transferred over the wire. This prevents (normal) humans from reading them but results in much higher processing efficiency. gRPC has become increasingly popular in the era of microservices because it neatly addresses the shortfalls laid out above.</p><table><thead><tr><th>JSON</th><th>Protocol Buffers</th></tr></thead><tbody><tr><td>{ “foo”: “bar” }</td><td>0b111001001100001011000100000001100001010</td></tr></tbody></table><p>gRPC relies on HTTP/2 as a transport mechanism. This poses a problem for customers trying to deploy common security technologies like web application firewalls, as most reverse proxy solutions (including Cloudflare’s HTTP stack, until today) downgrade HTTP requests to HTTP/1.1 before sending them off to an origin.</p><p>Beyond its original use case of microservices in a data center, gRPC adoption has grown in many other contexts. Many popular mobile apps have millions of users who all rely on messages being sent back and forth between mobile phones and servers. We've seen many customers wire up API connectivity for their mobile apps by using the same gRPC API endpoints they already have inside their data centers for communication with clients in the outside world.</p><p>While this solves the efficiency issues of running services at scale, it exposes critical parts of these customers' infrastructure to the Internet, introducing security and reliability issues. Today we are introducing support for gRPC at Cloudflare to secure and improve the experience of running gRPC APIs on the Internet.</p>
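<p>To make the size difference concrete, here is a minimal, hand-rolled sketch of the protobuf wire encoding for a single string field. Real applications would use generated protobuf classes; the helper names here are our own illustration:</p>

```python
import json

def encode_varint(n):
    # Protobuf base-128 varint: 7 bits per byte, least significant group
    # first, with the high bit set on every byte except the last
    out = bytearray()
    while True:
        bits, n = n & 0x7F, n >> 7
        out.append(bits | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_string_field(field_number, value):
    # Tag byte = (field_number << 3) | wire type 2 (length-delimited),
    # followed by a varint length and the UTF-8 bytes of the string
    payload = value.encode("utf-8")
    return (encode_varint((field_number << 3) | 2)
            + encode_varint(len(payload))
            + payload)

proto = encode_string_field(1, "bar")        # 5 bytes: 0x0A 0x03 b"bar"
text = json.dumps({"foo": "bar"}).encode()   # 14 bytes of JSON
```

<p>Five bytes on the wire instead of fourteen, and no text parsing on the receiving end.</p>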
    <div>
      <h3>How does gRPC + Cloudflare work?</h3>
      <a href="#how-does-grpc-cloudflare-work">
        
      </a>
    </div>
    <p>The engineering work our team had to do to add gRPC support is composed of a few pieces:</p><ol><li><p><b>Changes to the early stages of our request processing pipeline to identify gRPC traffic</b> coming down the wire.</p></li><li><p><b>Additional functionality in our WAF to “understand” gRPC traffic</b>, ensuring gRPC connections are handled correctly within the WAF, including inspecting all components of the initial gRPC connection request.</p></li><li><p><b>Adding support to establish HTTP/2 connections to customer origins</b> for gRPC traffic, allowing gRPC to be proxied through our edge. HTTP/2 to origin support is currently limited to gRPC traffic, though we expect to expand the scope of traffic proxied back to origin over HTTP/2 soon.</p></li></ol><p>What does this mean for you, a Cloudflare customer interested in using our <a href="https://www.cloudflare.com/application-services/solutions/api-security/">tools</a> to secure and accelerate your API? Because of the hard work we’ve done, enabling support for gRPC is a click of a button in the Cloudflare dashboard.</p>
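<p>As a rough illustration of the first piece: the gRPC-over-HTTP/2 specification makes gRPC traffic easy to recognize, since calls are POST requests whose Content-Type is <code>application/grpc</code>, optionally suffixed with a codec such as <code>+proto</code>. A proxy could classify requests along these lines (our sketch, not Cloudflare's actual pipeline code):</p>

```python
def is_grpc_request(method, headers):
    # Per the gRPC-over-HTTP/2 spec, gRPC calls are POSTs with a
    # Content-Type of "application/grpc" or "application/grpc+<codec>"
    if method.upper() != "POST":
        return False
    ctype = headers.get("content-type", "").lower()
    # Note: "application/grpc-web" deliberately does not match here,
    # since gRPC-web traffic takes a different path
    return ctype == "application/grpc" or ctype.startswith("application/grpc+")
```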
    <div>
      <h3>Using gRPC to build mobile apps at scale</h3>
      <a href="#using-grpc-to-build-mobile-apps-at-scale">
        
      </a>
    </div>
    <p>Why does Cloudflare supporting gRPC matter? To dig in on one use case, let’s look at mobile apps. Apps need quick, efficient ways of interacting with servers to get the information needed to show on your phone. There is no browser, so they rely on <i>APIs</i> to get that information. API stands for application programming interface; it is a standardized way for machines (say, your phone and a server) to talk to each other.</p><p>Let's say we're a mobile app developer with thousands, or even millions, of users. With this many users, a modern protocol like gRPC allows us to run less compute infrastructure than would be necessary with older, less efficient protocols like JSON-REST. But exposing these endpoints, naked, on the Internet is really scary. Up until now there were very few, if any, options for protecting gRPC endpoints against application layer attacks with a <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/">WAF</a> and guarding against volumetric attacks with DDoS mitigation tools. That changes today, with Cloudflare adding gRPC to its set of supported protocols.</p><p>With gRPC on Cloudflare, you get the benefits of our security, reliability and performance products:</p><ul><li><p>WAF for inspection of incoming gRPC requests. Use managed rules or craft your own.</p></li><li><p>Load Balancing to increase reliability: configure multiple gRPC backends to handle the load, and let Cloudflare distribute traffic across them. Backend selection can be done in round-robin fashion, based on health checks, or based on load.</p></li><li><p>Argo Smart Routing to increase performance by transporting your gRPC messages faster than the public Internet would otherwise route them. Messages are routed around congestion, reducing time to first byte by an average of 30%.</p></li></ul><p>And of course, all of this works with <a href="/introducing-api-shield">API Shield</a>, an easy way to add mTLS authentication to any API endpoint.</p>
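<p>To illustrate the round-robin selection mentioned above, here is a toy sketch of cycling through healthy backends. It is purely illustrative, not how Cloudflare's load balancer is implemented:</p>

```python
from itertools import cycle

class RoundRobinPool:
    """Toy round-robin selection over a set of gRPC backends."""

    def __init__(self, backends):
        self.backends = list(backends)
        self._cycle = cycle(self.backends)

    def pick(self, healthy=None):
        # Walk the rotation, skipping backends that failed health checks;
        # give up after one full lap so we never loop forever
        healthy = set(self.backends) if healthy is None else healthy
        for _ in range(len(self.backends)):
            backend = next(self._cycle)
            if backend in healthy:
                return backend
        raise RuntimeError("no healthy backends available")
```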
    <div>
      <h3>Enabling gRPC support</h3>
      <a href="#enabling-grpc-support">
        
      </a>
    </div>
    <p>To enable gRPC support, head to the <a href="https://dash.cloudflare.com">Cloudflare dashboard</a> and go to the Network tab. From there you can sign up for the beta.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1dXJ7lGxzg9e6EPFqgBTD7/74f4d6167eb4b151a6186af624a5ed66/image1-1.png" />
            
            </figure><p>We have limited seats available at launch, but will open up more broadly over the next few weeks. After signing up and toggling gRPC support, you’ll have to enable Cloudflare proxying on your domain on the DNS tab to activate Cloudflare for your gRPC API.</p><p>We’re excited to bring gRPC support to the masses, allowing you to add the security, reliability and performance benefits that you’re used to getting with Cloudflare. Enabling is just a click away. Take it for a spin and let us know what you think!</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[gRPC]]></category>
            <guid isPermaLink="false">4AG91cetalX3slOwHJWxYz</guid>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
    </channel>
</rss>