
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 23:53:01 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Disrupting FlyingYeti's campaign targeting Ukraine]]></title>
            <link>https://blog.cloudflare.com/disrupting-flyingyeti-campaign-targeting-ukraine/</link>
            <pubDate>Thu, 30 May 2024 13:00:38 GMT</pubDate>
            <description><![CDATA[ In April and May 2024, Cloudforce One employed proactive defense measures to successfully prevent Russia-aligned threat actor FlyingYeti from launching their latest phishing campaign targeting Ukraine ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Cloudforce One is publishing the results of our investigation and real-time effort to detect, deny, degrade, disrupt, and delay threat activity by the Russia-aligned threat actor FlyingYeti during their latest phishing campaign targeting Ukraine. At the onset of Russia’s invasion of Ukraine on February 24, 2022, Ukraine introduced a moratorium on evictions and termination of utility services for unpaid debt. The moratorium ended in January 2024, resulting in significant debt liability and increased financial stress for Ukrainian citizens. The FlyingYeti campaign capitalized on anxiety over the potential loss of access to housing and utilities by enticing targets to open malicious files via debt-themed lures. If opened, the files would result in infection with the PowerShell malware known as <a href="https://cert.gov.ua/article/6277849?ref=news.risky.biz">COOKBOX</a>, allowing FlyingYeti to support follow-on objectives, such as installation of additional payloads and control over the victim’s system.</p><p>Since April 26, 2024, Cloudforce One has taken measures to prevent FlyingYeti from launching their phishing campaign – a campaign involving the use of Cloudflare Workers and GitHub, as well as exploitation of the WinRAR vulnerability <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-38831">CVE-2023-38831</a>. Our countermeasures included internal actions, such as detections and code takedowns, as well as external collaboration with third parties to remove the actor’s cloud-hosted malware. Our effectiveness against this actor prolonged their operational timeline from days to weeks. For example, in a single instance, FlyingYeti spent almost eight hours debugging their code as a result of our mitigations. By employing proactive defense measures, we successfully stopped this determined threat actor from achieving their objectives.</p>
    <div>
      <h3>Executive Summary</h3>
      <a href="#executive-summary">
        
      </a>
    </div>
    <ul><li><p>On April 18, 2024, Cloudforce One detected the Russia-aligned threat actor FlyingYeti preparing to launch a phishing espionage campaign targeting individuals in Ukraine.</p></li><li><p>We discovered the actor used similar tactics, techniques, and procedures (TTPs) as those detailed in <a href="https://cert.gov.ua/article/6278620">Ukrainian CERT's article on UAC-0149</a>, a threat group that has primarily <a href="https://cert.gov.ua/article/6277849?ref=news.risky.biz">targeted Ukrainian defense entities with COOKBOX malware since at least the fall of 2023</a>.</p></li><li><p>From mid-April to mid-May, we observed FlyingYeti conduct reconnaissance activity, create lure content for use in their phishing campaign, and develop various iterations of their malware. We assessed that the threat actor intended to launch their campaign in early May, likely following Orthodox Easter.</p></li><li><p>After several weeks of monitoring actor reconnaissance and weaponization activity (<a href="https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html">Cyber Kill Chain Stages 1 and 2</a>), we successfully disrupted FlyingYeti’s operation moments after the final COOKBOX payload was built.</p></li><li><p>The payload included an exploit for the WinRAR vulnerability CVE-2023-38831, which FlyingYeti will likely continue to use in their phishing campaigns to infect targets with malware.</p></li><li><p>We offer steps users can take to defend themselves against FlyingYeti phishing operations, and provide recommendations, detections, and indicators of compromise.</p></li></ul>
    <div>
      <h2>Who is FlyingYeti?</h2>
      <a href="#who-is-flyingyeti">
        
      </a>
    </div>
    <p>FlyingYeti is the <a href="https://www.merriam-webster.com/dictionary/cryptonym">cryptonym</a> given by <a href="/introducing-cloudforce-one-threat-operations-and-threat-research">Cloudforce One</a> to the threat group behind this phishing campaign, which overlaps with UAC-0149 activity tracked by <a href="https://cert.gov.ua/">CERT-UA</a> in <a href="https://cert.gov.ua/article/6277849?ref=news.risky.biz">February</a> and <a href="https://cert.gov.ua/article/6278620">April</a> 2024. The threat actor uses dynamic DNS (<a href="https://www.cloudflare.com/learning/dns/glossary/dynamic-dns/">DDNS</a>) for their infrastructure and leverages cloud-based platforms for hosting malicious content and for malware command and control (C2). Our investigation of FlyingYeti TTPs suggests this is likely a Russia-aligned threat group. The actor appears to primarily focus on targeting Ukrainian military entities. Additionally, we observed Russian-language comments in FlyingYeti’s code and noted that the actor’s operational hours fall within the UTC+3 time zone.</p>
    <div>
      <h2>Campaign background</h2>
      <a href="#campaign-background">
        
      </a>
    </div>
    <p>In the days leading up to the start of the campaign, Cloudforce One observed FlyingYeti conducting reconnaissance on payment processes for Ukrainian communal housing and utility services:</p><ul><li><p>April 22, 2024 – research into changes made in 2016 that introduced the use of QR codes in payment notices</p></li><li><p>April 22, 2024 – research on current developments concerning housing and utility debt in Ukraine</p></li><li><p>April 25, 2024 – research on the legal basis for restructuring housing debt in Ukraine as well as debt involving utilities, such as gas and electricity</p></li></ul><p>Cloudforce One judges that the observed reconnaissance is likely due to the Ukrainian government’s payment moratorium introduced at the start of the full-fledged invasion in February 2022. Under this moratorium, outstanding debt would not lead to evictions or termination of provision of utility services. However, on January 9, 2024, the <a href="https://en.interfax.com.ua/news/economic/959388.html">government lifted this ban</a>, resulting in increased pressure on Ukrainian citizens with outstanding debt. FlyingYeti sought to capitalize on that pressure, leveraging debt restructuring and payment-related lures in an attempt to increase their chances of successfully targeting Ukrainian individuals.</p>
    <div>
      <h2>Analysis of the Komunalka-themed phishing site</h2>
      <a href="#analysis-of-the-komunalka-themed-phishing-site">
        
      </a>
    </div>
    <p>The disrupted phishing campaign would have directed FlyingYeti targets to an actor-controlled GitHub page at hxxps[:]//komunalka[.]github[.]io, which is a spoofed version of the Kyiv Komunalka communal housing site <a href="https://www.komunalka.ua">https://www.komunalka.ua</a>. Komunalka functions as a payment processor for residents in the Kyiv region and allows for payment of utilities, such as gas, electricity, telephone, and Internet. Additionally, users can pay other fees and fines, and even donate to Ukraine’s defense forces.</p><p>Based on past FlyingYeti operations, targets may be directed to the actor’s GitHub page via a link in a phishing email or an encrypted Signal message. If a target accesses the spoofed Komunalka platform at hxxps[:]//komunalka[.]github[.]io, the page displays a large green button with a prompt to download the document “Рахунок.docx” (“Invoice.docx”), as shown in Figure 1. This button masquerades as a link to an overdue payment invoice but actually results in the download of the malicious archive “Заборгованість по ЖКП.rar” (“Debt for housing and utility services.rar”).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/22Rnm7YOnwnJocG98RMFDa/def10039081f7e9c6df15980a8b855ac/image4-5.png" />
            
            </figure><p>Figure 1: Prompt to download malicious archive “Заборгованість по ЖКП.rar”</p><p>A series of steps must take place for the download to successfully occur:</p><ul><li><p>The target clicks the green button on the actor’s GitHub page hxxps[:]//komunalka.github[.]io</p></li><li><p>The target’s device sends an HTTP POST request to the Cloudflare Worker worker-polished-union-f396[.]vqu89698[.]workers[.]dev with the HTTP request body set to “user=Iahhdr”</p></li><li><p>The Cloudflare Worker processes the request and evaluates the HTTP request body</p></li><li><p>If the request conditions are met, the Worker fetches the RAR file from hxxps[:]//raw[.]githubusercontent[.]com/kudoc8989/project/main/Заборгованість по ЖКП.rar, which is then downloaded on the target’s device</p></li></ul><p>Cloudforce One identified the infrastructure responsible for facilitating the download of the malicious RAR file and remediated the actor-associated Worker, preventing FlyingYeti from delivering its malicious tooling. In an effort to circumvent Cloudforce One's mitigation measures, FlyingYeti later changed their malware delivery method. Instead of the Workers domain fetching the malicious RAR file, it was loaded directly from GitHub.</p>
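<p>The download steps above amount to a simple gate: only an HTTP POST whose body is exactly “user=Iahhdr” is answered with the RAR's location. The following Python sketch is a hypothetical model of that observed behavior, not the actor's actual Worker code; the URL is kept defanged on purpose.</p>

```python
# Hypothetical model of the delivery gate described above. The actor's real
# logic ran as a Cloudflare Worker; this only illustrates the request check.
PAYLOAD_URL = ("hxxps[:]//raw[.]githubusercontent[.]com/kudoc8989/project/main/"
               "Заборгованість по ЖКП.rar")  # kept defanged deliberately

def resolve_request(method: str, body: str):
    """Return the payload location when the request matches the actor's
    conditions (HTTP POST with body "user=Iahhdr"); otherwise return None,
    which a real Worker would turn into a 404."""
    if method == "POST" and body == "user=Iahhdr":
        return PAYLOAD_URL
    return None
```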
    <div>
      <h2>Analysis of the malicious RAR file</h2>
      <a href="#analysis-of-the-malicious-rar-file">
        
      </a>
    </div>
    <p>During remediation, Cloudforce One recovered the RAR file “Заборгованість по ЖКП.rar” and performed analysis of the malicious payload. The downloaded RAR archive contains multiple files, including a file with a name that contains the unicode character “U+201F”. This character appears as whitespace on Windows devices and can be used to “hide” file extensions by adding excessive whitespace between the filename and the file extension. As highlighted in blue in Figure 2, this cleverly named file within the RAR archive appears to be a PDF document but is actually a malicious CMD file (“Рахунок на оплату.pdf[unicode character U+201F].cmd”).</p>
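<p>The filename trick can be checked mechanically. The sketch below is illustrative Python with a deliberately small, non-exhaustive extension list: it flags names where a fake document extension is followed by a run of U+201F characters and then a real executable extension.</p>

```python
import re

# Extensions that execute on Windows; illustrative, not exhaustive.
EXECUTABLE_EXTS = {".cmd", ".bat", ".ps1", ".exe", ".scr"}

def is_disguised_name(name: str) -> bool:
    """True when a name shows a decoy extension (e.g. ".pdf"), then a run of
    U+201F "whitespace look-alike" characters, then a real executable
    extension, as in the RAR member described above."""
    m = re.search(r"(\.[A-Za-z0-9]+)(\u201f+)(\.[A-Za-z0-9]+)$", name)
    return bool(m) and m.group(3).lower() in EXECUTABLE_EXTS
```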
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/55Vjmg9VLEnAFv3RZQoZ2l/866016a2489f2a6c780c9f3971dd28ca/image2-11.png" />
            
            </figure><p>Figure 2: Files contained in the malicious RAR archive “Заборгованість по ЖКП.rar” (“Housing Debt.rar”)</p><p>FlyingYeti included a benign PDF in the archive with the same name as the CMD file but without the unicode character, “Рахунок на оплату.pdf” (“Invoice for payment.pdf”). Additionally, the directory name for the archive once decompressed also contained the name “Рахунок на оплату.pdf”. This overlap in names of the benign PDF and the directory allows the actor to exploit the WinRAR vulnerability <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-38831">CVE-2023-38831</a>. More specifically, when an archive includes a benign file with the same name as the directory, the entire contents of the directory are opened by the WinRAR application, resulting in the execution of the malicious CMD. In other words, when the target believes they are opening the benign PDF “Рахунок на оплату.pdf”, the malicious CMD file is executed.</p><p>The CMD file contains the FlyingYeti PowerShell malware known as <a href="https://cert.gov.ua/article/6277849?ref=news.risky.biz">COOKBOX</a>. The malware is designed to persist on a host, serving as a foothold in the infected device. Once installed, this variant of COOKBOX will make requests to the DDNS domain postdock[.]serveftp[.]com for C2, awaiting PowerShell <a href="https://learn.microsoft.com/en-us/powershell/scripting/powershell-commands?view=powershell-7.4">cmdlets</a> that the malware will subsequently run.</p><p>Alongside COOKBOX, several decoy documents are opened, which contain hidden tracking links using the <a href="https://canarytokens.com/generate">Canary Tokens</a> service. The first document, shown in Figure 3 below, poses as an agreement under which debt for housing and utility services will be restructured.</p>
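<p>The trigger condition for CVE-2023-38831, a file and a directory in the same archive sharing one name, can be spotted from an archive's entry listing alone. This Python sketch is illustrative: it takes a plain list of entry paths, so no RAR parser is required.</p>

```python
def cve_2023_38831_suspects(entries: list) -> set:
    """Return names that appear both as a file and as a directory in the
    entry listing, the name collision that makes WinRAR execute the
    directory's contents instead of opening the benign file."""
    files = {e for e in entries if not e.endswith("/")}
    dirs = {e.rstrip("/") for e in entries if e.endswith("/")}
    # Also infer directories from nested paths like "name/child.cmd".
    dirs |= {e.rsplit("/", 1)[0] for e in entries if "/" in e}
    return files & dirs
```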
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/20vFV9kNTMmwxFXvpQoJTc/12542fb7a7d2108d49607f2a23fc7575/image5-10.png" />
            
            </figure><p>Figure 3: Decoy document Реструктуризація боргу за житлово комунальні послуги.docx</p><p>The second document (Figure 4) is a user agreement outlining the terms and conditions for the usage of the payment platform komunalka[.]ua.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1VHSTwqfrXWXvoryg8lOcE/68eb096bc82f18c7edcb4c88c1ed6d2c/image3-6.png" />
            
            </figure><p>Figure 4: Decoy document Угода користувача.docx <i>(User Agreement.docx)</i></p><p>The use of relevant decoy documents as part of the phishing and delivery activity is likely an effort by FlyingYeti operators to increase the appearance of legitimacy of their activities.</p><p>The phishing theme we identified in this campaign is likely one of many themes leveraged by this actor in a larger operation to target Ukrainian entities, in particular their defense forces. In fact, the threat activity we detailed in this blog uses many of the same techniques outlined in a <a href="https://cert.gov.ua/article/6278620">recent FlyingYeti campaign</a> disclosed by CERT-UA in mid-April 2024, where the actor leveraged United Nations-themed lures involving Peace Support Operations to target Ukraine’s military. Due to Cloudforce One’s defensive actions covered in the next section, this latest FlyingYeti campaign was prevented as of the time of publication.</p>
    <div>
      <h2>Mitigating FlyingYeti activity</h2>
      <a href="#mitigating-flyingyeti-activity">
        
      </a>
    </div>
    <p>Cloudforce One mitigated FlyingYeti’s campaign through a series of actions. Each action was taken to increase the actor’s cost of continuing their operations. When assessing which action to take and why, we carefully weighed the pros and cons in order to provide an effective active defense strategy against this actor. Our general goal was to increase the amount of time the threat actor spent trying to develop and weaponize their campaign.</p><p>We were able to successfully extend the timeline of the threat actor’s operations from hours to weeks. At each interdiction point, we assessed the impact of our mitigation to ensure the actor would spend more time attempting to launch their campaign. Our mitigation measures disrupted the actor’s activity, in one instance resulting in eight additional hours spent on debugging code.</p><p>Due to our proactive defense efforts, FlyingYeti operators adapted their tactics multiple times in their attempts to launch the campaign. The actor originally intended to have the Cloudflare Worker fetch the malicious RAR file from GitHub. After Cloudforce One interdiction of the Worker, the actor attempted to create additional Workers via a new account. In response, we disabled all Workers, leading the actor to load the RAR file directly from GitHub. Cloudforce One notified GitHub, resulting in the takedown of the RAR file, the GitHub project, and suspension of the account used to host the RAR file. 
In turn, FlyingYeti began testing the option to host the RAR file on the file sharing sites <a href="https://pixeldrain.com/">pixeldrain</a> and <a href="https://www.filemail.com/">Filemail</a>, where we observed the actor alternating the link on the Komunalka phishing site between the following:</p><ul><li><p>hxxps[:]//pixeldrain[.]com/api/file/ZAJxwFFX?download=one</p></li><li><p>hxxps[:]//1014.filemail[.]com/api/file/get?filekey=e_8S1HEnM5Rzhy_jpN6nL-GF4UAP533VrXzgXjxH1GzbVQZvmpFzrFA&amp;pk_vid=a3d82455433c8ad11715865826cf18f6</p></li></ul><p>We notified GitHub of the actor’s evolving tactics, and in response GitHub removed the Komunalka phishing site. After analyzing the files hosted on pixeldrain and Filemail, we determined the actor uploaded dummy payloads, likely to monitor access to their phishing infrastructure (Filemail logs IP addresses, and both file hosting sites provide view and download counts). At the time of publication, we had not observed FlyingYeti upload the malicious RAR file to either file hosting site, nor did we identify the use of alternative phishing or malware delivery methods.</p><p>A timeline of FlyingYeti’s activity and our corresponding mitigations can be found below.</p>
    <div>
      <h3>Event timeline</h3>
      <a href="#event-timeline">
        
      </a>
    </div>
    
<div><table><colgroup>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th><span>Date</span></th>
    <th><span>Event Description</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>2024-04-18 12:18</span></td>
    <td><span>Threat Actor (TA) creates a Worker to handle requests from a phishing site</span></td>
  </tr>
  <tr>
    <td><span>2024-04-18 14:16</span></td>
    <td><span>TA creates phishing site komunalka[.]github[.]io on GitHub</span></td>
  </tr>
  <tr>
    <td><span>2024-04-25 12:25</span></td>
    <td><span>TA creates a GitHub repo to host a RAR file</span></td>
  </tr>
  <tr>
    <td><span>2024-04-26 07:46</span></td>
    <td><span>TA updates the first Worker to handle requests from users visiting komunalka[.]github[.]io</span></td>
  </tr>
  <tr>
    <td><span>2024-04-26 08:24</span></td>
    <td><span>TA uploads a benign test RAR to the GitHub repo</span></td>
  </tr>
  <tr>
    <td><span>2024-04-26 13:38</span></td>
    <td><span>Cloudforce One identifies a Worker receiving requests from users visiting komunalka[.]github[.]io, observes its use as a phishing page</span></td>
  </tr>
  <tr>
    <td><span>2024-04-26 13:46</span></td>
    <td><span>Cloudforce One identifies that the Worker fetches a RAR file from GitHub (the malicious RAR payload is not yet hosted on the site)</span></td>
  </tr>
  <tr>
    <td><span>2024-04-26 19:22</span></td>
    <td><span>Cloudforce One creates a detection to identify the Worker that fetches the RAR</span></td>
  </tr>
  <tr>
    <td><span>2024-04-26 21:13</span></td>
    <td><span>Cloudforce One deploys real-time monitoring of the RAR file on GitHub</span></td>
  </tr>
  <tr>
    <td><span>2024-05-02 06:35</span></td>
    <td><span>TA deploys a weaponized RAR (CVE-2023-38831) to GitHub with their COOKBOX malware packaged in the archive</span></td>
  </tr>
  <tr>
    <td><span>2024-05-06 10:03</span></td>
    <td><span>TA attempts to update the Worker with link to weaponized RAR, the Worker is immediately blocked</span></td>
  </tr>
  <tr>
    <td><span>2024-05-06 10:38</span></td>
    <td><span>TA creates a new Worker, the Worker is immediately blocked</span></td>
  </tr>
  <tr>
    <td><span>2024-05-06 11:04</span></td>
    <td><span>TA creates a new account (#2) on Cloudflare</span></td>
  </tr>
  <tr>
    <td><span>2024-05-06 11:06</span></td>
    <td><span>TA creates a new Worker on account #2 (blocked)</span></td>
  </tr>
  <tr>
    <td><span>2024-05-06 11:50</span></td>
    <td><span>TA creates a new Worker on account #2 (blocked)</span></td>
  </tr>
  <tr>
    <td><span>2024-05-06 12:22</span></td>
    <td><span>TA creates a new modified Worker on account #2</span></td>
  </tr>
  <tr>
    <td><span>2024-05-06 16:05</span></td>
    <td><span>Cloudforce One disables the running Worker on account #2</span></td>
  </tr>
  <tr>
    <td><span>2024-05-07 22:16</span></td>
    <td><span>TA notices the Worker is blocked, ceases all operations</span></td>
  </tr>
  <tr>
    <td><span>2024-05-07 22:18</span></td>
    <td><span>TA deletes original Worker first created to fetch the RAR file from the GitHub phishing page</span></td>
  </tr>
  <tr>
    <td><span>2024-05-09 19:28</span></td>
    <td><span>Cloudforce One adds phishing page komunalka[.]github[.]io to real-time monitoring</span></td>
  </tr>
  <tr>
    <td><span>2024-05-13 07:36</span></td>
    <td><span>TA updates the github.io phishing site to point directly to the GitHub RAR link</span></td>
  </tr>
  <tr>
    <td><span>2024-05-13 17:47</span></td>
    <td><span>Cloudforce One adds COOKBOX C2 postdock[.]serveftp[.]com to real-time monitoring for DNS resolution</span></td>
  </tr>
  <tr>
    <td><span>2024-05-14 00:04</span></td>
    <td><span>Cloudforce One notifies GitHub to take down the RAR file</span></td>
  </tr>
  <tr>
    <td><span>2024-05-15 09:00</span></td>
    <td><span>GitHub user, project, and link for RAR are no longer accessible</span></td>
  </tr>
  <tr>
    <td><span>2024-05-21 08:23</span></td>
    <td><span>TA updates Komunalka phishing site on github.io to link to pixeldrain URL for dummy payload (pixeldrain only tracks view and download counts)</span></td>
  </tr>
  <tr>
    <td><span>2024-05-21 08:25</span></td>
    <td><span>TA updates Komunalka phishing site to link to FileMail URL for dummy payload (FileMail tracks not only view and download counts, but also IP addresses)</span></td>
  </tr>
  <tr>
    <td><span>2024-05-21 12:21</span></td>
    <td><span>Cloudforce One downloads PixelDrain document to evaluate payload</span></td>
  </tr>
  <tr>
    <td><span>2024-05-21 12:47</span></td>
    <td><span>Cloudforce One downloads FileMail document to evaluate payload</span></td>
  </tr>
  <tr>
    <td><span>2024-05-29 23:59</span></td>
    <td><span>GitHub takes down Komunalka phishing site</span></td>
  </tr>
  <tr>
    <td><span>2024-05-30 13:00</span></td>
    <td><span>Cloudforce One publishes the results of this investigation</span></td>
  </tr>
</tbody></table></div>
    <div>
      <h2>Coordinating our FlyingYeti response</h2>
      <a href="#coordinating-our-flyingyeti-response">
        
      </a>
    </div>
    <p>Cloudforce One leveraged industry relationships to provide advanced warning and to mitigate the actor’s activity. To further protect the intended targets from this phishing threat, Cloudforce One notified and collaborated closely with GitHub’s Threat Intelligence and Trust and Safety Teams. We also notified CERT-UA and Cloudflare industry partners such as CrowdStrike, Mandiant/Google Threat Intelligence, and Microsoft Threat Intelligence.</p>
    <div>
      <h3>Hunting FlyingYeti operations</h3>
      <a href="#hunting-flyingyeti-operations">
        
      </a>
    </div>
    <p>There are several ways to hunt FlyingYeti in your environment. These include using PowerShell to hunt for WinRAR files, deploying Microsoft Sentinel analytics rules, and running Splunk scripts as detailed below. Note that these detections may identify activity related to this threat, but may also trigger on unrelated activity.</p>
    <div>
      <h3>PowerShell hunting</h3>
      <a href="#powershell-hunting">
        
      </a>
    </div>
    <p>Consider running a PowerShell script such as <a href="https://github.com/IR-HuntGuardians/CVE-2023-38831-HUNT/blob/main/hunt-script.ps1">this one</a> in your environment to identify exploitation of CVE-2023-38831. The script inspects WinRAR’s temporary extraction directories for evidence of the exploit.</p>
            <pre><code># CVE-2023-38831
# Description: WinRAR exploit detection
# Open a suspicious archive (.tar / .zip / .rar) and run this script to check it

function winrar-exploit-detect(){
$targetExtensions = @(".cmd" , ".ps1" , ".bat")
$tempDir = [System.Environment]::GetEnvironmentVariable("TEMP")
$dirsToCheck = Get-ChildItem -Path $tempDir -Directory -Filter "Rar*"
foreach ($dir in $dirsToCheck) {
    $files = Get-ChildItem -Path $dir.FullName -File
    foreach ($file in $files) {
        $fileName = $file.Name
        $fileExtension = [System.IO.Path]::GetExtension($fileName)
        if ($targetExtensions -contains $fileExtension) {
            $fileWithoutExtension = ([System.IO.Path]::GetFileNameWithoutExtension($fileName)).TrimEnd() -replace '\.$',''
            $cmdFileName = "$fileWithoutExtension"
            $secondFile = Join-Path -Path $dir.FullName -ChildPath $cmdFileName
            
            if (Test-Path $secondFile -PathType Leaf) {
                Write-Host "[!] Suspicious pair detected "
                Write-Host "[*]  Original File:$($secondFile)" -ForegroundColor Green 
                Write-Host "[*] Suspicious File:$($file.FullName)" -ForegroundColor Red

                # Read and display the content of the command file
                $cmdFileContent = Get-Content -Path $($file.FullName)
                Write-Host "[+] Command File Content:$cmdFileContent"
            }
        }
    }
}
}
winrar-exploit-detect</code></pre>
            
    <div>
      <h3>Microsoft Sentinel</h3>
      <a href="#microsoft-sentinel">
        
      </a>
    </div>
    <p>In Microsoft Sentinel, consider deploying the rule provided below, which identifies WinRAR execution via cmd.exe. Results generated by this rule may be indicative of attack activity on the endpoint and should be analyzed.</p>
            <pre><code>DeviceProcessEvents
| where InitiatingProcessParentFileName has @"winrar.exe"
| where InitiatingProcessFileName has @"cmd.exe"
| project Timestamp, DeviceName, FileName, FolderPath, ProcessCommandLine, AccountName
| sort by Timestamp desc</code></pre>
            
    <div>
      <h3>Splunk</h3>
      <a href="#splunk">
        
      </a>
    </div>
    <p>Consider using <a href="https://research.splunk.com/endpoint/d2f36034-37fa-4bd4-8801-26807c15540f/">this script</a> in your Splunk environment to look for WinRAR CVE-2023-38831 execution on your Microsoft endpoints. Results generated by this script may be indicative of attack activity on the endpoint and should be analyzed.</p>
            <pre><code>| tstats `security_content_summariesonly` count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where Processes.parent_process_name=winrar.exe `windows_shells` OR Processes.process_name IN ("certutil.exe","mshta.exe","bitsadmin.exe") by Processes.dest Processes.user Processes.parent_process_name Processes.parent_process Processes.process_name Processes.process Processes.process_id Processes.parent_process_id 
| `drop_dm_object_name(Processes)` 
| `security_content_ctime(firstTime)` 
| `security_content_ctime(lastTime)` 
| `winrar_spawning_shell_application_filter`</code></pre>
            
    <div>
      <h2>Cloudflare product detections</h2>
      <a href="#cloudflare-product-detections">
        
      </a>
    </div>
    
    <div>
      <h3>Cloudflare Email Security</h3>
      <a href="#cloudflare-email-security">
        
      </a>
    </div>
    <p>Cloudflare Email Security (CES) customers can identify FlyingYeti threat activity with the following detections.</p><ul><li><p>CVE-2023-38831</p></li><li><p>FLYINGYETI.COOKBOX</p></li><li><p>FLYINGYETI.COOKBOX.Launcher</p></li><li><p>FLYINGYETI.Rar</p></li></ul>
    <div>
      <h2>Recommendations</h2>
      <a href="#recommendations">
        
      </a>
    </div>
    <p>Cloudflare recommends taking the following steps to mitigate this type of activity:</p><ul><li><p>Implement Zero Trust architecture foundations:</p><ul><li><p>Deploy Cloud Email Security to ensure that email services are protected against phishing, BEC, and other threats</p></li><li><p>Leverage browser isolation to separate messaging applications like LinkedIn, email, and Signal from your main network</p></li><li><p>Scan, monitor, and/or enforce controls on specific or sensitive data moving through your network environment with data loss prevention policies</p></li></ul></li><li><p>Ensure your systems have the latest WinRAR and Microsoft security updates installed</p></li><li><p>Consider preventing WinRAR files from entering your environment, both at your Cloud Email Security solution and your Internet Traffic Gateway</p></li><li><p>Run an Endpoint Detection and Response (EDR) tool such as CrowdStrike or Microsoft Defender for Endpoint to get visibility into binary execution on hosts</p></li><li><p>Search your environment for the FlyingYeti indicators of compromise (IOCs) shown below to identify potential actor activity within your network</p></li></ul><p>If you’re looking to uncover additional Threat Intelligence insights for your organization or need bespoke Threat Intelligence information for an incident, consider engaging with Cloudforce One by contacting your Customer Success manager or filling out <a href="https://www.cloudflare.com/zero-trust/lp/cloudforce-one-threat-intel-subscription/">this form</a>.</p>
    <div>
      <h2>Indicators of Compromise</h2>
      <a href="#indicators-of-compromise">
        
      </a>
    </div>
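<p>The indicators in the table below are published defanged (hxxp, [.], [:]) so they cannot be clicked or fetched by accident. A small helper, shown here as an illustrative Python sketch rather than any Cloudflare tooling, can refang them before loading into a blocklist or SIEM.</p>

```python
def refang(ioc: str) -> str:
    """Reverse the common defanging conventions used in the IOC table:
    hxxp -> http, [:] -> :, [.] -> . (covers hxxps as well)."""
    return (ioc.replace("hxxp", "http")
               .replace("[:]", ":")
               .replace("[.]", "."))
```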
    
<div><table><colgroup>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th><span>Domain / URL</span></th>
    <th><span>Description</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>komunalka[.]github[.]io</span></td>
    <td><span>Phishing page</span></td>
  </tr>
  <tr>
    <td><span>hxxps[:]//github[.]com/komunalka/komunalka[.]github[.]io</span></td>
    <td><span>Phishing page</span></td>
  </tr>
  <tr>
    <td><span>hxxps[:]//worker-polished-union-f396[.]vqu89698[.]workers[.]dev</span></td>
    <td><span>Worker that fetches malicious RAR file</span></td>
  </tr>
  <tr>
    <td><span>hxxps[:]//raw[.]githubusercontent[.]com/kudoc8989/project/main/Заборгованість по ЖКП.rar</span></td>
    <td><span>Delivery of malicious RAR file</span></td>
  </tr>
  <tr>
    <td><span>hxxps[:]//1014[.]filemail[.]com/api/file/get?filekey=e_8S1HEnM5Rzhy_jpN6nL-GF4UAP533VrXzgXjxH1GzbVQZvmpFzrFA&amp;pk_vid=a3d82455433c8ad11715865826cf18f6</span></td>
    <td><span>Dummy payload</span></td>
  </tr>
  <tr>
    <td><span>hxxps[:]//pixeldrain[.]com/api/file/ZAJxwFFX?download=</span></td>
    <td><span>Dummy payload</span></td>
  </tr>
  <tr>
    <td><span>hxxp[:]//canarytokens[.]com/stuff/tags/ni1cknk2yq3xfcw2al3efs37m/payments.js</span></td>
    <td><span>Tracking link</span></td>
  </tr>
  <tr>
    <td><span>hxxp[:]//canarytokens[.]com/stuff/terms/images/k22r2dnjrvjsme8680ojf5ccs/index.html</span></td>
    <td><span>Tracking link</span></td>
  </tr>
  <tr>
    <td><span>postdock[.]serveftp[.]com</span></td>
    <td><span>COOKBOX C2</span></td>
  </tr>
</tbody></table></div> ]]></content:encoded>
            <category><![CDATA[Cloud Email Security]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudforce One]]></category>
            <category><![CDATA[CVE]]></category>
            <category><![CDATA[Exploit]]></category>
            <category><![CDATA[GitHub]]></category>
            <category><![CDATA[Intrusion Detection]]></category>
            <category><![CDATA[Malware]]></category>
            <category><![CDATA[Microsoft]]></category>
            <category><![CDATA[Phishing]]></category>
            <category><![CDATA[Remote Browser Isolation]]></category>
            <category><![CDATA[Russia]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Threat Data]]></category>
            <category><![CDATA[Threat Intelligence]]></category>
            <category><![CDATA[Threat Operations]]></category>
            <category><![CDATA[Ukraine]]></category>
            <category><![CDATA[Vulnerabilities]]></category>
            <guid isPermaLink="false">5JO10nXN3tLVG2C1EttkiH</guid>
            <dc:creator>Cloudforce One</dc:creator>
        </item>
        <item>
            <title><![CDATA[Eating Dogfood at Scale: How We Build Serverless Apps with Workers]]></title>
            <link>https://blog.cloudflare.com/building-serverless-apps-with-workers/</link>
            <pubDate>Fri, 19 Apr 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ You’ve had a chance to build a Cloudflare Worker. You’ve tried KV Storage and have a great use case for your Worker. You’ve even demonstrated the usefulness to your product or organization.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>You’ve had a chance to build a <a href="https://developers.cloudflare.com/workers/about/">Cloudflare Worker</a>. You’ve tried <a href="https://developers.cloudflare.com/workers/kv/">KV Storage</a> and have a great use case for your Worker. You’ve even demonstrated the usefulness to your product or organization. Now you need to go from writing a single file in the Cloudflare Dashboard UI Editor to source-controlled code with multiple environments deployed using your favorite CI tool.</p><p>Fortunately, we have a powerful and flexible <a href="https://developers.cloudflare.com/workers/api/">API</a> for managing your Workers. You can customize your deployment to your heart’s content. Our blog has already featured many things made possible by that API:</p><ul><li><p><a href="/introducing-wrangler-cli/">The Wrangler CLI</a></p></li><li><p><a href="/a-ci/">CI/CD Pipeline</a></p></li><li><p><a href="/deploying-workers-with-github-actions-serverless/">GitHub Actions</a></p></li><li><p><a href="/create-cloudflare-worker-bootstrap-your-cloudflare-worker/">Worker bootstrap template</a></p></li></ul><p>These tools make deployments easier to configure, but they still take time to manage. The <a href="https://serverless.com/">Serverless Framework</a> <a href="https://serverless.com/plugins/serverless-cloudflare-workers/">Cloudflare Workers plugin</a> removes that deployment overhead so you can spend more time working on your application and less on your deployment.</p>
    <div>
      <h3>Focus on your application</h3>
      <a href="#focus-on-your-application">
        
      </a>
    </div>
    <p>Here at Cloudflare, we’ve been working to rebuild our Access product to run entirely on Workers. The move will allow Access to take advantage of the resiliency, performance, and flexibility of Workers. We’ll publish a more detailed post about that migration once complete, but the experience required that we retool some of our processes to match our existing development experience as much as possible.</p><p>To us this meant:</p><ul><li><p>Git</p></li><li><p>Easy deployment</p></li><li><p>Different environments</p></li><li><p>Unit testing</p></li><li><p>CI integration</p></li><li><p>TypeScript/multiple files</p></li><li><p>Everything must be automated</p></li></ul><p>The Cloudflare Access team looked at three options for automating all of these tools in our pipeline. All of the options will work and could be right for you, but custom scripting can be a chore to maintain and Terraform lacks some extensibility.</p><ol><li><p>Custom scripting</p></li><li><p><a href="https://www.terraform.io/docs/providers/cloudflare/index.html">Terraform</a></p></li><li><p>Serverless Framework</p></li></ol><p>We decided on the Serverless Framework. It provided a tool to mirror our existing process as closely as possible without too much DevOps overhead. Serverless is extremely simple and doesn’t interfere with the application code. You can get a project set up and deployed in seconds. It’s obviously less work than writing your own custom management scripts. But it also requires less boilerplate than Terraform because the Serverless Framework is designed for the “serverless” niche. However, if you are already using Terraform to manage other Cloudflare products, Terraform might be the best fit.</p>
    <div>
      <h3>Walkthrough</h3>
      <a href="#walkthrough">
        
      </a>
    </div>
    <p>Everything for the project happens in a YAML file called serverless.yml. Let’s go through the features of the configuration file.</p><p>To get started, we need to install serverless from npm and generate a new project.</p>
            <pre><code>npm install serverless -g
serverless create --template cloudflare-workers --path myproject
cd myproject
npm install</code></pre>
            <p>If you are an enterprise client, you want to use the cloudflare-workers-enterprise template, as it will set up more than one worker (but don’t worry, you can add more to any template). Also, I’ll touch on this later, but if you want to write your workers in Rust, use the cloudflare-workers-rust template.</p><p>You should now have a project that feels familiar, ready to be added to your favorite source control. The project should contain a serverless.yml file like the following.</p>
            <pre><code>service:
  name: hello-world

provider:
  name: cloudflare
  config:
    accountId: CLOUDFLARE_ACCOUNT_ID
    zoneId: CLOUDFLARE_ZONE_ID

plugins:
  - serverless-cloudflare-workers

functions:
  hello:
    name: hello
    script: helloWorld  # there must be a file called helloWorld.js
    events:
      - http:
          url: example.com/hello/*
          method: GET
          headers:
            foo: bar
            x-client-data: value</code></pre>
            <p>The service block simply contains the name of your service. This will be used in your Worker script names if you do not overwrite them.</p><p>Under provider, name must be ‘cloudflare’ and you need to add your account and zone IDs. You can find them in the Cloudflare Dashboard.</p><p>The plugins section adds the Cloudflare-specific code.</p><p>Now for the good part: functions. Each block under functions is a Worker script.</p><p><b>name</b>: (optional) If left blank, it will be STAGE-service.name-script.identifier. If I removed name from this file and deployed in the production stage, the script would be named production-hello-world-hello.</p><p><b>script</b>: the relative path to the JavaScript file with the worker script. I like to organize mine in a folder called handlers.</p><p><b>events</b>: Currently Workers only support http events. We call these routes. The example provided says that GET <a href="https://example.com/hello/">https://example.com/hello/</a> will cause this worker to execute. The headers block is for testing invocations.</p><p>At this point you can deploy your worker!</p>
            <pre><code>CLOUDFLARE_AUTH_EMAIL=you@yourdomain.com CLOUDFLARE_AUTH_KEY=XXXXXXXX serverless deploy</code></pre>
            <p>This is very easy to deploy, but it doesn’t address our requirements. Luckily, there are just a few simple modifications to make.</p>
    <div>
      <h3>Maturing our YAML File</h3>
      <a href="#maturing-our-yaml-file">
        
      </a>
    </div>
    <p>Here’s a more complex YAML file.</p>
            <pre><code>service:
  name: hello-world

package:
  exclude:
    - node_modules/**
  excludeDevDependencies: false

custom:
  defaultStage: development
  deployVars: ${file(./config/deploy.${self:provider.stage}.yml)}

kv: &amp;kv
  - variable: MYUSERS
    namespace: users

provider:
  name: cloudflare
  stage: ${opt:stage, self:custom.defaultStage}
  config:
    accountId: ${env:CLOUDFLARE_ACCOUNT_ID}
    zoneId: ${env:CLOUDFLARE_ZONE_ID}

plugins:
  - serverless-cloudflare-workers

functions:
  hello:
    name: ${self:provider.stage}-hello
    script: handlers/hello
    webpack: true
    environment:
      MY_ENV_VAR: ${self:custom.deployVars.env_var_value}
      SENTRY_KEY: ${self:custom.deployVars.sentry_key}
    resources: 
      kv: *kv
    events:
      - http:
          url: "${self:custom.deployVars.SUBDOMAIN}.mydomain.com/hello"
          method: GET
      - http:
          url: "${self:custom.deployVars.SUBDOMAIN}.mydomain.com/alsohello*"
          method: GET</code></pre>
            <p>We can add a custom section where we can put custom variables to use later in the file.</p><p><b>defaultStage</b>: We set this to development so that forgetting to pass a stage doesn’t trigger a production deploy. Combined with the <b>stage</b> option under provider we can set the stage for deployment.</p><p><b>deployVars</b>: We use this custom variable to load another YAML file dependent on the stage. This lets us have different values for different stages. In development, this line loads the file <code>./config/deploy.development.yml</code>. Here’s an example file:</p>
            <pre><code>env_var_value: true
sentry_key: XXXXX
SUBDOMAIN: dev</code></pre>
            <p><b>kv</b>: Here we are showing off a feature of YAML. If you assign a name to a block using the ‘&amp;’, you can use it later as a YAML variable. This is very handy in a multi script account. We could have named this variable anything, but we are naming it kv since it holds our Workers Key Value storage settings to be used in our function below.</p><p>Inside of the <b>kv</b> block we're creating a namespace and binding it to a variable available in your Worker. It will ensure that the namespace “users” exists and is bound to MYUSERS.</p>
            <pre><code>kv: &amp;kv
  - variable: MYUSERS
    namespace: users</code></pre>
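<p>Inside the Worker, the binding then shows up as a global object with async <code>get</code>/<code>put</code> methods. As a sketch (an in-memory stub stands in for MYUSERS here so the snippet also runs outside the Workers runtime):</p>

```javascript
// Sketch: using the MYUSERS KV binding from a Worker script. In the
// Workers runtime the binding is a global; outside it, fall back to a
// simple in-memory stub with the same async get/put shape.
const users = typeof MYUSERS !== 'undefined' ? MYUSERS : (() => {
  const data = new Map();
  return {
    async get(key) { return data.has(key) ? data.get(key) : null; },
    async put(key, value) { data.set(key, value); },
  };
})();

async function getUserPlan(name) {
  // KV returns null for missing keys; stored values here are JSON strings.
  const record = await users.get(name);
  return record === null ? null : JSON.parse(record).plan;
}
```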
            <p><b>provider</b>: The only new part of the provider block is <b>stage</b>.</p>
            <pre><code>stage: ${opt:stage, self:custom.defaultStage}</code></pre>
            <p>This line sets stage to either the command line option or custom.defaultStage if opt:stage is blank. When we deploy, we pass --stage=production to serverless deploy.</p><p>Now under our function we have added webpack, resources, and environment.</p><p><b>webpack</b>: If set to true, webpack will bundle each handler into a single file for deployment. It will also take a string representing a path to a webpack config file so you can customize it. This is how we add TypeScript support to our projects.</p><p><b>resources</b>: This block is used to automate resource creation. In resources we're linking back to the kv block we created earlier.</p><p><i>Side note: If you would like to include WASM bindings in your project, it can be done in a very similar way to how we included Workers KV. For more information on WASM, see the </i><a href="https://serverless.com/plugins/serverless-cloudflare-workers/"><i>documentation</i></a><i>.</i></p><p><b>environment</b>: This is the butter for the bread that is managing configuration for different stages. Here we can specify values to bind to variables to use in worker scripts. Combined with YAML magic, we can store our values in the aforementioned config files so that we deploy different values in different stages. With environments, we can easily tie into our CI tool. The CI tool has our deploy.production.yml. We simply run the following command from within our CI.</p>
            <pre><code>sls deploy --stage=production</code></pre>
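<p>Inside the Worker script itself, values declared under <code>environment</code> appear as global variables in the classic Workers runtime. A sketch (SENTRY_KEY is the binding from the YAML above; a fallback keeps the snippet runnable outside Workers):</p>

```javascript
// Sketch: reading an environment binding inside a Worker. In the classic
// Workers runtime, environment values are injected as globals; outside
// that runtime we fall back to a placeholder so the sketch still runs.
function sentryKey() {
  return typeof SENTRY_KEY !== 'undefined' ? SENTRY_KEY : 'not-configured';
}

async function handleError(err) {
  // A real handler might forward `err` to Sentry using the key; here we
  // just surface which key would be used.
  return new Response(`error reported with key ${sentryKey()}`, { status: 500 });
}
```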
            <p>Finally, I added a route to demonstrate that a script can be executed on multiple routes.</p><p>At this point I’ve covered (or hinted at) everything on our original list except unit testing. There are a few ways to do this.</p><p>We have a previous blog post about <a href="/unit-testing-worker-functions/">Unit Testing</a> that covers using <a href="https://github.com/dollarshaveclub/cloudworker">Cloudworker</a>, a great tool built by <a href="https://www.dollarshaveclub.com/">Dollar Shave Club</a>.</p><p>My team opted to use the classic Node frameworks Mocha and Sinon. Because we are using TypeScript, we can build for Node or build for V8. You can also make Mocha work for non-TypeScript projects if you use an <a href="https://nodejs.org/api/esm.html">experimental feature that adds import/export support to Node</a>.</p>
            <pre><code>--experimental-modules</code></pre>
            <p>We’re excited about moving more and more of our services to Cloudflare Workers, and the Serverless Framework makes that easier to do. If you’d like to learn even more or get involved with the project, see us on <a href="https://github.com/cloudflare/serverless-cloudflare-workers">github.com</a>. For additional information on using Serverless Framework with Cloudflare Workers, check out our <a href="https://developers.cloudflare.com/workers/deploying-workers/serverless/">documentation on the Serverless Framework</a>.</p> ]]></content:encoded>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[CLI]]></category>
            <category><![CDATA[GitHub]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">40cVOSqV3lXK8azULKCab7</guid>
            <dc:creator>Jonathan Spies</dc:creator>
        </item>
        <item>
            <title><![CDATA[Deploying Workers with GitHub Actions + Serverless]]></title>
            <link>https://blog.cloudflare.com/deploying-workers-with-github-actions-serverless/</link>
            <pubDate>Fri, 01 Mar 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ If you weren’t aware, Cloudflare Workers, our serverless programming platform, allows you to deploy code onto our 165 data centers around the world. 
Want to automatically deploy Workers directly from a GitHub repository? Now you can with our official GitHub Action.  ]]></description>
            <content:encoded><![CDATA[ <p>If you weren’t aware, <a href="https://developers.cloudflare.com/workers/about/">Cloudflare Workers</a>, our serverless programming platform, allows you to deploy code onto our 165 data centers around the world.</p><p>Want to automatically deploy Workers directly from a GitHub repository? Now you can with our official <a href="https://github.com/cloudflare/serverless-action">GitHub Action</a>. This Action is an extension of our existing integration with the Serverless Framework. It runs in a containerized GitHub environment and automatically deploys your Worker to Cloudflare. We chose to utilize the Serverless Framework within our GitHub Action to raise awareness of their awesome work and to enable even more serverless applications to be built with Cloudflare Workers. This Action can be used to deploy individual Worker scripts as well; the Serverless Framework is being used in the background as the deployment mechanism.</p><p>Before going into the details, we’ll quickly go over what GitHub Actions are.</p>
    <div>
      <h3>GitHub Actions</h3>
      <a href="#github-actions">
        
      </a>
    </div>
    <p>GitHub Actions allow you to <a href="https://developer.github.com/actions/creating-workflows/workflow-configuration-options/#action-blocks">trigger commands</a> in reaction to GitHub events. Actions could trigger build, test, or deployment commands across a variety of providers. They can also be linked and run sequentially (i.e. ‘if the build passes, deploy the app’). Similar to many <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">CI/CD</a> tools, these commands run in an isolated container and receive environment variables. You can pass any command to the container that enables your development workflow.</p><p>Actions are a powerful way to automate your workflow on GitHub, including automating parts of your deployment pipeline directly from where your codebase lives. To that end, we’ve built an Action to deploy a Worker to your Cloudflare zone via our existing Serverless Framework integration for Cloudflare Workers. To visualize the entire flow, see below:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7ahViw5UC4o10Kun38A6j4/30f57d868f1586582eea7123fc5f5184/image1.png" />
            
            </figure><p>To see some of the other actions out there today, please <a href="https://github.com/features/actions">see here</a>.</p>
    <div>
      <h3>Why Use the Serverless Framework?</h3>
      <a href="#why-use-the-serverless-framework">
        
      </a>
    </div>
    <p>Serverless applications are deployed without developers needing to worry about provisioning hardware, capacity planning, scaling or paying for equipment when your application isn't running. Unlike most providers who ask you to choose a region for your serverless app to run in, all Cloudflare Workers deploy into our entire global network. Learn more about the <a href="https://www.cloudflare.com/learning/serverless/what-is-serverless/">benefits of serverless</a>.</p><p>The <a href="https://serverless.com/">Serverless Framework</a> is a popular toolkit for deploying applications that are serverless. The advantage of the Serverless Framework is that it offers a common CLI to use across multiple providers which support serverless applications. In <a href="/serverless-cloudflare-workers/">late 2018</a>, Cloudflare integrated Workers deployment into the Serverless CLI. Please check out <a href="https://developers.cloudflare.com/workers/deploying-workers/serverless/">our docs here</a> to get started.</p><p>If you run an entire application in a Worker, there is no cost to a business when the application is idle. If the application runs on our network (Cloudflare has 165 PoPs as of writing this), the app can be incredibly close to the end user, reducing latency by proximity. Additionally, Workers can be a powerful way to augment what you've already built in an existing technology, moving just the authentication or performance-sensitive components into Workers.</p>
    <div>
      <h3>Configuration</h3>
      <a href="#configuration">
        
      </a>
    </div>
    <p>Configuration of the Action is straightforward, with the side benefit of giving you just a ‘little bit’™ of exposure to the Serverless Framework if desired. A repo using this Action can just contain the Worker script to be deployed. If you feed the Action the right ENV variables, we’ll take care of the rest.</p><p>Alternatively, you can provide a <code>serverless.yml</code> in the root of your repo with your worker if you want to override the defaults. Get started learning about our integration with Serverless <a href="https://developers.cloudflare.com/workers/deploying-workers/serverless/">here</a>.</p><p>Your Worker script and optional <code>serverless.yml</code> are passed into the container which runs the Action for deployment. The Serverless Framework picks up these files and deploys the Worker for you.</p><p>All the relevant variables must be passed to the Action as well, which include various account identifiers as well as your API key. You can check out this <a href="https://help.github.com/articles/creating-a-workflow-with-github-actions/">tutorial</a> from GitHub on how to pass environment variables to an Action (<i>hint</i>: use the <code>secret</code> variable type for your API key).</p><h6>Support</h6><p>The repository is publicly available <a href="https://github.com/cloudflare/serverless-action">here</a> and goes over the configuration in more technical detail. Any questions or suggestions? Feel free to let us know!</p>
            <category><![CDATA[GitHub]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">3Xd7ZdguOQ4zXKhNHDRQeG</guid>
            <dc:creator>Tom Brightbill</dc:creator>
        </item>
        <item>
            <title><![CDATA[A container identity bootstrapping tool]]></title>
            <link>https://blog.cloudflare.com/pal-a-container-identity-bootstrapping-tool/</link>
            <pubDate>Mon, 03 Jul 2017 16:21:24 GMT</pubDate>
            <description><![CDATA[ Everybody has secrets. Software developers have many. Often these secrets—API tokens, TLS private keys, database passwords, SSH keys, and other sensitive data—are needed to make a service run properly and interact securely with other services.  ]]></description>
            <content:encoded><![CDATA[ <p>Everybody has secrets. Software developers have many. Often these secrets—API tokens, TLS private keys, database passwords, SSH keys, and other sensitive data—are needed to make a service run properly and interact securely with other services. Today we’re sharing a tool that we built at Cloudflare to securely distribute secrets to our Dockerized production applications: PAL.</p><p>PAL is available on GitHub: <a href="https://github.com/cloudflare/pal">github.com/cloudflare/pal</a></p><p>Although PAL is not currently under active development, we have found it to be a useful tool and we think the community will benefit from its source being available. We believe that it's better to open source this tool and allow others to use the code than to leave it hidden from view and unmaintained.</p>
    <div>
      <h3>Secrets in production</h3>
      <a href="#secrets-in-production">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/9EgPkmFZufkA00t8O9NEK/6f04341ff5be43395c7d99236ae213b3/16214699701_55072899bb_b.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/personalcreations/16214699701">image</a> by <a href="https://www.personalcreations.com/">Personal Creations</a></p><p>How do you get these secrets to your services? If you’re the only developer, or one of a few on a project, you might put the secrets with your source code in your version control system. But if you just store the secrets in plain text with your code, everyone with access to your source repository can read them and use them for nefarious purposes (for example, stealing an API token and pretending to be an authorized client). Furthermore, distributed version control systems like Git will download a copy of the secret everywhere a repository is cloned, regardless of whether it’s needed there, and will keep that copy in the commit history forever. For a company where many people (including people who work on unrelated systems) have access to source control this just isn’t an option.</p><p>Another idea is to keep your secrets in a secure place and then embed them into your application artifacts (binaries, containers, packages, etc.) at build time. This can be awkward for <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">modern CI/CD workflows</a> because it results in multiple parallel sets of artifacts for different environments (e.g. production, staging, development). Once you have artifacts with secrets, they become secret themselves, and you will have to restrict access to the “armed” packages that contain secrets after they’re built. Consider the discovery last year that the <a href="http://thehackernews.com/2016/07/vine-source-code.html">source code of Twitter’s Vine</a> service was available in public Docker repositories. Not only was the source code for the service leaked, but the API keys that allow Vine to interact with other services were also available. 
Vine paid over $10,000 when they were notified about this.</p><p>A more advanced technique to manage and deploy secrets is to use a secret management service. Secret management services can be used to create, store and rotate secrets as well as distribute them to applications. The secret management service acts as a gatekeeper, allowing access to some secrets for some applications as prescribed by an access control policy. An application that wants to gain access to a secret authenticates itself to the secret manager, the secret manager checks permissions to see if the application is authorized, and if authorized, sends the secret. There are many options to choose from, including <a href="https://www.vaultproject.io/">Vault</a>, <a href="https://square.github.io/keywhiz/">Keywhiz</a>, <a href="https://medium.com/@Pinterest_Engineering/open-sourcing-knox-a-secret-key-management-service-3ec3a47f5bb">Knox</a>, <a href="https://github.com/meltwater/secretary">Secretary</a>, <a href="https://github.com/ejcx/dssss">dssss</a> and even <a href="https://blog.docker.com/2017/02/docker-secrets-management/">Docker’s own secret management service</a>.</p><p>Secret managers are a good solution as long as an identity/authorization system is already in place. However, since most authentication systems involve the client already being in possession of a secret, this presents a chicken-and-egg problem.</p>
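<p>The gatekeeper flow described above can be modeled in a few lines (a toy sketch, not any particular secret manager’s API; the service names, policy, and secret values are illustrative):</p>

```javascript
// Toy model of a secret manager's authorize-then-release flow: a service
// presents an identity, the manager checks an access-control policy, and
// only then releases the secret. All names/values here are illustrative.
const policy = { 'billing-service': ['db-password'] };
const secrets = { 'db-password': 'hunter2' };

function getSecret(serviceIdentity, secretName) {
  const allowed = policy[serviceIdentity] || [];
  if (!allowed.includes(secretName)) {
    // Deny by default: only explicitly authorized pairs succeed.
    throw new Error(`${serviceIdentity} is not authorized for ${secretName}`);
  }
  return secrets[secretName];
}
```

The hard part, as the rest of the post explains, is the authentication step this sketch takes for granted: how the caller proves it really is `billing-service` in the first place.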
    <div>
      <h3>Identity Bootstrapping</h3>
      <a href="#identity-bootstrapping">
        
      </a>
    </div>
    <p>Once we have verified a service’s identity, we can make access control decisions about what that service can access. Therefore, the real problem we must solve is the problem of <i>bootstrapping service identity</i>.</p><p>This problem has many solutions when services are tightly bound to individual machines (for example, we can simply install host-level credentials on each machine or even use a machine’s hardware to identify it. Virtual machine platforms like Amazon AWS have machine-based APIs for host-level identity, like <a href="https://www.cloudflare.com/learning/access-management/what-is-identity-and-access-management/">IAM</a> and KMS). Containerized services have a much more fluid lifecycle - instances may appear on many machines and may come and go over time. Furthermore, any number of trusted and untrusted containers might be running on the same host at the same time. So what we need instead is an identity that belongs to a <i>service</i>, not to a machine.</p><p>Every application needs an ID to prove to <a href="http://www.today.com/news/coolest-pope-ever-francis-talks-previous-work-nightclub-bouncer-2D11678844">the bouncer</a> that they’re on the guest list for Club Secret.</p><p>Bootstrapping the identity of a service that lives in a container is not a solved problem, and most of the existing solutions are deeply integrated into a container orchestration platform (<a href="https://kubernetes.io/docs/concepts/configuration/secret/">Kubernetes</a>, <a href="https://docs.docker.com/engine/swarm/secrets/">Docker Swarm</a>, Mesos, etc.). We ran into the problem of container identity bootstrapping, and wanted something that worked with our current application deployment stack (Docker/Mesos/Marathon) but wasn’t locked down to a given orchestration platform.</p>
    <div>
      <h3>Enter PAL</h3>
      <a href="#enter-pal">
        
      </a>
    </div>
    <p>We use Docker containers to deploy many services across a shared, general-purpose Mesos cluster. To solve the service identity bootstrapping problem in our Docker environment, we developed PAL, which stands for Permissive Action Link, <a href="https://en.wikipedia.org/wiki/Permissive_Action_Link">a security device for nuclear weapons</a>. PAL makes sure secrets are only available in production, and only when jobs are authorized to run.</p><p>PAL makes it possible to keep only encrypted secrets in the configuration for a service while ensuring that those secrets can only be decrypted by authorized service instances in an approved environment (say, a production or staging environment). If those credentials serve to identify the service requesting access to secrets, PAL becomes a <i>container identity bootstrapping</i> solution that you can easily deploy a secret manager on top of.</p>
    <div>
      <h3>How it works</h3>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>The model for PAL is that the secrets are provided in an encrypted form and either embedded in containers, or provided as runtime configuration for jobs running in an orchestration framework such as <a href="http://mesos.apache.org/">Apache Mesos</a>.</p><p>PAL allows secrets to be decrypted at runtime after the service’s identity has been established. These credentials could allow authenticated inter-service communication, which would allow you to keep service secrets in a central repository such as Hashicorp’s <a href="https://vaultproject.io">Vault</a>, <a href="https://square.github.io/keywhiz/">Keywhiz</a>, or others. The credentials could also be used to issue service-level credentials (such as <a href="/how-to-build-your-own-public-key-infrastructure/">certificates for an internal PKI</a>). Without PAL, you must distribute the identity credentials that tools like these themselves require inside your infrastructure.</p><p>PAL consists of two components: a small in-container initialization tool, <code>pal</code>, that requests secret decryption and installs the decrypted secrets, and a daemon called <code>pald</code> that runs on every node in the cluster. <code>pal</code> and <code>pald</code> communicate with each other via a Unix socket. The <code>pal</code> tool is set as each job’s entrypoint, and it sends <code>pald</code> the encrypted secrets. <code>pald</code> then identifies the process making the request and determines whether it is allowed to access the requested secret. If so, it decrypts the secret on behalf of the job and returns the plaintext to <code>pal</code>, which installs the plaintext within the calling job’s container as either an environment variable or a file.</p><p>PAL currently supports two methods of encryption—<a href="https://openpgp.org/">PGP</a> and <a href="/red-october-cloudflares-open-source-implementation-of-the-two-man-rule/">Red October</a>—but it can be extended to support more.</p>
    <div>
      <h3>PAL-PGP</h3>
      <a href="#pal-pgp">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/691ooiR4JKRArN1wWxWVlW/8f14c8fd1a6c18d94a6d63aa7a9e1b1d/image3.png" />
            
            </figure><p>PGP is a popular form of encryption that has been around since the early 90s. PAL allows you to use secrets that are encrypted with PGP keys that are installed on the host. The current version of PAL does not apply policies at a per-key level (e.g. only containers with Docker Label A can use key 1), but it could easily be extended to do so.</p>
    <div>
      <h4>PAL-Red October</h4>
      <a href="#pal-red-october">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4KqsF7ItaOhlij8TfpIqAF/4867dad9211d68588d9c4a4950c1dada/image2.png" />
            
            </figure><p>The Red October mode is used for secrets that are very high value and need to be managed manually or with multi-person control. We open sourced Red October <a href="/red-october-cloudflares-open-source-implementation-of-the-two-man-rule/">back in 2013</a>. It has the nice feature of being able to encrypt a secret in such a way that multiple people are required to authorize the decryption.</p><p>In the typical PAL-RO setup, each machine in your cluster will be provisioned with a Red October account. Before a container is scheduled to run, the secret owners delegate the ability to decrypt the secret to the host on which the container is going to run. When the container starts, <code>pal</code> calls <code>pald</code>, which uses the machine’s Red October credentials to decrypt the secret via a call to the Red October server. Delegations can be limited to a window of time or a number of decryptions. Once the delegations are used up, Red October has no way to decrypt the secret.</p><p>These two modes have been invaluable for protecting high-value secrets, where Red October provides additional oversight and control. For lower-value secrets, PGP provides a non-interactive experience that works well with ephemeral containers.</p>
    <div>
      <h3>Authorization details</h3>
      <a href="#authorization-details">
        
      </a>
    </div>
    <p>An important part of a secret management tool is ensuring that only authorized entities can decrypt a secret. PAL enables you to control which containers can decrypt a secret by leveraging existing code signing infrastructure. Both secrets and containers can be given optional labels that PAL will respect. Labels define which containers can access which secrets: a container must have the label of any secret it accesses. Labels are named references to security policies. An example label could be “production-team-secret”, which denotes that a secret should conform to the production team’s secret policy. Labels bind ciphertexts to an authorization to decrypt. These authorizations allow you to use PAL to control when and in what environment secrets can be decrypted.</p><p>By opening the Unix socket with the option <code>SO_PASSCRED</code>, we enable <code>pald</code> to obtain the process-level credentials (pid, uid, and gid) of the caller for each request. These credentials can then be used to identify containers and assign them a <i>label</i>. Labels allow PAL to consult a predefined policy and authorize containers to receive secrets. To get the list of labels on a container, <code>pald</code> uses the process id (pid) of the calling process to get its cgroups from Linux (by reading and parsing <code>/proc/&lt;pid&gt;/cgroup</code>). The names of the cgroups contain the Docker container id, which we can use to get container metadata via Docker’s <code>inspect</code> call. This container metadata carries a list of labels assigned by the Docker <code>LABEL</code> directive at build time.</p><p>Containers and their labels must be bound together using code integrity tools. PAL supports using Docker’s Notary, which confirms that a specific container hash maps to specific metadata like a container’s name and label.</p>
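<p>As a rough illustration of the pid-to-container lookup (a Python sketch; the 64-hex-digit container id embedded in the cgroup path is an assumption about Docker’s cgroup naming, and PAL itself is not written this way):</p>

```python
# Illustrative sketch of the lookup pald performs: map a caller's pid to a
# Docker container id by parsing its cgroup file. The 64-hex-digit id in the
# cgroup path is an assumption about how Docker names its cgroups.
import re

CONTAINER_ID_RE = re.compile(r"[0-9a-f]{64}")

def container_id_from_cgroup(cgroup_text):
    # Each line looks like "<hierarchy>:<subsystems>:<path>"; a containerised
    # process has the container id embedded in the path component.
    for line in cgroup_text.splitlines():
        match = CONTAINER_ID_RE.search(line.rpartition(":")[2])
        if match:
            return match.group(0)
    return None

def container_id_for_pid(pid):
    # Read the calling process's cgroup file from procfs.
    with open(f"/proc/{pid}/cgroup") as f:
        return container_id_from_cgroup(f.read())
```

<p>With the container id in hand, the daemon can ask Docker for the container’s metadata and its labels.</p>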
    <div>
      <h3>PAL’s present and future</h3>
      <a href="#pals-present-and-future">
        
      </a>
    </div>
    <p>PAL represents one solution for identity bootstrapping for our environment. Other service identity bootstrapping tools bootstrap at the host level or are highly coupled to their environment. AWS IAM, for example, only works at the level of virtual machines running on AWS; Kubernetes secrets and Docker secrets management only work in Kubernetes and Docker Swarm, respectively. While we’ve developed PAL to work alongside Mesos, we designed it to be used as a service identity mechanism in many other environments simply by plugging in new ways for PAL to read identities from service artifacts (containers, packages, or binaries).</p><p>Recall the issue where Vine disclosed their source code in their Docker containers on a public repository, Docker Hub. With PAL, Vine could have kept their API keys (or even the entire codebase) encrypted in the container, published that safe version of the container to Docker Hub, and decrypted the code at container startup in their particular production environment.</p><p>Using PAL, you can give your trusted containers an identity that allows them to safely receive secrets only in production, without the risks associated with other secret distribution methods. This identity can be a secret like a cryptographic key, allowing your service to decrypt its sensitive configuration, or it could be a credential that allows it to access sensitive services such as secret managers or CAs. PAL solves a key bootstrapping problem for service identity, making it simple to run trusted and untrusted containers side-by-side while ensuring that your secrets are safe.</p>
    <div>
      <h3>Credits</h3>
      <a href="#credits">
        
      </a>
    </div>
    <p>PAL was created by Joshua Kroll, Daniel Dao, and Ben Burkert, with design prototyping by Nick Sullivan. This post was adapted from an internal blog post by Joshua Kroll and presentations I gave at <a href="https://www.safaribooksonline.com/library/view/oreilly-security-conference/9781491976128/video290047.html">O’Reilly Security Amsterdam</a> in 2016 and at <a href="https://www.youtube.com/watch?v=G_JXv059UY0">BSides Las Vegas</a>.</p> ]]></content:encoded>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[GitHub]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">4XgEY8tsCVdZHW4zp8e22j</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Manage Cloudflare records with Salt]]></title>
            <link>https://blog.cloudflare.com/manage-cloudflare-records-with-salt/</link>
            <pubDate>Wed, 14 Dec 2016 14:25:52 GMT</pubDate>
            <description><![CDATA[ We use Salt to manage our ever growing global fleet of machines. Salt is great for managing configurations and being the source of truth. We use it for remote command execution and for network automation tasks. ]]></description>
            <content:encoded><![CDATA[ <p>We use <a href="https://github.com/saltstack/salt">Salt</a> to manage our ever growing global fleet of machines. Salt is great for managing configurations and being the source of truth. We use it for remote command execution and for <a href="/the-internet-is-hostile-building-a-more-resilient-network/">network automation tasks</a>. It allows us to grow our infrastructure quickly with minimal human intervention.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5HTPX88whBMGq7gOYzrjhw/1c30f21eeaae6e03d229aa8e548313dc/grains.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC-BY 2.0</a> <a href="https://secure.flickr.com/photos/pagedooley/2769134850/">image</a> by <a href="https://secure.flickr.com/photos/pagedooley/">Kevin Dooley</a></p><p>We got to thinking. Are DNS records not just another piece of configuration? We concluded that they are, and decided to manage our own records from Salt too.</p><p>We are strong believers in <a href="https://en.wikipedia.org/wiki/Eating_your_own_dog_food">eating our own dog food</a>, so we make our employees use the next version of our service before rolling it out to everyone else. That way, if there's a problem visiting one of the 5 million websites that use Cloudflare, it'll get spotted quickly internally. This is also why we keep our own DNS records on Cloudflare itself.</p><p>Cloudflare has an <a href="https://api.cloudflare.com/">API</a> that allows you to manage your zones programmatically without ever logging into the dashboard. Until recently, we were using handcrafted scripts to manage our own DNS records via our API. These scripts were in exotic languages like PHP for historical reasons and had interesting behavior that not everybody enjoyed. While we were dogfooding our own APIs, these scripts were pushing the source of truth for DNS out of Salt.</p><p>When we decided to move some zones to Salt, we had a few key motivations:</p><ol><li><p>Single source of truth</p></li><li><p>Peer-reviewed, audited and versioned changes</p></li><li><p>Making things that our customers would want to use</p></li></ol><p>Points 1 and 2 were achieved by having DNS records in a Salt repo. The Salt configuration is itself in git, so we get peer review and an audit trail for free. We think that we made progress on point 3 also.</p><p>After extensive internal testing and finding a few bugs in our API (that's what we wanted!), we are happy to announce the public availability of the <a href="https://github.com/cloudflare/salt-cloudflare">Cloudflare Salt module</a>.</p><p>If you are familiar with Salt, it should be easy to see how to configure your records via Salt. All you need is the following:</p><p>Create the state <code>cloudflare</code> to deploy your zones:</p>
            <pre><code>example.com:
  cloudflare.manage_zone_records:
    - zone: {{ pillar["cloudflare_zones"]["example.com"]|yaml }}</code></pre>
            <p>Add a pillar to configure your zone:</p>
            <pre><code>cloudflare_zones:
  example.com:
    auth_email: ivan@example.com
    auth_key: auth key goes here
    zone_id: 0101deadbeefdeadbeefdeadbeefdead
    records:
      - name: blog.example.com
        content: 93.184.216.34
        proxied: true</code></pre>
            <p>Here we configure zone <code>example.com</code> to only have one record <code>blog.example.com</code> pointing to <code>93.184.216.34</code> behind Cloudflare.</p><p>You can test your changes before you deploy:</p>
            <pre><code>salt-call state.apply cloudflare test=true</code></pre>
            <p>And then deploy if you are happy with the dry run:</p>
            <pre><code>salt-call state.apply cloudflare</code></pre>
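<p>Under the hood, the state drives the Cloudflare v4 API. As a sketch of the kind of request involved, this builds (but does not send) the call that lists a zone’s DNS records, using the same credential fields as the pillar above; the values are the placeholders from that example:</p>

```python
# Sketch of the Cloudflare v4 API call the Salt module drives under the
# hood: listing a zone's DNS records. The credential fields mirror the
# pillar (auth_email, auth_key, zone_id); the values are placeholders.
import urllib.request

API_BASE = "https://api.cloudflare.com/client/v4"

def list_records_request(zone_id, auth_email, auth_key):
    # X-Auth-Email / X-Auth-Key are the API-key headers the v4 API accepts;
    # the request object is constructed but not sent here.
    return urllib.request.Request(
        f"{API_BASE}/zones/{zone_id}/dns_records",
        headers={"X-Auth-Email": auth_email, "X-Auth-Key": auth_key},
    )

req = list_records_request("0101deadbeefdeadbeefdeadbeefdead",
                           "ivan@example.com", "auth key goes here")
```

<p>The module compares the records returned by this endpoint against the pillar and issues create/update/delete calls to converge them.</p>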
            <p>After the initial setup, all you need to do is change the <code>records</code> array in the pillar and re-deploy the state. See the <a href="https://github.com/cloudflare/salt-cloudflare">README</a> for more details.</p><p>DNS records are only one part of the configuration you may want to change for your Cloudflare domain. We have plans to "saltify" other settings like the WAF, caching and page rules too.</p><p><a href="https://www.cloudflare.com/join-our-team/">Come work with us</a> if you want to help!</p> ]]></content:encoded>
            <category><![CDATA[GitHub]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Salt]]></category>
            <guid isPermaLink="false">7cpWAbZAOswJhWNOR3oex8</guid>
            <dc:creator>Ivan Babrou</dc:creator>
        </item>
        <item>
            <title><![CDATA[Secure and fast GitHub Pages with CloudFlare]]></title>
            <link>https://blog.cloudflare.com/secure-and-fast-github-pages-with-cloudflare/</link>
            <pubDate>Tue, 14 Jun 2016 13:04:21 GMT</pubDate>
            <description><![CDATA[ GitHub offers a web hosting service whereby you can serve a static website from a GitHub repository. This platform, GitHub Pages, can be used with CloudFlare whilst using a custom domain name.

 ]]></description>
            <content:encoded><![CDATA[ <p>GitHub offers a web hosting service whereby you can serve a static website from a GitHub repository. This platform, GitHub Pages, can be used with CloudFlare whilst using a custom domain name.</p><p>In this tutorial, I will show you how to use CloudFlare and GitHub together. By taking advantage of CloudFlare’s global network, you can utilise our CDN service to improve your site's performance and security.</p><p>Whilst GitHub Pages doesn't ordinarily support SSL on custom domains, CloudFlare's Universal SSL allows your users to access your site over SSL, thus opening up the performance advantages of HTTP/2.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/70mtZVLv2Dj2kkNjmWj1q7/0b9490f343714a2c5d61cadcc05e675f/IMG_0651.JPG.jpeg" />
            
            </figure><p>GitHub Pages is designed to host sites that only serve static HTML. The ability to only host static content isn’t as big of a restriction as you might think.</p><p>Static site generators avoid the repetitive tasks of updating “latest posts” feeds, pagination or sitemaps, whilst generating static HTML that can be uploaded to any web hosting service without a scripting engine. Unlike ancient desktop tools like FrontPage and Dreamweaver, which lacked a content model, modern static site generators keep the design decisively separate from the content.</p><p>Typically, CMS-based sites must query a database for content, then render the HTML to be served to the end user; all this to serve the same content for request after request. Even with caching, this combination is hardly elegant for sites where only the administrator changes the content.</p><p>With static sites, the web server merely needs to serve static HTML to an end user. This has profound performance benefits; static sites walk whilst dynamic sites crawl. Above this, the ability to track all site changes in a Git repository adds better control when it comes to collaborative editing.</p><p>With static sites there is no CMS, no database; just HTML. No need to worry about <a href="https://bugs.php.net/bug.php?id=71105">patching PHP</a> or plugins with <a href="/the-sleepy-user-agent/">insecure database queries</a>.</p><p>Clearly static sites can’t do everything, namely anything that’s dynamic; though you can utilise JavaScript APIs to add some dynamic functionality if that's a route you want to pursue.</p>
    <div>
      <h3>Step 0: Preparing your site</h3>
      <a href="#step-0-preparing-your-site">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/50nf0J19SYvwLHgvCTNE3T/a46dd03886f172e8483eec365c2700da/1280px-Dr_Jekyll_and_Mr_Hyde_poster_edit2-1.jpg" />
            
            </figure><p><a href="https://en.wikipedia.org/wiki/Strange_Case_of_Dr_Jekyll_and_Mr_Hyde#/media/File:Dr_Jekyll_and_Mr_Hyde_poster_edit2.jpg"><i>Chicago: National Prtg. &amp; Engr. Co.</i></a></p><p>For this tutorial, I will be using Jekyll as a static site generator. Jekyll works by taking a bunch of Markdown files and outputting the HTML necessary for a static blog.</p><p>There is a great list of generators on a site called <a href="http://www.staticgen.com">StaticGen</a>, including a static site generator written in Go called <a href="https://gohugo.io">Hugo</a>. I’m using Jekyll here due to the <a href="https://jekyllrb.com/docs/github-pages/">integration with GitHub Pages</a>.</p><p>If you want to host a JavaScript app or a simple static site, just skip this step.</p><p>Assuming you have a Ruby version greater than 2.0.0, you can <a href="https://jekyllrb.com/docs/installation/">install Jekyll</a> by running:</p>
            <pre><code>gem install jekyll</code></pre>
            <p>For this example I’m going to be creating a blog about plants using Jekyll. To create the blog simply run:</p>
            <pre><code>jekyll new plants</code></pre>
            <p>which will output something like:</p>
            <pre><code>New jekyll site installed in /Users/junade/plants.</code></pre>
            <p>From here I can <code>cd</code> into the <code>plants</code> directory and serve the blog on my local computer as follows:</p>
            <pre><code>cd plants
jekyll serve</code></pre>
            <p>A web server will spin up and you’ll find some useful information in your terminal prompt. You should be able to access the site from localhost at port 4000 in your browser: <a href="http://127.0.0.1:4000">http://127.0.0.1:4000</a></p><p>Whilst this server is running you can update your site; you will find some useful variables to edit in <code>_config.yml</code>. To add a new post or edit the existing demo, simply add a new markdown file in the <code>_posts</code> directory.</p><p>Whilst ordinarily you would need to generate your site’s HTML using <code>jekyll build</code> then upload it to the web server of your choice, GitHub allows for raw Jekyll projects to be uploaded to its service; it will handle the building and serving of the HTML itself from the Jekyll project.</p><p>There is a <a href="https://github.com/github/pages-gem">Ruby Gem for Jekyll sites</a> that ensures they are rendered the same way locally as they do when they are hosted on GitHub Pages if you’re interested.</p>
    <div>
      <h3>Step 1: Setting up our repository</h3>
      <a href="#step-1-setting-up-our-repository">
        
      </a>
    </div>
    <p>Create a GitHub repository which contains the files of the site we want to serve (such as our Jekyll source or our HTML). As my GitHub username is IcyApril, I can create a repository called <code>icyapril.github.io</code>. Be sure that the repository name is all lowercase.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Xfs9k0OfZU8tVFrtlgWip/002f8515283e9e18268abc25b7e36981/create_repo_github_pages.png" />
            
            </figure><p>In this repository, whatever is in the master branch ends up published on <code>icyapril.github.io</code>.</p><p>If you haven’t already, let’s initialise a repository containing our site files:</p>
            <pre><code>git init
git add -A
git commit -m "Initial commit."</code></pre>
            <p>We can now push files to our host by adding the origin as GitHub; make sure the URL of the origin is customised to be your own repository:</p>
            <pre><code>git remote add origin git@github.com:IcyApril/icyapril.github.io.git
git push -u origin master</code></pre>
            <p>You should now see your site when you visit <code>[username].github.io</code>.</p>
    <div>
      <h3>Step 2: Setting up our DNS</h3>
      <a href="#step-2-setting-up-our-dns">
        
      </a>
    </div>
    <p>I’ll assume you have registered a domain and <a href="https://support.cloudflare.com/hc/en-us/articles/201720164-Step-2-Create-a-CloudFlare-account-and-add-a-website">added it to your CloudFlare account</a>. In order for GitHub to accept traffic from this domain, we need to create a CNAME file in our repository which contains the hostname to accept traffic for.</p><p>The following rules apply:</p><ul><li><p>If the CNAME file contains example.com, then <a href="http://www.example.com">www.example.com</a> will redirect to example.com.</p></li><li><p>If the CNAME file contains <a href="http://www.example.com">www.example.com</a>, then example.com will redirect to <a href="http://www.example.com">www.example.com</a>.</p></li></ul><p>In the Git repository we created in the previous section, let’s add a CNAME file and commit our changes:</p>
            <pre><code>echo "www.ju.je" &gt; CNAME
git add -A
git commit -m "Added CNAME file."
git push origin master</code></pre>
            <p>We can now add DNS records pointing our domain at GitHub Pages (we can use a CNAME at the root thanks to <a href="/introducing-cname-flattening-rfc-compliant-cnames-at-a-domains-root/">CNAME Flattening</a>):</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3jZjeTCFx2D8hokGjAD2m4/e80624359358b60e0b7cfdd75544b5bd/Yn1pLtu.png" />
            
            </figure><p>You can find <a href="https://help.github.com/articles/setting-up-an-apex-domain/">the most up-to-date IP Addresses</a> from the GitHub Pages documentation.</p>
    <div>
      <h3>Step 3: Time for SSL</h3>
      <a href="#step-3-time-for-ssl">
        
      </a>
    </div>
    <p>Unfortunately GitHub Pages doesn’t yet support SSL for custom domains, which would ordinarily rule out using HTTP/2. Whilst the HTTP/2 specification (<a href="https://tools.ietf.org/html/rfc7540">RFC 7540</a>) allows for HTTP/2 over plain-text TCP, all popular browsers require HTTP/2 to run on top of Transport Layer Security; running HTTP/2 only over HTTPS is therefore the de facto standard.</p><p>Fortunately, CloudFlare’s Universal SSL option allows us to provide a signed SSL certificate to site visitors. This allows us to gain the performance benefits of HTTP/2 and <a href="https://webmasters.googleblog.com/2014/08/https-as-ranking-signal.html">potentially improve search engine rankings</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7hHyZ2zWyyRhhA1f7QQ48o/4f9e52471bbcec3d65d829a7229a1c8a/cloudflare_ssl_modes.png" />
            
            </figure><p>In the Crypto tab of your CloudFlare site you should ensure your SSL mode is set to <code>Full</code> but not <code>Full (Strict)</code>:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1WdxoW5g3b6hG8DkPD1Kka/606b8757f89a2144725e5ec55b8ee928/T08btVu.png" />
            
            </figure><p>We can now add a Page Rule to enforce HTTPS; as you add other Page Rules, make sure this remains the primary Page Rule:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/rNJrxsVAjbZCmrxNXtqHG/0176c20cd7a73d9530f559a83ae0ae91/always_use_https_page_rule.png" />
            
            </figure><p>We can also create a Page Rule to ensure that non-www is redirected to www securely when using HTTPS:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/60sT5nSCAcf9S1akGJ0YPd/79edbe8bea1fff301c526eb0f3296767/redirect_page_rule_to_www.png" />
            
            </figure><p>Enabling HTTP Strict Transport Security (HSTS) helps ensure that your visitors communicate with your site over HTTPS, by telling browsers that they should always connect over encrypted HTTPS. Be careful if you choose to set this, though: it may render your site inaccessible if you ever decide to turn HTTPS off.</p>
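<p>For reference, the HSTS policy is delivered as a single response header; a typical value looks like the following (the one-year <code>max-age</code> and <code>includeSubDomains</code> here are common choices, not requirements):</p>

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```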
    <div>
      <h3>Step 4: Cache all the things</h3>
      <a href="#step-4-cache-all-the-things">
        
      </a>
    </div>
    <p>CloudFlare has a “Cache Everything” option in Page Rules. For static sites, it allows your HTML to be cached and served directly from CloudFlare's CDN.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6vl8huJzTmWXTyQvaEjt1L/53b00cb3328e6645f172bcef285b33c0/PtBIQyF.png" />
            
            </figure><p>When deploying your site you can use the Purge Cache option in the Cache tab on CloudFlare to remove the cached version of the static pages. If you’re using a Continuous Integration system to deploy your site, you can use our <a href="https://api.cloudflare.com">API</a> to clear the cache programmatically.</p>
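<p>As a sketch of that programmatic purge (the zone id and key below are placeholders), a CI job could POST to the zone’s <code>purge_cache</code> endpoint:</p>

```python
# Sketch of the API call a CI job could make to purge the cache after a
# deploy: POST to the zone's purge_cache endpoint with a JSON body.
# The zone id and credentials are placeholders.
import json
import urllib.request

def purge_everything_request(zone_id, auth_email, auth_key):
    # Build (but do not send) the purge request for the whole zone.
    body = json.dumps({"purge_everything": True}).encode()
    return urllib.request.Request(
        f"https://api.cloudflare.com/client/v4/zones/{zone_id}/purge_cache",
        data=body,
        headers={"X-Auth-Email": auth_email,
                 "X-Auth-Key": auth_key,
                 "Content-Type": "application/json"},
        method="POST",
    )
```

<p>Purging everything is the simplest option for a small static site; for larger sites you can purge individual URLs instead.</p>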
    <div>
      <h3>Shortcomings</h3>
      <a href="#shortcomings">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4uyeqKi4p6FkFGOw2WiyaO/94f2c0aa6976b5e24bb74e1eec67dd58/mind_the_gap.png" />
            
            </figure><p><a href="https://commons.wikimedia.org/wiki/File:MindTheGapVictoria.jpg"><i>WillMcC on WikiMedia</i></a></p><p>Firstly a word on security. If you are deploying a JavaScript app which communicates with remote APIs, be sure not to use this for sensitive data submissions. As <a href="https://help.github.com/articles/what-are-github-pages/">GitHub themselves put it</a>: “GitHub Pages sites shouldn't be used for sensitive transactions like sending passwords or credit card numbers.” Also bear in mind your website source files are publicly accessible in a Git repository, so be extra careful about what you put there.</p><p>There are some things we can’t do; GitHub Pages doesn’t let us set custom headers, which unfortunately means we can’t do HTTP/2 Server Push right now.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>GitHub Pages, CloudFlare and a static site generator combine to create fast, secure, free hosting for static sites.</p> ]]></content:encoded>
            <category><![CDATA[GitHub]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[HTTP2]]></category>
            <guid isPermaLink="false">cwaJIJzcEi9oxc1BiXqYJ</guid>
            <dc:creator>Junade Ali</dc:creator>
        </item>
        <item>
            <title><![CDATA[Red October: CloudFlare’s Open Source Implementation of the Two-Man Rule]]></title>
            <link>https://blog.cloudflare.com/red-october-cloudflares-open-source-implementation-of-the-two-man-rule/</link>
            <pubDate>Thu, 21 Nov 2013 09:00:00 GMT</pubDate>
            <description><![CDATA[ At CloudFlare, we are always looking for better ways to secure the data we’re entrusted with. This means hardening our system against outside threats such as hackers, but it also means protecting against insider threats.  ]]></description>
            <content:encoded><![CDATA[ <p>At CloudFlare, we are always looking for better ways to secure the data we’re entrusted with. This means hardening our system against outside threats such as hackers, but it also means protecting against insider threats. According to a <a href="http://www.verizonenterprise.com/DBIR/2013/">recent Verizon report</a>, insider threats accounted for around 14% of data breaches in 2013. While we perform background checks and carefully screen team members, we also implement technical barriers to protect the data with which we are entrusted.</p><p>One good information security practice is known as the “two-man rule.” It comes from military history, where a nuclear missile couldn’t be launched unless two people agreed and turned their launch keys simultaneously. This requirement was introduced in order to prevent one individual from accidentally (or intentionally) starting World War III.</p><p>To prevent the risk of rogue employees misusing sensitive data, we built a service in Go to enforce the two-person rule. We call the service Red October after the famous scene from “The Hunt for Red October.” In line with <a href="/a-note-about-kerckhoffs-principle">our philosophy on security software</a>, we are open sourcing the technology so you can use it in your own organization (<a href="https://github.com/cloudflare/redoctober">here’s a link</a> to the public GitHub repo). If you are interested in the nitty-gritty details, read on.</p>
    <div>
      <h3>What it is</h3>
      <a href="#what-it-is">
        
      </a>
    </div>
    <p>Red October is a cryptographically-secure implementation of the two-person rule to protect sensitive data. From a technical perspective, Red October is a software-based encryption and decryption server. The server can be used to encrypt a payload in such a way that no one individual can decrypt it. The encryption of the payload is cryptographically tied to the credentials of the authorized users.</p><p>Authorized persons can delegate their credentials to the server for a period of time. The server can decrypt any previously-encrypted payloads as long as the appropriate number of people have delegated their credentials to the server.</p><p>This architecture allows Red October to act as a convenient decryption service. Other systems, including CloudFlare’s build system, can use it for decryption and users can delegate their credentials to the server via a simple web interface. All communication with Red October is encrypted with TLS, ensuring that passwords are not sent in the clear.</p>
    <div>
      <h3>How to use it</h3>
      <a href="#how-to-use-it">
        
      </a>
    </div>
    <p>Setting up a Red October server is simple; all it requires is a locally-readable path and an SSL key pair. After that, all control is handled remotely through a set of JSON-based APIs.</p><p>Red October is backed by a database of accounts stored on disk in a portable password vault. The server never stores the account password there, only a <a href="/keeping-passwords-safe-by-staying-up-to-date">salted hash of the password</a> for each account. For each user, the server creates an RSA key pair and encrypts the private key with a key derived from the password and a randomly generated salt using a secure derivation function.</p><p>Any administrator can encrypt any piece of data with the encrypt API. This request takes a list of users and the minimum number of users needed to decrypt it. The server returns a somewhat larger piece of data that contains an encrypted version of this data. The encrypted data can then be stored elsewhere.</p><p>This data can later be decrypted with the decrypt API, but only if enough people have delegated their credentials to the server. The delegation API lets a user grant permission to a server to use their credentials for a limited amount of time and a limited number of uses.</p>
    <div>
      <h3>Cryptographic Design</h3>
      <a href="#cryptographic-design">
        
      </a>
    </div>
    <p>Red October was designed from cryptographic first principles, combining trusted and understood algorithms in known ways. CloudFlare is also opening the source of the server to allow others to analyze its design.</p><p>Red October is based on combinatorial techniques and trusted cryptographic primitives. We investigated using more complicated secret-sharing primitives like <a href="http://en.wikipedia.org/wiki/Shamir's_Secret_Sharing">Shamir's secret sharing scheme</a>, but we found that a simpler combinatorial approach based on primitives from Go's standard library was preferable to implementing a mathematical algorithm from scratch. Red October uses <a href="http://en.wikipedia.org/wiki/Advanced_Encryption_Standard">128-bit AES</a>, <a href="http://en.wikipedia.org/wiki/RSA_(algorithm)">2048-bit RSA</a> and <a href="http://en.wikipedia.org/wiki/Scrypt">scrypt</a> as its cryptographic primitives.</p>
    <div>
      <h4>Creating an account</h4>
      <a href="#creating-an-account">
        
      </a>
    </div>
    <p>Each user is assigned a unique, randomly-generated RSA key pair when creating an account on a Red October server. The private key is encrypted with a password key derived from the user’s password and salt using scrypt. The public key is stored unencrypted in the vault with the encrypted private key.</p>
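<p>As an illustration of that derivation step, here is a minimal sketch using Python’s <code>hashlib.scrypt</code>; the cost parameters are common interactive-use choices, not necessarily the ones Red October uses:</p>

```python
# Minimal sketch of the password-key derivation described above: scrypt
# stretches the user's password and a random salt into a key that encrypts
# the RSA private key. The cost parameters (n, r, p) are illustrative.
import hashlib
import os

def derive_password_key(password: bytes, salt: bytes) -> bytes:
    # n=2**14, r=8, p=1 is a common interactive-login cost setting;
    # dklen=32 yields a 256-bit key for the private-key encryption step.
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

salt = os.urandom(16)                    # stored alongside the encrypted key
key = derive_password_key(b"hunter2", salt)
```

<p>Because the salt is random per user, identical passwords still derive different keys.</p>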
    <div>
      <h4>Encrypting data</h4>
      <a href="#encrypting-data">
        
      </a>
    </div>
    <p>When asked to encrypt a piece of data, the server generates a random 128-bit AES key. This key is used to encrypt the data. For each user that is allowed to decrypt the data, a user-specific key encryption key is chosen. For each unique pair of users, the data key is doubly encrypted, once with the key encryption key of each user. The key encryption keys are then encrypted with the public RSA key associated with their account. The encrypted data, the set of doubly-encrypted data keys, and the RSA-encrypted key encryption keys are all bundled together and returned. The encrypted data is never stored on the server.</p>
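<p>A toy sketch of this combinatorial layering (XOR stands in for AES purely to show the structure, and the helper names are invented):</p>

```python
# Toy illustration of the combinatorial layering described above. XOR stands
# in for 128-bit AES solely to show the structure: the data key is encrypted
# twice, once per user in each pair, so any two listed users can recover it.
from itertools import combinations
import os

def xor(key, data):
    # Placeholder cipher; Red October uses AES here.
    return bytes(a ^ b for a, b in zip(key, data))

def encrypt_for_users(users, kek):
    data_key = os.urandom(16)
    # Doubly encrypt the data key for every unordered pair of users.
    shares = {(a, b): xor(kek[a], xor(kek[b], data_key))
              for a, b in combinations(sorted(users), 2)}
    return data_key, shares

def decrypt_with_pair(shares, kek, a, b):
    # Either ordering of the pair works; undo both layers.
    a, b = sorted((a, b))
    return xor(kek[a], xor(kek[b], shares[(a, b)]))
```

<p>With three users this produces three doubly-encrypted copies of the data key, one per pair, which is why the scheme scales combinatorially with the user list.</p>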
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/38JLECxsa5YaGcwkkJnhEn/3054808bc0b25afda9d7c3be1fdc8f6b/cryptography1.png" />
            
            </figure><p>Delegating credentials to the server</p><p>When a user delegates their key to the server, they submit their username and password over TLS using the delegate JSON API. For each account, the password is verified against the salted hash. If the password is correct, a password key is derived from the password and used to decrypt the user’s RSA private key. This key is now “Live” for the length of time and number of decryptions chosen by the user.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Jefh1rq4DwCnifHejHJNe/c10f4e6420354b9c16f3f8f12ff15ae8/cryptography3.png" />
            
            </figure><p>Decrypting data</p><p>To decrypt a file, the server validates that the requesting user is an administrator and has the correct password. If two of the listed valid users have delegated their keys, then decryption can occur. First each user's RSA private key is used to decrypt their key encryption key, then the key encryption keys are used to decrypt the doubly-encrypted data key, which is then used to decrypt the data.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6nvlGuGFWguhaebYbrsY9n/1432bb6989d6b48631c7271d95bd12a6/cryptography2.png" />
            
            </figure><p>Some other key points:</p><ol><li><p>Cryptographic security. The Red October server does not have the ability to decrypt user keys without their password. This prevents someone with access to the vault from decrypting data.</p></li><li><p>Password flexibility. Passwords can be changed without changing the encryption of a given file. Key encryption keys ensure that password changes are decoupled from data encryption keys.</p></li></ol>
    <div>
      <h3>Looking ahead</h3>
      <a href="#looking-ahead">
        
      </a>
    </div>
    <p>The version of Red October we are releasing to GitHub is in beta. It is licensed under the <a href="http://opensource.org/licenses/BSD-3-Clause">3-clause BSD license</a>. We plan to continue to release our improvements to the open source community. Here is the project on GitHub: <a href="https://github.com/cloudflare/redoctober">Red October</a>.</p><p>Writing the server in Go allowed us to design the different components of this server in a modular way. Our hope is that this modularity will make it easy for anyone to build in support for different authentication methods that are not based on passwords (for example, TLS client certificates or time-based one-time passwords) and new core cryptographic primitives (for example, elliptic curve cryptography).</p><p>CloudFlare is always looking to improve the state of security on the Internet. It is important to us to share our advances with the world and <a href="/open-source-two-way-street">contribute back to the community</a>. See the <a href="http://cloudflare.github.io/">CloudFlare GitHub page</a> for the list of our open source projects and initiatives.</p> ]]></content:encoded>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[RSA]]></category>
            <category><![CDATA[GitHub]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Salt]]></category>
            <guid isPermaLink="false">6tG7fMY0ykeCo1uPglwZFn</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[CloudFlare And Open Source Software: A Two-Way Street]]></title>
            <link>https://blog.cloudflare.com/open-source-two-way-street/</link>
            <pubDate>Mon, 07 Oct 2013 18:00:00 GMT</pubDate>
            <description><![CDATA[ CloudFlare uses a great deal of open source and free software. Our core server platform is nginx (which is released using a two-clause BSD license) and our primary database of choice is postgresql (which is released using their own BSD-like license).  ]]></description>
            <content:encoded><![CDATA[ <p>CloudFlare uses a great deal of open source and free software. Our core server platform is <a href="http://nginx.org/">nginx</a> (which is released using a <a href="http://nginx.org/LICENSE">two-clause BSD license</a>) and our primary database of choice is <a href="http://www.postgresql.org/">postgresql</a> (which is released using their own <a href="http://www.postgresql.org/about/licence/">BSD-like license</a>). We've <a href="/kyoto_tycoon_with_postgresql">talked in the past</a> about our use of <a href="http://fallabs.com/kyototycoon/">Kyoto Tycoon</a> (which is released under the GNU General Public License) and we've built many things on top of <a href="/pushing-nginx-to-its-limit-with-lua">OpenResty</a>.</p><p>And, of course, we make use of open source tools such as gcc, make, the Go programming language, Lua, python, Perl, and PHP, and projects like <a href="https://www.getsentry.com/welcome/">Sentry</a>, <a href="http://www.elasticsearch.org/">Kibana</a>, and <a href="http://www.nagios.org/">nagios</a>. And, naturally, we use Linux.</p><p>It would take a while to write down all the software that we use to build CloudFlare, but all that software has one thing in common: it's open source or free software. Our stack consists of either software we've built ourselves or an open source project (which we've sometimes forked).</p>
    <div>
      <h3>Why Build On Open Source</h3>
      <a href="#why-build-on-open-source">
        
      </a>
    </div>
    <p>It's probably obvious to most readers why we use open source software: it's reliable, it's easy to modify and it's easy to maintain. But there's another benefit that should not be overlooked: using and working on open source software brings a great deal of job satisfaction for programmers and it helps us hire the best.</p><p>We encourage our programmers to release changes they've made to open source software and to release projects through the <a href="http://cloudflare.github.io/">CloudFlare GitHub</a> page.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3TMjRYWUEXyn3g7aJadzx1/0836a32a7ce3cd21e3057ddb2905ada5/cloudflare-github.png" />
            
            </figure><p>At GitHub you'll find projects such as <a href="https://github.com/cloudflare/golog">golog</a> (a high-performance Go logger), <a href="https://github.com/cloudflare/lua-cmsgpack">lua-cmsgpack</a> (an implementation of MessagePack for Lua), a Python-based <a href="https://github.com/cloudflare/CloudFlare-CNAME-Flattener">CNAME flattener</a>, and a <a href="https://github.com/agentzh/stapxx">macro language</a> for systemtap, amongst others.</p><p>You'll also find the <a href="https://github.com/chaoslawful/lua-nginx-module">ngx_lua module</a> which embeds Lua in nginx. That's not something CloudFlare initially wrote, but we make such extensive use of it that we hired <a href="https://github.com/agentzh">Yichun Zhang</a>. He continues to work full-time on it while at CloudFlare.</p><p>And, if you've ever delved into the internals of nginx, you'll know another CloudFlare employee, Piotr Sikora, who recently added <a href="http://www.mail-archive.com/nginx-devel@nginx.org/msg01061.html">the ability to set keys for TLS Session Tickets</a> to nginx.</p><p>So, at CloudFlare, open source can get you a job, be your job or, at least, be a significant part of your job.</p>
    <div>
      <h3>Sponsoring</h3>
      <a href="#sponsoring">
        
      </a>
    </div>
    <p>Where appropriate (i.e. where we think we make the biggest impact and get something we need) we've sponsored external open source projects and paid for improvements that all can use.</p><p>We make wide use of the excellent <a href="http://luajit.org/">LuaJIT</a> project, and after much profiling by our engineers we discovered areas where more JITing would improve our performance. Rather than do the work ourselves, we <a href="http://luajit.org/sponsors.html#sponsorship_perf">sponsored</a> the LuaJIT project. These speedups will appear in LuaJIT 2.1 when it is released.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/HzOPkOxjv7EyhnI9yyUBU/ff6bc6f6b09c39070d9202287fe7c10c/two-way-street.png" />
            
            </figure>
    <div>
      <h3>Two Way Street</h3>
      <a href="#two-way-street">
        
      </a>
    </div>
    <p>Of course, it would be easy for us to use open source software, make modifications and not release them. None of the licenses for the software we use force us to release our modifications. But we prefer to give back, and not just because of karma.</p><p>There are two big advantages to releasing modifications we've made to existing projects: the many eyeballs effect and reducing fork cost.</p><p>The first, many eyeballs, is common to any open source project: the more people look at code, the better it gets. And that applies equally to code written by a core team of developers and code written by outsiders. When we contribute changes we've made, others look at the changes and improve them.</p><p>For example, back in 2012 we contributed an improvement to Go's <a href="https://code.google.com/p/go/source/detail?r=a041b45cc418">log/syslog</a> module. This year that work was <a href="https://code.google.com/p/go/source/detail?r=51407182d459">improved</a>.</p><p>And the cost of maintaining a fork creates useful economic pressure to release our modifications: it's cheaper for us to release than to maintain a fork and keep merging as the core of a project changes.</p><p>But what about CloudFlare's secret sauce?</p>
    <div>
      <h3>Open Sourcing CloudFlare Core</h3>
      <a href="#open-sourcing-cloudflare-core">
        
      </a>
    </div>
    <p>Our strong bias is to open source everything we've built. When we don't, it's usually because it's highly specific to us and/or because the support cost is high. Ultimately, we'd like to open source all our major components so that they can be used to build a faster, safer, smarter web.</p><p>Many of our smaller components are really glue code, so specific to our implementation of the overall system that open sourcing them wouldn't make sense.</p><p>We don't believe that there is any chunk of code so clever that it gives us a long-term competitive advantage. Instead, our advantage comes from the network we've built, the data we collect on making the web faster and safer, and, most importantly, the people we're able to attract.</p><p>A commitment to open source builds trust in the community, which helps us continue to build our service and attract the best people.</p><p>In fact, the best way to get hired at CloudFlare is to make good contributions to open source projects we find interesting and useful. Contributions like that often speak more loudly than a resume.</p>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[NGINX]]></category>
            <category><![CDATA[GitHub]]></category>
            <category><![CDATA[LUA]]></category>
            <guid isPermaLink="false">5k8obbKvvWiWBkX0BNi8Xl</guid>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
    </channel>
</rss>