Networking

Microsoft Azure Outage Shows How Fragile the Cloud Can Be

In the middle of Microsoft's busy earnings day on 29 October 2025, the company's Azure cloud service suffered a major outage that rippled around the world. Microsoft later said the disruption was caused by an inadvertent configuration change that broke its Azure Front Door (AFD) content-delivery network (theverge.com). This single mistake triggered a cascade of problems that knocked out popular services like Microsoft 365, Xbox Live and even mobile apps for Starbucks and Costco (theverge.com). The incident, which lasted for hours, highlighted just how dependent modern businesses are on cloud platforms and how easily those platforms can fail.

When did the outage start?

The trouble began in the late afternoon for European users and mid-morning for Americans. Microsoft's status page said that starting around 16:00 UTC (9 a.m. Pacific) on 29 October, customers using Azure Front Door "may have experienced latencies, timeouts, and errors" (theverge.com). Cisco's ThousandEyes monitoring service noticed similar problems at around 15:45 UTC, observing global HTTP timeouts and elevated packet loss at the edge of Microsoft's network (thousandeyes.com). In other words, requests simply stopped reaching Microsoft's servers.

What caused the problem?

Azure's engineers quickly identified a misconfiguration. They wrote that an "inadvertent configuration change was the trigger event" for the outage (theverge.com). Instead of a cyber-attack or a hardware failure, someone had changed a setting that broke the AFD service, which acts as a gateway for many Microsoft and customer websites. To stop things getting worse, Microsoft blocked all further changes to the service and rolled back to the last known good configuration (geekwire.com).

Which services were affected?

The outage cascaded through Microsoft's own products and many external customers:

Microsoft 365 and Office: At 12:25 p.m. Eastern time, Microsoft 365's status account said it was investigating reports of access problems (theverge.com). An update half an hour later noted that internal network issues were causing connectivity problems and that traffic was being rerouted to restore service (theverge.com).

Xbox and gaming: The Xbox support team later said that gaming services had recovered, but some players needed to restart their consoles to reconnect (theverge.com).

Third-party websites and apps: Because many organisations build their sites on Azure, the outage knocked out apps for Starbucks, Costco, Kroger and other retailers (theverge.com). Downdetector, a site that tracks outages, recorded problems for Office 365, Minecraft, Xbox Live, Copilot, and many other services (kbtx.com).

Airlines and critical infrastructure: Alaska Airlines and Hawaiian Airlines told customers that key systems, including their websites and online check-in, were disrupted (theverge.com). Alaska Airlines later explained that it stood up backup infrastructure and was gradually restoring services, asking passengers to see an agent at the airport if they couldn't check in online (news.alaskaair.com). The outage even affected Vodafone UK and London's Heathrow Airport (hindustantimes.com).

How did Microsoft respond?

Microsoft's status page updates provide a timeline of its response. After confirming the configuration error, Azure engineers blocked further changes and rolled back to the previous configuration state (geekwire.com). By 7:40 p.m. Eastern (23:40 UTC), Microsoft said the AFD service was running at 98% availability and that most affected customers were seeing improvements (theverge.com). The team predicted full mitigation by 00:40 UTC on 30 October (theverge.com) and kept working on the "tail-end recovery" for remaining customers. Microsoft did not immediately say who made the change or why proper safeguards failed. In an update posted at 12:22 p.m. Pacific, the company said it had deployed the "last known good" configuration and expected full recovery within four hours (geekwire.com). In the meantime, it blocked all changes to Azure Front Door to prevent the problem from recurring (geekwire.com).

Why was the impact so broad?

Cloud outages are not new, but this one was especially disruptive because Azure Front Door handles Domain Name System (DNS) and content-delivery functions for a huge number of services. When a misconfiguration broke AFD, it prevented successful connections to multiple Microsoft and customer services (thousandeyes.com). Even an hour after the problem began, more than 18,000 users were reporting issues with Azure, according to the outage-tracking site Downdetector (hindustantimes.com). Reports gradually fell as Microsoft rolled out fixes, but the event underscored how many companies rely on a small number of cloud providers.

This outage also came just over a week after a major Amazon Web Services (AWS) disruption that took down Fortnite, Alexa, Snapchat and other services (theverge.com). With back-to-back failures at two of the world's largest cloud providers, many businesses are questioning whether they have enough redundancy in their digital infrastructure.

The takeaway

The 29 October Azure outage shows how a single error can quickly ripple across the internet. A misconfigured setting in Microsoft's cloud knocked out airlines, retailers and gaming services for hours (theverge.com). Microsoft eventually rolled back the change and restored service, but the incident is a reminder that even the most sophisticated cloud platforms are prone to human mistakes. Companies that depend on these platforms may need to build more resilience and prepare backup plans so that one provider's misstep doesn't ground flights or stop customers from placing a coffee order.



In-browser RDP with Cloudflare Tunnel — Complete practical setup (tested and working on Windows 11)

This is a hands-on, step-by-step guide you can use to publish a Windows host with in-browser RDP using Cloudflare Tunnel and Cloudflare Zero Trust Access. Read it once, then follow each step. I wrote it so you can copy, paste, edit a few small values, and run a single PowerShell script at the end to finish the setup on Windows.

Short summary

My domain was registered with Hostinger, so I moved its DNS management into Cloudflare. In short: point your domain at Cloudflare (update the nameservers at Hostinger), create a Cloudflare Tunnel, install the tunnel agent on Windows, configure the tunnel to route a public hostname to an internal RDP host, create a Zero Trust Access app (RDP with browser rendering), and test from a browser.

Step 0 — Key information you should have

- A domain name (example: yourdomain.com).
- A Cloudflare account with Zero Trust (Access) enabled.
- A Hostinger account where your domain currently has DNS (so you can change nameservers).
- A Windows machine where you will run cloudflared (Administrator access required).
- The private IP of the RDP target (example: 192.168.1.100), with RDP enabled on that target.

Step 1 — Add your domain to Cloudflare and switch nameservers (my domain was registered at Hostinger)

1. Log in to Cloudflare and add your domain (the dashboard will give you two Cloudflare nameservers).
2. Log in to Hostinger, find your domain's DNS / Nameservers section, and replace the current nameservers with the two Cloudflare nameservers. Save changes.
3. Wait for Cloudflare to accept the domain (propagation typically takes a few minutes to a few hours). Check the Cloudflare dashboard until the domain shows as active.
4. Once active, Cloudflare manages DNS for your domain, and you will map the public hostname within the Cloudflare dashboard later in this process. No more changes are needed at Hostinger for the hostname you will use.

Step 2 — Prepare Zero Trust and create a Tunnel

Do this in the Cloudflare dashboard under Zero Trust (sometimes called Cloudflare for Teams / Access → Tunnels). The UI may show options to create tunnels or generate a one-time service install token. Follow the UI to create the tunnel and either:

- Generate a one-time service install token (recommended for an unattended Windows service install), or
- Create the tunnel and note the Tunnel UUID (if you prefer an interactive CLI login later).

Also in Zero Trust you will create network routing entries (CIDR ranges / routes) so Cloudflare knows which internal addresses the tunnel can reach:

- Under the Zero Trust / Network or Tunnels area, add the internal network routes (CIDR ranges) or specific IPs that the tunnel should be able to reach — for example your LAN range or the private IP of the RDP host (e.g., 192.168.1.0/24 or the single IP 192.168.1.100).
- Create any network policies required to allow the tunnel to access those CIDR ranges from the Cloudflare side (these options vary by UI but are usually grouped under "Network" or "Private networks").

(If you prefer the command line for this step, see the CLI sketch after Step 3.)

Step 3 — Create targets and allow admin users

Still under Zero Trust, register the internal target(s) you will access (the RDP host IP). This tells Cloudflare where to forward inbound session traffic that arrives via the tunnel.

Create a Zero Trust policy or Access rule that allows your admin account(s) to use the application. This is done in Cloudflare Access / Policies — add an "Allow" policy that specifies the Cloudflare user or group who can open the RDP app.
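If you would rather script the tunnel and route creation from Step 2 instead of clicking through the dashboard, the cloudflared CLI can do roughly the same thing. This is only a sketch: the tunnel name rdp-tunnel is an example, and the commands assume cloudflared.exe is already present at the path the Step 6 script uses and that you log in interactively rather than using a service token.

```powershell
# Sketch of a CLI equivalent to the dashboard steps; "rdp-tunnel" is an example name.
# Authenticate cloudflared against your Cloudflare account (opens a browser window).
& 'C:\Program Files\cloudflared\cloudflared.exe' tunnel login

# Create the tunnel and note the UUID it prints (you will need it in Step 6).
& 'C:\Program Files\cloudflared\cloudflared.exe' tunnel create rdp-tunnel

# Route the private LAN range (or a single host) through the tunnel,
# mirroring the "Private networks" entry made in the dashboard.
& 'C:\Program Files\cloudflared\cloudflared.exe' tunnel route ip add 192.168.1.0/24 rdp-tunnel

# List tunnels to confirm the new one exists.
& 'C:\Program Files\cloudflared\cloudflared.exe' tunnel list
```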
Step 4 — Configure a public hostname (Public Hostnames / DNS)

You will map a hostname that your users can visit in the browser to reach the tunnel and then the RDP target.

1. Go to Zero Trust → Tunnels → your tunnel → Public hostnames (or the Public Hostnames / DNS mapping area).
2. Add a public hostname, for example rdp.yourdomain.com.
3. Set the service to the internal RDP target: rdp://192.168.1.100:3389 (replace with your private IP and port if different).
4. Save. Cloudflare will create the correct CNAME behind the scenes (it points the public hostname to [TUNNEL_UUID].cfargotunnel.com), or you can create that CNAME in Cloudflare DNS manually if needed.

Step 5 — Create an Access application (Browser RDP)

1. In Zero Trust → Access → Applications, click Add application.
2. Set Application domain to rdp.yourdomain.com.
3. Choose Application type: Self-hosted and select RDP.
4. Enable Browser rendering (this renders the RDP session in the browser).
5. Create an Allow policy that includes your Cloudflare user (email) or a group that contains the admin accounts.
6. Save the application.

Step 6 — Install cloudflared and run the tunnel on Windows

Below is the one-shot PowerShell script I use. Edit the top variables ($ServiceToken, $tunnelUUID, $hostname, $targetIP) to match your values, then copy the full script in one go, paste it into an elevated PowerShell window, and hit Enter.

```powershell
# --- Edit these values before running ---
$UseToken     = $true
$ServiceToken = ''    # one-time service install token from the Zero Trust dashboard
$tunnelUUID   = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
$hostname     = 'rdp.yourdomain.com'
$targetIP     = '192.168.1.100'
$targetPort   = 3389

# Paths for the cloudflared binary and the config/credentials used by the Windows service.
$exeDir     = 'C:\Program Files\cloudflared'
$exePath    = Join-Path $exeDir 'cloudflared.exe'
$sysCfgDir  = 'C:\Windows\System32\config\systemprofile\.cloudflared'
$configPath = Join-Path $sysCfgDir 'config.yml'
$credPath   = Join-Path $sysCfgDir ($tunnelUUID + '.json')

# The script must run elevated.
If (-not ([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) { Write-Error 'Run PowerShell as Administrator'; Break }

# Download cloudflared if it is not already present.
New-Item -ItemType Directory -Force -Path $exeDir | Out-Null
$downloadUrl = 'https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-windows-amd64.exe'
if (-not (Test-Path $exePath)) {
  try { Invoke-WebRequest -Uri $downloadUrl -OutFile $exePath -UseBasicParsing -ErrorAction Stop; Unblock-File $exePath -ErrorAction SilentlyContinue }
  catch { Write-Error "Download failed: $downloadUrl"; Break }
}
& "$exePath" --version

# Remove any existing cloudflared service and stop stray processes.
Get-WmiObject Win32_Service | Where-Object { $_.PathName -and ($_.PathName -match 'cloudflared') } | ForEach-Object {
  try { Stop-Service -Name $_.Name -Force -ErrorAction SilentlyContinue } catch {}
  sc.exe delete $_.Name | Out-Null
}
taskkill /IM cloudflared.exe /F 2>$null

# Install the Windows service using the one-time token (token mode).
New-Item -ItemType Directory -Force -Path $sysCfgDir | Out-Null
if ($UseToken) {
  if (-not $ServiceToken) { Write-Error 'Set $ServiceToken'; Break }
  & "$exePath" service install $ServiceToken
  Start-Sleep -Seconds 2
}

# If a credentials file exists from an interactive login, copy it to the system profile.
$possibleCreds = @("$env:USERPROFILE\.cloudflared\$tunnelUUID.json", "C:\ProgramData\cloudflared\$tunnelUUID.json", (Join-Path $exeDir ($tunnelUUID + '.json')))
$found = $possibleCreds | Where-Object { Test-Path $_ } | Select-Object -First 1
if ($found) { Copy-Item -Path $found -Destination $credPath -Force; Write-Host "Copied credentials to $credPath" }
else { Write-Host "No credential file found; service may be running with token mode." }

$ingressService
```
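The listing above stops at $ingressService, so the tail of the script is not shown. If you run in credentials-file mode rather than token mode, the remaining work is to write a config.yml with an ingress rule that maps the public hostname to the RDP target and then restart the service. Here is a minimal sketch of what that tail can look like; the here-string contents and the final Restart-Service call are my assumptions based on the variables defined at the top of the script.

```powershell
# Minimal sketch (assumption): build the ingress service string from the variables above,
# write config.yml for the tunnel, and restart the service so it picks up the config.
$ingressService = "rdp://$($targetIP):$targetPort"

$configYml = @"
tunnel: $tunnelUUID
credentials-file: $credPath
ingress:
  - hostname: $hostname
    service: $ingressService
  - service: http_status:404
"@

Set-Content -Path $configPath -Value $configYml -Encoding ascii
Write-Host "Wrote $configPath"

# Restart the cloudflared service so the new configuration takes effect.
Restart-Service -Name cloudflared -ErrorAction SilentlyContinue
```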

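Once the service is installed, you can sanity-check everything before testing from the browser. These checks only use standard Windows tooling; replace the example IP and hostname with your own values from the earlier steps.

```powershell
# Confirm the cloudflared Windows service is installed and running.
Get-Service | Where-Object { $_.Name -match 'cloudflared' }

# Confirm the RDP target is reachable from the machine running cloudflared.
Test-NetConnection -ComputerName 192.168.1.100 -Port 3389

# Check that the public hostname resolves through Cloudflare DNS.
Resolve-DnsName rdp.yourdomain.com
```

Finally, browse to https://rdp.yourdomain.com. Cloudflare Access should prompt you to authenticate, and once the Allow policy from Step 5 matches your account, the Windows login screen is rendered directly in the browser tab.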
