Attention Required: Understanding Cloudflare Blocks

Understanding what triggers these alerts and knowing how to respond effectively can save you time and improve your online security awareness. While occasional security alerts may be unavoidable, several practices can minimize their frequency and impact on your online activities. For critical websites you access regularly, consider adjusting your browser settings to improve compatibility with their security requirements; your network configuration can also influence how websites perceive your traffic. Some organizations offer whitelisting options for regular users, removing unnecessary security challenges while maintaining protection against genuine threats. While occasionally inconvenient for legitimate users, these systems prevent numerous attacks daily.

Cloudflare Security Block: Understanding Why You Have Been Blocked

These requests first terminate at our HTTP and TLS layer, then flow into our core proxy system (which we call FL, for “Frontline”), and finally through Pingora, which performs cache lookups or fetches data from the origin if needed. During the incident we also saw latency rise sharply; this was due to large amounts of CPU being consumed by our debugging and observability systems, which automatically enhance uncaught errors with additional debugging information. Any Access configuration updates attempted at that time would have either failed outright or propagated very slowly.
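To illustrate why that error-enrichment path is CPU-hungry, here is a minimal Rust sketch, not Cloudflare’s actual code: every failing request pays the cost of capturing and symbolizing a backtrace, which is negligible at normal error rates but dominant when the whole fleet starts erroring at once.

```rust
use std::backtrace::Backtrace;

/// Hypothetical error wrapper: each uncaught error is enriched with a
/// captured backtrace for observability.
struct EnrichedError {
    message: String,
    trace: Backtrace,
}

fn enrich(message: &str) -> EnrichedError {
    EnrichedError {
        message: message.to_string(),
        // The expensive step: walking and symbolizing the stack.
        trace: Backtrace::force_capture(),
    }
}

fn main() {
    // Simulate a burst of failing requests, each paying the capture cost.
    for i in 0..1_000 {
        let err = enrich("bots module failed to load feature file");
        if i == 0 {
            println!("{}\n{}", err.message, err.trace);
        }
    }
}
```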

Cloudflare’s Bot Management includes, among other systems, a machine learning model that we use to generate bot scores for every request traversing our network. The model depends on a feature configuration file that is refreshed every few minutes and published to our entire network, allowing us to react to variations in traffic flows across the Internet. The initial symptom appeared to be a degraded Workers KV response rate causing downstream impact on other Cloudflare services. Once a correct Bot Management configuration file was deployed globally, most services started operating correctly. But in the last 6+ years we’ve not had another outage that caused the majority of core traffic to stop flowing through our network.
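The refresh-and-propagate cycle can be pictured as a small loop that periodically fetches the latest published file and atomically swaps it into place. The following is only a sketch under those assumptions; names and timings are illustrative:

```rust
use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Duration;

/// Illustrative stand-in for the Bot Management feature configuration.
struct FeatureConfig {
    version: u64,
    features: Vec<String>,
}

/// Hypothetical fetch of the most recently published file; in reality
/// this would come from a configuration distribution pipeline.
fn fetch_latest(version: u64) -> FeatureConfig {
    FeatureConfig { version, features: vec!["example_feature".into()] }
}

fn main() {
    let current = Arc::new(RwLock::new(fetch_latest(0)));

    // Refresh loop: periodically replace the active configuration so
    // the fleet can react to changing traffic patterns.
    let refresher = {
        let current = Arc::clone(&current);
        thread::spawn(move || {
            for v in 1..=3 {
                thread::sleep(Duration::from_millis(10)); // minutes, in production
                *current.write().unwrap() = fetch_latest(v);
            }
        })
    };

    refresher.join().unwrap();
    let cfg = current.read().unwrap();
    println!("active config: version {} ({} features)", cfg.version, cfg.features.len());
}
```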

  • Cloudflare stopped receiving client traffic from the affected networks when the shutdown began, causing the bytes transferred in response to drop to zero.
  • A website operator can then optionally express their preferences via machine-readable content signals.
  • When we’ve had outages in the past it’s always led to us building new, more resilient systems.

Why are we launching the Content Signals Policy now?

Remember, the primary intent of such security measures is to protect both users and content providers from malicious threats. On September 29, 2025, Internet connectivity was completely shut down across Afghanistan, impacting business, education, finance, and government services. As many as 15 provinces experienced shutdowns, and we review the observed impacts across several of them below, using the regional traffic data recently made available on Cloudflare Radar. In addition to the drop in traffic, we observed a concurrent drop in announced IPv4 address space and a spike in BGP announcements (likely withdrawals), suggesting that the disruption may have been caused by a network-related issue. Separately, the Cloudflare outage discussed above was triggered by a bug in the generation logic for a Bot Management feature file, causing many Cloudflare services to be affected.

A reported power outage at one of Airtel Tanzania’s data centers on July 1 resulted in a multi-hour disruption in connectivity for its mobile customers. Modern security systems employ sophisticated algorithms to distinguish between legitimate users and potential threats; the purpose is to protect both the website and its users from potential security breaches. These attention notifications commonly appear through services like Cloudflare, which monitor traffic and block suspicious activities automatically. Similarly, rapidly clicking through multiple pages or submitting numerous forms in quick succession may appear as bot-like behavior to security systems.

  • The software running on these machines to route traffic across our network reads this feature file to keep our Bot Management system up to date with ever-changing threats.
  • Cloudflare bot solutions identify and mitigate automated traffic to protect your domain from bad bots.

This post is an in-depth recount of exactly what happened and what systems and processes failed. The file was being generated every five minutes by a query running on a ClickHouse database cluster, which was being gradually updated to improve permissions management. The change resulted in all users accessing accurate metadata about the tables they have access to; as a result, every five minutes there was a chance of either a good or a bad set of configuration files being generated and rapidly propagated across the network.
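Based on the mechanism described above, the sketch below loosely simulates how a metadata query that does not filter on the database name can double its output once a second database namespace becomes visible to the querying user (database and column names here are illustrative):

```rust
/// Simulated rows from a column-metadata query. If the same underlying
/// table is visible through two database namespaces, an unfiltered
/// query returns every column twice.
fn column_metadata(visible_databases: &[&str]) -> Vec<(String, String)> {
    let columns = ["feature_a", "feature_b"];
    let mut rows = Vec::new();
    for db in visible_databases {
        for col in &columns {
            rows.push((db.to_string(), col.to_string()));
        }
    }
    rows
}

fn main() {
    // Before the permissions change: one visible database.
    let before = column_metadata(&["default"]);
    // After: a second namespace becomes visible to the same user.
    let after = column_metadata(&["default", "shadow"]);

    // Forgetting a database filter doubles the feature rows, and with
    // them the size of the generated configuration file.
    assert_eq!(after.len(), 2 * before.len());
    println!("before: {} rows, after: {} rows", before.len(), after.len());
}
```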

Resolving access restrictions effectively

A published report noted: “IRNA, Iran’s official news agency, cited the state-run Telecommunications Infrastructure Company, reporting a national-level disruption in international connectivity that affected most internet service providers Saturday night.” Separately, global satellite Internet service provider Starlink (AS14593) acknowledged a July 24 network outage through a post on X. A power outage also impacted Internet connectivity within the affected country, with traffic dropping by as much as 32%.

Regional shutdowns by the Taliban to prevent “immoral activities”

“As part of its efforts to ensure the integrity of the examination process, and in coordination with relevant authorities, the Ministry of Education was able to uncover organized exam cheating networks in three examination centers in Lattakia Governorate.” A larger list of detected traffic anomalies is available in the Cloudflare Radar Outage Center. A myriad of technical issues, including issues with China’s Great Firewall, resulted in traffic losses across multiple countries.

Power outages cause Internet disruptions

A contractor cutting through three high voltage cables caused a nationwide power outage in Gibraltar on September 16, according to a Facebook post from the Gibraltar government. Wide-scale power outages occur all too frequently in Cuba, and when power is lost, Internet connectivity follows. In Curaçao, a series of Facebook posts from Aqualectra, the island’s water and power company, confirmed that there was a power outage, and provided updates on the progress towards restoration. On St. Vincent and the Grenadines, St Vincent Electricity Services Limited (VINLEC) stated in a Facebook post that a “system failure” caused a power outage that affected customers on mainland St. Vincent. Spectrum acknowledged a service interruption in a post on X, followed by another post four and a half hours later stating that the issue had been resolved; although neither post cited the reported bullet damage as the cause of the disruption, news reports attributed that claim to a Spectrum spokesperson.

The outage

(The Telmex Colombia and Comcel names shown in the graphs below are historical – Telmex and Comcel merged in 2012 and have operated under the Claro brand since then.) Claro did not acknowledge the disruption on social media, nor did it provide any explanation for it. The disruption affected multiple ASNs owned by Claro, including AS10620, AS14080, and AS26611; unfortunately, no definitive root cause could be found. A separate incident caused massive disruption of the Internet connections between China and the rest of the world. An attack also apparently impacted YemenNet’s routing, as announced IPv4 address space began to decline as the attack commenced. We have covered many such events in this series of blog posts over the last several years; the latest occurred on September 10.

We can now drill down at regional and network levels, as well as explore the impact across DNS traffic, connection bandwidth and latency, TCP connection tampering, and announced IP address space, helping us understand the impact of such events. Government officials have not publicly addressed the cause; however, posts from civil society groups that follow Internet connectivity in Iran (net4people, FilterWatch) suggested that the disruption was again due to an intentional shutdown. The impact of the Starlink outage was particularly noticeable in countries including Yemen and Sudan, where traffic dropped by approximately 50%, as well as in Zimbabwe, South Sudan, and Chad. The graphs below show that there was an immediate impact to Internet traffic across several networks in the region, including Rostelecom (AS12389) and InterkamService (AS42742), where traffic dropped by 75% or more. During the disruption, the country’s traffic dropped by over 80% as compared to the previous week, with Flow experiencing a near-complete outage. During the four-hour power outage, which also disrupted Internet connectivity, traffic dropped by as much as 80% below expected levels.
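As a rough illustration of how “announced IPv4 address space” can be quantified: a prefix of length L covers 2^(32-L) addresses, so summing prefix sizes before and after an event shows how much address space was withdrawn. A minimal Rust sketch with hypothetical prefix lists:

```rust
/// Number of IPv4 addresses covered by a prefix of the given length;
/// e.g. a /24 covers 2^(32-24) = 256 addresses.
fn prefix_size(prefix_len: u8) -> u64 {
    1u64 << (32 - prefix_len as u32)
}

fn main() {
    // Hypothetical prefix lengths announced by one network before and
    // after a disruption (withdrawn routes simply disappear).
    let before: Vec<u8> = vec![16, 20, 24, 24];
    let after: Vec<u8> = vec![24];

    let total = |lens: &[u8]| lens.iter().map(|&l| prefix_size(l)).sum::<u64>();
    println!(
        "announced IPv4 space: before={} addresses, after={} addresses",
        total(&before),
        total(&after)
    );
}
```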

In this specific instance, the Bot Management system has a limit on the number of machine learning features that can be used at runtime. When the bad file with more than 200 features was propagated to our servers, this limit was hit, resulting in the system panicking. Although the issue was also present in prior versions of our proxy, the impact was smaller, as described below. Work focused on rolling back the Bot Management configuration file to a last-known-good version. We have had other outages, including some that caused newer features to be unavailable for a period of time. Now that our systems are back online and functioning normally, work has already begun on how we will harden them against failures like this in the future.
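To make that failure mode concrete, here is a minimal Rust sketch of a runtime feature limit, assuming a cap of 200 as described above (function and type names are illustrative, not the actual proxy code). The key point is that exceeding the limit surfaces as a recoverable Err; only an unchecked unwrap-style path turns it into a panic that takes down the worker:

```rust
const MAX_FEATURES: usize = 200;

/// Load a feature list into a system with a fixed runtime capacity.
/// Exceeding the limit is reported as an error rather than accepted.
fn load_features(names: Vec<String>) -> Result<Vec<String>, String> {
    if names.len() > MAX_FEATURES {
        return Err(format!(
            "feature file has {} entries, limit is {}",
            names.len(),
            MAX_FEATURES
        ));
    }
    Ok(names)
}

fn main() {
    // A bad file with duplicated rows blows past the limit.
    let bad: Vec<String> = (0..400).map(|i| format!("feature_{i}")).collect();

    // Matching on the Result keeps the process alive; calling
    // .unwrap() here instead would convert the Err into a panic.
    match load_features(bad) {
        Ok(f) => println!("loaded {} features", f.len()),
        Err(e) => eprintln!("rejected feature file: {e}"),
    }
}
```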

Government-directed shutdowns

Robots.txt is a plain text file hosted on your domain that implements the Robots Exclusion Protocol; it allows you to indicate which crawlers and bots can access which parts of your site. To address the concerns our customers have today about how their content is being used by crawlers and data scrapers, we are launching the Content Signals Policy. After several days of peak traffic levels double those seen in previous weeks, traffic in Takhar fell on September 16, remaining near zero until September 21, when a small amount of connectivity was apparently restored. These regional shutdowns blocked Afghan students from attending online classes, impacted commerce and banking, and limited access to government agencies and institutions such as passport and registration offices and customs offices. Network providers announce IP address space that they are responsible for to other networks, enabling the routing of traffic to and from those IP addresses.

A request could come from a browser loading a webpage, a mobile app calling an API, or automated traffic from another service. As a request transits the core proxy, we run the various security and performance products available in our network, and our customers use bot scores to control which bots are allowed to access their sites. The duplicated rows changed the size of the previously fixed-size feature configuration file, causing the bots module to trigger an error. As well as returning HTTP 5xx errors, we observed significant increases in latency of responses from our CDN during the impact period. The screenshot at the top of this post shows a typical error page delivered to end users.
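A score-threshold rule of the kind customers write can be sketched in a few lines; the scale and threshold below are illustrative (lower scores indicating likely automation), not a specific customer configuration:

```rust
/// Hypothetical firewall rule: block requests whose bot score falls
/// below a customer-chosen threshold.
fn action_for(bot_score: u8, block_below: u8) -> &'static str {
    if bot_score < block_below { "block" } else { "allow" }
}

fn main() {
    for (client, score) in [("browser", 90u8), ("scraper", 2u8)] {
        println!("{client}: score={score} -> {}", action_for(score, 30));
    }
}
```

Under this kind of rule, a failure that silently assigns every request a score of zero causes all traffic to be blocked, which is why the zero-score behavior described below mattered even where no 5xx errors were returned.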

Recognizing these potential triggers helps you modify your online behavior to reduce the likelihood of encountering security blocks during important tasks. Website owners implement these protective measures to safeguard their digital assets and user data. These security mechanisms work by establishing specific rules and parameters for normal website usage. When you encounter an “Attention Required” notification, it typically indicates that the security system has identified something potentially problematic in your interaction with the site. This guide explores the common causes of security blocks and provides practical solutions to resolve them quickly. Embracing these measures contributes to a safer online environment, so while it may be an inconvenience, it’s an essential part of maintaining web security.

When you see an “Attention Required” notification, several troubleshooting steps can help restore your access. These security notifications typically appear when a website’s protection system detects potentially suspicious activity from your connection and has flagged your actions as potentially problematic. The Ray IDs shown on Cloudflare block pages are crucial for troubleshooting blocked access issues. For professional contexts where website access is critical, establish direct communication channels with technical support teams; having pre-existing relationships with site administrators can expedite resolution when security blocks occur despite preventive measures.

That world has changed: scraped content is now sometimes used to economically compete against the original creator. As many have realized, there needs to be a standard, machine-readable way to signal the rules of your road for how your data can be used even after it has been accessed. Robots.txt lets you declare which bots may access which content; it does not, however, let them know what they are able to do with your content after accessing it. A user-agent is how your browser, or a bot, identifies itself to the resource it is accessing, and an asterisk indicates that any user agent, on any device or browser, may access the content. The robots.txt file can also include commentary by adding characters after a # symbol.
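Putting those pieces together, a minimal robots.txt might look like the following. The Content-Signal line sketches how a machine-readable usage preference could be expressed under the Content Signals Policy; the exact directive name and values shown here are illustrative rather than normative:

```
# Comments follow a "#" symbol.
# Any user-agent may crawl everything except /private/.
User-agent: *
Disallow: /private/
Allow: /

# Illustrative content signal: allow search indexing, disallow AI training.
Content-Signal: search=yes, ai-train=no
```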

Unrelated to this incident, we were and are currently migrating our customer traffic to a new version of our proxy service, internally known as FL2. Both versions were affected by the issue, although the impact observed was different. On FL2, HTTP 5xx error codes were returned by the core proxy system that handles traffic processing for our customers, for any traffic that depended on the bots module. Customers on our old proxy engine, known as FL, did not see errors, but bot scores were not generated correctly, resulting in all traffic receiving a bot score of zero. Customers who were not using our bot score in their rules did not see any impact. While it turned out to be a coincidence, a simultaneous problem with our status page led some of the team diagnosing the issue to believe that an attacker may have been targeting both our systems and our status page.
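The contrast between the two proxy generations can be summarized in a small behavioral sketch (illustrative, not the actual proxy code): the older path degrades to a default score of zero so traffic keeps flowing but score-based rules misfire, while the newer path surfaces the failure as an error that becomes an HTTP 5xx:

```rust
#[derive(Debug)]
enum ProxyGeneration {
    Fl,  // older engine: swallows the failure, emits score 0
    Fl2, // newer engine: propagates the failure as an error
}

fn handle_bots_failure(gen: &ProxyGeneration) -> Result<u8, &'static str> {
    match gen {
        ProxyGeneration::Fl => Ok(0),
        ProxyGeneration::Fl2 => Err("bots module error -> HTTP 500"),
    }
}

fn main() {
    for gen in [ProxyGeneration::Fl, ProxyGeneration::Fl2] {
        match handle_bots_failure(&gen) {
            Ok(score) => println!("{gen:?}: request served with bot score {score}"),
            Err(e) => println!("{gen:?}: {e}"),
        }
    }
}
```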
