Last Thursday, ProPublica published an article critiquing our handling of some abuse reports that we receive. Feedback from the article caused us to reevaluate how we handle abuse reports. As a result, we've decided to update our abuse reporting system to allow individuals reporting threats and child sexual abuse material to do so anonymously. We are rolling this change out and expect it to be available by the end of the week.
I appreciate the feedback we received. How we handle abuse reports has evolved over the last six and a half years of Cloudflare's history. I wanted to take this opportunity to walk through some of the rationale that got us to this point and caused us to have a blind spot for the case that was highlighted in the article.
What Is Cloudflare?
Cloudflare is not a hosting provider. We do not store the definitive copy of any content that someone might file an abuse claim about, so terminating a customer does not make the content go away. Instead, we are more akin to a specialized network. One of the functions of our network is to add security for the content providers that use us. Part of doing that inherently involves hiding the location of the actual hosting provider. If we didn't, a malicious attacker could simply bypass Cloudflare by attacking the host directly.
That raised an early question: what should we do when someone reported abusive content passing through our network? Our first principle was that we should not stand in the way of valid abuse reports being submitted. The litmus test we came up with was that the existence of Cloudflare ideally shouldn't make it any harder, or any easier, to report and address abuse.
Mistakes of Early Abuse Reporting
The majority (83% over the last week) of the abuse reports that we get involve allegedly copyrighted material transiting our network. Our early abuse policy specified that if we received an abuse report alleging copyrighted material we'd turn over the IP address of the hosting provider so the person filing the abuse report could report the abuse directly.
It didn't take long for malicious attackers to realize this provided an effective way to bypass our protections. They would submit a fake report alleging some image on a legitimate site had been illegally copied, we'd turn over the IP address of our customer, and they'd attack it directly. Clearly that wasn't a workable model.
As a result, we revised our policy to instead act as a proxy for abuse reports that were submitted to us. If a report was submitted then we'd proxy the report through and forward it to the site owner as well as the site's host. We provided the contact information so the parties could address the issue between themselves.
While we have a Trust & Safety team that is staffed around the clock, for the most part abuse handling is automated. Various firms that specialize in finding and taking down copyrighted material generate such a flood of reports, often submitting hundreds for the same allegedly copyrighted item, that manual review of every one would be infeasible.
Violent Threats and Child Sexual Abuse
We've always treated reports of violent threats and child sexual abuse material with additional care. Understandably, from the perspective of the individuals in the ProPublica article, it seems callous and absurd that we would ever forward these reports to the site owner. However, we had a different perspective.
In the vast majority of cases, the violent threats or child sexual abuse material reported to us appeared on sites that were not dedicated to those topics. Imagine a social network like Facebook were a Cloudflare customer. Somewhere on the site something was posted that included a violent threat. That post was then reported to Cloudflare as the network that sits in front of the Facebook-like site.
In our early days, it seemed reasonable and responsible to pass the complaint on to the Facebook-like customer who could then follow up directly. That also met the litmus test of being what would happen if Cloudflare didn't exist. What the policy didn't account for was site owners who could not be trusted to act responsibly with abuse reports including contact information.
Beginning in 2014, we saw limited, but very concerning, reports of retaliation based on submitted abuse reports. As a result, we adjusted our process to make it so complaints about violent threats and child sexual abuse material would be sent only to the host, not to the site owner.
We’ve confirmed that in the cases involving the site mentioned in the ProPublica article we followed this procedure. That change largely addressed the problem of people reporting abuse getting harassed. What we didn’t anticipate is that some hosts would themselves pass the full complaint, including the reporter’s contact information, on to the site owner. We assume this is what happened in the ProPublica cases.
Another change we made in 2015 was to clarify exactly what would happen when someone submitted a report by adding disclaimers to our abuse form. These disclaimers appear in multiple places throughout the abuse submission flow.
In a world without Cloudflare, if you wanted to anonymously report something, you would use a disposable email address and a fake name and submit a report to the site's hosting provider or the site itself. We didn't do anything to check that the contact information used in reports was valid, so we assumed that, with the disclaimer in place, people who wanted to submit reports anonymously would do the same thing they would have done if Cloudflare didn't exist.
That was a bad assumption. As the ProPublica article made clear, many people did not read or understand the disclaimer and were surprised that we forwarded their full abuse report to the host who then, in some cases, could forward it to the site owner.
Determining Bad Actors
In reevaluating our policy, a key question was when it is appropriate to pass along the full report and when it is not. Again, from the perspective of the author of the ProPublica article, that may seem like an easy distinction. The reality is that requiring an individual on our Trust & Safety team to understand the nature of every site on Cloudflare is untenable. Moreover, adding more human intervention that slows down the process of reporting abuse, especially in cases of violent threats and child sexual abuse material, where time may be of the essence, strikes us as a step backward.
Instead, we took the suggestions of many of the comments we received and are implementing a policy where reporters of these types of abuse can choose to submit them without having their contact information included in what we forward. The person making the abuse report seems in the best position to judge whether or not they want their information to be relayed. Making this change requires some engineering work on our part, but we have prioritized it. By the end of this week, someone submitting an abuse report in one of these categories will have the choice of whether to do so anonymously.
We are under no illusion that this latest iteration of our abuse process is perfect. In fact, we already have concerns about challenges the new system will create. Anonymous reporting opens a new vector for malicious actors to submit false reports and harass Cloudflare customers. In addition, for responsible Cloudflare customers who want to act on reports, anonymous reports may make it harder to gather additional information from the reporter, which in turn makes well-informed action to address the issue more difficult.
We appreciate the feedback on where our previous process broke down. As new problems arise, we anticipate that we'll continue to need to make changes to how we handle abuse reports.
Final Thoughts on Censoring the Internet
While we clearly had a significant blind spot in how we handled one type of abuse report, we remain committed to our belief that it is not Cloudflare's role to make determinations on what content should and should not be online. That belief comes from a number of principles.
Cloudflare is more akin to a network than a hosting provider. I'd be deeply troubled if my ISP started restricting what types of content I can access. As a network, we don't think it's appropriate for Cloudflare to be making those restrictions either.
That is not to say we support all the content that passes through Cloudflare's network. We, both as an organization and as individuals, have political beliefs and views of what is right and wrong. There are institutions — law enforcement, legislatures, and courts — that have the social and political legitimacy to determine what content is legal and illegal. We follow the lead of those institutions in all the jurisdictions in which we operate. But, as more and more of the Internet sits behind fewer and fewer private companies, we're concerned that the political beliefs and biases of those companies will determine what can and cannot be online.
If you're interested in how we think about our role in policing online content, I gave a talk on the topic a few years ago. It's about an hour long, but I encourage you to watch it to better understand our perspective.
From time to time an organization will sign up for Cloudflare that we find revolting because they stand for something that is the opposite of what we think is right. Usually, those organizations don't pay us. Every once in a while one of them does. When that happens, it's one of the greatest pleasures of my job to quietly write a check for 100% of what they pay us to an organization that opposes them. The best way to fight hateful speech is with more speech.
I appreciate the feedback on how we can improve our abuse process. We are implementing the changes that were recommended. They take engineering, so they aren't available immediately, but will be live by the end of this week. We continue to iterate and improve on our mission of helping build a better Internet.