In July, Nick Clegg posted on Facebook’s corporate website that “Facebook does not benefit from hate”. The article opens as follows:
When society is divided and tensions run high, those divisions play out on social media. Platforms like Facebook hold up a mirror to society — with more than 3 billion people using Facebook’s apps every month, everything that is good, bad and ugly in our societies will find expression on our platform. That puts a big responsibility on Facebook and other social media companies to decide where to draw the line over what content is acceptable.

Nick Clegg, Vice President, Facebook
The opener acknowledges societal divisions and notes that more than 3 billion people use Facebook’s apps every month. This matters because Facebook clearly has a scale problem: with so many people using the platform, it cannot see and act on everything that gets posted, hateful or not.
In fact, Andrew Marr highlighted a case where user reports led to the takedown of a page soliciting physical harm to others. Nick conceded that the page and post had not been taken down quickly enough, which confirms that Facebook carries out manual interventions and maintains a queue of reported content that its internal systems were unable to flag automatically.
Nick Clegg highlighted this in his interview today with Andrew Marr and in the Facebook post:
Unfortunately, zero tolerance doesn’t mean zero incidences. With so much content posted every day, rooting out the hate is like looking for a needle in a haystack. We invest billions of dollars each year in people and technology to keep our platform safe. We have tripled — to more than 35,000 — the people working on safety and security. We’re a pioneer in artificial intelligence technology to remove hateful content at scale.

Nick Clegg, Vice President, Facebook
So, does Facebook benefit from hate?
Facebook made an eye-watering $69bn in ad revenue in 2019, and a profit of about $18.4bn.
Nick mentions that Facebook ‘invests billions of dollars each year in people and technology to keep our platform safe’; however, no data is available to compare the actual cost of policing the platform against what Facebook earns from ads that would fall under the “hate” category.
It’s likely that Facebook does initially benefit, at least monetarily, from “hate ads”. Whether that benefit survives once Facebook pays for everything it does to combat them — employing tens of thousands of people and spending significant time building machine learning systems — is something only Facebook could calculate and confirm.