The content transparency reports contain no data about the languages or geography of the posts Facebook is enforcing its rules against. They also say nothing about misinformation – another key area of concern for US lawmakers. — Reuters
Facebook Inc chief executive officer Mark Zuckerberg pushed his idea this week that Big Tech can self-police content by publishing reports and data on how well the industry removes objectionable posts. The problem is Facebook has a system in place already that’s done little to improve accountability, according to outside experts.
“Transparency can help hold the companies accountable as to what accuracy and effectiveness they’re achieving,” Zuckerberg told Congress on Thursday. Facebook wouldn’t have to change much if such a system were the industry norm, he added. “As a model, Facebook has been doing something to this effect for every quarter.”
Zuckerberg has pushed his proposal many times amid widening calls to make social media companies more responsible for the content users post. As tech platforms come under fire for an increase in harmful posts, from hate speech to threats of violence, US lawmakers are debating how to reform Section 230 of the Communications Decency Act, which shields companies from liability for user-generated content.
While a crackdown on Big Tech has been deliberated for years, the call for renewed action comes after social media companies were criticised for playing a part in spreading misinformation that fuelled the US Capitol riots in January and false claims about Covid-19. Thursday’s hearing brought Congress no closer to a legislative solution, giving Facebook an opportunity to influence the outcome.
“If one company does something, it at least allows the discussion to move forward,” said Jenny Lee, a partner at Arent Fox LLP who has represented technology clients on Section 230.
However, the self-reported numbers aren’t as transparent as they sound. Facebook, for instance, reported in February that more than 97% of content categorised as hate speech was detected by its software before being reported by a user, and that it acted on 49% of bullying and harassing content on its main social network in the fourth quarter before it was flagged by users, up from 26% in the third quarter. But the denominator of the equation is what Facebook’s AI took down – not the total amount of harmful content. And Facebook doesn’t share how many people viewed the postings before they were removed, or how long they stayed up.
“It was a bit shocking and frustrating that Zuckerberg was mentioning that report as something that the industry should aspire to,” said Fadi Quran, campaigns director at Avaaz, which tracks misinformation and other harmful content on Facebook. When the social media company disclosed how much violent content it removes, “did they take it down within minutes or within days?” he added.