Facebook said it removed more than 3 billion fake accounts between October and March, a record figure that underscores the social network's ongoing battle to clean up its platform.
Facebook released the new figures on Thursday as part of an updated transparency report that also detailed the proliferation of hate speech, graphic violence, and other abusive content on its platform.
Facebook says the billions of accounts deleted over those six months, which it attributes to unsophisticated bad actors churning out profiles for spam, were never considered active and therefore do not count toward its total monthly users. Facebook reported 2.3 billion monthly active users in the first quarter of this year, a figure investors watch closely as a gauge of the company's popularity and growth. Facebook estimates that around 5% of those active accounts are fake.
Facebook CEO Mark Zuckerberg on Thursday also mounted a fresh defense of his company as critics, including co-founder Chris Hughes, call for its breakup. In a New York Times op-ed published earlier this month, Hughes argued that Facebook's failure to contain misinformation and other abuses should prompt federal antitrust authorities to investigate and sanction the social networking giant.
Zuckerberg pointed to Facebook's growing investment in safety and security. "We can do things that other companies cannot do," he said during a call with reporters to discuss the transparency report, adding that the company must decide which problems it considers most important to solve, and that the resources required are tied to its scale.
Between October and March, Facebook said it removed or labeled 11.1 million pieces of terrorist content and 52.3 million instances of violent or graphic content, and took down 7.3 million posts, images, and other uploads containing hate speech. Removals rose compared with previous months as Facebook stepped up its efforts, deploying AI tools and adding more human reviewers to identify and take down posts, photos, and videos that violate its rules.
For the first time, Facebook also detailed its efforts to combat illegal sales of guns and drugs, removing around 1.4 million pieces of content that violated its rules against the sale of firearms or ammunition and 1.5 million items related to drugs, including marijuana.
Facebook's latest transparency report reflects its increased efforts to open up about how it detects and removes the most dangerous content online. Its well-documented missteps have galvanized regulators around the world, who came together last week to call on Facebook and other social media companies to police their platforms more aggressively against the rise of online extremism. The "Christchurch Call" followed the March attack on two mosques in New Zealand, which was broadcast live on Facebook.
Facebook has also been criticized for its treatment of the workers who moderate content on its site. Most are contract workers, and many have complained of low wages and inadequate benefits, even though they serve as the first line of defense against graphic and disturbing content reaching the public.
Zuckerberg acknowledged on Thursday that there is "much to be done," not only on this specific challenge but across a broader range of content issues. The Facebook CEO added that the answer will likely involve government regulation. "I don't think companies should make all of these decisions on their own," Zuckerberg said, adding that he fully supports such rules.
Facebook has committed to publishing its transparency report quarterly and to including data for its photo-sharing service Instagram. The tech giant also said it has improved the artificial intelligence tools that support its efforts to remove certain types of content, such as fake accounts, but that the technology still struggles to catch hate speech.
Zuckerberg acknowledged that policing content could become more challenging for Facebook as it pivots from public sharing toward private exchanges and ephemeral messages that are encrypted so that neither the company nor governments can see them.