GLOBAL – The Global Alliance for Responsible Media (Garm) has launched a report and framework to strengthen brand safety across seven social media platforms, including Facebook, Instagram, Twitter and YouTube.
The Garm Aggregated Measurement Report shows that eight in ten of the more than 3.3 billion pieces of content removed from the platforms were either spam, adult or explicit content, or hate speech and acts of aggression.
The data in the report is self-reported and covers Facebook, Instagram, Twitter, TikTok, Pinterest, Snap and YouTube.
Garm is a cross-industry initiative founded and led by the World Federation of Advertisers (WFA), with support from other trade bodies including the Association of National Advertisers, the Incorporated Society of British Advertisers and the American Association of Advertising Agencies.
The report highlights progress from YouTube in removing accounts associated with hate speech and acts of aggression, from Facebook in reducing the prevalence of such content on its site, and from Twitter in content removal.
According to the report, these improvements came amid an increased reliance on automated content moderation to manage blocking and reinstatements, as Covid-19 disruptions affected human moderation teams.
The report also includes a framework for advertisers to understand how well platforms are enforcing policies.
The framework includes four questions: How safe is the platform for consumers? How safe is it for advertisers? How effective is the platform in enforcing its safety policies? And how responsible is the platform in correcting mistakes?
A further report will be released later this year and will add the game-streaming platform Twitch.
Stephan Loerke, chief executive of the WFA, said: “The collaboration between advertisers, agencies and platforms has been very constructive and we now have common ground to drive even greater progress for the benefit of society, marketers and the long-term health of the digital ecosystem.”