The heads of many of the world’s biggest social media platforms were urged to change their business models and become more accountable in the battle against rising hate speech online.
In a detailed statement, more than two dozen UN-appointed independent human rights experts – including representatives from three different working groups and multiple Special Rapporteurs – called out chief executives by name, saying that the companies they lead “must urgently address posts and activities that advocate hatred, and constitute incitement to discrimination, in line with international standards for freedom of expression.”
They said the new tech billionaire owner of Twitter, Elon Musk, Meta’s Mark Zuckerberg, Sundar Pichai, who heads Google’s parent company Alphabet, Apple’s Tim Cook, “and CEOs of other social media platforms”, should “centre human rights, racial justice, accountability, transparency, corporate social responsibility and ethics, in their business model.”
They reminded the executives that being accountable as businesses for racial justice and human rights "is a core social responsibility", advising that "respecting human rights is in the long-term interest of these companies, and their shareholders."
They underlined that the International Convention on the Elimination of Racial Discrimination, the International Covenant on Civil and Political Rights, and the UN’s Guiding Principles on Business and Human Rights provide a clear path forward on how this can be done.
“We urge all CEOs and leaders of social media to fully assume their responsibility to respect human rights and address racial hatred.”
As evidence of the corporate failure to get a grip on hate speech, the Human Rights Council-appointed independent experts pointed to a "sharp increase in the use of the racist 'N' word on Twitter", following its recent acquisition by Tesla boss Elon Musk.
This showed the urgent need for social media companies to be more accountable "over the expression of hatred towards people of African descent", they argued.
Soon after Mr. Musk took over, the Network Contagion Research Institute of Rutgers University in the US highlighted that the use of the N-word on the platform increased by almost 500 per cent within a 12-hour period, compared to the previous average, the experts said.
“Although Twitter advised this was based on a trolling campaign and that there is no place for hatred, the expression of hatred against people of African descent is deeply concerning and merits an urgent response centred on human rights.”
They added that hate speech, “advocacy of national, racial and religious hatred that constitutes incitement to discrimination and violence, as well as racism on social media, are not just a concern for Twitter but also for other social media giants such as Meta”, the company formerly known as Facebook.
The experts said that although some companies claimed not to allow hate speech, there was a clear gap between stated policies and enforcement.
"This is particularly salient in the approval of inflammatory ads, electoral disinformation on Facebook, and content that talks of conspiracy theories. Research from Global Witness and SumOfUs recently revealed how Meta is unable to block certain advertisements", the experts said.
Meta “took a significant step with the establishment of an oversight board in 2020”, in response to complaints, they said, noting that the “group of experts from diverse areas of expertise is in place to ‘promote free expression by making principled, independent decisions regarding content on Facebook and Instagram and by issuing recommendations on the relevant Facebook Company Content policy’”.
The experts acknowledged that the board had been well funded, received around two million appeals regarding content, and made a number of recommendations and decisions.
“However, the effectiveness of the Oversight Board can only be seen over a long-time horizon and will require continued commitment at the highest levels” to reviewing and modifying tools to combat racial hatred online, the experts said.
“There is a risk of arbitrariness and profit interests getting in the way of how social media platforms monitor and regulate themselves”, they added.