Global Witness’ tests identified the most extreme bias on TikTok, where 78% of the political content algorithmically recommended to its test accounts from accounts the test users did not follow was supportive of the AfD party. (The NGO notes this figure far exceeds the party’s level of support in current polling, where it attracts backing from around 20% of German voters.)

On X, Global Witness found that 64% of such recommended political content was supportive of the AfD.

Meta’s Instagram was also tested and found to lean right across the series of three tests the NGO ran, though the level of political bias it displayed was lower, with 59% of recommended political content being right-wing.

“One of our main concerns is that we don’t really know why we were suggested the particular content that we were,” Ellen Judson, a senior campaigner looking at digital threats for Global Witness, told TechCrunch in an interview. “We found this evidence that suggests bias, but there’s still a lack of transparency from platforms about how their recommender systems work.”

The findings chime with other social media research Global Witness has undertaken around recent elections in the U.S., Ireland, and Romania. And, indeed, various other studies over recent years have also found evidence that social media algorithms lean right, such as a research project last year that looked into YouTube.

“We’re hoping that the Commission will take [our results] as evidence to investigate whether anything has occurred or why there might be this bias going on,” she added, confirming that Global Witness has shared its findings with the EU officials responsible for enforcing the bloc’s algorithmic accountability rules on large platforms.