The state of freedom of expression online is substantially shaped by content moderation on dominant social media platforms. Their private community guidelines function as the de facto legal framework for discussion in the digital realm. Effectively, they constitute global law applying to billions of individuals, set by a handful of companies based in the Global North. This raises fundamental questions about how to deal with private power. The answers vary substantially with legal culture: some jurisdictions, most notably the United States, emphasize the principle of private autonomy, while others treat concentrated platform power as an antitrust problem. This paper examines the accumulation of private corporate power over freedom of speech as a human rights problem. Claimants around the world have initiated legal actions on the claim that a particular act of content moderation, such as the removal (or non-removal) of content or the suspension or deletion of an account, violated their human rights. The sample, carefully selected from over 100 surveyed cases, aims to cover and contextualize the most important legal debates courts are currently facing on this matter. The jurisprudence presented in this paper is divided into three main sections: cases against intermediaries, cases against public officials, and cases concerning actions taken by states to enforce a particular kind of content moderation on private social media platforms. The paper also showcases the different pathways judicial bodies have taken in resolving these issues.