Abstract
The digitalization of everyday life is advancing rapidly, in parallel with the growing penetration of the internet and social media worldwide. In this context, the legal responsibilities and obligations of digital platforms for content shared by their users have become a matter of increasing public relevance, closely linked to the democratic quality of political regimes and of society as a whole. This article critically reconstructs the main regulatory problems posed by countering hate speech on social networks. The aim of this reconstruction is to advance a new regulatory model that is, in many respects, an alternative to the current one, which has become a source of criticism and legal disputes. In particular, the article offers: a new definition of offline and online hate speech, using categories inspired by philosophical anthropology and the pragmatics of communication; a justification of the right to be protected from online hate speech, and of the related obligations, grounded in the principle of equal social dignity within the frame of digital citizenship; criteria for a fair balance between protection from online hate speech and competing rights, such as freedom of information, expression, and association; and an overall regulatory model based on the co-responsibility of the various actors engaged in protecting users from online hate speech, each according to its different capacities. This model should mitigate the current risk of a “privatization of censorship” caused by delegating to digital platforms the power to detect and remove prohibited content.