Abstract
This paper provides a comprehensive analysis of the legal and technical challenges that arise when shaping a new strategy for content moderation. Its focus is twofold: strengthening platform users’ fundamental rights and reducing the involvement of human content moderators, who currently suffer from health-damaging working conditions. Using a model case, the technical analysis investigates to what extent platform providers should raise or lower their algorithmic decision boundaries in order to strengthen users’ right to non-discrimination and freedom of expression while also reducing human moderators’ involvement. The legal analysis focuses on the E-Commerce Directive, the upcoming Digital Services Act, and selected Member States’ national legislation, examining the requirements they impose on platform providers offering hosting services within the EU and how these requirements should be incorporated into a new strategy. The final part provides concrete recommendations for all stakeholders involved in the content moderation process.