About the project
The project was developed during the 2018/2019 edition of the DensityDesign Final Synthesis Studio at Politecnico di Milano, with the aim of learning to observe and represent controversial phenomena from different perspectives, and to communicate them to different publics through innovative and engaging visual artifacts. In this context, Communication Design is used to access and understand the increasingly complex mass of data and information shaping our world. We were thus invited to use visualization techniques across multiple phases of the research process: to describe and understand the observed phenomena, to share methods and results, and as a strategy for public engagement.
My Role
My role covered the whole design process: research, scraping, analysis and organization of (some of) the data, and design of (some of) the data visualizations. I also took care of the copywriting for the three output websites and of the presentation speech during the course.
Overview
Hate Speech is a controversial term, as it sits in a delicate balance between freedom of expression and respect for the equality, liberty and dignity of every human being. Several treaties, such as the International Covenant on Civil and Political Rights (ICCPR), have attempted to define its contours. Legally and politically speaking, there is a division between the American and the European approaches to regulating hate speech: the United States' will to protect freedom of expression, as promoted by the First Amendment, has much broader boundaries than what is actually tolerated in Europe. Moreover, the speed of the internet spreads the phenomenon widely, making it difficult for IT companies, i.e. private social platforms, to regulate online hate speech. What brings even more issues to the matter are the dynamics through which certain types of hateful content lead, or do not lead, to actual acts of discrimination or violence.
404 Hate not found
In the first part of the course we found, collected and visualized structured data from official sources relevant to the online hate speech phenomenon, focusing on how IT companies in Europe have dealt with the process of filtering hateful content on their platforms. Indeed, one of the hardest battlegrounds is online, because of its intrinsic characteristics of transnationality, anonymity, unpredictability and permanence. Social networks are among the main actors in this environment that are required to manage the problem.
IT companies have their own sets of rules, called policies, which are of course influenced by European laws. These policies regulate the filtering or removal of improper content from each platform. The most relevant instrument in the EU is the Code of Conduct on countering illegal hate speech online, which since its establishment has carried out and published three monitoring surveys to understand how IT companies react to notifications of illegal hate speech.
The result of this first phase is the website 404 Hate not found, which contains a series of visualizations that provide a first overview of the depth and complexity of hate speech, as described by official sources. In addition to the website, we also designed a static version of the main visualization.

Hate Shades
In the second part of the course, we faced the complexity of hate speech by mapping, collecting and analyzing online data from unstructured digital sources. The challenge in this case was not only to find a way to properly visualize the data, but also to strictly define the protocols we used to collect all the raw material.
We tried to follow a possible path that a user could take to get information and form an opinion about hate speech: starting with a theoretical approach to the issue, moving to a more practical observation of the hatred phenomenon, and finally analyzing users' opinions by observing real discussions on social platforms. In this way we tried to map the controversial debate that the term Hate Speech generates online: should this regulation of content be considered filtering or censorship?
What we could deduce from such a vast phenomenon was that the discussion, rather than splitting into two polarized camps, revealed the intrinsic nuances of the topic, the "Hate Shades", making the controversy alive, understandable and debatable.


RegulHate
In the third and last phase we were called to take a stand in the dispute and design a web experience that drives a public narrative. Considering the conclusions and opinions that emerged during the second phase, we could see that the discussion was far more complex and had much broader boundaries than we expected, underlining the fundamental question that the Hate Speech controversy generates: who should have the responsibility and the power to regulate what may or may not be said online?
For our concept we chose the metaphor of sound. We wanted not only to represent speech, but also to give users the possibility to listen to it and to regulate its volume, raising or muting it, which means giving it voice or silencing it. In this way, the user can try out the self-regulation method by interacting with an interface inspired by music software, using sliders to manipulate the volume of a piece of content and choosing how much visibility to give it.
On the website we presented users with controversial hateful content, divided into four categories drawn from the types of speech that the major IT companies consider Hate Speech, as we had seen in Phase 01: Disability, Gender, Race, Religion. The content was found online, on social networks like Reddit, Twitter and Facebook, and also on some of the RationalWiki Hate Sites that we discovered and analyzed during Phase 02. We decided to work with comments, posts, memes, videos, articles and papers: each was then reprocessed into an audio file. For each category we collected six items to limit the duration of the experience, organizing them to recreate a stream of opinions.
Our intention was to show that there can be valid solutions other than censorship, by supporting a non-binary vision, emphasizing common practice, and giving voice to all the nuances of speech. Through participation, users help to define the community standards and to create a civic moderation that grows with the empathy and judgment of all the different points of view and perspectives.