What are the provisions of the new policing draft laws?

An analysis of the new Draft Law on Internal Affairs, the Draft Law on Data Processing and Records in Internal Affairs, and the new working draft of the Data Protection Impact Assessment shows that the fundamental issues surrounding the use of intrusive technology in Serbia have not been addressed.

Specifically, the Draft Law on Internal Affairs enables mass, indiscriminate processing of biometric data, which is special category data, by means of recording in public spaces and storing the recorded data. As described in the Impact Assessment working draft, biometric data is collected by detecting faces in the course of recording and by extracting biometric data from a captured photo. The Draft Law on Data Processing and Records provides that photos with biometric facial features are kept for 72 hours from the moment of creation. These processes take place within police powers, meaning that biometric data is indiscriminately collected and stored outside the prescribed procedures for special evidentiary actions, that is, without a court order.

The Draft Law on Internal Affairs defines the cases in which an authorised officer may use biometric data processing software for identification purposes: finding the perpetrator of a criminal or preparatory offence prosecuted ex officio, or finding a missing victim of a criminal offence prosecuted ex officio. However, establishing identity is subject to the conditions of the Criminal Procedure Code of Serbia, and it is not entirely clear which of the procedures prescribed for special evidentiary actions would apply, given that each of those actions is subject to different conditions.

The draft laws describe two types of access, automated and semi-automated biometric data searches, but the distinction between them is defined only in the Impact Assessment working draft. An automated search of biometric data is limited either to locations or to persons determined by security profiles and implies a real-time search. Semi-automated processing is a retroactive search of biometric data, either for identification purposes, where biometric data from the system is used as an input to other databases held by the Ministry of Interior of Serbia, or for establishing the movements and whereabouts of an already identified person.

The SHARE Foundation has consistently advocated against the legalisation of mass, indiscriminate biometric surveillance for the past four years, particularly during the consultation process launched upon the withdrawal of the first Draft Law on Internal Affairs. A new draft with old fundamental issues is now before us. The public hearing is open until the end of December.

Please find the SHARE Foundation’s complete Position Paper at this link.



Read more:

Biometrics again in the Draft Law on Internal Affairs

The Ministry of Interior has published a new version of the Draft Law on Internal Affairs, the Draft Law on Data Processing and Records in Internal Affairs, as well as a series of other draft laws within its jurisdiction. The proposed legislation contains provisions regulating biometric surveillance in public spaces. The public discussion on the drafts is open until 31 December.

SHARE Foundation remains absolutely opposed to any use of biometric surveillance in public spaces, regardless of whether it is in a domestic or international context.

The previous Draft was withdrawn from the procedure last September, after warnings from the domestic community and international actors that the adoption of such a law would legalise mass biometric surveillance of public spaces and reduce the rights and freedoms of citizens.

In the meantime, the Ministry organised a series of consultative meetings attended by representatives of SHARE Foundation and the #hiljadekamera initiative, as well as other civil society organisations, experts and representatives of the academic community.

  • 30 September 2021 – The initial consultative meeting at which the Ministry indicated the need for consultations due to civil society criticism of the Draft Law on Internal Affairs, especially biometric surveillance in public areas.
  • 29 October 2021 – Meeting regarding the functioning of the video surveillance system, with special reference to the processing of biometric data, in the context of the Draft Law on Internal Affairs.
  • 21 December 2021 – Meeting on the protection of personal data when using biometric surveillance, in the context of the Draft Law on Internal Affairs.
  • 22 February 2022 – Meeting on the Impact Assessment of the processing of biometric data by the video surveillance system on the rights and freedoms of citizens, in the context of the Draft Law on Internal Affairs.
  • 13 May 2022 – Presentation and discussion on the first draft of the Impact Assessment prepared by the Ministry’s working group.
  • 18 July 2022 – Meeting on the topic of other disputed provisions of the Draft Law on Internal Affairs.
  • 30 November 2022 – Presentation and discussion on the second draft of the Impact Assessment prepared by the Ministry’s working group.

SHARE Foundation used these meetings to consistently point out to representatives of the Ministry of Interior the domestic and international legal framework regulating this area, as well as the risks that a biometric surveillance system carries with it. We particularly emphasised the conditions of necessity and proportionality for the use of such a system, which were not met.

It is particularly important that in May 2022 the Ministry of Interior again prepared a draft of the Impact Assessment of the processing of biometric data by the video surveillance system, on which we submitted comments. The Ministry submitted the latest draft of the Impact Assessment to us in November 2022. Although it accepted some of SHARE Foundation’s comments on the May draft, certain essential issues were not addressed, which is why we sent further comments to the Ministry in December 2022. An Impact Assessment on the protection of personal data, approved by the Commissioner for Information of Public Importance and Personal Data Protection, is a prerequisite for any further consideration of this issue.

During the public debate on the Draft Law on Internal Affairs and the other draft laws, SHARE Foundation will analyse the text and submit comments, aiming to ensure as far as possible that the text submitted to parliamentary procedure protects the rights of citizens in accordance with the Constitution of the Republic of Serbia, international human rights standards and the European Convention on Human Rights, and that it prevents mass biometric processing of personal data from public spaces. We invite representatives of the civil sector, the academic community, the expert public, the media and journalists’ associations, as well as all citizens, to join the public discussion so that the front against mass biometric surveillance of our streets is as broad and inclusive as possible.



Read more:

All you need to know about harassment on the internet


Social media have long been about more than scrolling through photo galleries from vacations or family gatherings. With all the benefits they provide, social media platforms also carry great risks for their users. How do you deal with harassment on the internet, what are the consequences, and which steps can you take to protect yourself?

The video is also available in Serbian, Macedonian and Albanian.



Read more:

All you need to know about security on the internet


Without basic digital hygiene there is no security on the internet. The software we use is not perfect, and each vulnerability puts our data at risk of being misused by cybercriminals. Why is it important to take care of the security of the online services, accounts and devices we use, and how can we protect our personal data, financial information and other digital resources?

The video is also available in Serbian, Macedonian and Albanian.



Read more:

All you need to know about data protection on the internet


Personal data is any information relating to an individual through which that individual can be identified, directly or indirectly. Examples are a name, telephone number, fingerprint, political belief or medical history. What is personal data protection, and what are the guarantees that the collection and processing of data are carried out transparently and in accordance with the law?

The video is also available in Serbian, Macedonian and Albanian.



Read more:

Forgotten humans-in-the-loop. Labour-oriented critique of the current discussion of algorithmic content moderation

In the past decade, thriving online harassment and hate speech, armies of bots spreading disinformation as a means of interfering in elections, far-right propaganda, and waves of obscurantism spreading through COVID-19-related fake news have repeatedly made online platforms’ content moderation the topic on everyone’s lips and newsfeeds. A recent round of discussion was triggered by the suspension of former US president Donald Trump’s Twitter account in January 2021, in the aftermath of the storming of the Capitol. Twitter’s controversial decision to limit interactions with Trump’s posts and later hide them, followed by the ultimate suspension of his account, was taken at the level of the board of directors. However, not every user on the platform receives equal attention when it comes to content moderation. More often, due to the sheer volume of data, machine learning systems are tasked with moderation. Human moderators are only responsible for reviewing machine decisions when users appeal them or when the machine learning algorithms flag a given case as contentious.

Concerned by the implications algorithmic moderation can have for freedom of expression, David Kaye, a former UN special rapporteur on the promotion and protection of the right to freedom of opinion and expression, called on online platforms to “[ensure] that any use of automation or artificial intelligence tools [in this particular case, for the enforcement of hate speech rules] involve human-in-the-loop”. Although the watchdog expressed valid concerns about the implications of algorithmic moderation for freedom of expression, there is a strong case to argue that humans have never been out of the content moderation loop.

Human-in-the-loop refers to the need for human interaction with machine learning systems in order to improve their performance. Indeed, algorithmic moderation systems cannot function without humans serving them. The machine has to be designed, created, maintained and constantly provided with new training data, which requires a complex human labour supply chain.

Given the increasing relevance of content moderation in public discourse, it is important to adopt a labour-oriented perspective to understand how algorithmic moderation functions. Contrary to the popular fallacy that contraposes machine moderation with human moderation, current moderation is in fact a mixture of humans and machines. In other words, humans are very much “in-the-loop”.

The aggregated supply chain of all this human involvement is summarised below:

Figure: Human labour supply chain involved in algorithmic content moderation

Boards of Platforms


Algorithmic moderation starts with platform board members and hired consultants who decide on the general rules for moderation. These rules are primarily a product of the need to manage external risks. Among the risks that have to be managed are pressures from civil society, regulators, and internet gatekeeping companies.

For instance, Facebook’s initial policy was to remove breastfeeding photos. The platform defended itself by referring to its general no-nudity policy, which fully exposed breasts violated. It was only in 2014, after years of pressure from Free the Nipple activists, that Facebook allowed pictures of women “actively engaged in breastfeeding”. Another achievement of civil society pressure was the tightening of most platforms’ policies on hate speech, including misogyny, racism, and explicit threats of rape and violence.

A number of more established civil society groups also propose moderation solutions, based either on their own proposed ethics or on the existing norms of national and international law. Intergovernmental organizations like the OSCE, the Council of Europe, and the United Nations have their own projects dedicated to ensuring freedom of expression while respecting the existing norms of international law.

Pressure from regulators has manifested itself most visibly in the adoption of rules and practices aimed at curtailing the spread of terrorist content and fake news. The latter problem, which became widely discussed after the alleged interference of Russia’s government in the US elections, received additional attention due to the biopolitical concerns raised by the COVID-19 pandemic.

When it comes to pressure from gatekeeping companies, the example of the Parler social network is the most graphic. This social network ceased to exist after Amazon stopped providing the platform with cloud computing under the pretext of insufficient moderation, and both major distribution platforms, the App Store and Google Play, suspended Parler’s apps. In a similar fashion, the nudity ban on Tumblr, which led to a mass exodus of users from that platform, came after Apple banned Tumblr’s app from the iOS App Store due to reported child pornography. Likewise, Telegram’s CEO Pavel Durov reported that his decision to remove channels disclosing the personal data of law enforcement officers responsible for the brutal dispersal of rally participants in Russia was forced by a gatekeeping company: he claimed that Apple did not allow the update for the iOS app to be released until these channels were removed. During the 2021 Russian parliamentary elections, Apple and Google, squeezed by the Russian government, in their turn demanded that Telegram suspend a chatbot associated with the Smart Voting project run by allies of jailed politician Alexey Navalny. The chatbot provided recommendations for Russian voters on which candidates to support in order to prevent representatives of the ruling party from winning mandates.

Not every platform CEO employs, or at least reports using, algorithmic moderation systems as a solution. Clubhouse’s moderation, for example, works without any machine-learning algorithm. The conversations are recorded and stored by Agora, the Shanghai-based company providing the back-end for Clubhouse. In case of a complaint, either from users or from a government, the platform can study the recording and pass a verdict.

The decision on whether to manage the aforementioned risks with the help of algorithmic moderation always lies with the board members of the platform. The boards create the rules and the engineers find ways to enforce them, although the boundary is not well demarcated, given that CEOs often hold degrees in engineering themselves and might also be directly involved in designing the systems’ architecture.

Engineers


It is engineers who decide which algorithm to introduce for content moderation, design and maintain that algorithm, and seek ways to modernize or replace it.

Engineers choose between two main categories of algorithms, both of which are commonly applied to content moderation.

One category of algorithms searches for partial similarity (perceptual hashing) between newly uploaded content and an existing database of inappropriate content. For example, perceptual hashing is effective in preventing the circulation of inappropriate content such as viral videos of mass shootings, extremist texts or copyrighted films and songs. The most well-known example of perceptual hashing at work is the Shared Industry Hash Database (SIHD), used by companies like Google, Facebook, Microsoft, LinkedIn, and Reddit. The database was created in 2017, contains terrorism-related content and has been criticized for its lack of transparency.
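To make the similarity-search approach concrete, below is a minimal sketch of a perceptual “average hash” check in Python. It is only an illustration of the general technique: the hashing used for the SIHD is not public, and the file names, threshold and banned-content set are invented for the example.

# Minimal average-hash (aHash) sketch with Pillow; not the SIHD's actual algorithm.
from PIL import Image  # pip install Pillow

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink the image to hash_size x hash_size grayscale pixels and encode
    each pixel as one bit: 1 if brighter than the mean, 0 otherwise."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of bits in which two hashes differ."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of known banned content.
banned_hashes = {average_hash("banned_example.jpg")}

def is_near_duplicate(path: str, threshold: int = 10) -> bool:
    """Flag an upload whose hash is within `threshold` bits of a banned hash."""
    upload_hash = average_hash(path)
    return any(hamming_distance(upload_hash, h) <= threshold for h in banned_hashes)

Because the hash is computed from a heavily downscaled grayscale version of the image, small edits such as re-encoding, resizing or light cropping usually change only a few bits, which is what allows near-duplicates of known content to be caught.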

The second category encompasses algorithms that predict (via machine learning) whether content is inappropriate. Machine learning technologies like speech recognition and computer vision are effective in classifying user-generated content that infringes the platforms’ terms of service (ToS). This technology has, however, drawn criticism for discriminating against certain groups, as in the case of the overmoderation of tweets written in Black English. These biases are not generated by the algorithm itself, but arise from inappropriately compiled datasets used to train that algorithm.
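As an illustration of the predictive approach, the following sketch trains a toy text classifier with scikit-learn and returns a probability that a post violates the rules. The tiny training set and the library choice are assumptions made for the example; production systems rely on far larger, human-labelled datasets, which is exactly the labour discussed below.

# Toy predictive-moderation sketch with scikit-learn (invented training data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labelled dataset: 1 = violates the ToS, 0 = acceptable.
texts = ["I will hurt you", "have a nice day",
         "you deserve violence", "see you tomorrow"]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Estimated probability that a new post violates the ToS.
post = "you will regret this"
prob_violation = model.predict_proba([post])[0][1]
print(f"predicted violation probability: {prob_violation:.2f}")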

Driven by orders from the platforms’ boards, engineers constantly seek new datasets to improve the performance of their algorithms. These datasets are manually labelled by human moderators, outsourced data flaggers, and regular users.

Human moderators


The main role of human moderators is to review users’ appeals against particular machine-made decisions and to decide in those cases where the machine learning algorithm’s confidence is low. Moderators often work for outsourcing companies based in the Global South, and the conditions of their labour are a matter of concern for human rights activists. Besides usually being in economically precarious situations, the moderators suffer enormous psychological pressure from dealing daily with highly sensitive content, such as videos of live-streamed suicides.

Moderators’ role is not limited to resolving disputes between users and the platform. If human moderators confirm that uploaded content violates the platform’s ToS, this content, now verified by an expert, can further augment the dataset used for algorithmic training.
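A simplified sketch of this routing and feedback loop might look as follows; the confidence threshold and function names are hypothetical and not taken from any real platform.

# Hypothetical routing logic for the human-in-the-loop described above.
CONFIDENCE_THRESHOLD = 0.9

human_review_queue = []   # items a human moderator has to decide
training_examples = []    # human-verified items fed back into training

def route(content: str, violation_probability: float) -> str:
    """Act automatically on confident predictions; send the rest to a human."""
    if violation_probability >= CONFIDENCE_THRESHOLD:
        return "removed"
    if violation_probability <= 1 - CONFIDENCE_THRESHOLD:
        return "kept"
    human_review_queue.append(content)
    return "sent_to_human_moderator"

def record_human_decision(content: str, violates_tos: bool) -> None:
    """A confirmed human decision becomes a new labelled training example."""
    training_examples.append((content, 1 if violates_tos else 0))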

The mainstream practice of human moderation presupposes the anonymity of the moderators. Facebook has taken a pioneering course here. In an attempt to meet the demand for increased transparency and improve the company’s legitimacy, Facebook’s board has introduced a system strongly reminiscent of the constitutional technology of separation of powers employed by nation states. Indeed, the idea of creating a quasi-legal judicial body within Facebook dedicated to content moderation matters came from Noah Feldman, a professor at Harvard Law School who took part in drafting the interim Iraqi constitution.

In 2020, the platform established the so-called Oversight Board (OB), referred to by commentators as Facebook’s Supreme Court. The OB comprises twenty members “paid six-figure salaries for putting in about fifteen hours a week”, among whom are acknowledged human rights activists, journalists, academics, lawyers, as well as former judges and politicians. By October 2021, the OB had adopted 18 decisions, some of which overturned the initial decision passed by anonymous human moderators or the board itself. Other decisions, the most significant of which is Trump’s account suspension, were upheld by the OB. In passing its decisions, the OB refers both to the platform’s community guidelines and to international human rights standards, namely the provisions of the International Covenant on Civil and Political Rights. Clearly, twenty OB members are unable to review all the cases eligible for appeal, so their goal is limited to reviewing the most representative ones, chosen by Facebook, in order to produce advisory policy recommendations and, supposedly, create precedents that human moderators can refer to in their practice. The company states that its “teams are also reviewing the board’s decisions to determine where else they should apply to content that’s identical or similar”.

Crowdsourced flaggers


Algorithm training sets are usually compiled by crowdsourced flaggers who classify content for a small financial reward, working through platforms such as Amazon’s Mechanical Turk or Yandex’s Toloka. In Yandex Toloka, for example, flaggers are tasked with classifying images into the following six categories: “pornography”, “violence”, “perversion”, “hinting”, “doesn’t contain porn”, “didn’t open”. In the tutorial example, one image is classified as “doesn’t contain porn”, while the other two are classified as “hinting”: the explanatory notes indicate that one of them displays “an obvious focus on the genital area” while the other shows an anatomical depiction of genitals. These classified datasets are most probably used by Yandex to moderate its social media platforms such as Messenger and Zen, the latter of which enjoys relative popularity in the Russian segment of the Web. At the same time, the explicitly norm-prescribing manner in which these datasets are compiled illustrates the observation that “the training dataset is a cultural construct, not just a technical one”.
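A common way to turn such crowdsourced judgements into a training set is simple majority voting across several flaggers. The sketch below uses the six Toloka categories quoted above, but the images, labels and aggregation rule are invented for illustration.

# Sketch of aggregating crowdsourced labels by majority vote (invented data).
from collections import Counter

# The six categories used in the Toloka task described above.
CATEGORIES = {"pornography", "violence", "perversion", "hinting",
              "doesn't contain porn", "didn't open"}

# Hypothetical labels from three flaggers for two images.
crowd_labels = {
    "image_001.jpg": ["hinting", "hinting", "doesn't contain porn"],
    "image_002.jpg": ["doesn't contain porn"] * 3,
}

def aggregate(labels: list[str]) -> str:
    """Keep the label most flaggers agreed on, ignoring anything off-list."""
    valid = [label for label in labels if label in CATEGORIES]
    return Counter(valid).most_common(1)[0][0]

training_set = {image: aggregate(labels) for image, labels in crowd_labels.items()}
print(training_set)  # {'image_001.jpg': 'hinting', 'image_002.jpg': "doesn't contain porn"}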

Users


Regular users of online platforms also contribute to the training of algorithms, or to updating the databases used by similarity-searching algorithms, by reporting content they deem inappropriate. While for users themselves reporting is a way to make their voices heard by the platform, for the platform this feedback is valuable both in the way any user feedback is and as unpaid labour that maps out the training datasets for predictive algorithms.

Once a sufficient number of users report that a piece of content does not meet the requirements of the platform, the content is sent to human moderators for further review. If the moderator confirms that the content violates the platform’s ToS, those users have demonstrably contributed to improving the algorithms.
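A minimal sketch of such report-based escalation, with an invented threshold, might look like this:

# Sketch of report-based escalation (threshold and names are assumptions).
from collections import defaultdict

REPORT_THRESHOLD = 5          # invented value; real platforms do not disclose theirs
report_counts = defaultdict(int)
moderation_queue = []         # content escalated to human moderators

def report(content_id: str) -> None:
    """Count a user report and escalate once the threshold is reached."""
    report_counts[content_id] += 1
    if report_counts[content_id] == REPORT_THRESHOLD:
        moderation_queue.append(content_id)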

Conclusion


While current mainstream approaches to the analysis of automated moderation systems focus strictly on the technical details of how the algorithms work, the people involved go unseen. This paper pays tribute to the humans whose labour makes automated moderation possible but who remain lost in the false human-machine dichotomy, when in fact the current practice of content moderation is an assemblage of intertwined humans and machines.



Ilya Lobanov is an independent researcher from Saint Petersburg, currently based in Vienna. His interests lie in the political economy of digital capitalism, urban politics, and the history of mind.

Read more: