Introducing Algorithms, Data and Democracy: How can we strengthen democratic legitimacy with technological developments?

By Sine N. Just

As the extent to which digital technologies already saturate societies is becoming apparent to growing numbers of global citizens, the question of whether ubiquitous digitalization is desirable is moving to the forefront of public debate: as we combat COVID-19, should contact tracing apps and digital vaccination passports be mandatory for all citizens? In the battle against crime, should facial recognition software be integrated into surveillance equipment? Seeking to vanquish child abuse, should predictive algorithms assist social workers’ case evaluation? These are but three of the many issues that are being debated at present. And what about the (social) media platforms on which these debates are conducted? Invoking freedom of speech, should they remain unregulated? Seeking social justice, should hate speech and incitement to violence be banned? As decision making and opinion formation alike are becoming thoroughly digitalized, we need to discuss the content of controversies about data and algorithms as well as the form of datafied and algorithmic controversies.

A new research project aims to do just that. Generously funded by the Villum and Velux Foundations, the Algorithms, Data and Democracy (ADD) project will investigate issues of public concern about digitalisation and datafication as these are shaped by digital technologies and articulated in the digital infrastructures of democracy. The aim is to understand current concerns and challenges so as to be able to suggest ways in which the algorithmic organisation of data can engage, enlighten and empower individual citizens and democratic institutions, thereby turning potential crises of trust in democratic societies as well as in novel technologies into opportunities for enhanced digital democracy.

The ADD project will achieve this aim through strong interdisciplinary integration as well as disciplinary expertise. It brings together a team of researchers with unique competences in computer science and technological research, the humanities and the social sciences, building a common theoretical and methodological approach in the process of studying empirical cases. Further, the project integrates scientific research and public outreach by involving relevant stakeholders and interested citizens from its outset and throughout the 10 years of its existence. At first, we will seek broad engagement, listening to the concerns and opinions of people and organizations. As we develop our research, we will seek to enlighten the debate through the communication of results. Finally, we will join in conversations that can empower citizens and inspire policy-makers to instigate positive change.

Read more about the project on our website and follow it as it unfolds by subscribing to our newsletter.  

Digitale liv

By Sine N. Just

Cultural and communication studies of how digitalization reorganizes everyday life

The anthology Digitale liv. Brugere, platforme og selvfremstillinger [Digital lives: Users, platforms and self-presentations], edited by Rikke Andreassen, Rasmus Rex Pedersen and Connie Svabo, gathers a group of researchers around the study of the various ways in which digitalization affects people's lives and everyday routines. Most of the contributors are based at Institut for Kommunikation og Humanistisk Videnskab, and some of us (including the undersigned) are members of the Digital Media Lab. We are, in other words, all culture and communication researchers with an interest in digitalization, but rather than a narrow focus, the anthology displays the field's thematic breadth, methodological range and theoretical eclecticism. In this review, I will give an overview of the topics, methods and theories the book covers, and thereby a foretaste of what awaits anyone who dives into it.

The digital everyday

Everyday life has become digital to such a degree that we no longer think about it, until a part of the digital infrastructure becomes visible through a 'glitch' in the otherwise flawless flow of the matrix. We notice our dependence on the digital when the internet 'lags', as the children shout with equal parts anger and resignation. When the iPhone breaks and it becomes impossible to keep track of appointments, take pictures, listen to music and get in touch with friends, to mention just a few of the functions now gathered in a smartphone. Or when the barcode on our groceries cannot be scanned and we have to wait for one of the supermarket's ever fewer employees to turn up.

The book's subtitle indicates three overarching areas or themes within this pervasive development, and the book is structured around them. The first is 'users'. What does it mean for us, as individuals and as communities, that we are increasingly defined in and through the digital technologies we use? When we as citizens are organized into digital publics, find our news online and receive information from the public authorities via e-Boks? When urban spaces become digital and our movement through them is governed by our ability or willingness to interact with the technologies around us? When computer games proliferate in ever more subgenres and you can, for example, practice love through games?

The second theme is 'platforms'. Here, the focus is especially on the algorithmic construction of digital infrastructures: how do the underlying algorithms shape users' scope for action on, for example, Facebook, Spotify and dating apps? What does it mean that the interaction between user and algorithm tends to reinforce the user's taste? That is, when you can both set your own preferences and are offered more of the kind of content you ask for, this is effective targeting on the one hand, but wasted opportunities on the other. The 'people who liked this also liked…' logic can be a good way of targeting content, but it can also lead to one-track echo chambers, to increased predictability and control.
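To make that reinforcement mechanism concrete, here is a minimal sketch of the kind of co-occurrence logic behind 'people who liked this also liked…' recommendations. The data, names and scoring are purely illustrative assumptions for this post, not a description of how Facebook or Spotify actually implement their systems.

```python
from collections import Counter

# Toy listening data: user -> set of liked items (illustrative only).
likes = {
    "user_1": {"artist_a", "artist_b", "artist_c"},
    "user_2": {"artist_a", "artist_b"},
    "user_3": {"artist_b", "artist_d"},
}

def also_liked(item, likes):
    """Rank other items by how often they co-occur with `item`
    across the users who liked `item`."""
    counts = Counter()
    for user_items in likes.values():
        if item in user_items:
            counts.update(user_items - {item})
    return [other for other, _ in counts.most_common()]

print(also_liked("artist_a", likes))  # -> ['artist_b', 'artist_c']
```

Even in this toy version, the reinforcing tendency the chapter describes is visible: the function can only ever surface what similar users have already liked, so the more you follow its suggestions, the more your profile comes to resemble theirs.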

Finally, the third theme, 'self-presentations', focuses on how different user groups employ the technological possibilities for identity formation and/or self-promotion. Whether it concerns influencers who become famous for 'being themselves', precarious workers who seek employment via labour platforms, or professional organizations that use digital communication as a means of realizing strategic goals, individual and collective identities are increasingly shaped by the spaces of possibility offered by digital platforms, and this final part of the book focuses on how these spaces are used in practice.

Digital methods?

The anthology offers examples of how pervasive digitalization also creates new research practices in the form of digital methods. For instance, in his chapter on algorithmic publics, Sander Andreas Schwartz uses the 'walk-through' method to work through Facebook's design and functionality, and in the chapter on Spotify's recommendation systems, Rasmus Rex Pedersen employs a 'critical reading of the algorithm', which enables a closer examination of how the recommendations actually work.

It is characteristic, however, that none of the chapters work with digital methods for collecting and analysing 'big data'. Instead, the focus is on qualitative studies, often with a particular eye for the user, whether through qualitative interviews, protocol analyses or participant observation of user experiences, or through studies of individual and collective actors' digital communication. The anthology thereby shows that digitalization has not made classic humanistic and social-scientific approaches redundant; on the contrary, it makes very good sense to study the new phenomena with well-known methods.

Digital humanities

In this way, the chapters contribute deep and detailed insights into different forms of sociomaterial meaning-making; they understand and explain 'digital lives' as ways in which technologically conditioned spaces for action emerge and are used. This must be one of the foremost tasks of the humanities today: to understand what it means to be human in a digital age, how we shape our digital tools and how they shape us. The anthology contributes precisely such understandings by drawing on a broad range of theoretical perspectives, models and concepts: from actor-network arrangements of human and non-human actors, via digital affordances, to realized actions and their consequences; and from control, normalization and reinforcement of existing ideological frames, for instance through the marketization of shitstorms, to creative exploitation of the established frames in search of possible alternatives, for instance through the self-commercialization of minority subjects.

If you have a specific interest in one or more of the themes treated in Digitale liv, or are more generally concerned with the question of how technological developments affect human communities, there is plenty of inspiration to be found in the anthology. As the back cover promises, the book "can be read by anyone interested in the digitalization that increasingly shapes our society and digital lives".

Beyond good and evil: Debating data ethics with university students

By Sine N. Just

[Image: the kanji for 'good' and 'evil', from https://blog.gaijinpot.com/spot-the-kanji-for-good-and-evil-in-everyday-japanese/]

Whether your point of reference for the title of this piece is late 19th century German philosophy or early 21st century computer games, the implications are the same: the categories are blurred and, hence, we need to develop new modes of ethical judgement. This is particularly true now, as the need for such judgement is also moving beyond 'the human' and 'the technical' as separate realms. What we need today is a sociotechnical ethics that enables us to steer current developments in new directions. We need, to borrow from the subtitle of Nietzsche's work, a new 'philosophy of the future'.

Turning from such sweeping visions to the more mundane question of how to introduce data ethics to students, the aim of this post is to report on one small experiment with technology-supported in-class debate.

Discussing algorithms and data in the classroom

Using Kialo as the platform for the debate, I asked my students to help develop and assess arguments for and against claims about the societal impact of 'big data'. The students had been given a prompt in the form of boyd and Crawford's (2012) 'Critical questions for big data', and prior to the exercise I had pointed to a key quote from that text (p. 663):

Will large-scale search data help us create better tools, services, and public goods? Or will it usher in a new wave of privacy incursions and invasive marketing? Will data analytics help us understand online communities and political movements? Or will it be used to track protesters and suppress speech? Will it transform how we study human communication and culture, or narrow the palette of research options and alter what ‘research’ means?

To translate these broad questions into more specific positions on the ethical implications involved, the students were then asked to produce arguments for and against four common claims with clear tech-optimistic or tech-pessimistic valences, as well as one claim suggesting the irrelevance of big data:

  • Society will become safer
  • Society will become more controlled
  • Society will become richer
  • Society will become more unequal
  • Society will remain unchanged

The students were asked to provide one argument for or against each claim, and at the end of the exercise we held a plenary discussion to reflect on the arguments produced.
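For readers who might want to replicate the exercise outside Kialo, the structure of the debate is simple enough to capture in a few lines of code. The sketch below is a hypothetical illustration of how the five claims and the students' pro/con contributions could be recorded and tallied; the example arguments are placeholders, and nothing here reflects Kialo's actual data model.

```python
from collections import defaultdict

# The five claims from the exercise.
claims = [
    "Society will become safer",
    "Society will become more controlled",
    "Society will become richer",
    "Society will become more unequal",
    "Society will remain unchanged",
]

# Each contribution: (claim, stance, argument text) -- placeholder examples.
arguments = [
    ("Society will become more controlled", "pro", "Surveillance could reduce crime."),
    ("Society will become more controlled", "con", "We do not know when we are being watched."),
    ("Society will remain unchanged", "con", "Institutions will increasingly organize around big data."),
]

def tally(arguments):
    """Count pro and con arguments per claim."""
    counts = defaultdict(lambda: {"pro": 0, "con": 0})
    for claim, stance, _ in arguments:
        counts[claim][stance] += 1
    return dict(counts)

results = tally(arguments)
for claim in claims:
    c = results.get(claim, {"pro": 0, "con": 0})
    print(f"{claim}: {c['pro']} pro / {c['con']} con")
```

Counting contributions per claim in this way mirrors the summary reported below, where the four 'change' claims each attracted roughly as many arguments in favor as against.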

Students’ positions on data ethics

Looking at the students' arguments, a first finding is that no one argued in favor of the claim that 'society will remain unchanged'. On the contrary, the students provided arguments against this claim, e.g. suggesting that corporate actors and public institutions 'will increasingly organize around big data', that 'techgiants have more knowledge about us, than we do' and that individuals 'will change their digital behaviors to protect their private lives'.

Beyond the consensus that algorithms and data are impacting the social world, however, students were divided as to the nature of the impact, and for each of the four remaining claims they produced approximately as many arguments in favor as against. For instance, the claim that society will become more controlled, which produced the most responses, led to a nuanced discussion concerning the implications of such control. That is, while most students took increased surveillance for granted, some felt this to be a cause for celebration rather than concern, as it could 'reduce crime' or 'create safety and make things easier'. Others, however, highlighted the risks of 'misuse' and 'manipulation', and suggested that people might self-regulate because 'we do not know when we are being watched'.

Interestingly, the students produced somewhat different arguments for the claim 'society will become safer', suggesting that control and safety might not be mutually exclusive categories, but instead exist in a trade-off. Here, students were less prone to accept that data can produce safety and more concerned with the price of such safety, e.g. suggesting that lack of transparency creates uncertainty and arguing that the 'need to produce regulation about data security [GDPR] shows that there is a problem'. Generally, however, for this claim the students felt that 'it depends', and one comment on the claim of 'more control' nicely sums up the general attitude:

…learning more about human action and behavior can be both good and bad. Depends on what one wants to control.

Turning to the question of growth, students clearly saw the potential of technological developments to create new business opportunities and to increase the efficiency of e.g. marketing activities while decreasing their costs. However, as one student argued:

…society might not need more growth and wealth, but a better distribution of resources.

This takes us to the claim concerning increased inequality, which students seemingly view as a side-effect of growth. That is, the students tend to support the combined claim that current uses of algorithms and data simultaneously produce more growth and more inequality. The reasoning is that 'data is a form of capital with which you can negotiate' and that 'the most powerful people and organizations have more access to data and can use it to their advantage'.

Towards data ethics

In our plenary summary of the exercise, the students reported that, in considering where to position themselves, they had found it difficult to take one stance or the other: there are many arguments for and against all positions, and the matter is 'more nuanced than I thought', as one participant put it.

Illustrating this point was the main pedagogical aim of the exercise, which I concluded with a slide showing Kranzberg’s (1986: 545) famous dictum that ‘technology is neither good nor bad; nor is it neutral.’ However, I also hoped to move beyond the articulation of this position to begin developing the alternative. What might a data ethics beyond the clear dichotomies of optimism and pessimism – good and evil – look like?

In reflecting on this question, we talked about intentionality, consequences, and situationality. Each of these potential principles of judgement is reminiscent of a well-established ethical school and, hence, carries with it the same issues of when and how to use it. As might be expected, we did not resolve these issues once and for all, but the questions linger with me – and, I hope, with the students.

With this text, I invite continued reflection on the ethics of data inside as well as outside of classrooms. The future will not wait for us to develop a new philosophy and, hence, establishing a robust and distinct data ethics is an increasingly urgent matter.