By Sine N. Just
Whether your point of reference for the title of this piece is late 19th-century German philosophy or early 21st-century computer games, the implications are the same: the categories are blurred and, hence, we need to develop new modes of ethical judgement. This is particularly true now, as the need for such judgement is also moving beyond ‘the human’ and ‘the technical’ as separate realms. What we need today is a sociotechnical ethics that enables us to steer current developments in new directions. We need, to borrow from the subtitle of Nietzsche’s work, a new ‘philosophy of the future’.
Turning from such sweeping visions to the more mundane question of how to introduce data ethics to students, the aim of this post is to report on one small experiment with technology-supported in-class debate.
Discussing algorithms and data in the classroom
Using Kialo as the platform for the debate, I asked my students to help develop and assess arguments for and against claims about the societal impact of ‘big data’. The students had been given a prompt in the form of boyd and Crawford’s (2012) ‘Critical questions for big data’, and prior to the exercise I had pointed to a key quote from that text (p. 663):
Will large-scale search data help us create better tools, services, and public goods? Or will it usher in a new wave of privacy incursions and invasive marketing? Will data analytics help us understand online communities and political movements? Or will it be used to track protesters and suppress speech? Will it transform how we study human communication and culture, or narrow the palette of research options and alter what ‘research’ means?
To translate these broad questions into more specific positions on the ethical implications involved, the students were asked to produce arguments for and against four common claims with clear tech-optimistic or tech-pessimistic valences, as well as one claim suggesting the irrelevance of big data:
- Society will become safer
- Society will become more controlled
- Society will become richer
- Society will become more unequal
- Society will remain unchanged
The students were asked to provide one argument for or against each claim, and at the end of the exercise we held a plenary discussion to reflect on the arguments produced.
Students’ positions on data ethics
Looking at the students’ arguments, a first finding is that no one argued in favor of the claim that ‘society will remain unchanged’. On the contrary, the students provided arguments against this claim, e.g. suggesting that corporate actors and public institutions ‘will increasingly organize around big data’, that ‘tech giants have more knowledge about us than we do’, and that individuals ‘will change their digital behaviors to protect their private lives’.
Beyond the consensus that algorithms and data are impacting the social world, however, the students were divided as to the nature of that impact, and for each of the four remaining claims they produced approximately as many arguments in favor as against. For instance, the claim that society will become more controlled, which drew the most responses, led to a nuanced discussion of the implications of such control. That is, while most students took increased surveillance for granted, some felt this to be a cause for celebration rather than concern, as it could ‘reduce crime’ or ‘create safety and make things easier’. Others, however, highlighted the risks of ‘misuse’ and ‘manipulation’, and suggested that people might self-regulate because ‘we do not know when we are being watched’.
Interestingly, the students produced somewhat different arguments for the claim that ‘society will become safer’, suggesting that control and safety might not be mutually exclusive categories but instead exist in a trade-off. Here, the students were less willing to accept that data can produce safety and more concerned with the price of such safety, e.g. suggesting that a lack of transparency creates uncertainty and arguing that the ‘need to produce regulation about data security [GDPR] shows that there is a problem’. Generally, however, the students felt that for this claim ‘it depends’. One comment on the claim of ‘more control’ nicely sums up the general attitude:
…learning more about human action and behavior can be both good and bad. Depends on what one wants to control.
Turning to the question of growth, the students clearly saw the potential of technological developments to create new business opportunities and to increase the efficiency of, e.g., marketing activities while decreasing their costs. However, as one student argued:
…society might not need more growth and wealth, but a better distribution of resources.
This takes us to the claim concerning increased inequality, which the students seemingly viewed as a side effect of growth. That is, they tended to support the combined claim that current uses of algorithms and data produce both more growth and more inequality, the reasoning being that ‘data is a form of capital with which you can negotiate’ and that ‘the most powerful people and organizations have more access to data and can use it to their advantage’.
Towards data ethics
In our plenary summary of the exercise, the students reported that, in considering where to position themselves, they had found it difficult to take one stance or the other, as there are many arguments for and against all positions and the matter is ‘more nuanced than I thought’, as one participant put it.
Illustrating this point was the main pedagogical aim of the exercise, which I concluded with a slide showing Kranzberg’s (1986: 545) famous dictum that ‘technology is neither good nor bad; nor is it neutral.’ However, I also hoped to move beyond the articulation of this position to begin developing the alternative. What might a data ethics beyond the clear dichotomies of optimism and pessimism – good and evil – look like?
In reflecting on this question, we talked about intentionality, consequences, and situationality. Each of these potential principles of judgement is reminiscent of a well-established ethical school and, hence, carries with it the same issues of when and how to apply it. As might be expected, we did not resolve these issues once and for all, but the questions linger with me – and, I hope, with the students.
With this text, I invite continued reflection on the ethics of data both inside and outside the classroom. The future will not wait for us to develop a new philosophy and, hence, establishing a robust and distinct data ethics is an increasingly urgent matter.