GEOG3150 - GIS, Geocomputation and Geoplanning - Semester 2

Seminar 2 - Ethics of Individual-Level Modelling

This seminar is a little different. You (the students) are going to play the part of a University ethical review committee. We (the lecturers) are going to propose a new research project. The committee will decide whether or not the research is ethically sound and, therefore, if it should go ahead.

There are two short documents to read before the seminar. Start with:

University guidance on ethical issues. Available on the VLE.

While reading, note down the most important ethical considerations, as you will need to keep these in mind while reading about the proposed project.

The second document to read is the fictional application for ethical review:

Ethical Review Application: "An agent-based model of violent offenders to reduce crime". Available on the VLE. (This form has been reduced in length slightly, but all the relevant sections are still present).

Although this is a fictional project, many of the ethical issues it raises are pertinent to agent-based modelling generally. During the seminar, you will be able to discuss the issues that arise, amongst yourselves and with the lecturers.

Write down one question that you would like to ask the researcher under the following categories:

  1. Research design
  2. The conduct of the research
  3. Risks and benefits
  4. Treatment of subject participants
  5. Informed consent
  6. Data protection
  7. Confidentiality and disclosure

You will all get a chance to ask a question and discuss the lecturer's answer. At the end there will be a vote to decide whether or not the project is ethically sound.

Other Things to Think About

The following might inspire some other aspects to consider:

Assuming Behaviour from Someone's Data

There is a relevant ongoing debate about 'big data' and how our personal information is being collected and used to make assumptions about our behaviour. This is particularly pertinent to online advertising. It also applies directly to the proposed research project: the simulation consumes information about a person and then predicts their future behaviour based on those characteristics.

An article in The Atlantic presents some of these arguments. It is an extract from the following book:

Turow, J. (2012). The Daily You: How the New Advertising Industry Is Defining Your Identity and Your Worth. Yale University Press

The image below (from the article) highlights some of the concerns raised.

Based on their browsing behaviour, these kids won't see adverts for colleges

Star Trek: The Next Generation. Episode: The Measure of a Man

In this episode, Commander Data has to prove to a tribunal that, although not human, he is a sentient being and deserves the same rights as humans. It raises interesting philosophical and ethical questions about whether machines can 'think'. The proposed research attempts exactly this: to create virtual individuals capable of human-like decision making.

Available on Box of Broadcasts.

Minority Report

This well-known Hollywood film (based loosely on a Philip K. Dick story) explores what society might look like if we really were able to predict, with certainty, what someone will do in the future. Think about the ethical implications while watching it.

Available on Box of Broadcasts.

The Anatomy of Violence

Adrian Raine has recently published a fairly controversial book about the extent to which people are biologically predisposed to commit crime. He has successfully argued in a U.S. court that a convicted offender should not receive a death sentence because they were only partly responsible for the crime they were found guilty of; he argued that the combination of biological factors and a horrendous childhood predisposed the person to violent crime.

There is a review of the book in The Guardian.

Some relevant quotes

"Surveillance is theft. This data is not public property, it belongs to us. When it is used to predict our behaviour, we are robbed of something else - the principle of free will crucial to democratic liberty"

"[technology] is inherently neither good nor evil. What makes it so is how it is used. For example, one can say that nuclear physics is not good or bad, but if it is used to build bombs to kill and destroy, it becomes evil, whereas if it is used to produce safe and reliable energy, it becomes beneficial."

"This last step, prediction, raises harsh ethical issues, as models, just like any scientific theory, cannot actually prove anything, although using models can be a useful asset for stakeholders. However, in the case of conflicts, the stakeholders are conflicting parties. Models can be used strategically, i.e. in support of the interests of parties involved, and not only for conflict settlement."
