This lecture is about the patterns that models produce in their results. It brings together most of the ideas that we have covered in previous lectures, as well as introducing some new ones. We will talk about:
The format of this lecture will be different to the others. Rather than Nick and Alison talking for an hour, you will do some background reading before the lecture, and we will then use the lecture time to discuss the concepts you come across in the reading.
The slides used in the lecture are available: here.
Please note that the lecture slides work in two dimensions; sometimes you have to press the 'down' arrow to find additional slides explaining a particular topic. You can press 'escape' to zoom out and see all of the slides.
The slides with voting windows on them won't work after the lecture, as we will have finished voting!
Before the lecture, please read the following paper:
Evans, A., A. Heppenstall and M. Birkin (2013) Understanding Simulation Results. In B. Edmonds and R. Meyer (eds) Simulating Social Complexity. Springer. [Available on the VLE: http://tinyurl.com/oq4bh64 ]
It is a difficult paper that covers a range of fairly complicated material, but you will be able to understand most of it with some effort and a little wider reading. The notes in the following section should also help.
These sections introduce two important words: equifinality and identifiability. These are not difficult concepts to understand, but they are quite hard to explain. Look them up (Wikipedia is a good place to start) and try to create your own definitions.
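To make equifinality concrete, here is a minimal sketch (the model and parameter names are made up for illustration, not taken from the paper): two quite different parameter settings produce exactly the same output, so the output alone cannot identify which setting is "true".

```python
# Equifinality illustrated: two different parameter sets give
# identical model output, so observing the output cannot tell us
# which parameters generated it (an identifiability problem).

def model(growth, migration, steps=10):
    """Hypothetical toy population model: natural growth and
    in-migration both add to the population each step, so their
    effects are confounded in the final total."""
    pop = 100.0
    for _ in range(steps):
        pop += growth + migration
    return pop

# Different parameter combinations, identical results:
print(model(growth=3.0, migration=1.0))  # 140.0
print(model(growth=1.0, migration=3.0))  # 140.0
```

Because only the sum `growth + migration` matters here, infinitely many parameter pairs fit the same data equally well; real simulation models suffer the same problem in subtler ways.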
Halfway down page 2, some other important words are also introduced: equilibrium; oscillation; catastrophe; bifurcation. We will discuss these during the lecture, but it is important to understand equilibrium. Here, equilibrium refers to a model reaching a consistent state, or a number of states that it moves between. For example, after running the festival model for some time, you might find that the overall distribution of crime does not change. Individual crimes will still occur, and agents will still move around, but if the model ran forever the overall spatial pattern would not change - it has reached equilibrium.
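One way to see this is to watch an aggregate summary of a model rather than the individual events. The sketch below is a made-up toy (not the festival model): events keep occurring at random every step, but because old events decay, the aggregate total settles down to a steady value - the equilibrium.

```python
# Detecting equilibrium in a toy stochastic model: individual events
# never stop, but an aggregate statistic (the total count) converges.

import random

def run_until_equilibrium(n_cells=10, tol=1e-3, max_steps=10_000, seed=42):
    rng = random.Random(seed)
    counts = [0.0] * n_cells
    prev_total = 0.0
    for t in range(max_steps):
        counts[rng.randrange(n_cells)] += 1    # an event still occurs...
        counts = [c * 0.9 for c in counts]     # ...but older events decay
        total = sum(counts)
        if abs(total - prev_total) < tol:      # aggregate has stopped changing
            return t, total
        prev_total = total
    return max_steps, sum(counts)

steps, total = run_until_equilibrium()
print(f"Equilibrium after {steps} steps, total ~ {total:.2f}")
```

Individual cells still gain and lose events every step, just as individual crimes still occur in the festival model, but the overall pattern stops changing.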
The authors then move on to discuss emergence. Consider the following:
This section looks at the different statistics that we can use to describe model outputs. Table 1 is particularly useful, as it shows the statistics that we can use to reduce the dimensionality of a model's output in order to make it easier to recognise patterns. For example, we can use exploratory statistics to take some spatial model output (2D) and turn it into a single number (1D) that summarises the variable across space.
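As a concrete example of collapsing 2D spatial output into a single number, here is a sketch of Moran's I, a standard measure of spatial autocorrelation (the grid values below are invented for illustration; this is not code from the paper):

```python
# Reducing a 2D spatial output to one number: Moran's I measures
# spatial autocorrelation. Values near +1 mean similar values cluster
# together; near -1, neighbours are dissimilar; near 0, no pattern.

def morans_i(grid):
    """Moran's I for a 2D list-of-lists, using rook (4-neighbour)
    contiguity with binary weights."""
    rows, cols = len(grid), len(grid[0])
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    n = len(cells)
    mean = sum(grid[r][c] for r, c in cells) / n
    num, w_sum = 0.0, 0
    for r, c in cells:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                num += (grid[r][c] - mean) * (grid[nr][nc] - mean)
                w_sum += 1
    denom = sum((grid[r][c] - mean) ** 2 for r, c in cells)
    return (n / w_sum) * (num / denom)

clustered = [[9, 9, 1, 1],
             [9, 9, 1, 1],
             [9, 9, 1, 1],
             [9, 9, 1, 1]]
checker = [[1, 9, 1, 9],
           [9, 1, 9, 1],
           [1, 9, 1, 9],
           [9, 1, 9, 1]]
print(morans_i(clustered))  # positive: high values cluster on one side
print(morans_i(checker))    # negative: every neighbour is dissimilar
```

A map of, say, crime counts at the festival (a 2D output) is reduced to one number that tells us whether crime is spatially clustered, making runs easy to compare.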
You will probably be familiar with some of these methods, but most will be new to you. Briefly familiarise yourself with the following:
This section makes the point that the tools that we have developed over some time ("2500 years"!) are poorly suited to exploring the detail associated with individual-level data. The authors argue that visualisation is a powerful tool for understanding individual-level data: "Our chief tool for individual-level understanding without aggregation is, and always has been, the human ability to recognise patterns in masses of data" (pg 8).
Table 3 lists some visualisation techniques. Find visual examples of the following so that you can see what the visualisations actually look like:
This last section is probably the most difficult. It discusses how we can use patterns to better understand our models (and the systems that they are simulating) by "highlighting the mechanisms within the models which give rise to these patterns" (pg 13).
The key concepts to try to understand are: