I just got back from the International Centre for Comparative Criminological Research annual conference (programme is here) held at the Open University, Milton Keynes. There were some outstanding keynote speakers, such as Lord Justice Leveson, Prof John Hatchard, Prof Jim Fraser and Dr Itiel Dror, and the panel included Iain McKie (ex-policeman and campaigner on behalf of his daughter Shirley McKie, who was wrongly accused of leaving her fingerprint at a crime scene and lying about it). When I was originally invited to speak at the conference I was going to talk about the latest research we are doing in collaboration with David Lagnado at UCL on using Bayesian networks to help build complex legal arguments (dealing with things like alibi evidence, motive and opportunity). But that was before the R v T ruling and its potentially devastating impact on using Bayes in English courts (the analogy would be talking about differential equations after being told that you were not allowed to use addition and subtraction). So I ended up giving a presentation based on our draft paper addressing the R v T ruling. This turned out to be a good move because I think it also addressed some of the core recurrent themes of the conference.

Specifically, a core issue was: what constitutes sufficiently 'reliable' forensic evidence? The assumption seemed to be that 'reliable' meant 'scientific' and/or 'rigorous', but the message I was trying to get across was that these vague notions are unnecessarily confusing. All that really matters for any type of forensic match evidence (be it DNA, fingerprint, footprint, voice, earprint, or even faceprint) is knowing:

a) the random match probability for a given type of match (i.e. what proportion of people have the same 'type' as the one found)

b) the probability of a false positive match (the chance that the test will determine that a trace is of the given type when it is not)

c) the probability of a false negative match (the chance that the test will determine that a trace is not of the given type when it is)

Once you know these three things you can determine the value of the evidence (as explained in the paper using Bayes) in favour of or against the target being the source. In general, the lower these probabilities are, the greater the value of the evidence. So, for example, DNA is assumed to be 'scientific' because it is assumed to always give very low values for each probability. But this is a fallacy. In some cases the DNA match probability is not especially low and, even worse, the experts presenting it do not even bother stating what the error probabilities b) and c) are. On the other hand, an 'unscientific' forensic science like footwear matching could in principle offer some genuine (but small) evidential value if the probabilities are known and stated accurately.
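
To give a feel for the arithmetic, here is a minimal sketch in Python of how a), b) and c) combine into a likelihood ratio for the match evidence. This is a simplified single-test model of my own for illustration, not a quote from the paper:

```python
# Hypothesis H: the target is the source of the trace.
# Evidence E: the test reports that the trace matches the target's type.

def likelihood_ratio(rmp, fpr, fnr):
    """rmp: random match probability (a)
       fpr: false positive probability (b)
       fnr: false negative probability (c)"""
    # If the target IS the source, the trace really is of the target's type,
    # so the test reports a match unless it makes a false-negative error.
    p_e_given_h = 1.0 - fnr
    # If the target is NOT the source, a match is reported either because the
    # true source happens to share the type (and the test reports it correctly)
    # or because the source has a different type but the test errs.
    p_e_given_not_h = rmp * (1.0 - fnr) + (1.0 - rmp) * fpr
    return p_e_given_h / p_e_given_not_h

# With a 1-in-10,000 match probability and zero testing error the LR is 10,000;
# a mere 1-in-1,000 false positive rate slashes it to under 1,000.
print(likelihood_ratio(1e-4, 0.0, 0.0))   # ~10000
print(likelihood_ratio(1e-4, 1e-3, 0.0))  # ~909
```

Note how quickly an apparently impressive match probability is swamped by even a small false positive rate, which is exactly why experts who quote a) without b) and c) are telling you very little.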

So, especially relevant to this issue were the discussions about the ‘reliability’ of particular types of forensic evidence. For example, Ailsa Strathie gave a very good presentation with empirical data suggesting that all the various types of faceprint evidence were ‘unreliable’. In fact, what she actually showed was that in practice faceprint match evidence was no more accurate than what could already be determined by lay people (i.e. jury members). So, assuming jury members are not blind, this is a good reason to reject the admissibility of faceprint evidence (any expert evidence has to be something that could not already be determined by the jury). Jim Fraser’s presentation on “when science meets the law” also very much focused on the issue of what kind of forensic evidence is sufficiently reliable.

Lord Justice Leveson, whose talk focused on the Law Commission’s report on Expert Evidence in Criminal Trials, was especially keen to expose what he regarded as unscientific areas of forensics (and he also provided lots of great stories and anecdotes). During the breaks and questioning we had some very interesting discussions about Bayes, where I was again making my point that what mattered from the perspective of the ‘value of evidence’ was the ability to provide justified match probabilities and error probabilities. It is better to have match evidence of small but known value than evidence of potentially high value that cannot be justified.

The panel discussion focused on the use of expert evidence. Iain McKie’s contribution about his own experiences of his daughter’s case was especially pertinent, revealing some problematic cultural issues with both the police and the community of fingerprint experts. I also found out from Maggie Scott QC that the Scottish legal system, unlike that of England and Wales, no longer accepted evidence about drug traces on banknotes (something I recently wrote a blog entry on). Lord Justice Leveson, Itiel Dror and Jim Fraser were the other panel members (it was chaired by Phil Bates).

Other conference highlights were:

·        John Hatchard spoke about what constitutes evidence and mentioned how difficult it was to teach statistics to law students.

·        Graham Pike (standing in for Louise Ellison) gave a terrific talk about the unreliability of eyewitness evidence and also mentioned some interesting data about the reliability of ID parades. The latter is of great interest to me because the famous Adams case was about whether you could incorporate both ‘scientific evidence’ (the DNA match probability) and ‘non-scientific’ evidence (failure to select the defendant from an ID parade). The ‘scientific evidence’ was actually not very scientific after all: the match probability was anywhere between 1 in 2 million and 1 in 200 million, and at no time did the experts attempt to quantify the impact of testing errors (making the match probability almost irrelevant). On the other hand (if I understood Graham correctly) the police have rather good data on the accuracy of ID parade identification. So the latter information could be entered into the Bayesian calculations just as legitimately as the DNA match evidence (in fact I would argue MORE so); see the first sketch after this list.

·        Jim Turner gave a talk about the ‘CSI effect’ (which we subsequently found out was first named as such by Lord Justice Leveson), which I pointed out was especially timely given the Casey Anthony acquittal in Florida (unfortunately I seemed to be the only person in the audience aware of this case, which had dominated the entire US media for weeks and which, I believe, will be spoken about more in years to come than even the OJ Simpson case). He also gave some very interesting results of a survey about what people believed was ‘reliable’ evidence.

·        Itiel Dror gave a typically entertaining and informative presentation with a lot of material I had not seen him present before. I was especially interested in his discussion of what he called the ‘bias snowball effect’, whereby different pieces of apparently independent evidence are not actually independent because of bias (if, say, a fingerprint expert already knows that a DNA match has been found, then he/she is more likely to confirm a fingerprint match, etc.). This is closely related to the Bayesian Network (BN) modelling work we are currently doing and I will certainly produce a working BN example of this soon (a toy numerical version appears in the second sketch below).
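
To make the Adams point concrete, here is a back-of-envelope calculation in the odds form of Bayes. The numbers are invented purely for illustration (they are not the actual figures from the case); the point is simply that the ID parade evidence enters the calculation in exactly the same way as the DNA evidence:

```python
# Hypothetical illustration (invented numbers): combining 'scientific' and
# 'non-scientific' evidence in a single Bayesian calculation.

def posterior_odds(prior_odds, *likelihood_ratios):
    """Bayes in odds form: posterior odds = prior odds x product of LRs.
    Multiplying LRs like this assumes the pieces of evidence are independent
    given the hypothesis (see the bias snowball sketch below for when not)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior = 1 / 200_000   # assumed pool of 200,000 possible sources
lr_dna = 2_000_000    # DNA match LR (ignoring testing errors)
lr_parade = 0.1       # failure to pick out the defendant: an LR below 1,
                      # so it counts AGAINST the prosecution hypothesis
odds = posterior_odds(prior, lr_dna, lr_parade)
print(odds, odds / (1 + odds))   # posterior odds 1.0, i.e. probability 0.5
```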
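And since I have promised a BN example of the bias snowball effect, here is the gist as a toy calculation with my own illustrative numbers (nothing here comes from Itiel's data). The point is that an examiner who knows about the DNA result no longer provides independent evidence, so multiplying the 'blind' likelihood ratios together overstates the combined evidence:

```python
# Toy sketch (illustrative numbers of my own) of the 'bias snowball effect'.

lr_dna = 1000                    # likelihood ratio of the DNA match evidence

# Blind fingerprint examiner: declares a match with probability 0.99 if the
# target really is the source, and 0.01 if not.
lr_print_blind = 0.99 / 0.01     # LR = 99

# Examiner who knows a DNA match was found: much more willing to declare a
# match even when the target is NOT the source (0.30 instead of 0.01).
lr_print_biased = 0.999 / 0.30   # LR is only about 3.3

print("treated as independent:", lr_dna * lr_print_blind)     # 99000
print("with the biased examiner:", lr_dna * lr_print_biased)  # ~3330
```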

On a less positive note, there was one part of the conference that I found uncomfortable. The last thing I would expect to have to contend with at a conference like this is ignorant delegitimisation of Israel (as a Jew with family ties to Israel I am particularly sensitive to this and attuned to the extent to which it is perpetrated). Yet this is what happened during a session on “Picturing the truth? Drawing, seeing evidence”. Jill Gibbon (Open University) gave a talk entitled “Unveiling the arms trade: satire, seeing and evidence” in which she described her experiences of making drawings at arms fairs. I was not exactly sure how the talk fitted into the conference theme, but that was not what concerned me. What concerned and upset me was that Dr Gibbon took the opportunity to make political statements condemning Israel that were not only completely out of context in her own talk, but were completely false. Specifically, following on from comments and drawings ridiculing Israeli representatives at the Paris arms fair (which at least fell within the context of the talk), Dr Gibbon stated that “only 6 months after this fair Israel attacked Gaza and killed over 1000 civilians on the pretext of stopping arms smuggling tunnels”. I was forced to point out, at the end of her talk, that a) Israel’s actions in Gaza were not as stated, but were in response to over 5000 rockets fired at civilian targets in Israel; and b) it has now been definitively proven that over 800 of the people killed in Gaza were Hamas members, with the proportion of civilians killed far less than in any other comparable war in history, showing the extraordinary lengths Israel went to in order to avoid civilian deaths. To be fair, Dr Gibbon did apologise afterwards and said that she should have checked the facts better. What we both agreed on was that she was simply repeating the kind of standard anti-Israel narrative that dominates the British media.

Anyway, other than that, it was a very good conference, which was superbly organised by Hayley Ness and Sarah Batt (and the rest of the OU team).