The notion of 'explaining away' evidence

Now consider the following slightly more complex network:

[Figure BBNs0067: the extended network, in which node B ('Martin late') now has two parents, A ('Train strike') and D ('Martin oversleeps')]

In this case we have to construct a new conditional probability table for node B ('Martin late') to reflect the fact that it is conditional on two parent nodes (A and D). Suppose the table is:

 

 

Martin oversleeps        True             False
Train strike          True    False   True    False
Martin late   True    0.8     0.5     0.6     0.5
              False   0.2     0.5     0.4     0.5

 We also have to provide a probability table for the new root node D ('Martin oversleeps').

Martin oversleeps   True    0.4
                    False   0.6
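With both tables in place, the marginal probability of 'Martin late' can be computed by summing over the states of its two parents. A minimal Python sketch (the variable names are mine, for illustration only):

```python
# Prior probabilities of the root nodes.
p_strike = {True: 0.1, False: 0.9}      # A: train strike
p_oversleep = {True: 0.4, False: 0.6}   # D: Martin oversleeps

# P(Martin late = True | oversleeps, strike), from the table above.
p_late = {
    (True, True): 0.8,    # oversleeps, strike
    (True, False): 0.5,   # oversleeps, no strike
    (False, True): 0.6,   # no oversleep, strike
    (False, False): 0.5,  # no oversleep, no strike
}

# Marginalise out the parents: P(B) = sum over A, D of P(B|A,D) P(A) P(D)
p_martin_late = sum(
    p_late[(d, a)] * p_oversleep[d] * p_strike[a]
    for d in (True, False)
    for a in (True, False)
)
print(round(p_martin_late, 3))  # 0.518
```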

We have already seen that in this initialised state the probability that Martin is late is 0.51 and the probability that Norman is late is 0.17.

Suppose we find out that Martin is late. This evidence increases our belief in both of the possible causes, namely a train strike (A) and Martin oversleeping (D). Specifically, applying Bayes' theorem yields a revised probability of 0.13 for A (up from the prior probability of 0.1) and a revised probability of 0.41 for D (up from the prior probability of 0.4). So if we had to bet on it, our money would be firmly on Martin oversleeping as the more likely cause. Now suppose we also discover that Norman is late. Entering this evidence and applying Bayes' theorem again yields a revised probability of 0.54 for a train strike but only 0.44 for Martin oversleeping. Thus the odds are now that a train strike, rather than oversleeping, has caused Martin to be late: Norman's lateness is itself evidence of a strike, and a strike on its own is enough to account for Martin being late. We say that Martin's lateness has been 'explained away'.
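The revised probabilities quoted above can be checked by brute-force enumeration of the joint distribution. The sketch below is illustrative Python, not the original tool; it assumes Norman's conditional table from the earlier network (late with probability 0.8 given a strike, 0.1 otherwise, which is consistent with the initial value of 0.17 quoted above), and the results agree with the quoted figures up to rounding:

```python
from itertools import product

# Priors for the root nodes.
p_strike = {True: 0.1, False: 0.9}      # A: train strike
p_oversleep = {True: 0.4, False: 0.6}   # D: Martin oversleeps

# P(Martin late = True | oversleeps, strike), from the table above.
p_martin = {(True, True): 0.8, (True, False): 0.5,
            (False, True): 0.6, (False, False): 0.5}

# P(Norman late = True | strike) -- assumed from the earlier section;
# it reproduces the initial value P(Norman late) = 0.17.
p_norman = {True: 0.8, False: 0.1}

def posterior(target, evidence):
    """P(target = True | evidence), by enumerating the parents A and D.

    target: 'A' (train strike) or 'D' (oversleeps);
    evidence: dict such as {'B': True} or {'B': True, 'C': True},
    where B = Martin late and C = Norman late.
    """
    num = den = 0.0
    for a, d in product((True, False), repeat=2):
        w = p_strike[a] * p_oversleep[d]
        if 'B' in evidence:  # weight by likelihood of Martin's evidence
            pb = p_martin[(d, a)]
            w *= pb if evidence['B'] else 1 - pb
        if 'C' in evidence:  # weight by likelihood of Norman's evidence
            pc = p_norman[a]
            w *= pc if evidence['C'] else 1 - pc
        den += w
        if (a if target == 'A' else d):
            num += w
    return num / den

print(round(posterior('A', {'B': True}), 2))             # 0.13
print(round(posterior('D', {'B': True}), 2))             # 0.41
print(round(posterior('A', {'B': True, 'C': True}), 2))  # 0.55
print(round(posterior('D', {'B': True, 'C': True}), 2))  # 0.44
```

Note how, once Norman's lateness is entered, the train strike overtakes oversleeping as the more probable explanation, which is exactly the 'explaining away' effect described above.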