Having entered the probabilities, we can now use Bayesian probability to perform various types of analysis. For example, we might want to calculate the (unconditional) probability that Norman is late:

p(Norman late) = p(Norman late | train strike) * p(train strike) + p(Norman late | no train strike)*p(no train strike)

= (0.8 * 0.1) + (0.1 * 0.9) = 0.17

This is called the *marginal probability*.

Similarly, we can calculate the marginal probability that Martin is late to be 0.51.
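The two marginal probabilities above can be checked with a few lines of arithmetic. This is a minimal sketch; the probability values are taken from the conditional probability tables in the example, and the variable names are my own:

```python
# Prior probability of a train strike (from the example)
p_strike = 0.1

# Marginalise over 'train strike' using the conditional probability tables:
# p(late) = p(late | strike) * p(strike) + p(late | no strike) * p(no strike)
p_norman_late = 0.8 * p_strike + 0.1 * (1 - p_strike)
p_martin_late = 0.6 * p_strike + 0.5 * (1 - p_strike)

print(round(p_norman_late, 2))  # 0.17
print(round(p_martin_late, 2))  # 0.51
```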

However, the most important use of BBNs is in *revising* probabilities in the light of actual observations of events. Suppose, for example, that we *know* there is a train strike. In this case we can **enter the evidence** that 'train strike' = true. The conditional probability tables already tell us the revised probabilities for Norman being late (0.8) and Martin being late (0.6). Suppose, however, that we do not know if there is a train strike but do know that Norman is late. Then we can enter the evidence that 'Norman late' = true and use this observation to determine:

a) the (revised) probability that there is a train strike; and

b) the (revised) probability that Martin will be late.

To calculate a) we use Bayes' theorem:

p(train strike | Norman late) = p(Norman late | train strike) * p(train strike) / p(Norman late)

= (0.8 * 0.1) / 0.17 = 0.47

Thus, the observation that Norman is late significantly increases the probability that there is a train strike (up from 0.1 to 0.47). Moreover, we can use this revised probability to calculate b):
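This Bayes update can be verified numerically. A minimal sketch, using the marginal probability p(Norman late) = 0.17 computed earlier (variable names are my own):

```python
p_strike = 0.1                      # prior p(train strike)
p_norman_late_given_strike = 0.8    # from the conditional probability table
p_norman_late = 0.17                # marginal probability computed earlier

# Bayes' theorem: p(strike | Norman late)
#   = p(Norman late | strike) * p(strike) / p(Norman late)
p_strike_given_norman_late = p_norman_late_given_strike * p_strike / p_norman_late

print(round(p_strike_given_norman_late, 2))  # 0.47
```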

p(Martin late) = p(Martin late | train strike) * p(train strike) + p(Martin late | no train strike) * p(no train strike)

= (0.6 * 0.47) + (0.5 * 0.53) = 0.55

Thus, the observation that Norman is late has slightly increased the probability that Martin is late. When we enter evidence and use it to update the probabilities in this way we call it **propagation**.
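The whole propagation step can be sketched as a single function: enter the evidence 'Norman late' = true, revise p(train strike) by Bayes' theorem, then use the revised probability to update p(Martin late). All numbers come from the example; the function name and structure are my own:

```python
def propagate_norman_late(p_strike=0.1):
    """Enter the evidence 'Norman late' = true and return the revised
    probabilities p(train strike) and p(Martin late)."""
    # Conditional probability table entries from the example
    p_nl_s, p_nl_ns = 0.8, 0.1   # p(Norman late | strike / no strike)
    p_ml_s, p_ml_ns = 0.6, 0.5   # p(Martin late | strike / no strike)

    # Marginal probability that Norman is late (0.17)
    p_nl = p_nl_s * p_strike + p_nl_ns * (1 - p_strike)

    # Bayes' theorem: revised p(train strike | Norman late), about 0.47
    p_strike_rev = p_nl_s * p_strike / p_nl

    # Propagate the revised probability to Martin, about 0.55
    p_martin_rev = p_ml_s * p_strike_rev + p_ml_ns * (1 - p_strike_rev)
    return p_strike_rev, p_martin_rev

p_s, p_m = propagate_norman_late()
print(round(p_s, 2), round(p_m, 2))  # 0.47 0.55
```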

A detailed look at how BBNs transmit evidence for propagation (including the notions of *d-connectedness* and *separation*) is given separately.