Introduction to Probability, Chapter 1 notes
A probabilistic model is a mathematical description of an uncertain situation. Every probabilistic model involves an underlying process, called the experiment, that will produce exactly one out of several possible (mutually exclusive) outcomes. The set of *all* possible outcomes is called the sample space of the experiment and is denoted by $\Omega$. A subset of the sample space, that is, a collection of possible outcomes, is called an event.
The probability law assigns to any event A a nonnegative number P(A) (called the probability of A) that encodes our knowledge or belief about the collective "likelihood" of the elements of A. The probability law *must* satisfy the following probability axioms (a numeric check follows the list):
Nonnegativity: P(A) $\geq$ 0, for every event A.
Additivity: If A and B are two disjoint events, then the probability of their union satisfies $P(A \cup B) = P(A) + P(B)$. This generalizes to any finite or countable collection of disjoint events.
Normalization: P($\Omega$) = 1
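As a quick sanity check, here is a minimal sketch of a probability law on a small sample space (a fair six-sided die; the example and code are my own, not from the book), verifying all three axioms:

```python
from fractions import Fraction

# Sample space: one roll of a fair six-sided die, each outcome with probability 1/6.
omega = {1, 2, 3, 4, 5, 6}
p = {outcome: Fraction(1, 6) for outcome in omega}

def P(event):
    """Probability law: P(event) is the sum of the probabilities of its outcomes."""
    return sum(p[outcome] for outcome in event)

A = {1, 2}   # "roll at most 2"
B = {5, 6}   # "roll at least 5"; disjoint from A

assert P(A) >= 0                      # nonnegativity
assert P(A.union(B)) == P(A) + P(B)   # additivity for disjoint events
assert P(omega) == 1                  # normalization
```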
-- Conditional Probability --
The conditional probability of an event A, given an event B with P(B) > 0, is defined by P(A|B) = $\frac{P(A \cap B)}{P(B)}$ and specifies a new (conditional) probability law on the same sample space $\Omega$. All the probability axioms remain valid under this conditional probability law.
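A small numeric illustration (two fair dice; my own example, not from the book): conditioning on "the first die shows 6" changes the probability that the sum is 8.

```python
from fractions import Fraction
from itertools import product

# Two fair dice: 36 equally likely outcomes.
omega = set(product(range(1, 7), repeat=2))

def P(event):
    return Fraction(len(event), len(omega))

def cond(A, B):
    """P(A|B) = P(A ∩ B) / P(B), defined when P(B) > 0."""
    return P(A & B) / P(B)

A = {(d1, d2) for (d1, d2) in omega if d1 + d2 == 8}   # sum is 8
B = {(d1, d2) for (d1, d2) in omega if d1 == 6}        # first die shows 6

print(P(A))        # 5/36: five outcomes give a sum of 8
print(cond(A, B))  # 1/6: given a 6 on the first die, the second must show 2
```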
Multiplication Rule:
Assuming that all of the conditioning events have positive probability, we have
$P(\cap_{i = 1}^{n} A_i) = P(A_1)P(A_2|A_1)P(A_3|A_1 \cap A_2) \cdots P(A_n|\cap_{i = 1}^{n-1} A_i)$
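For instance (my own example, not from the book), the chance of drawing three hearts in a row from a standard 52-card deck, without replacement, follows directly from the chain of conditional probabilities:

```python
from fractions import Fraction
from math import comb

# A_i = "the i-th card drawn is a heart".
# Multiplication rule: P(A1 ∩ A2 ∩ A3) = P(A1) P(A2|A1) P(A3|A1 ∩ A2).
p_chain = Fraction(13, 52) * Fraction(12, 51) * Fraction(11, 50)
print(p_chain)   # 11/850

# Sanity check by direct counting over the C(52, 3) equally likely 3-card hands.
assert p_chain == Fraction(comb(13, 3), comb(52, 3))
```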
Total Probability Theorem:
Let $A_1, A_2, ..., A_n$ be disjoint events that form a partition of the sample space, and assume that $P(A_i) > 0$ for every $i$. Then, for any event B, we have
$P(B) = P(A_1)P(B|A_1) + P(A_2)P(B|A_2) + \cdots + P(A_n)P(B|A_n)$
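A minimal sketch with made-up numbers (two urns, one drawn ball; the probabilities here are my own assumptions):

```python
from fractions import Fraction

# Pick urn 1 or urn 2 with equal probability, then draw one ball.
# Partition events: A1 = "urn 1 chosen", A2 = "urn 2 chosen"; B = "ball is red".
P_A = [Fraction(1, 2), Fraction(1, 2)]            # P(A1), P(A2)
P_B_given_A = [Fraction(3, 10), Fraction(7, 10)]  # P(B|A1), P(B|A2)

# Total probability theorem: P(B) = sum_i P(Ai) P(B|Ai)
P_B = sum(pa * pb for pa, pb in zip(P_A, P_B_given_A))
print(P_B)   # 1/2
```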
Bayes' Rule:
Let $A_1, A_2, ..., A_n$ be disjoint events that form a partition of the sample space, and assume that $P(A_i) > 0$ for every $i$. Then, for any event B such that P(B) > 0, we have
$P(A_i|B) = \frac{P(A_i)P(B|A_i)}{\sum_{j = 1}^{n}P(A_j)P(B|A_j)}$
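The standard false-positive illustration, with numbers I've made up for the sketch: even a fairly accurate test yields a low posterior probability of disease when the prevalence is small.

```python
from fractions import Fraction

# Partition: A1 = "has disease" (1% prevalence), A2 = "does not".
# B = "test positive": 95% sensitivity, 10% false-positive rate (assumed numbers).
P_A = [Fraction(1, 100), Fraction(99, 100)]
P_B_given_A = [Fraction(95, 100), Fraction(10, 100)]

# Bayes' rule: P(A1|B) = P(A1) P(B|A1) / sum_j P(Aj) P(B|Aj)
numerator = P_A[0] * P_B_given_A[0]
denominator = sum(pa * pb for pa, pb in zip(P_A, P_B_given_A))
print(numerator / denominator)   # 19/217, roughly 0.088
```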
-- Independence --
Two events A and B are independent if and only if $P(A \cap B)$ = P(A)P(B). This generalizes to any collection of events: the events are independent if, for every subcollection, the probability of the intersection equals the product of the individual probabilities.
As a consequence, P(A|B) = P(A), provided P(B) > 0.
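A sketch (two fair dice; my own example) of a pair of events that are independent even though they look related:

```python
from fractions import Fraction
from itertools import product

omega = set(product(range(1, 7), repeat=2))   # two fair dice

def P(event):
    return Fraction(len(event), len(omega))

A = {(d1, d2) for (d1, d2) in omega if d1 == 6}       # first die is 6
B = {(d1, d2) for (d1, d2) in omega if d1 + d2 == 7}  # sum is 7

# Independence: P(A ∩ B) = P(A) P(B), i.e. 1/36 == 1/6 * 1/6.
assert P(A & B) == P(A) * P(B)
# Equivalently, P(A|B) = P(A).
assert P(A & B) / P(B) == P(A)
```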
Conditional Independence:
Two events A and B are said to be conditionally independent given another event C with P(C) > 0, if
$P(A \cap B|C) = P(A|C)P(B|C)$.
If, in addition, $P(B \cap C) > 0$, then conditional independence is equivalent to the condition
$P(A|B \cap C) = P(A|C)$
Note that independence does not imply conditional independence, and vice versa; the sketch below gives a counterexample in one direction.
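A minimal sketch of one direction (two fair coin tosses; a textbook-style counterexample, not taken from these notes): the tosses are independent, but not conditionally independent given that they agree.

```python
from fractions import Fraction
from itertools import product

omega = set(product("HT", repeat=2))   # two independent fair coin tosses

def P(event):
    return Fraction(len(event), len(omega))

A = {w for w in omega if w[0] == "H"}     # first toss is heads
B = {w for w in omega if w[1] == "H"}     # second toss is heads
C = {w for w in omega if w[0] == w[1]}    # both tosses agree

# Unconditionally independent: 1/4 == 1/2 * 1/2.
assert P(A & B) == P(A) * P(B)
# But not conditionally independent given C: P(A ∩ B|C) = 1/2, while
# P(A|C) P(B|C) = 1/2 * 1/2 = 1/4.
assert P(A & B & C) / P(C) != (P(A & C) / P(C)) * (P(B & C) / P(C))
```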