📊

Probability And Chance

13 concepts in Statistics

Probability gives statistics its language of uncertainty. Students learn how to describe chance using sample spaces, theoretical and experimental probability, compound events, and simulation. As the topic deepens, they encounter tree diagrams, conditional probability, independence, the addition and multiplication rules, expected value, and the law of large numbers. The goal is not just to compute answers, but to understand when a probability comes from logical counting, when it comes from repeated trials, and when a condition changes the β€œwhole” that a probability must be measured against. This topic supports decision-making, risk interpretation, and many of the models used later in statistics and science.

Suggested learning path: Start with basic probability, theoretical versus experimental models, and sample spaces, then move into tree diagrams and compound events before studying conditional probability, independence, and the main probability rules.

Basic Probability

Probability is the measure of how likely an event is to occur, expressed as a number between 0 (impossible) and 1 (certain). It is calculated as the ratio of favorable outcomes to total possible outcomes when all outcomes are equally likely.

Prerequisites:
relative frequency
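
The favorable-over-total ratio can be computed directly by counting. A minimal sketch in Python; the fair six-sided die and the "even number" event are illustrative choices, not part of the definition:

```python
# Probability of rolling an even number on a fair six-sided die.
outcomes = [1, 2, 3, 4, 5, 6]                      # all equally likely outcomes
favorable = [o for o in outcomes if o % 2 == 0]    # the event "roll is even"
p_even = len(favorable) / len(outcomes)            # ratio of favorable to total
print(p_even)  # 0.5
```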

Theoretical Probability

Theoretical probability is the expected probability of an event calculated by mathematical reasoning about equally likely outcomes, without conducting experiments. It equals the number of favorable outcomes divided by the total number of possible outcomes.

Prerequisites:
probability basic
relative frequency
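
Because theoretical probability comes from counting rather than measuring, exact fractions are a natural representation. A sketch using Python's `fractions` module; the standard 52-card deck is an illustrative setup:

```python
from fractions import Fraction

# Theoretical probability of drawing a red card from a standard 52-card deck,
# found purely by counting equally likely outcomes -- no experiment needed.
favorable = 26          # 26 red cards
total = 52              # 52 equally likely cards
p_red = Fraction(favorable, total)
print(p_red)  # 1/2
```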

Experimental Probability

Experimental probability is the probability of an event estimated from actual experimental data, calculated as the number of times the event occurred divided by the total number of trials. It approaches the theoretical probability as more trials are conducted.

Prerequisites:
probability basic
data collection
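
Experimental probability can be illustrated by simulating trials and dividing occurrences by trial count. A sketch assuming a fair coin modeled with Python's `random` module; the seed and trial count are arbitrary choices:

```python
import random

random.seed(42)                 # fixed seed so the run is reproducible
trials = 10_000
heads = sum(random.random() < 0.5 for _ in range(trials))  # count heads
p_hat = heads / trials          # experimental probability of heads
# p_hat will be close to, but rarely exactly, the theoretical value 0.5
```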

Sample Space

The sample space is the complete set of all possible outcomes of a probability experiment, listed without repetition. It is the foundation for every probability calculation because every event is a subset of the sample space, and (when outcomes are equally likely) an event's probability is the fraction of the sample space it occupies.

Prerequisites:
tally chart
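
For multi-part experiments, the sample space can be built by pairing up the outcomes of each part. A sketch using `itertools.product`; rolling two dice is an illustrative experiment:

```python
from itertools import product

# Sample space for rolling two six-sided dice: every ordered pair of faces.
sample_space = list(product(range(1, 7), repeat=2))
print(len(sample_space))  # 36
```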

Tree Diagram

A tree diagram is a branching diagram that shows all possible outcomes of a multi-step random process. Each branch represents one choice or event, and complete paths show combined outcomes.

Prerequisites:
stat sample space
probability basic
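
A tree diagram's complete root-to-leaf paths can be enumerated in code. A sketch where the two stages (a coin flip, then a three-color spinner) are illustrative choices:

```python
from itertools import product

# Stage 1 branches: coin flip; stage 2 branches: spinner color.
stage1 = ['H', 'T']
stage2 = ['red', 'blue', 'green']

# Each complete path through the tree is one combined outcome.
paths = list(product(stage1, stage2))
# 2 branches x 3 branches = 6 complete paths
```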

Compound Events

Compound events are probability events made up of two or more simple events combined using 'and' (both events occur) or 'or' (at least one occurs). For independent 'and' events, multiply probabilities; for 'or' events, add probabilities and subtract any overlap.

Prerequisites:
probability basic
stat sample space

Conditional Probability

Conditional probability is the probability that one event happens given that another event has already happened. It narrows the sample space to the cases where the given condition is true.

Prerequisites:
stat sample space
compound events
two way tables
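
Narrowing the sample space can be shown by dividing within the condition rather than within the whole space. A sketch reusing a single die roll; the events are illustrative:

```python
# One fair die roll. Given B = "roll is greater than 3", what is the
# probability of A = "roll is even"?
B = {4, 5, 6}                        # the condition: the new "whole"
A = {2, 4, 6}                        # the event of interest
p_A_given_B = len(A & B) / len(B)    # {4, 6} out of {4, 5, 6} -> 2/3
```

Note the denominator is the size of B, not the size of the full sample space: the condition has changed the "whole" the probability is measured against.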

Independent Events

Two events are independent if knowing that one event happened does not change the probability of the other event; equivalently, P(A and B) = P(A) × P(B).

Prerequisites:
conditional probability
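
Independence can be checked numerically by comparing P(A and B) with P(A) · P(B). A sketch on two dice; the events "first die shows 6" and "second die is even" are illustrative (and genuinely independent):

```python
from itertools import product

space = list(product(range(1, 7), repeat=2))       # two-dice sample space
A = [o for o in space if o[0] == 6]                # first die shows 6
B = [o for o in space if o[1] % 2 == 0]            # second die is even
both = [o for o in space if o[0] == 6 and o[1] % 2 == 0]

def p(event):
    return len(event) / len(space)

# Independent: P(A and B) equals P(A) * P(B).
independent = abs(p(both) - p(A) * p(B)) < 1e-9
```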

Addition Rule

The addition rule finds the probability that at least one of two events occurs. It adds the probabilities of the two events and then subtracts any overlap so the shared outcomes are not counted twice.

Prerequisites:
compound events
stat sample space
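
A classic application of the addition rule: drawing a king or a heart from a standard deck, where the king of hearts is the overlap that must not be counted twice. A sketch using exact fractions:

```python
from fractions import Fraction

p_king = Fraction(4, 52)
p_heart = Fraction(13, 52)
p_king_and_heart = Fraction(1, 52)   # the king of hearts sits in both events

# Add the two probabilities, then subtract the overlap.
p_king_or_heart = p_king + p_heart - p_king_and_heart
print(p_king_or_heart)  # 4/13
```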

Multiplication Rule

The multiplication rule finds the probability that two events both occur. It multiplies the probability of the first event by the conditional probability of the second event given that the first has happened.

Prerequisites:
conditional probability
tree diagram
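
Drawing without replacement is the standard setting where the second factor must be a conditional probability. A sketch computing the chance of drawing two aces in a row from a standard deck:

```python
from fractions import Fraction

p_first_ace = Fraction(4, 52)                   # 4 aces among 52 cards
p_second_ace_given_first = Fraction(3, 51)      # one ace and one card are gone

# Multiplication rule: P(both) = P(first) * P(second | first).
p_two_aces = p_first_ace * p_second_ace_given_first
print(p_two_aces)  # 1/221
```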

Statistical Simulation

Statistical simulation uses random number generation to model a real-world process and estimate probabilities or outcomes that are difficult to calculate theoretically.

Prerequisites:
probability basic
random sampling
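
A simulation sketch: estimating the probability that two dice sum to 7 by running many random trials and taking the relative frequency (here the theoretical answer, 6/36 ≈ 0.167, is known, which lets us check the estimate; the seed and trial count are arbitrary choices):

```python
import random

random.seed(1)
trials = 20_000
hits = sum(
    random.randint(1, 6) + random.randint(1, 6) == 7
    for _ in range(trials)
)
estimate = hits / trials   # should land near the theoretical 6/36 = 0.1667
```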

Law of Large Numbers

The Law of Large Numbers states that as the number of independent, identically distributed trials increases, the sample average converges to the theoretical expected value (population mean). In other words, larger samples produce more reliable estimates of the true probability or average.

Prerequisites:
probability basic
mean fair share
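
The convergence can be seen by comparing the sample average of a small run against a much larger one. A sketch with simulated fair-coin flips (true mean 0.5); the seed and sample sizes are arbitrary choices:

```python
import random

random.seed(0)

def mean_error(n):
    """Distance between the average of n fair-coin flips and the true mean 0.5."""
    flips = [random.random() < 0.5 for _ in range(n)]
    return abs(sum(flips) / n - 0.5)

small = mean_error(100)       # a short run can wander noticeably from 0.5
large = mean_error(100_000)   # a long run is almost certainly very close
```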

Expected Value

The expected value of a random variable is the long-run average outcome of a random process, calculated as the weighted sum of each possible outcome times its probability. It represents what you would earn or lose on average per trial if the process were repeated infinitely many times.

Prerequisites:
probability basic
weighted average
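
The weighted sum is a one-liner once outcomes and probabilities are paired up. A sketch using a hypothetical game (win $5 with probability 0.2, lose $1 with probability 0.8):

```python
# Hypothetical game: each (value, probability) pair is one possible outcome.
outcomes = [(5, 0.2), (-1, 0.8)]

# Expected value: weighted sum of each outcome times its probability.
expected = sum(value * prob for value, prob in outcomes)
# 5 * 0.2 + (-1) * 0.8 = 0.2 dollars gained per play, on average
```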
