Unraveling Uncertainty: A Deep Dive into Probability & Odds
Introduction: The Language of Likelihood
Welcome to Whizmath, where we transform complex mathematical concepts into engaging and understandable insights! Today, we embark on a fascinating journey into the world of "Probability & Odds"—two fundamental concepts that are not just confined to textbooks but are deeply embedded in our daily lives. From predicting weather patterns and understanding financial markets to strategizing in games of chance and making informed medical decisions, probability and odds provide us with a powerful framework for quantifying and navigating uncertainty.
In this extensive lesson, we will meticulously dissect the principles of probability and odds, explore their interconnections, and equip you with the tools to apply them effectively. Prepare to unlock the secrets of likelihood, predict future events with greater accuracy, and make smarter decisions in a world brimming with variables.
Chapter 1: Foundations of Probability – Quantifying Chance
Probability is the branch of mathematics that deals with the likelihood of an event occurring. It is a numerical measure between 0 and 1 (or 0% and 100%), where 0 indicates an impossible event and 1 indicates a certain event.
1.1 Key Terminology: Building Our Lexicon
- Experiment: A process that yields an outcome (e.g., flipping a coin, rolling a die, drawing a card from a deck).
- Outcome: A single possible result of an experiment (e.g., getting a "Heads" when flipping a coin, rolling a "3" on a die).
- Sample Space ($S$): The set of all possible outcomes of an experiment. It is typically denoted by $S$ or $\Omega$.
- Example: For a single coin flip, $S = \{Heads, Tails\}$.
- Example: For rolling a standard six-sided die, $S = \{1, 2, 3, 4, 5, 6\}$.
- Event ($E$): A subset of the sample space; a collection of one or more outcomes.
- Example: Getting an "even number" when rolling a die is an event, $E = \{2, 4, 6\}$.
- Example: Getting "at least one Head" when flipping two coins, $S = \{HH, HT, TH, TT\}$, $E = \{HH, HT, TH\}$.
- Simple Event: An event consisting of only one outcome.
- Compound Event: An event consisting of more than one outcome.
- Mutually Exclusive Events (Disjoint Events): Events that cannot occur at the same time. If event A occurs, event B cannot occur, and vice versa. Their intersection is empty ($A \cap B = \emptyset$).
- Example: When rolling a die, the event "getting an odd number" ($O = \{1, 3, 5\}$) and the event "getting an even number" ($E = \{2, 4, 6\}$) are mutually exclusive.
- Independent Events: Events where the occurrence of one does not affect the probability of the other occurring.
- Example: Flipping a coin twice. The outcome of the first flip does not influence the outcome of the second flip.
- Dependent Events: Events where the occurrence of one affects the probability of the other occurring.
- Example: Drawing two cards from a deck without replacement. The probability of drawing the second card depends on what the first card drawn was.
1.2 Classical (Theoretical) Probability: The Ideal Scenario
Classical probability, also known as theoretical probability, is used when all outcomes in the sample space are equally likely. It is calculated using the formula:
$P(E) = \frac{\text{Number of favorable outcomes}}{\text{Total number of possible outcomes in the sample space}} = \frac{|E|}{|S|}$
Where:
- $P(E)$ is the probability of event $E$ occurring.
- $|E|$ is the number of outcomes in event $E$.
- $|S|$ is the total number of outcomes in the sample space $S$.
Example 1.2.1: Rolling a Die
What is the probability of rolling a "4" on a standard six-sided die?
- Sample space $S = \{1, 2, 3, 4, 5, 6\}$, so $|S| = 6$.
- Event $E = \text{rolling a 4} = \{4\}$, so $|E| = 1$.
- $P(\text{rolling a 4}) = \frac{1}{6}$
Example 1.2.2: Drawing a Card
What is the probability of drawing a "King" from a standard 52-card deck?
- Total possible outcomes $|S| = 52$.
- Number of Kings in a deck $|E| = 4$.
- $P(\text{drawing a King}) = \frac{4}{52} = \frac{1}{13}$
1.3 Empirical (Experimental) Probability: Learning from Experience
Empirical probability, also known as experimental probability, is based on observations from experiments or real-world data. It is calculated by performing an experiment multiple times and observing the frequency of the event.
$P(E) = \frac{\text{Number of times event E occurred}}{\text{Total number of trials}}$
Example 1.3.1: Coin Flip Experiment
If you flip a coin 100 times and get "Heads" 53 times, the empirical probability of getting "Heads" is:
$P(\text{Heads}) = \frac{53}{100} = 0.53$
Important Note: As the number of trials in an empirical probability experiment increases, the empirical probability tends to approach the classical (theoretical) probability. This concept is formalized by the Law of Large Numbers.
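To see the Law of Large Numbers in action, here is a minimal Python sketch (standard library only; the seed and checkpoints are arbitrary choices for illustration):

```python
# Simulate coin flips and watch the empirical P(Heads) settle toward
# the theoretical value of 0.5 as the trial count grows.
import random

random.seed(42)  # fixed seed so the run is reproducible

flips = 0
heads = 0
for checkpoint in (10, 100, 1_000, 10_000, 100_000):
    while flips < checkpoint:
        heads += random.random() < 0.5  # a fair flip; True counts as 1
        flips += 1
    print(f"{flips:>7} flips: empirical P(Heads) = {heads / flips:.4f}")
```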
1.4 Subjective Probability: The Role of Belief
Subjective probability is based on personal judgment, experience, or intuition rather than formal calculation or empirical data. It's often used when there's insufficient objective data.
- Example: A doctor estimating the probability of a patient recovering from a rare disease.
- Example: A sports analyst predicting the probability of a team winning a championship based on their expertise.
Chapter 2: Rules of Probability – Navigating Complex Scenarios
2.1 The Addition Rule: "OR" Events
The Addition Rule is used when we want to find the probability of one event OR another event occurring.
2.1.1 For Mutually Exclusive Events:
If two events $A$ and $B$ are mutually exclusive, the probability of $A$ or $B$ occurring is the sum of their individual probabilities:
$P(A \text{ or } B) = P(A \cup B) = P(A) + P(B)$
Example 2.1.1:
What is the probability of rolling a "1" or a "6" on a single six-sided die?
- $P(\text{rolling a 1}) = \frac{1}{6}$
- $P(\text{rolling a 6}) = \frac{1}{6}$
- Since rolling a 1 and rolling a 6 are mutually exclusive:
- $P(\text{1 or 6}) = P(1) + P(6) = \frac{1}{6} + \frac{1}{6} = \frac{2}{6} = \frac{1}{3}$
2.1.2 For Non-Mutually Exclusive Events:
If two events $A$ and $B$ are not mutually exclusive (meaning they can occur at the same time, their intersection is not empty), we must subtract the probability of their intersection to avoid double-counting:
$P(A \text{ or } B) = P(A \cup B) = P(A) + P(B) - P(A \cap B)$
Where $P(A \cap B)$ is the probability of both $A$ and $B$ occurring.
Example 2.1.2:
What is the probability of drawing a "King" or a "Heart" from a standard 52-card deck?
- $P(\text{King}) = \frac{4}{52}$
- $P(\text{Heart}) = \frac{13}{52}$
- The "King of Hearts" is common to both events, so they are not mutually exclusive.
- $P(\text{King and Heart}) = P(\text{King of Hearts}) = \frac{1}{52}$
- $P(\text{King or Heart}) = P(\text{King}) + P(\text{Heart}) - P(\text{King and Heart})$
- $= \frac{4}{52} + \frac{13}{52} - \frac{1}{52} = \frac{17}{52} - \frac{1}{52} = \frac{16}{52} = \frac{4}{13}$
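As a sanity check, the Addition Rule for this card example can be verified by brute-force enumeration. The sketch below builds a 52-card deck with `itertools.product` and counts the union directly (a toy verification, not a general library):

```python
from fractions import Fraction
from itertools import product

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["Hearts", "Diamonds", "Clubs", "Spades"]
deck = list(product(ranks, suits))  # all 52 (rank, suit) pairs

kings = {card for card in deck if card[0] == "K"}
hearts = {card for card in deck if card[1] == "Hearts"}

# |Kings or Hearts| = 4 + 13 - 1 = 16, so P = 16/52 = 4/13
print(Fraction(len(kings | hearts), len(deck)))  # 4/13
```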
2.2 The Multiplication Rule: "AND" Events
The Multiplication Rule is used when we want to find the probability of two or more events all occurring together, whether simultaneously or in sequence.
2.2.1 For Independent Events:
If two events $A$ and $B$ are independent, the probability of both $A$ and $B$ occurring is the product of their individual probabilities:
$P(A \text{ and } B) = P(A \cap B) = P(A) \times P(B)$
Example 2.2.1:
What is the probability of flipping a "Head" on the first flip and a "Tail" on the second flip of a fair coin?
- $P(\text{Head}) = \frac{1}{2}$
- $P(\text{Tail}) = \frac{1}{2}$
- Since the flips are independent:
- $P(\text{Head and Tail}) = P(\text{Head}) \times P(\text{Tail}) = \frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$
2.2.2 For Dependent Events (Conditional Probability):
If two events $A$ and $B$ are dependent, the probability of both $A$ and $B$ occurring is the probability of $A$ multiplied by the conditional probability of $B$ given that $A$ has already occurred.
$P(A \text{ and } B) = P(A \cap B) = P(A) \times P(B|A)$
Where $P(B|A)$ is the conditional probability of event $B$ occurring given that event $A$ has already occurred.
Example 2.2.2: Drawing Cards Without Replacement
What is the probability of drawing two "Aces" in a row from a standard 52-card deck without replacement?
- Event A: Drawing an Ace first. $P(A) = \frac{4}{52}$
- Event B: Drawing another Ace second, given the first was an Ace and not replaced. Now there are 3 Aces left and 51 total cards. $P(B|A) = \frac{3}{51}$
- $P(\text{Ace and Ace}) = P(A) \times P(B|A) = \frac{4}{52} \times \frac{3}{51} = \frac{12}{2652} = \frac{1}{221}$
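Computing this with exact fractions avoids any rounding; here is a short sketch using Python's `fractions` module:

```python
from fractions import Fraction

p_first_ace = Fraction(4, 52)              # P(A)
p_second_given_first = Fraction(3, 51)     # P(B|A): one Ace and one card gone
print(p_first_ace * p_second_given_first)  # 1/221
```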
2.3 Conditional Probability: When Information Matters
Conditional probability is the probability of an event occurring given that another event has already occurred. It is denoted as $P(B|A)$, read as "the probability of B given A."
The formula for conditional probability is:
$P(B|A) = \frac{P(A \cap B)}{P(A)}$, where $P(A) > 0$.
This formula can be rearranged to derive the Multiplication Rule for dependent events: $P(A \cap B) = P(A) \times P(B|A)$.
Example 2.3.1: Medical Test
Suppose 10% of the population has a certain disease (D). A test for the disease is 90% accurate in both directions: it returns a positive result for 90% of people who have the disease and a negative result for 90% of people who do not.
Let:
- D = Has the disease
- ND = Does not have the disease
- PT = Positive test result
- NT = Negative test result
We are given:
- $P(D) = 0.10$
- $P(ND) = 0.90$
- $P(PT|D) = 0.90$ (True positive rate)
- $P(NT|ND) = 0.90$ (True negative rate)
From these, we can infer:
- $P(NT|D) = 1 - P(PT|D) = 1 - 0.90 = 0.10$ (False negative rate)
- $P(PT|ND) = 1 - P(NT|ND) = 1 - 0.90 = 0.10$ (False positive rate)
Now, let's find the probability that a person actually has the disease given they tested positive, i.e., $P(D|PT)$.
Using Bayes' Theorem (which is derived from conditional probability):
$P(D|PT) = \frac{P(PT|D) \times P(D)}{P(PT)}$
First, we need $P(PT)$. A positive test can occur in two ways:
- A person has the disease AND tests positive ($D \cap PT$)
- A person does NOT have the disease AND tests positive ($ND \cap PT$)
$P(PT) = P(PT|D)P(D) + P(PT|ND)P(ND)$
$P(PT) = (0.90 \times 0.10) + (0.10 \times 0.90)$
$P(PT) = 0.09 + 0.09 = 0.18$
Now, calculate $P(D|PT)$:
$P(D|PT) = \frac{0.90 \times 0.10}{0.18} = \frac{0.09}{0.18} = 0.5$
This surprising result shows that even with a 90% accurate test, if only 10% of the population has the disease, a positive test only means a 50% chance of actually having the disease. This highlights the importance of understanding conditional probability.
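The whole calculation fits in a few lines of Python. The sketch below hard-codes the example's assumed prevalence and accuracy figures; change them to see how strongly the base rate drives the answer:

```python
prevalence = 0.10    # P(D)
sensitivity = 0.90   # P(PT|D), true positive rate
specificity = 0.90   # P(NT|ND), true negative rate

# Total probability of a positive test (law of total probability)
p_pt = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Bayes' Theorem: P(D|PT)
p_d_given_pt = sensitivity * prevalence / p_pt
print(f"P(PT)   = {p_pt:.2f}")          # 0.18
print(f"P(D|PT) = {p_d_given_pt:.2f}")  # 0.50
```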
2.4 The Complement Rule: "NOT" Events
The probability of an event not occurring is 1 minus the probability that it does occur.
$P(E') = 1 - P(E)$
Where $E'$ (or $E^c$) denotes the complement of event $E$.
Example 2.4.1:
The probability of rain tomorrow is 0.3. What is the probability that it will not rain tomorrow?
$P(\text{No Rain}) = 1 - P(\text{Rain}) = 1 - 0.3 = 0.7$
Chapter 3: Counting Principles – The Foundation for Complex Probabilities
Many probability problems require us to count the number of possible outcomes or favorable outcomes. This is where counting principles become indispensable.
3.1 The Fundamental Counting Principle (Multiplication Principle)
If there are $n_1$ ways to do one thing, and $n_2$ ways to do another thing, and so on, then there are $n_1 \times n_2 \times \dots \times n_k$ ways to do all $k$ things.
Example 3.1.1: Outfit Combinations
You have 3 shirts, 2 pairs of pants, and 4 pairs of shoes. How many different outfits can you make?
Number of outfits = $3 \times 2 \times 4 = 24$ outfits.
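The same count can be checked by enumerating every outfit with `itertools.product`; the garment names below are placeholders:

```python
from itertools import product

shirts = ["shirt_1", "shirt_2", "shirt_3"]
pants = ["pants_1", "pants_2"]
shoes = ["shoes_1", "shoes_2", "shoes_3", "shoes_4"]

outfits = list(product(shirts, pants, shoes))
print(len(outfits))                           # 24, by enumeration
print(len(shirts) * len(pants) * len(shoes))  # 24, by the principle
```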
3.2 Permutations: Order Matters!
A permutation is an arrangement of objects in a specific order. The order of selection or arrangement is crucial.
3.2.1 Permutations of $n$ distinct objects taken $r$ at a time:
The number of permutations of $n$ distinct objects taken $r$ at a time is given by:
$P(n, r) = \frac{n!}{(n-r)!}$
Where $n!$ (n factorial) is the product of all positive integers up to $n$ ($n! = n \times (n-1) \times \dots \times 2 \times 1$), and $0! = 1$.
Example 3.2.1: Race Finishers
In a race with 8 runners, how many ways can the gold, silver, and bronze medals be awarded? (Here, order matters: finishing first is different from finishing second).
- $n = 8$ (total runners), $r = 3$ (medal positions)
- $P(8, 3) = \frac{8!}{(8-3)!} = \frac{8!}{5!} = \frac{8 \times 7 \times 6 \times 5!}{5!} = 8 \times 7 \times 6 = 336$ ways.
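Python's standard library exposes this directly as `math.perm` (Python 3.8+), so the medal count is a one-liner:

```python
import math

print(math.perm(8, 3))                         # 336
print(math.factorial(8) // math.factorial(5))  # 336, from n!/(n-r)!
```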
3.2.2 Permutations with Repetition (when objects are not distinct):
If there are $n$ objects where there are $n_1$ identical objects of type 1, $n_2$ identical objects of type 2, ..., $n_k$ identical objects of type k, the number of distinct permutations is:
$\frac{n!}{n_1! n_2! \dots n_k!}$
Example 3.2.2: Anagrams
How many distinct permutations can be made from the letters of the word "MISSISSIPPI"?
- $n = 11$ (total letters)
- I appears 4 times ($n_I = 4$)
- S appears 4 times ($n_S = 4$)
- P appears 2 times ($n_P = 2$)
- M appears 1 time ($n_M = 1$)
- Number of permutations = $\frac{11!}{4! 4! 2! 1!} = \frac{39,916,800}{(24)(24)(2)(1)} = \frac{39,916,800}{1152} = 34,650$
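A small helper makes this formula reusable for any word. This is a sketch built on `collections.Counter`; the function name is my own:

```python
from collections import Counter
from math import factorial

def distinct_arrangements(word: str) -> int:
    """n! divided by the factorial of each letter's multiplicity."""
    total = factorial(len(word))
    for count in Counter(word).values():
        total //= factorial(count)  # each division here is exact
    return total

print(distinct_arrangements("MISSISSIPPI"))  # 34650
```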
3.3 Combinations: Order Doesn't Matter!
A combination is a selection of objects where the order of selection does not matter.
3.3.1 Combinations of $n$ distinct objects taken $r$ at a time:
The number of combinations of $n$ distinct objects taken $r$ at a time is given by:
$C(n, r) = \binom{n}{r} = \frac{n!}{r!(n-r)!}$
Example 3.3.1: Forming a Committee
From a group of 10 people, how many different committees of 3 people can be formed? (The order in which people are chosen for the committee doesn't change the committee itself).
- $n = 10$ (total people), $r = 3$ (people on the committee)
- $C(10, 3) = \frac{10!}{3!(10-3)!} = \frac{10!}{3!7!} = \frac{10 \times 9 \times 8 \times 7!}{ (3 \times 2 \times 1) \times 7!} = \frac{10 \times 9 \times 8}{3 \times 2 \times 1} = \frac{720}{6} = 120$ committees.
Probability Applications of Counting Principles:
Many probability problems involve calculating combinations or permutations for both the numerator (favorable outcomes) and the denominator (total possible outcomes).
Example 3.3.2: Lottery Probability
In a lottery, you choose 6 numbers from 49. What is the probability of winning if the order of numbers doesn't matter?
- Total possible outcomes: $C(49, 6) = \frac{49!}{6!(49-6)!} = \frac{49!}{6!43!} = 13,983,816$
- Favorable outcomes (winning combination): 1
- $P(\text{Winning}) = \frac{1}{13,983,816}$
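Both counts above come straight from `math.comb` (Python 3.8+), and pairing it with `Fraction` keeps the lottery probability exact:

```python
from fractions import Fraction
from math import comb

print(comb(10, 3))               # 120 possible committees
print(Fraction(1, comb(49, 6)))  # 1/13983816, the chance of winning
```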
Chapter 4: Understanding Odds – A Different Perspective on Likelihood
While probability expresses the likelihood of an event as a fraction or decimal between 0 and 1, odds express the likelihood as a ratio of favorable outcomes to unfavorable outcomes, or vice versa. Odds are commonly used in gambling, sports betting, and risk assessment.
4.1 Odds in Favor
The odds in favor of an event are the ratio of the number of ways the event can occur to the number of ways the event cannot occur.
Odds in favor of $E = \text{Number of favorable outcomes} : \text{Number of unfavorable outcomes}$
Or, if $P(E)$ is the probability of the event and $P(E')$ is the probability of its complement:
Odds in favor of $E = P(E) : P(E')$
Example 4.1.1: Rolling a Die
What are the odds in favor of rolling a "4" on a standard six-sided die?
- Favorable outcomes (rolling a 4): 1
- Unfavorable outcomes (not rolling a 4): 5 (i.e., 1, 2, 3, 5, 6)
- Odds in favor of rolling a 4 = $1:5$
Example 4.1.2: Using Probabilities
If $P(\text{Rain}) = 0.3$, then $P(\text{No Rain}) = 0.7$.
Odds in favor of rain = $0.3 : 0.7$, which can be simplified to $3:7$ (by multiplying both sides by 10).
4.2 Odds Against
The odds against an event are the ratio of the number of ways the event cannot occur to the number of ways the event can occur. It is simply the odds in favor with the ratio reversed.
Odds against $E = \text{Number of unfavorable outcomes} : \text{Number of favorable outcomes}$
Or:
Odds against $E = P(E') : P(E)$
Example 4.2.1: Rolling a Die (Odds Against)
What are the odds against rolling a "4" on a standard six-sided die?
- Unfavorable outcomes: 5
- Favorable outcomes: 1
- Odds against rolling a 4 = $5:1$
4.3 Converting Between Probability and Odds
It's crucial to be able to convert between these two representations of likelihood.
4.3.1 From Probability to Odds:
If $P(E)$ is the probability of an event:
- Odds in favor of $E = P(E) : (1 - P(E))$
- Odds against $E = (1 - P(E)) : P(E)$
Example 4.3.1:
If the probability of winning a game is $\frac{2}{5}$:
- Odds in favor of winning = $\frac{2}{5} : (1 - \frac{2}{5}) = \frac{2}{5} : \frac{3}{5} = 2:3$
- Odds against winning = $\frac{3}{5} : \frac{2}{5} = 3:2$
4.3.2 From Odds to Probability:
If the odds in favor of an event $E$ are $a:b$:
$P(E) = \frac{a}{a+b}$
If the odds against an event $E$ are $a:b$:
$P(E) = \frac{b}{a+b}$
Example 4.3.2:
If the odds in favor of a horse winning a race are $3:8$:
$P(\text{Winning}) = \frac{3}{3+8} = \frac{3}{11}$
Example 4.3.3:
If the odds against a team winning a match are $5:2$:
$P(\text{Winning}) = \frac{2}{5+2} = \frac{2}{7}$
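These conversions are mechanical, so they are easy to wrap in helpers. A minimal sketch with `Fraction` (the function names are my own choices):

```python
from fractions import Fraction

def odds_in_favor(p: Fraction) -> str:
    """Express probability p as odds in favor, 'a:b'."""
    ratio = p / (1 - p)
    return f"{ratio.numerator}:{ratio.denominator}"

def probability_from_odds_in_favor(a: int, b: int) -> Fraction:
    """P(E) = a / (a + b) when the odds in favor are a:b."""
    return Fraction(a, a + b)

print(odds_in_favor(Fraction(2, 5)))         # 2:3
print(probability_from_odds_in_favor(3, 8))  # 3/11
```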
Chapter 5: Advanced Concepts and Applications
5.1 Expected Value: The Long-Run Average
Expected value ($E(X)$) is a concept closely related to probability, representing the average outcome of a random variable over a large number of trials. It's particularly useful in decision-making under uncertainty, especially in finance, gambling, and insurance.
For a discrete random variable $X$ with possible outcomes $x_1, x_2, \dots, x_n$ and corresponding probabilities $P(x_1), P(x_2), \dots, P(x_n)$:
$E(X) = \sum_{i=1}^{n} x_i P(x_i) = x_1 P(x_1) + x_2 P(x_2) + \dots + x_n P(x_n)$
Example 5.1.1: A Simple Game
You play a game where you roll a fair six-sided die. If you roll a 6, you win \$10. If you roll a 5, you win \$5. If you roll any other number (1, 2, 3, 4), you lose \$3. What is the expected value of playing this game?
- Outcome $x_1 = \$10$, $P(x_1) = P(\text{rolling a 6}) = \frac{1}{6}$
- Outcome $x_2 = \$5$, $P(x_2) = P(\text{rolling a 5}) = \frac{1}{6}$
- Outcome $x_3 = -\$3$, $P(x_3) = P(\text{rolling 1, 2, 3, or 4}) = \frac{4}{6}$
$E(X) = (\$10 \times \frac{1}{6}) + (\$5 \times \frac{1}{6}) + (-\$3 \times \frac{4}{6})$
$E(X) = \frac{10}{6} + \frac{5}{6} - \frac{12}{6}$
$E(X) = \frac{15 - 12}{6} = \frac{3}{6} = \$0.50$
This means that, on average, you can expect to win \$0.50 per game if you play many times. A positive expected value suggests a favorable game in the long run, while a negative expected value suggests an unfavorable one.
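The exact expected value, plus a simulation that confirms it, fits in a short Python sketch (payoffs as in the example; the seed and trial count are arbitrary):

```python
import random
from fractions import Fraction

payoffs = {1: -3, 2: -3, 3: -3, 4: -3, 5: 5, 6: 10}  # dollars per roll

exact = sum(Fraction(1, 6) * v for v in payoffs.values())
print(exact)  # 1/2, i.e. $0.50 per game

random.seed(0)
n = 100_000
total = sum(payoffs[random.randint(1, 6)] for _ in range(n))
print(total / n)  # hovers near 0.5 over many plays
```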
5.2 Introduction to Probability Distributions (Brief Overview)
Probability distributions describe how probabilities are distributed over the values of a random variable.
- Discrete Probability Distributions: For random variables that can only take on a countable number of values (e.g., number of heads in coin flips, number of defects).
- Binomial Distribution: Describes the number of successes in a fixed number of independent Bernoulli trials (trials with only two outcomes, success/failure).
- Poisson Distribution: Describes the number of events occurring in a fixed interval of time or space, given a constant average rate.
- Continuous Probability Distributions: For random variables that can take on any value within a given range (e.g., height, weight, time).
- Normal Distribution (Gaussian Distribution): The most common continuous distribution, characterized by its bell-shaped curve. Many natural phenomena follow a normal distribution.
- Uniform Distribution: All values within a given range have an equal probability.
Understanding these distributions allows for more sophisticated probabilistic modeling and inference.
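Even the standard library can generate draws from two of these distributions. A quick sketch: a Binomial(10, 0.5) count built from Bernoulli trials, and a Normal draw via `random.gauss`:

```python
import random

random.seed(1)

# Binomial: number of successes in 10 independent trials with p = 0.5
print(sum(random.random() < 0.5 for _ in range(10)))

# Normal: one sample with mean 0 and standard deviation 1
print(random.gauss(0, 1))
```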
Chapter 6: Real-World Applications of Probability & Odds
Probability and odds are not just academic exercises; they are powerful tools used across countless disciplines:
- Gambling and Gaming: The most obvious application. Casinos, lotteries, and sports betting all rely heavily on probability and odds calculations to set payouts and determine house advantage.
- Finance and Investing: Risk assessment, portfolio management, option pricing, and predicting market movements all involve probabilistic models.
- Insurance: Actuaries use probability to calculate premiums, assess risks, and determine payouts for various policies (life, health, auto, property).
- Medicine and Public Health: Clinical trials, disease prevalence, diagnostic test accuracy (as seen in our conditional probability example), and epidemic modeling all use probability.
- Science and Engineering: Quality control, reliability engineering, statistical mechanics, quantum mechanics, and experimental design.
- Artificial Intelligence and Machine Learning: Bayesian networks, probabilistic graphical models, and various classification algorithms are built on probabilistic foundations.
- Weather Forecasting: Predicting the likelihood of rain, snow, or extreme weather events.
- Sports Analytics: Calculating win probabilities, player performance metrics, and strategic decision-making.
- Everyday Decision Making: From deciding whether to carry an umbrella to choosing a route with less traffic, we implicitly use probabilistic thinking.
Chapter 7: Common Misconceptions and Pitfalls
Despite their widespread use, probability and odds are often misunderstood. Be aware of these common pitfalls:
- The Gambler's Fallacy: The mistaken belief that past events influence future independent events. For example, after a series of coin flips landing on "Heads," believing "Tails" is "due." Each flip is independent: $P(\text{Heads}) = 0.5$ every time (see the simulation after this list).
- Ignoring Sample Size: Small sample sizes can lead to highly variable empirical probabilities that do not reflect the true theoretical probability. The Law of Large Numbers describes convergence as the number of trials grows large; it makes no guarantee for small samples.
- Misinterpreting Conditional Probability: As seen in the medical test example, $P(A|B)$ is not the same as $P(B|A)$.
- Confusion Between Probability and Odds: While related, they are distinct measures. A 50% probability is $1:1$ odds, not $1:2$ or $2:1$.
- Base Rate Fallacy: Neglecting the overall prevalence (base rate) of an event when interpreting conditional probabilities, as demonstrated in the medical test example.
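As promised above, a simulation makes the Gambler's Fallacy concrete: conditioning on a streak of five Heads does not change the next flip (standard library only; the seed and trial count are arbitrary):

```python
import random

random.seed(7)
after_streak = []
streak = 0  # current run of consecutive Heads
for _ in range(1_000_000):
    flip = random.random() < 0.5  # True = Heads
    if streak >= 5:               # this flip follows >= 5 Heads in a row
        after_streak.append(flip)
    streak = streak + 1 if flip else 0

print(f"P(Heads | 5 Heads in a row) ≈ {sum(after_streak) / len(after_streak):.3f}")
# prints a value very close to 0.500
```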
Conclusion: Embracing the World of Uncertainty with Confidence
You've now journeyed through the intricate landscape of "Probability & Odds," from their foundational definitions to their complex rules and powerful real-world applications. You've learned to quantify chance, understand the nuances of dependent and independent events, master counting techniques, and distinguish between probability and odds.
The ability to think probabilistically is an invaluable skill in an unpredictable world. It empowers you to make more informed decisions, critically evaluate information, and navigate uncertainty with a clearer understanding of potential outcomes. As you continue your mathematical exploration, remember that probability is not about eliminating uncertainty, but about understanding and managing it.
Keep practicing, keep exploring, and keep unraveling the fascinating language of likelihood with Whizmath!
Practice Problems (with Solutions)
Problem 1: Basic Probability
A bag contains 5 red marbles, 3 blue marbles, and 2 green marbles. If you pick one marble at random:
a) What is the probability of picking a red marble?
b) What is the probability of picking a blue or a green marble?
Solution 1:
Total marbles = $5 + 3 + 2 = 10$
a) $P(\text{Red}) = \frac{\text{Number of red marbles}}{\text{Total marbles}} = \frac{5}{10} = \frac{1}{2}$
b) $P(\text{Blue or Green}) = P(\text{Blue}) + P(\text{Green})$ (mutually exclusive events)
$P(\text{Blue}) = \frac{3}{10}$
$P(\text{Green}) = \frac{2}{10}$
$P(\text{Blue or Green}) = \frac{3}{10} + \frac{2}{10} = \frac{5}{10} = \frac{1}{2}$
Problem 2: Dependent Events
You draw two cards from a standard 52-card deck without replacement. What is the probability that both cards are Queens?
Solution 2:
$P(\text{1st card is Queen}) = \frac{4}{52}$
$P(\text{2nd card is Queen | 1st card was Queen}) = \frac{3}{51}$ (3 Queens left, 51 cards left)
$P(\text{Both Queens}) = P(\text{1st Queen}) \times P(\text{2nd Queen | 1st Queen})$
$= \frac{4}{52} \times \frac{3}{51} = \frac{12}{2652} = \frac{1}{221}$
Problem 3: Combinations
A pizza place offers 8 different toppings. You want to choose 3 toppings for your pizza. How many different combinations of toppings can you choose?
Solution 3:
This is a combination problem because the order of choosing toppings doesn't matter.
$n = 8$ (total toppings), $r = 3$ (toppings to choose)
$C(8, 3) = \frac{8!}{3!(8-3)!} = \frac{8!}{3!5!} = \frac{8 \times 7 \times 6}{3 \times 2 \times 1} = \frac{336}{6} = 56$ combinations.
Problem 4: Odds Conversion
The probability of a new product succeeding in the market is estimated to be 0.6.
a) What are the odds in favor of the product succeeding?
b) What are the odds against the product succeeding?
Solution 4:
$P(\text{Success}) = 0.6$
$P(\text{Failure}) = 1 - 0.6 = 0.4$
a) Odds in favor = $P(\text{Success}) : P(\text{Failure}) = 0.6 : 0.4 = 6:4 = 3:2$
b) Odds against = $P(\text{Failure}) : P(\text{Success}) = 0.4 : 0.6 = 4:6 = 2:3$
Problem 5: Expected Value
You buy a raffle ticket for \$2. There are 500 tickets sold. One ticket wins a \$200 prize, and two tickets win a \$50 prize. What is your expected net gain or loss from buying one ticket?
Solution 5:
Cost of ticket = -\$2 (loss)
Possible outcomes and their probabilities:
- Win \$200: $P(\text{Win \$200}) = \frac{1}{500}$
- Win \$50: $P(\text{Win \$50}) = \frac{2}{500}$
- Win \$0 (lose \$2): $P(\text{Win \$0}) = \frac{497}{500}$
Expected winnings (before subtracting ticket cost):
$E(\text{Winnings}) = (\$200 \times \frac{1}{500}) + (\$50 \times \frac{2}{500}) + (\$0 \times \frac{497}{500})$
$E(\text{Winnings}) = \frac{200}{500} + \frac{100}{500} + 0 = \frac{300}{500} = \$0.60$
Expected net gain/loss = Expected winnings - Cost of ticket
Expected net gain/loss = $\$0.60 - \$2.00 = -\$1.40$
On average, you can expect to lose \$1.40 each time you buy a ticket.
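For readers who like to double-check, every solution above can be verified in a few lines of exact arithmetic (a sketch using `Fraction` and `math.comb`):

```python
from fractions import Fraction
from math import comb

# Problem 1: marbles
print(Fraction(5, 10), Fraction(3 + 2, 10))  # 1/2 1/2

# Problem 2: two Queens without replacement
print(Fraction(4, 52) * Fraction(3, 51))     # 1/221

# Problem 3: pizza toppings
print(comb(8, 3))                            # 56

# Problem 4: odds from P(Success) = 0.6
ratio = Fraction(6, 10) / Fraction(4, 10)
print(f"{ratio.numerator}:{ratio.denominator} in favor")  # 3:2 in favor

# Problem 5: raffle expected net gain
print(Fraction(200, 500) + Fraction(100, 500) - 2)  # -7/5, i.e. -$1.40
```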