
Standard Probability Calculator

Calculate basic probability, conditional probability, Bayes' theorem, binomial distribution, combinations & permutations, expected value, and the addition & multiplication rules — all with full step-by-step working and real-world examples.


Probability Calculator — 8 Types

Select a calculation type, fill in the values, and get the answer with complete step-by-step working

🎲 Basic P(A) — Favourable ÷ Total
🔄 Complement — P(not A)
Addition Rule — P(A or B)
✖️ Multiplication — P(A and B)
🔗 Conditional — P(A|B)
🧠 Bayes' Theorem — Updated belief
📊 Binomial Dist. — nCk × pᵏ(1−p)ⁿ⁻ᵏ
🔢 nCr / nPr — Combinations & Permutations
🎲 Basic Probability — P(A) = Favourable Outcomes ÷ Total Outcomes

For mutually exclusive events, P(A∩B) = 0. For independent events, P(A∩B) = P(A) × P(B).
P(A|B) = P(A∩B) / P(B) — "What is the probability of A given that B has happened?"
Bayes: P(A|B) = [P(B|A) × P(A)] / [P(B|A)×P(A) + P(B|¬A)×P(¬A)]

    What Is Probability? A Complete Guide

    From the basic definition to the rules that govern chance, uncertainty, and statistics

    The Mathematics of Uncertainty

    Probability is a numerical measure of how likely an event is to occur, expressed as a number between 0 (impossible) and 1 (certain). A probability of 0.5 means the event is equally likely to happen or not happen — like a fair coin flip. A probability of 0.9 means the event is very likely. Written as a percentage, 0.9 = 90%.

    The formal study of probability began in the 17th century when mathematicians Blaise Pascal and Pierre de Fermat corresponded about gambling problems. Today, probability underpins statistics, machine learning, finance, insurance, medical research, physics, and everyday decisions. Every weather forecast, every clinical trial result, every spam filter, and every credit score relies on probability.

    Probability scale: 0 = Impossible · 0.1 = Very Unlikely · 0.25 = Unlikely · 0.5 = Even Chance · 0.75 = Likely · 0.9 = Very Likely · 1 = Certain
    Three Probability Axioms (Kolmogorov, 1933):
    1. P(A) ≥ 0 for any event A — probabilities are non-negative.
    2. P(S) = 1 — the probability of the entire sample space S is 1.
    3. For mutually exclusive events A and B: P(A ∪ B) = P(A) + P(B).
    All of probability theory follows logically from these three axioms.
    🏥
    Medical Diagnosis

    Doctors use Bayes' theorem to calculate the actual probability that a positive test result means a disease is present — accounting for false positive rates and disease prevalence.

    🌦️
    Weather Forecasting

    "70% chance of rain" is a probability. Meteorologists use probability models trained on decades of atmospheric data to quantify uncertainty in weather predictions.

    📈
    Finance & Insurance

    Actuaries use probability distributions to price insurance premiums. Portfolio managers use probability to model asset returns and calculate Value at Risk (VaR).

    🤖
    Machine Learning

    Probabilistic classifiers like Naïve Bayes, logistic regression, and neural network softmax outputs all return probabilities. Language models predict the most probable next word.

    🎰
    Games & Gambling

    Casinos design games using precisely calculated probabilities to ensure a house edge. Understanding probability is the foundation of any winning poker or blackjack strategy.

    ⚛️
    Quantum Physics

    In quantum mechanics, probabilities are not just useful approximations — they are fundamental. A particle genuinely has no definite position until it is measured; before measurement, there is only a probability wave.

    Types of Probability — In Depth

    Understanding theoretical, experimental, and subjective probability with examples

    Theoretical (Classical) Probability applies when all outcomes are equally likely. P(A) = number of favourable outcomes ÷ total possible outcomes. Rolling a fair die: P(4) = 1/6 ≈ 0.1667. Drawing an ace from a standard deck: P(ace) = 4/52 = 1/13 ≈ 0.077.

    Experimental (Empirical) Probability is based on actual observed data. If you flipped a coin 1,000 times and got 487 heads, the empirical probability of heads = 487/1000 = 0.487. As the number of trials grows, empirical probability converges to theoretical probability — this is the Law of Large Numbers.
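    The Law of Large Numbers is easy to watch in a simulation. The JavaScript sketch below is illustrative, not the calculator's own code; it uses a small seeded generator (an assumption of this example, since `Math.random` can't be seeded) so the run is repeatable:

    ```javascript
    // Tiny linear congruential generator so results are reproducible.
    function makeRng(seed) {
      let state = seed >>> 0;
      return function () {
        state = (1664525 * state + 1013904223) >>> 0;
        return state / 4294967296; // uniform in [0, 1)
      };
    }

    // Empirical probability of heads after a given number of fair flips.
    function empiricalHeads(flips, seed = 42) {
      const rand = makeRng(seed);
      let heads = 0;
      for (let i = 0; i < flips; i++) {
        if (rand() < 0.5) heads++;
      }
      return heads / flips;
    }

    // As trials grow, the empirical value converges toward the
    // theoretical probability of 0.5.
    for (const n of [10, 100, 10000, 1000000]) {
      console.log(`${n} flips: empirical P(heads) = ${empiricalHeads(n)}`);
    }
    ```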

    Subjective Probability reflects personal belief or expert judgment, not a repeatable experiment. "I believe there's a 70% chance my team wins tomorrow." A doctor saying "I give this patient a 30% chance of recovery." These are still useful probabilities — they can be refined using Bayes' theorem as new evidence emerges.

    Common Probability Mistake — The Gambler's Fallacy: After a coin lands heads 10 times in a row, many people feel tails is "overdue." This is wrong. Each flip is independent. Past outcomes do not influence future ones for independent events. P(tails on flip 11) = 0.5, always.

    Complete Probability Formula Reference

    All 8 probability types covered in this calculator — formulas, descriptions, and examples

    Type | Formula | When to Use | Example
    Basic P(A) | P(A) = f / n | All outcomes equally likely | P(head) = 1/2 = 0.5
    Complement | P(A') = 1 − P(A) | Probability of "not A" | P(not 6) = 1 − 1/6 = 5/6
    Addition Rule | P(A∪B) = P(A) + P(B) − P(A∩B) | A or B (or both) | P(heart or ace) = 13/52 + 4/52 − 1/52
    Multiplication (Indep.) | P(A∩B) = P(A) × P(B) | Both A and B, independent | P(HH) = 0.5 × 0.5 = 0.25
    Multiplication (Dep.) | P(A∩B) = P(A) × P(B|A) | Both A and B, dependent | Drawing 2 aces without replacement
    Conditional P(A|B) | P(A|B) = P(A∩B) / P(B) | Probability of A given B occurred | P(ace | face-up card is red) = ?
    Bayes' Theorem | P(A|B) = P(B|A) × P(A) / P(B) | Update prior belief with evidence | Positive test → actually sick?
    Binomial P(X=k) | C(n,k) × pᵏ × (1−p)ⁿ⁻ᵏ | k successes in n trials | 3 heads in 10 coin flips
    Combinations nCr | n! / (r! × (n−r)!) | Choose r from n, order doesn't matter | C(52,5) = 2,598,960 poker hands
    Permutations nPr | n! / (n−r)! | Arrange r from n, order matters | P(10,3) = 720 ways
    Key Distinction — Independent vs Dependent Events: Two events A and B are independent if P(A|B) = P(A) — knowing B happened doesn't change the probability of A. Examples: coin flips, dice rolls, card draws with replacement. Dependent events: drawing cards without replacement, where each draw changes the sample space for the next.
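    The nCr and nPr rows of the table are easy to compute without ever forming the huge factorials in the formulas. A minimal sketch (the function names here are illustrative, not the calculator's internals):

    ```javascript
    // Permutations P(n, r): multiply r descending terms starting at n.
    function permutations(n, r) {
      let result = 1;
      for (let i = 0; i < r; i++) result *= n - i;
      return result;
    }

    // Combinations C(n, r): multiply and divide as we go, so every
    // intermediate value stays an exact integer (it equals C(n-r+i, i)).
    function combinations(n, r) {
      let result = 1;
      for (let i = 1; i <= r; i++) result = (result * (n - r + i)) / i;
      return result;
    }

    console.log(combinations(52, 5)); // 2598960 five-card poker hands
    console.log(permutations(10, 3)); // 720 ordered arrangements
    console.log(combinations(10, 3)); // 120 committees
    ```

    Computing `52! / (47! × 5!)` directly would overflow floating point; the running-product form avoids that entirely.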

    Bayes' Theorem — The Most Powerful Rule in Probability

    How to update your beliefs with evidence — the foundation of rational reasoning

    Updating Beliefs with Evidence

    Bayes' theorem, published posthumously in 1763 by Reverend Thomas Bayes, describes how to rationally update a probability estimate when new evidence arrives. The formula is:

    P(A|B) = [P(B|A) × P(A)] / P(B)
    Where: P(A) = prior · P(B|A) = likelihood · P(A|B) = posterior

    Classic medical example: A disease affects 1% of the population. A test for it has 90% sensitivity (P(positive|disease) = 0.90) and 5% false positive rate (P(positive|no disease) = 0.05). If you test positive, what is the actual probability you have the disease?

    Most people say "90%" — they focus on the test's sensitivity and ignore the base rate. Using Bayes: P(disease|positive) = (0.90 × 0.01) / [(0.90×0.01) + (0.05×0.99)] = 0.009 / 0.0585 ≈ 15.4%. Despite the test's 90% sensitivity, a positive result means only a ~15% chance of actually having the disease. The intuitive "90%" answer is the base rate fallacy.

    Bayes' theorem is used in: spam filters (is this email spam given these words?), self-driving cars (is that object a pedestrian?), medical diagnosis, A/B testing, submarine search, DNA evidence in courts, and large language models.
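    The medical example above can be reproduced in a few lines. A sketch (the function name is mine, not the calculator's API), using the expanded denominator for a binary hypothesis:

    ```javascript
    // Posterior via Bayes' theorem for a yes/no hypothesis:
    // P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|¬A)P(¬A)]
    function bayesPosterior(prior, sensitivity, falsePositiveRate) {
      const truePositive = sensitivity * prior;            // P(B|A)P(A)
      const falsePositive = falsePositiveRate * (1 - prior); // P(B|¬A)P(¬A)
      return truePositive / (truePositive + falsePositive);
    }

    // 1% prevalence, 90% sensitivity, 5% false positive rate:
    const p = bayesPosterior(0.01, 0.90, 0.05);
    console.log((p * 100).toFixed(1) + "%"); // prints "15.4%"
    ```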

    Binomial Distribution — Complete Guide

    When to use it, the formula, mean & variance, and worked examples

    The binomial distribution models the number of successes in a fixed number of independent trials, where each trial has the same probability p of success. Four conditions must hold: (1) fixed number of trials n, (2) each trial is independent, (3) each trial is success/failure, (4) probability p is constant.

    Examples where binomial applies: number of heads in 10 coin flips, number of defective items in a batch of 50, number of patients who recover out of 30 given a treatment, number of free throws made in 20 attempts.

    P(X=k) = C(n,k) × pᵏ × (1−p)ⁿ⁻ᵏ
    Mean μ = np · Variance σ² = np(1−p) · Std Dev σ = √(np(1−p))

    Worked example: A fair coin is flipped 10 times. What is P(exactly 3 heads)?
    n=10, k=3, p=0.5. P(X=3) = C(10,3) × 0.5³ × 0.5⁷ = 120 × 0.125 × 0.0078125 = 0.1172 (11.72%)
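    The same worked example as a short JavaScript sketch (function names are illustrative, not the calculator's internals):

    ```javascript
    // C(n, k) via a running product that stays an exact integer.
    function choose(n, k) {
      let result = 1;
      for (let i = 1; i <= k; i++) result = (result * (n - k + i)) / i;
      return result;
    }

    // Binomial probability P(X = k) = C(n, k) · p^k · (1−p)^(n−k)
    function binomialPmf(n, k, p) {
      return choose(n, k) * Math.pow(p, k) * Math.pow(1 - p, n - k);
    }

    const n = 10, p = 0.5;
    console.log(binomialPmf(n, 3, p));  // 0.1171875, about 11.72%
    console.log(n * p, n * p * (1 - p)); // mean 5, variance 2.5
    ```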

    Frequently Asked Questions About Probability

    Answers to the most common probability questions, from basics to Bayes

    What is the basic probability formula?
    Basic probability is P(A) = (Number of favourable outcomes) ÷ (Total number of equally likely outcomes). The result is always between 0 (event cannot happen) and 1 (event always happens). For example, the probability of rolling a 3 on a fair six-sided die is 1/6 ≈ 0.1667, because there is 1 favourable outcome (rolling a 3) out of 6 total outcomes.
    What is the difference between independent and dependent events?
    Independent events do not influence each other — knowing one occurred gives no information about the other. Examples: coin flips, dice rolls, spinning a roulette wheel. For independent events: P(A and B) = P(A) × P(B). Dependent events influence each other. Example: drawing two cards from a deck without replacement. After the first draw, the deck has changed, so the probabilities for the second draw are different. For dependent events: P(A and B) = P(A) × P(B|A).
    What is conditional probability and when do you use it?
    Conditional probability P(A|B) is the probability of event A given that B has already occurred. Formula: P(A|B) = P(A∩B) / P(B). You use it whenever the sample space is narrowed by given information. Example: In a class of 30 students, 18 passed maths and 12 passed both maths and science. If a student passed maths, what is the probability they also passed science? P(science|maths) = 12/18 = 0.667.
    What is Bayes' theorem and why is it important?
    Bayes' theorem lets you update the probability of a hypothesis when new evidence arrives: P(A|B) = [P(B|A) × P(A)] / P(B). It's fundamentally important because it describes how to reason rationally under uncertainty. Without Bayes, you might mistake a high test sensitivity for a high probability that a positive test means you have a disease (ignoring the base rate). Bayes corrects this by incorporating the prior probability P(A) — how common the condition is before any test.
    When should I use the binomial distribution?
    Use the binomial distribution when: (1) there are exactly n fixed trials, (2) each trial is independent, (3) each trial has exactly two outcomes (success/failure), and (4) the probability of success p is constant across all trials. Examples: flipping a coin n times, testing n components where each independently passes/fails with probability p, and asking n voters who independently support candidate A with probability p.
    What is the difference between combinations (nCr) and permutations (nPr)?
    Combinations count the number of ways to choose r items from n when order does not matter: nCr = n! / (r! × (n−r)!). Permutations count the arrangements when order does matter: nPr = n! / (n−r)!. Memory aid: a combination lock should really be called a permutation lock — the order of digits matters! Example: choosing 3 people from 10 for a committee (order doesn't matter) = C(10,3) = 120. Arranging 3 people in 3 seats (order matters) = P(10,3) = 720.
    What is the complement rule and why is it useful?
    The complement rule states P(A') = 1 − P(A), where A' (read "A complement" or "not A") is the event that A does not happen. It's useful because it's often far easier to calculate "at least one" or "not all" probabilities via the complement than directly. Example: P(at least one head in 5 flips) is tedious to calculate directly (sum 5 terms). But P(no heads) = (0.5)⁵ = 0.03125, so P(at least one head) = 1 − 0.03125 = 0.96875.
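    The "at least one" shortcut generalises to any per-trial probability p and number of trials n. A one-function sketch (the helper name is mine):

    ```javascript
    // P(at least one success in n independent trials)
    // = 1 − P(no successes) = 1 − (1 − p)^n
    function atLeastOne(p, n) {
      return 1 - Math.pow(1 - p, n);
    }

    console.log(atLeastOne(0.5, 5)); // 0.96875, matching the 5-flip example
    ```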
    What is the addition rule and when does it simplify?
    The general addition rule is P(A∪B) = P(A) + P(B) − P(A∩B). The minus term removes double-counting of outcomes in both A and B. When events are mutually exclusive (cannot both occur simultaneously, like rolling a 2 and a 5 on one die), P(A∩B) = 0, so the formula simplifies to P(A∪B) = P(A) + P(B). Always check for mutual exclusivity before simplifying — many errors come from incorrectly assuming events are mutually exclusive.
    Is my data kept private?
    Yes, completely. All calculations run entirely in your browser using JavaScript. No values you enter are ever sent to a server, stored in a database, or shared with any third party. The page works offline once loaded. Refreshing the page clears all inputs.