NBA Analytics · Bayesian Statistics · 16 min read · March 2026

When 20% From Three Doesn’t Mean What You Think

A team shoots 4-for-20 from three and trails by 10 after three quarters. Every instinct says they’re cold. But the math says something different—and sharps already know it.

By the Metrics

  • 78% — share of 3PT% variance explained by skill at the season level
  • 11pp — standard deviation of single-game 3PT% (20 shots at a 40% true rate)
  • 7% — NBA comeback rate from 20+ point deficits

Problem: Teams shoot far below their average all the time—but how much should you trust a single-game 3PT%? Most bettors overreact to small samples.

Approach: Bayesian models plus shot quality analytics to separate luck from real signal. Beta-Binomial shrinkage meets expected field goal percentage.

Outcome: A framework for knowing when cold shooting is noise vs. genuine breakdown—and how to use that edge in live betting markets.

It’s the third quarter. Your team is down 10. They’re shooting 20% from three—4-for-20—against a season average of 40%. The live line has shifted hard. The in-game bettor in your head is screaming “they’re ice cold, stay away.”

But here’s the thing: shooting 20% on 20 attempts when your true rate is 40% is not even statistically unusual. It’s well within two standard deviations of expected. The math behind this is simple, powerful, and almost universally ignored in live betting.

1. The Setup

Let’s make the scenario concrete. Imagine an NBA team with these season numbers:

  • Season 3PT%: 40% (roughly league-leading territory)
  • Tonight so far: 4-for-20 from three (20%)
  • Score: Down 10 entering the 4th quarter

The question every live bettor faces: is this team genuinely cold, or are they about to regress to the mean? Should you fade them or back them?

To answer this properly, you need to understand three things: why 3-point shooting is the noisiest stat in basketball, how Bayesian models handle small samples, and what shot quality data reveals about whether the misses are real.

2. Why 3PT% Is the Noisiest Stat in Basketball

Every three-point attempt is a coin flip with loaded odds. If a team’s true shooting rate is 40%, each shot is a Bernoulli trial with p = 0.4. The variance of a single shot is p(1−p) = 0.24. Over n shots, the standard deviation of the shooting percentage is:

SD = sqrt(p * (1-p) / n)
   = sqrt(0.4 * 0.6 / 20)
   = sqrt(0.012)
   = 0.1095
   ≈ 11 percentage points

Read that again: 11 percentage points of standard deviation on 20 three-point attempts. That means a 40% shooting team will regularly look like a 29% team or a 51% team in any given game—and both outcomes are perfectly normal.

Shooting 20% (4-for-20) is about 1.8 standard deviations below the mean. Unusual? Slightly. Shocking? Not at all. Under the exact binomial, a 40% team goes 4-for-20 or worse about 5% of the time: roughly once every 20 games, or about four times in an 82-game season.

The fundamental problem: 20 three-point attempts in a game is not enough data to tell you anything reliable about a team’s shooting ability. The signal-to-noise ratio is terrible. Over a full season (~2,500 team 3PA), the picture stabilizes. In a single game, it’s mostly noise.
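The arithmetic above is easy to verify with nothing but the standard library. A minimal sketch, using the 40% true rate and 20 attempts from the scenario:

```python
import math

p, n = 0.40, 20  # true 3PT rate, attempts in the game

# Standard deviation of the observed shooting percentage over n attempts
sd = math.sqrt(p * (1 - p) / n)

# Exact binomial probability of 4 makes or fewer on 20 attempts
p_tail = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(5))

print(f"SD of single-game 3PT%: {sd:.1%}")   # ~11.0%
print(f"P(4-for-20 or worse):   {p_tail:.1%}")  # ~5.1%
```

The exact tail probability (~5%) is a bit larger than the normal approximation suggests, because the binomial is skewed at small n—another reminder not to lean on rules of thumb with 20 shots.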

This is why 3PT% swings explain more playoff outcomes than almost any other factor. It’s not that teams get better or worse—it’s that the variance is enormous and the sample sizes are tiny.

3. The Bayesian Answer: How Much to Trust a Small Sample

This is where it gets useful. Instead of asking “what is this team’s 3PT% tonight?” (answer: 20%, which is meaningless), we can ask: “given everything we know, what’s our best estimate of their true shooting ability right now?”

The tool for this is a Beta-Binomial model—the workhorse of Bayesian sports analytics. Here’s how it works in plain English:

The Prior: What We Knew Before Tonight

The team shoots 40% from three over the season. That’s our prior belief. We encode this as a Beta distribution: Beta(40, 60), where the numbers represent “pseudo-shots”—as if we’d already seen 40 makes out of 100 attempts before the game started.

The Likelihood: What We’ve Seen Tonight

4 makes on 20 attempts. That’s the new data.

The Posterior: Our Updated Belief

Bayes’ theorem combines them elegantly:

Prior:      Beta(40, 60)     → 40% expected
In-game:    4 makes, 16 misses
Posterior:  Beta(40+4, 60+16) = Beta(44, 76)
            → 44/120 = 36.7% expected

The model moved our estimate from 40% down to 36.7%. Not to 20%. The in-game data nudged our belief slightly, but the season-long track record dominates.

Key number: 17%. With only 20 three-point attempts, the model gives just 17% weight to in-game data. The other 83% comes from the season-long prior. You’d need ~100 attempts to split the weight evenly—impossible in a single game.
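The update above is one line of arithmetic. A minimal sketch of the Beta-Binomial posterior, using the Beta(40, 60) prior and the 4-for-20 line from the scenario:

```python
# Prior: Beta(40, 60) encodes a 40% season average worth 100 pseudo-shots
prior_a, prior_b = 40, 60
makes, attempts = 4, 20

# Conjugate update: add makes to alpha, misses to beta
post_a = prior_a + makes                # 44
post_b = prior_b + (attempts - makes)   # 76

post_mean = post_a / (post_a + post_b)  # 44/120 = 36.7%

# Share of the posterior driven by tonight's shots vs. the prior
data_weight = attempts / (prior_a + prior_b + attempts)  # 20/120 ≈ 16.7%

print(f"Posterior mean: {post_mean:.1%}")          # 36.7%
print(f"Weight on tonight's data: {data_weight:.1%}")
```

The `data_weight` line is where the "17%" figure comes from: the posterior is a weighted average of the prior mean and the in-game rate, with weights proportional to pseudo-shots and real shots.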

How Prior Strength Changes the Answer

The critical modeling decision is how much to trust the season average. Here’s how different confidence levels change the posterior:

Prior Strength          | Interpretation            | Posterior (given 4/20)
Weak (α+β = 20)         | Treat each game fresh     | 30.0%
Moderate (α+β = 50)     | Some season context       | 34.3%
Strong (α+β = 100)      | Full season confidence    | 36.7%
Very strong (α+β = 200) | Multi-season track record | 38.2%

Even with a weak prior, the model still estimates 30%—nowhere near the raw 20%. The Bayesian answer is always “expect regression.” The only question is how much.
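The sweep over prior strengths is the same conjugate update with one knob turned. A quick sketch, with every prior centered on the 40% season average:

```python
makes, attempts = 4, 20
prior_mean = 0.40

# Posterior mean for each prior strength (pseudo-shots behind the prior)
posteriors = {}
for strength in (20, 50, 100, 200):
    prior_makes = prior_mean * strength
    posteriors[strength] = (prior_makes + makes) / (strength + attempts)

for strength, post in posteriors.items():
    print(f"prior strength {strength:3d}: posterior {post:.1%}")
# rises from 30.0% (weak prior) to 38.2% (very strong prior)
```

Note that even the weakest prior keeps the estimate at 30%: no plausible prior strength gets you anywhere near the raw 20%.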

The posterior isn’t just a single number—it’s a distribution. Beta(44, 76) has a 95% credible interval of roughly 28% to 46%. So Q4 shooting could land anywhere in that range. But the center of gravity is firmly above 35%.

4. Were the Shots Actually Bad?

The Bayesian model has a blind spot: it assumes the team’s shooting ability hasn’t changed. But what if the defense adjusted? What if they’re taking worse shots than usual?

This is where shot quality models come in. The core idea: every shot can be decomposed into two components:

  • Shot quality (xEFG%) — how good was the look? Based on location, defender distance, shot type, shot clock
  • Shot making — did the shooter convert above or below what an average player would from that exact situation?

The gap between expected and actual shooting is the “luck” component—or more precisely, the part that can’t be explained by shot selection.

What the Numbers Say

NBA tracking data breaks every shot down by defender distance, touch time, and shot type. The differences are massive:

Shot Type            | Defender Distance   | Expected 3PT%
Catch-and-shoot      | Wide open (6+ ft)   | ~40%
Catch-and-shoot      | Open (4–6 ft)       | ~37%
Pull-up              | Tight (2–4 ft)      | ~33%
Off-dribble          | Very tight (0–2 ft) | ~29%
Late clock (<7s)     | Contested           | ~26%

A team that’s getting wide-open catch-and-shoot looks and shooting 20% is experiencing pure bad luck. A team being forced into late-clock contested pullups at 20% is getting exactly what the defense intended.

The key finding from shot quality research: Shot quality (xEFG%) stabilizes far earlier in a season than actual shooting efficiency. By game 15, a team’s shot selection profile is largely established. The actual conversion rate takes 40+ games to stabilize. This means early-season xEFG% is more predictive than actual shooting percentages.
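The decomposition is straightforward once each attempt carries an expected make probability. A sketch with a hypothetical shot log—the per-shot values are taken loosely from the table above, and the specific 20-shot sequence is invented for illustration:

```python
# Hypothetical shot log: (made, expected 3PT% for that shot type / contest level)
shots = [
    (1, 0.40), (0, 0.40), (0, 0.37), (0, 0.40), (1, 0.37),
    (0, 0.33), (0, 0.40), (1, 0.37), (0, 0.29), (0, 0.40),
    (0, 0.37), (0, 0.26), (0, 0.40), (1, 0.37), (0, 0.33),
    (0, 0.40), (0, 0.37), (0, 0.29), (0, 0.40), (0, 0.37),
]

actual   = sum(made for made, _ in shots) / len(shots)  # 4/20 = 20%
expected = sum(exp for _, exp in shots) / len(shots)    # shot-quality baseline
luck_debt = expected - actual                           # regression signal

print(f"Actual: {actual:.1%}  Expected (xEFG-style): {expected:.1%}")
print(f"Luck debt: {luck_debt:+.1%}")
```

Here the team is shooting 20% on looks worth roughly 36%, so most of the deficit is unexplained by shot selection—the Scenario A pattern described below.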

5. Combining Both: Bayesian Shrinkage Meets Shot Quality

The two approaches complement each other perfectly:

Approach             | What It Answers                                       | Limitation
Bayesian shrinkage   | “Given this small sample, what’s the true shooting %?” | Assumes shot quality is constant
Shot quality (xEFG%) | “Are they getting the same quality of looks?”          | Doesn’t handle sample-size uncertainty
Combined             | “Given shot quality AND small sample, what to expect?” | Best of both worlds

The ideal framework: use xEFG% as the dynamic prior mean, and Beta-Binomial for the uncertainty.

Back to our scenario. Two possible realities:

  • Scenario A (xEFG% of 38%): same open looks as usual. 18pp of “luck debt”; a strong regression signal.
  • Scenario B (xEFG% of 28%): defense is forcing bad shots. Only 8pp of luck debt; less regression expected.
  • Your edge: knowing which scenario you’re in before the market does.

If the team’s shot quality is maintained (Scenario A), the Bayesian prior stays at the season average and regression is strongly expected. If shot quality has degraded (Scenario B), you need to adjust the prior downward—and the regression case weakens.
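Under this framework, the only change from the plain Beta-Binomial is that the prior mean comes from tonight’s xEFG% rather than the season average. A sketch comparing the two scenarios—the prior strength of 100 pseudo-shots is an assumption carried over from the earlier section:

```python
def posterior_mean(prior_mean, prior_strength, makes, attempts):
    """Beta-Binomial posterior mean with the prior centered on prior_mean."""
    prior_makes = prior_mean * prior_strength
    return (prior_makes + makes) / (prior_strength + attempts)

makes, attempts = 4, 20

# Scenario A: shot quality intact -> anchor the prior on xEFG% of 38%
a_post = posterior_mean(0.38, 100, makes, attempts)  # 42/120 = 35.0%

# Scenario B: defense degrading looks -> anchor the prior on xEFG% of 28%
b_post = posterior_mean(0.28, 100, makes, attempts)  # 32/120 ≈ 26.7%

print(f"Scenario A posterior: {a_post:.1%}")
print(f"Scenario B posterior: {b_post:.1%}")
```

The two scenarios produce expectations more than 8 percentage points apart from identical box-score shooting, which is exactly the information the live market is slow to price.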

6. What This Means for Betting

The practical edge here isn’t complicated—it’s just underused:

Fade the Hot, Back the Cold

Research consistently shows that extreme single-game shooting performances regress hard. Teams shooting 50%+ from three rarely sustain it. Teams shooting under 25% rarely stay that cold. The market overreacts to both extremes, creating edges on the other side.

Live Betting: The Regression Window

Live lines adjust to in-game shooting—but they adjust based on results, not shot quality. If a team is missing open looks (high xEFG%, low actual%), the live line is mispriced. The Q4 regression hasn’t happened yet, but the model says it should.

This is exactly the kind of edge that compounds over time: small, repeatable, and rooted in math rather than narrative.
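One way to put a rough number on the mispricing. The Q4 attempt volume and the market’s results-anchored rate below are illustrative assumptions, not measured quantities:

```python
q4_attempts = 10     # assumed remaining 3PT volume in Q4
model_rate  = 0.367  # Beta-Binomial posterior from the scenario above
naive_rate  = 0.20   # what results-only, in-game pricing implies

# Expected points from threes in Q4 under each view
model_pts = 3 * model_rate * q4_attempts
naive_pts = 3 * naive_rate * q4_attempts

print(f"Model: {model_pts:.1f} pts, naive: {naive_pts:.1f} pts")
print(f"Edge vs results-only pricing: {model_pts - naive_pts:+.1f} points")
```

Roughly five points of expected scoring from threes alone, hiding in the gap between the posterior and the raw in-game rate.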

Season-Level Applications

The same logic applies at the season level. Teams that start hot from three regress. Teams that start cold improve. The James-Stein estimator (the frequentist cousin of Bayesian shrinkage) makes the same point: shrinking extreme early-season percentages toward the league mean produces better predictions than taking them at face value.

Shot quality stabilizes far earlier in a season than actual shooting efficiency. By game 15, a team’s shot selection profile is largely set, making xEFG% a leading indicator of where actual percentages are headed.

7. The Caveats

Before you go all-in on regression-to-the-mean bets, some important nuances:

  • The hot hand is real. Miller & Sanjurjo’s landmark research showed that shooting streaks exist—they’re small but statistically significant. Pure mean-reversion models slightly overestimate regression.
  • Lineup changes matter. If the team’s best shooter left with an injury in the 2nd quarter, the prior needs adjusting. The season average includes minutes with a player who isn’t on the floor anymore.
  • Fatigue is real. Q4 shooting percentages are lower than Q1 across the league. A team that’s been playing heavy minutes won’t fully regress to their fresh-legs average.
  • Defensive adjustments persist. If the opposing team found a coverage that’s working, they’re not going to stop running it in Q4.
  • Comeback rates are low. Even with regression, teams down 10 entering the fourth lose far more often than they win; comebacks from 20+ point deficits succeed only about 7% of the time. The math favors regression in shooting, but that doesn’t automatically translate to a win.
The bottom line: Bayesian models tell you the shooting will probably normalize. Shot quality models tell you whether the misses were real. But “probably normalize” isn’t “definitely overcome a 10-point deficit.” The edge is in the pricing, not in guarantees.


Turn Shooting Variance Into an Edge

BidCanvas Betting Companion delivers real-time regression signals and shot quality context—helping bettors spot mispricings as they happen.
