Recency Bias in Sports Is Ruining Your Predictions

Your favorite player drops 42 points on a Wednesday night. By Thursday morning you're telling your group chat he's going to average 40 for the rest of the season. Then he puts up 19 the next game and you're genuinely shocked. That's recency bias — and it tricks every sports fan, every single week, whether they know it or not.

What Recency Bias in Sports Actually Means

Recency bias is a cognitive bias where your brain puts more weight on recent events than older ones. It sounds logical — recent information should matter more, right? The problem is, in sports, recent events aren't always the most predictive ones. They're just the most memorable.

Your brain evolved to notice what just happened. That wiring made sense when humans needed to track immediate threats. But in sports, where player performance swings naturally game to game, recency bias turns you into a reactive predictor instead of a rational one. You stop analyzing the full picture. You just react to the last thing you saw.

The NBA Example That Shows Up Every Week

Anthony Edwards dropped 51 points in a game earlier this season. The internet lost its mind. Fantasy players scrambled to add him. Group chats declared him the best scorer in the league. People started predicting 40-point nights going forward. Then he scored 21 in his next game. Then 24. Then 18.

This isn't an Edwards problem. It's a recency bias problem. The same thing happened with Donovan Mitchell after a couple of monster games in December — fans calling for MVP consideration, then three mediocre performances hit and nobody said a word. Victor Wembanyama blocks nine shots and suddenly he's "the most dominant defensive player ever seen." Then he records two blocks the next game and the take evaporates.

One game — no matter how wild — is a tiny sample size. Your brain fights that. It wants to build a narrative, and the most recent game feels like the definitive chapter. It's not. It's one data point.

Why Hot Streaks Don't Last: Regression to the Mean

There's a concept that explains most of this: regression to the mean. It basically says that when something goes unusually high or low, it tends to drift back toward the historical average. Not because of bad luck or a jinx — just math.

When a player scores 50 points, that performance usually combines real skill with favorable variance. Maybe the defense was beat up. Maybe he got to the free-throw line 14 times. Maybe three contested mid-rangers rattled in that normally don't. All of those conditions are unlikely to align again the next game.

The data backs this up. Research on NBA shooting, going back to the classic 1985 "hot hand" study by Gilovich, Vallone, and Tversky, has consistently found that players on so-called "hot streaks" don't actually shoot meaningfully better on the shots and games that follow. The hot streak feels real because your brain locks onto the most vivid recent data point. But look at 50-game rolling averages and the streaks mostly dissolve into noise.
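
You can see regression to the mean in a toy simulation. This is a sketch with made-up numbers, not real player data: assume a hypothetical player whose "true" scoring average is 22 points with typical game-to-game swings, then check what happens in the game right after a 35-point outburst.

```python
import random

random.seed(42)

PPG_MEAN, PPG_SD = 22, 6  # hypothetical player: true average, game-to-game spread

# Simulate a long run of independent game scores
games = [random.gauss(PPG_MEAN, PPG_SD) for _ in range(100_000)]

# Collect the game that immediately followed every 35+ point "outburst"
followups = [games[i + 1] for i in range(len(games) - 1) if games[i] >= 35]

avg_after = sum(followups) / len(followups)
print(f"Average in the game after a 35+ outburst: {avg_after:.1f}")
# Drifts back toward 22, not 35 — no jinx, just math
```

The follow-up average lands near 22 every time, because the outburst was the player's real skill plus favorable variance, and the variance doesn't carry over.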

Steph Curry hit seven three-pointers in each of three straight games in January. Fans called it a vintage revival. People started projecting crazy shooting splits for the rest of the season. In the next two games combined? Four threes total. Curry is still excellent — but he's not "seven-threes-per-game" excellent. That was a hot run doing what hot runs always do: ending.

The Three Ways Recency Bias Kills Your Predictions

Recency bias doesn't just show up once. It sneaks in at every stage of making a prediction. Here are the three places it hits hardest:

Overreacting to a single game. One blowout loss doesn't mean a team collapsed. One 42-point performance doesn't mean the player leveled up permanently. But recency bias makes both feel like lasting new realities instead of outliers. You lock onto the most recent event and let it crowd out 60 prior games' worth of data.

Forgetting the longer trend. If a player has been shooting under 40% all season, one great week doesn't erase that. But after that good week, most fans mentally reset the baseline. You're not wrong to notice improvement — you're wrong to throw out the 15 games of evidence that came before it. The full season average is almost always a better predictor than the last three games.

Misreading team win streaks. A team wins five in a row and everyone declares them a contender. A team loses four straight and the panic tweets start. In an 82-game NBA season, a five-game winning streak is roughly 6% of the schedule. It matters — but it doesn't override everything that came before it. Teams that were 18-22 before the streak aren't suddenly title favorites.

How to Actually Fight Recency Bias

You can't shut recency bias off — your brain isn't built that way. But you can build habits that push back against it before you lock in a prediction.

Use season-long averages, not the last game. Before making a call on a player, look at their 30-game rolling average, not what they did Tuesday. Most stats sites show this easily. The season average smooths out the noise that one bad or great game creates.
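
If you want to compute this yourself, a rolling average takes only a few lines. This is a minimal sketch with a made-up 10-game scoring line (a shorter 5-game window so the effect is visible): notice how little one 44-point outlier moves the average.

```python
def rolling_average(scores, window=30):
    """Rolling average over the last `window` games (shorter early in the run)."""
    return [
        sum(scores[max(0, i - window + 1) : i + 1]) / min(i + 1, window)
        for i in range(len(scores))
    ]

# Hypothetical 10-game stretch: mostly ~22 points, one 44-point outlier
scores = [21, 24, 19, 22, 44, 20, 23, 18, 25, 22]
print(rolling_average(scores, window=5))
# The 44-point night nudges the 5-game average only a few points above 22
```

The single-game view screams "breakout"; the rolling view barely registers it. That gap is exactly the noise the season average smooths out.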

Ask yourself: what would I think if I hadn't watched that game? Seriously — try it. If your prediction only makes sense because of what just happened, recency bias is the one making the call, not you. Strip away the most recent game and see if your prediction still holds up against the full data.

Treat outlier performances like outliers. A career-high isn't a new baseline. If a player averages 22 points per game and drops 44 one night, his expected range next game is still somewhere near 22 — not 44. Use the outlier as context, not as a forecast. The average exists for a reason.

Keep a record of your predictions. This is the fix most sports fans skip, and it's the most valuable one. When you track what you predicted against what actually happened, patterns emerge. You'll notice exactly when you fell for a hot game, a winning streak, or a single highlight. You can't fix a bias you can't see. Tracking forces you to see it.
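
A prediction log doesn't need to be fancy. Here's a minimal sketch of one way to structure it (the class and method names are my own, purely illustrative): log the number you predicted, record what actually happened, and measure how far off you were.

```python
from dataclasses import dataclass, field

@dataclass
class PredictionLog:
    """Tiny illustrative tracker: predict, record the result, review the misses."""
    entries: list = field(default_factory=list)

    def predict(self, label, predicted):
        self.entries.append({"label": label, "predicted": predicted, "actual": None})

    def record(self, label, actual):
        for e in self.entries:
            if e["label"] == label and e["actual"] is None:
                e["actual"] = actual
                return

    def mean_error(self):
        done = [e for e in self.entries if e["actual"] is not None]
        return sum(abs(e["predicted"] - e["actual"]) for e in done) / len(done)

log = PredictionLog()
log.predict("Edwards next game (pts)", 40)  # the post-51-point hot take
log.record("Edwards next game (pts)", 21)   # what actually happened
print(f"Average miss: {log.mean_error():.1f} points")  # → Average miss: 19.0 points
```

Even a spreadsheet works. The point is that a 19-point miss written down in your own handwriting is much harder to explain away than one you half-remember.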

The Bottom Line

Recency bias in sports isn't an occasional mistake — it's the default for almost every fan. Your brain is designed to prioritize what just happened over what's actually predictive. The fans who get genuinely good at reading sports performance learn to pause and ask: am I reacting to the last game, or am I actually reading the full data set?

Start treating outliers like outliers. Look at larger samples. Track your predictions against real outcomes. You won't become perfect — nobody is — but you'll get meaningfully better over time. Not because your instincts improve, but because you've built a system to check them.

If you want to put it to the test, GAGE is built for exactly this — make predictions on player performances and see how your accuracy holds up over time. Pure sports knowledge, no luck required.