ADVANCED ANALYTICS • 2015-2024
This project evaluates the success, predictability, and scalability of NFL drafting strategies by testing whether player outcomes can be explained using only pre-draft data. It aims to validate an Expected Value (EV) system that assigns a performance benchmark to each draft pick based on historical outcomes.
Interpreting EV (Even with R² = 0.414)
Expected Value (EV) in this context isn't a crystal ball; it's a baseline. It reflects the average historical return for each draft slot, based solely on pre-draft inputs like pick number, position, and athletic data. The model explaining 41.4% of outcome variance doesn't mean EV is wrong; it means that 41.4% of player success is systematically explainable with the features available before draft day. The other ~59%? That's everything else: injuries, development, coaching, environment, even luck. So EV isn't a prediction of what will happen; it's a reference point for what should be expected over time. And the fact that a noisy, chaotic system like the NFL Draft yields over 40% signal from pre-draft data makes EV not just useful, but statistically defensible.
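As a concrete illustration, here is a minimal sketch of how a slot-level EV baseline can be computed as the average historical outcome at each pick. The column names (`pick`, `outcome_value`) and the toy rows are placeholders rather than the project's actual schema, and the real EV system also conditions on position and athletic inputs.

```python
import pandas as pd

# Hypothetical input: one row per drafted player, with the pick number
# and a career outcome value (column names are illustrative).
picks = pd.DataFrame({
    "pick": [1, 1, 2, 2, 3, 3],
    "outcome_value": [85.0, 60.0, 70.0, 40.0, 55.0, 30.0],
})

# EV for a draft slot = the average historical outcome at that slot.
ev_by_pick = picks.groupby("pick")["outcome_value"].mean().rename("expected_value")

# Residual = how far a player landed above or below the slot's baseline.
picks = picks.join(ev_by_pick, on="pick")
picks["value_over_expected"] = picks["outcome_value"] - picks["expected_value"]
print(picks)
```

Framed this way, EV is just the historical mean for a slot, and a player's result is read as value above or below that baseline rather than as a hit or miss on a prediction.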
- We built a Random Forest Regressor using 42 pre-draft features (combine data, draft slot, college metrics, etc.); a minimal sketch of the fit-and-evaluate loop follows this list
- Evaluated model output against a custom Expected Value (EV) system
- Measured alignment using statistical rigor: R² = 0.414 (41.4% of outcome variance explained), RMSE = 29.6 (±8% error vs. mean player value)
- Analyzed 2,397 players drafted between 2015 and 2024
- Verified results against team-, round-, and position-level trends
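For reference, here is a minimal sketch of the modeling-and-evaluation loop described above, using scikit-learn's RandomForestRegressor. The feature matrix, target, and hyperparameters are illustrative stand-ins (random data shaped like 2,397 players × 42 features), not the project's dataset or tuned settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Placeholder pre-draft feature matrix (42 columns: combine data, draft
# slot, college metrics, ...) and a placeholder career-value target.
rng = np.random.default_rng(0)
X = rng.normal(size=(2397, 42))
y = rng.normal(loc=50, scale=30, size=2397)

# Hold out a test set so R² and RMSE reflect out-of-sample performance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestRegressor(n_estimators=500, random_state=42)
model.fit(X_train, y_train)

# R² = share of outcome variance explained; RMSE = typical prediction error
# in the same units as the player-value target.
pred = model.predict(X_test)
r2 = r2_score(y_test, pred)
rmse = np.sqrt(mean_squared_error(y_test, pred))
print(f"R^2 = {r2:.3f}, RMSE = {rmse:.1f}")
```

With the project's real features and outcome metric in place of the random placeholders, this is the loop that produces the R² = 0.414 and RMSE = 29.6 figures reported above.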