How To Predict The Future – Statistics For Shooters Part 1

I actually believe the average shooter might get more value from this Statistics for Shooters series of articles than anything I’ve published in a long time. I promise it’s worth your time to read!

I realize most shooters aren’t engineers or math nerds. Many people have an uncomfortable relationship with math and aren’t impressed with fancy formulas. However, statistics and probability are insanely applicable when it comes to long-range shooting in particular. Understanding just a few basics can help you gain actionable insight and put more rounds on target. Venturing beyond “average” and “extreme spread” will lead to better decisions.

I have spent an absurd amount of time arduously crafting this article with the math-averse shooter in mind. I pulled from a dozen books, white papers, magazine articles, and other sources on the subject (see works cited) to deliver a comprehensive, but approachable, overview of the most relevant aspects for fellow shooters. I literally spent months trying to make this content simple and balanced because I firmly believe this can help a lot of shooters once they wrap their heads around a few basics.

Predicting the Future

“Each time you pull the trigger, the bullet chooses a single outcome from infinite possibilities based on countless random factors. Without a time machine, you can never know exactly where the next bullet will go. However, you can predict the most likely outcome, and precisely describe the chances of it being high or low, left or right, fast or slow. Many people shy away from statistics because, well, math. It seems complicated and unnecessary. On the contrary. It is a way of thinking that hones your intuition and helps you make better decisions. … Just by understanding the relationship between a sample and a population, you can learn how to predict the future.” – Adam MacDonald, Statistics for Shooters

Often as shooters, we use stats to make some kind of comparison or decision. Here are two simple examples:

  1. Fred has a big hunt coming up, so he bought 3 different kinds of factory ammo to see which groups best from his rifle.
  2. Sam is a competitive long-range shooter and reloader, and he tried several different powder weights to find the one that produces the most consistent velocity.

In both of those examples, the shooter is trying to make an informed, data-driven decision. Fred and Sam will both fire a bunch of rounds during a practice session at the range and use the results to decide what they’ll use in the future. But what Fred and Sam measure in the practice session isn’t what they actually care about. What really matters to them is what happens in the big hunt or the upcoming competition. Whether they realize it or not, both are collecting a sample of data and using that to predict the future performance of their rifle/ammo. Let’s say Fred fires a 3-shot group from each box of ammo, and the extreme spreads of those groups measured 0.54, 0.57, and 0.94 inches. He should just go with the smallest, right? Does he need to fire more shots? How can he know he’s making the right choice? Those are questions statistics can help answer.

The Plan for This “Statistics For Shooters” Series

I plan to publish 3 articles focused on how stats can help us as shooters in a way that is practical and applicable. After this foundational article, there will be one article focused on each of these common applications:

  1. Quantifying muzzle velocity consistency: Gaining insight to minimize our shot-to-shot variation in velocity
  2. Quantifying group dispersion: Making better decisions when it comes to precision and how small our groups are

This article will lay a foundation that we’ll use in the others. So let’s dive into some important basics.

Descriptive Statistics: The Good & The Bad

When we talk about our average muzzle velocity or the extreme spread of a group, both of those are descriptive statistics. So is a baseball player’s batting average or an NFL passer rating. Sports fans use descriptive statistics in everyday conversation. How good of a baseball player was Mickey Mantle? He was a career .298 hitter. To a baseball fan, that is a meaningful statement and is remarkable because that tiny statement encapsulates an 18-season career with over 8,000 at-bats.

Descriptive statistics are very good at summing up a jumble of data, like 18 seasons of baseball or a 10-shot group, and boiling it down to a single number. They give us a manageable and meaningful summary of some underlying phenomenon. The bad news is any simplification invites abuse. Descriptive statistics can be like online dating profiles: technically accurate and yet pretty darn misleading! Descriptive statistics exist to simplify, which always implies some loss of detail or nuance. So here is a very important point: An over-reliance on any descriptive statistic can lead to misleading conclusions.

Even under the best circumstances, statistical analysis rarely unveils “the truth.” Statistics can help us make more informed decisions, but I’ll caution that some professional skepticism is appropriate when it comes to statistics. That is why smart and honest people will often disagree about what the data is trying to tell us. “Lies, damned lies, and statistics” is a common phrase describing the persuasive power of numbers. The reality is you can lie with statistics – or you can make inadvertent errors. In either case, the mathematical precision attached to statistical analysis can dress up some serious nonsense.

What Is the “Middle”?

We use averages all the time, right? Average is one of the most common descriptive statistics, which is easy to understand and helpful – but sometimes average can be deceptive. Here is a great story that illustrates the point, which is from Naked Statistics by Charles Wheelan:


10 guys are sitting in a middle-class bar in Seattle, and each of them earns $35,000 a year. That means the average annual income for the group is $35,000. Bill Gates then walks into the bar, and let’s say his annual income is $1 billion. When Bill sits down on the 11th stool, the average income rises to around $91 million. Obviously, the original 10 drinkers aren’t any richer. If we described the patrons of this bar as having an average annual income of $91 million, that statement would be both statistically correct and grossly misleading. This isn’t a bar where multimillionaires hang out; it’s a bar where a bunch of guys with relatively low incomes happen to be sitting next to Bill Gates.

We often think about the average as being the “middle” of a set of numbers – but it turns out that the average is prone to distortion by outliers. That is why there is another statistic that is often used to signal the “middle”, albeit differently: the median. Okay, don’t check out or let your eyes glaze over! I promise this is applicable and easy to understand. The median is simply the point that divides a set of numbers in half, meaning half of the data points are above the median and half are below it.

If we return to the barstool example, the median annual income for the 10 guys originally sitting at the bar is $35,000. When Bill Gates walks in and perches on a stool, the median income for the group is still $35,000. Think about lining up all 11 of them on stools in order of their incomes, as shown in the diagram below. The income of the guy sitting on the 6th stool (bright yellow shirt) represents the median income for the group because half of the values are above him and half are below him. In fact, if Warren Buffett came in and sat next to Bill, the median would still be $35,000!

If you had to bet $100 on what the income was of the very next guy who walked in the door, would $35,000 or $91 million be more likely? When we’re talking about what is most likely to happen in the future, the median can often be a better choice than the average.

When there aren’t major outliers, average and median will be similar – so it doesn’t matter which we use. It’s when there are major outliers that it does matter. Neither is “wrong!” The key is determining which measure of the “middle” is more accurate for a particular situation: median (less sensitive to outliers) or average (more affected by outliers)?
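A few lines of Python make the barstool example concrete (the incomes are the hypothetical figures from the story):

```python
from statistics import mean, median

# Ten patrons each earning $35,000, per the barstool story
incomes = [35_000] * 10
print(mean(incomes), median(incomes))  # average and median are both 35000

# Bill Gates sits down (hypothetical $1 billion annual income)
incomes.append(1_000_000_000)
print(round(mean(incomes)))  # average jumps to roughly 91 million
print(median(incomes))       # median is still 35000
```

One outlier drags the average by tens of millions of dollars, while the median doesn't move at all.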

How Spread Out Is the Data?

Often as shooters, we want to understand how spread out a group of shots is, so we measure the extreme spread (ES). That is easy to measure by hand and it is useful. However, like any descriptive statistic, we are simplifying multiple data points into a single number – so we lose some level of detail. There is another statistic we can use to describe how spread out data points are. To understand it, let’s go to another example from Naked Statistics by Charles Wheelan:


Let’s say we collected the weights for two sets of people:

  1. 250 people who qualified for the Boston Marathon
  2. 250 people on an airplane flying to Boston
[Image: Standard deviation weight comparison – Boston Marathon qualifiers vs. passengers on a plane to Boston, from Naked Statistics]

Let’s assume the average weight for both groups was 155 pounds. If you’ve ever been squeezed into the middle seat on a flight, you know many American adults are larger than 155 pounds. However, if you’ve flown much, you also know there are crying babies and poorly behaved children on flights, all of whom have huge lung capacity but not much weight. When it comes to calculating the average weight, the 320-pound football players on either side of your middle seat are likely to be offset by the six-year-old kicking the back of your seat from the row behind.

In terms of average and median weights, the airline passengers and marathon runners are nearly identical – but the two groups are clearly not the same! Yes, the weights have roughly the same “middle,” but the airline passengers have far more dispersion, meaning their weights are spread farther from the midpoint. The marathon runners all appear to weigh about the same amount, while the airline passengers include some tiny people and some bizarrely large people.

Standard deviation (SD) is the descriptive statistic that allows us to communicate, with a single number, how spread out values are from the average. Calculating SD isn’t straightforward, but virtually nobody does it by hand – so I won’t bore you with the formula. Typically, a chronograph or app calculates it for us, or we can use a formula in a spreadsheet.

Some sets of numbers are more spread out than others, and that is what SD will provide insight into. The SD of the weights for our 250 airline passengers will be much higher than the SD of the weights for our 250 marathon runners because the weights of the marathon runners are much less spread out.
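Here’s a quick sketch of that idea in Python. The 155-pound average comes from the example; the individual weights are values I invented for illustration:

```python
from statistics import mean, stdev

# Both hypothetical groups average 155 lb, but are spread out very differently
runners = [150, 152, 155, 156, 158, 159]    # tightly clustered marathon qualifiers
passengers = [25, 95, 155, 160, 175, 320]   # babies, kids, adults, and linemen

print(round(mean(runners)), round(stdev(runners), 1))        # same average, small SD
print(round(mean(passengers)), round(stdev(passengers), 1))  # same average, big SD
```

Both groups share the same average, but the SD of the passengers’ weights is dozens of times larger than the runners’ – the “middle” statistics alone would never reveal that.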

The Normal Distribution

Not only does SD describe how spread out data is, but it also helps introduce one of the most important, helpful, and common distributions in statistics: the normal distribution. Data that is distributed “normally” is symmetric and forms a bell shape that will look familiar.

A normal distribution can be used to describe so many natural phenomena. Wheelan points out a few practical examples:

  • Think of a distribution that describes how popcorn pops in your microwave. Some kernels start to pop early, maybe one or two pops per second; after a little time kernels start popping frenetically. Then gradually the number of kernels popping per second fades away at roughly the same rate as the popping began.
  • The height of American men is distributed normally, meaning heights are roughly symmetric around the average of 5 feet 10 inches.
  • According to the Wall Street Journal, people even tend to park in a normal distribution at shopping malls, with most cars parked right in front of the entrance – the “peak” of the normal curve – and “tails” of cars trailing off to the right and left of the entrance.

“The beauty of the normal distribution – its Michael Jordan power, finesse, and elegance – comes from the fact that we know how the data will be spread out by only having to know one stat: standard deviation. In a normal distribution, we know precisely what proportion of the observations will lie within one standard deviation of the average (68%), within two standard deviations (95%), and within three standard deviations (99.7%). While those exact percentages may sound like worthless trivia, they are the foundation on which much of statistics is built.” – Charles Wheelan

Normal Distribution & Standard Deviation

By simply knowing the average and standard deviation of weights from our two collections in the example above, we could come up with a very good estimate of how many people on the plane weighed between 130-155 pounds, or what the odds were of a Boston marathon runner weighing over 200 pounds.
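Python’s standard library can do exactly that kind of estimate via `statistics.NormalDist`. The 155-pound average is from the example above; the SD values here are numbers I made up for illustration:

```python
from statistics import NormalDist

# Assumed passenger weights: normal, mean 155 lb, SD 40 lb (SD is illustrative)
passengers = NormalDist(mu=155, sigma=40)
share = passengers.cdf(155) - passengers.cdf(130)
print(f"About {share:.0%} of passengers weigh between 130 and 155 lb")

# Assumed runner weights: same mean, much tighter SD of 10 lb (also illustrative)
runners = NormalDist(mu=155, sigma=10)
print(f"Odds a runner weighs over 200 lb: {1 - runners.cdf(200):.6f}")
```

With just a mean and an SD, we can put a number on any slice of the distribution – no raw data required.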

The more random, independent factors that play into an outcome, the more normal the distribution usually becomes. It’s no coincidence that almost every random process in nature works like this. Because so many factors play into how a rifle and ammo perform, it is an ideal application for a normal distribution.

So, what does all this mean to me as a shooter? Great question! If we fire a bunch of shots over our LabRadar, MagnetoSpeed, or other chronograph, those devices will calculate what our average muzzle velocity and SD were for those shots. Let’s say the average was exactly 3,000 fps and the SD was 9.0 fps. Because we expect this to form a normal distribution, we can come up with the following chart with real numbers based on that average and SD from our sample:

[Image: Muzzle velocity SD example]

If the average is 3,000 fps and the SD is 9.0 fps, we can reasonably expect 68% of our bullets to leave the barrel between 2991-3009 fps (represented by combining both dark blue areas), and 95% of our bullets will leave the barrel between 2982-3018 fps (the dark blue areas combined with the two medium blue areas).
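If you’d like to check those numbers yourself, `statistics.NormalDist` reproduces the bands in the chart:

```python
from statistics import NormalDist

mv = NormalDist(mu=3000, sigma=9)  # average 3,000 fps, SD 9.0 fps

within_1sd = mv.cdf(3009) - mv.cdf(2991)  # +/- 1 SD band
within_2sd = mv.cdf(3018) - mv.cdf(2982)  # +/- 2 SD band
print(f"{within_1sd:.1%} of shots between 2991-3009 fps")  # ~68.3%
print(f"{within_2sd:.1%} of shots between 2982-3018 fps")  # ~95.4%
```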

But, standard deviation and normal distributions have more applications for us as shooters than just muzzle velocity, and we’ll tap into other powerful applications in subsequent articles.

Sample Size & Confidence Levels: How Many Shots Do We Need To Fire?

Have you ever been reading through a forum and seen some nerd complain about a sample size being “too small to draw meaningful conclusions”? So, how many shots do we need to fire to have a “good” sample size? The answer depends on how small the differences are that we’re trying to detect and how much confidence we want to have in the results being predictive of the future. So the short answer is, “It depends.” I realize that may not be very helpful, so I’ll try to provide a more complete answer and share a useful tool that can give us a straight answer based on the specific data you collect from your rifle.

The first step is to understand no test result is definite, and we’d often be more accurate speaking in ranges and probabilities than absolute values. That may sound cynical, but it’s an important concept. Let’s run through an example.

Let’s say I loaded up 250 rounds of ammo for a match, and I fired 10 of those rounds (our sample) and recorded the muzzle velocity for each shot on my LabRadar. After 10 shots, the LabRadar reported the average was 2,770.4 fps and the SD was 8.82 fps. Great! Now we know how the other 240 rounds will perform, right? Not really. The only thing we know with 100% certainty is the average and SD of the 10 rounds I just fired, and anything we say about the remaining 240 rounds (our population) should be stated in terms of ranges and probabilities. We can only talk about data collected in the past with absolute precision. Predicting the future is all about ranges and probabilities.

The problem is our LabRadar or MagnetoSpeed gives us very precise statistics for our sample (e.g. SD of 8.82 fps), but it doesn’t tell us how to use those to make estimations for our population – which is what we actually care about! That is also true for how we measure groups. We might measure a 5-shot group to have an extreme spread of 0.26 MOA, but that doesn’t tell us the odds of how small the next group will be. Here is a key concept: Just because we can measure or calculate something to the 2nd decimal place doesn’t mean we have that level of accuracy or insight into the future!

“With a rifle, we have no choice but to guess what the population is from the samples it provides. The larger the sample, the more likely we are to have correctly measured the population. This is called ‘confidence,’” explains MacDonald. So if we’re asking how much confidence we can have in the results, statistics can give us a straight answer!

Let’s say I recorded the following velocities over 10 shots: 2777, 2763, 2767, 2774, 2754, 2777, 2773, 2766, 2762, and 2775. Based on those 10 shots, here are the predicted ranges for various confidence levels:

Confidence Level | Range of Likely Averages for Remaining Rounds (fps) | Range of Likely SDs for Remaining Rounds (fps)
99% | 2,761 – 2,777 | 4.7 – 17.4
95% | 2,763 – 2,774 | 5.3 – 14.0
90% | 2,764 – 2,773 | 5.6 – 12.6
85% | 2,765 – 2,773 | 5.8 – 11.8
75% | 2,766 – 2,772 | 6.2 – 10.8
50% | 2,767 – 2,771 | 6.8 – 9.5
What the info above is telling us is that after firing those 10 shots, which had an average of 2,768.8 fps and an SD of 7.66 fps, we can say with 99% confidence that the SD of our remaining 240 rounds will fall between 4.7 and 17.4 fps. There is only a 1 in 100 chance that it would fall outside of that range. But that is a huge range! Maybe going for 99% confidence is too strict, so let’s drop to 75% confidence and we can see the SD is predicted to be 6.2 – 10.8 fps. A 75% confidence interval means we can expect that 1 time out of 4 (25% of the time) the real value for the population would fall outside of that range.
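As a sanity check, the sample average and SD that the table above is built from can be computed from those 10 recorded velocities in a couple of lines:

```python
from statistics import mean, stdev

# The 10 recorded muzzle velocities (fps)
shots = [2777, 2763, 2767, 2774, 2754, 2777, 2773, 2766, 2762, 2775]

print(round(mean(shots), 1))   # sample average: 2768.8 fps
print(round(stdev(shots), 2))  # sample SD: 7.66 fps
```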

One way to get more confidence in the results is to increase the sample size, so let’s say we fired another 10 rounds for a total string of 20 shots. Assuming the 20-shot SD came out about the same, the range for a 95% confidence level would shrink from 5.3 – 14.0 fps to roughly 5.8 – 11.2 fps. The first range is a window of 8.7 fps, and the second is 5.4 fps – better, but far from half the window for doubling the sample size. The confidence level you are comfortable with is a personal trade-off between accepting some risk that your results are not accurate vs. investing more time and money to keep testing.
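To see why doubling the sample size doesn’t halve the uncertainty, here is a quick Monte Carlo sketch (my own illustration, not Adam’s calculator): simulate a “true” population with a known SD of 9 fps, then measure how wide the middle-95% band of sample SDs is for different shot counts.

```python
import random
from statistics import stdev

random.seed(7)

def sd_window(n, trials=10_000):
    """Width of the middle-95% band of n-shot sample SDs, drawn from a
    simulated population with average 2769 fps and true SD 9 fps."""
    sds = sorted(stdev([random.gauss(2769, 9) for _ in range(n)])
                 for _ in range(trials))
    return sds[int(trials * 0.975)] - sds[int(trials * 0.025)]

for n in (10, 20, 40):
    print(f"{n:2d} shots -> 95% of sample SDs span a window of {sd_window(n):.1f} fps")
```

Each doubling of the shot count narrows the window, but with clearly diminishing returns – exactly the trade-off described above.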

Stats Calculator for Shooters

Adam MacDonald is a Canadian F-TR shooter who wrote two outstanding articles related to statistics for shooters, which I highly recommend (click here for Part 1 and Part 2). Adam also created an insanely useful tool to help us calculate the ranges and probabilities for a given set of data and desired level of confidence – without needing a math degree. Here is a link to Adam’s Stats Calculator for Shooters (alternate link, if the first doesn’t work for you).

The screenshot below shows two examples of what Adam’s calculator can do. In these examples, I simply put in three numbers: the average, the SD, and the number of shots in my sample. I also selected the desired confidence level, and voilà – the tool tells me the ranges I can expect for the population based on the sample data provided.

[Image: Stats Calculator from Adam MacDonald at AutoTrickler]

I entered the same average and SD for both samples above, and only changed the desired confidence level and sample size between the two examples. We can see the range for the SD on the left is 5.5-26.4 fps, and on the right, it is 7.3-12.6. That wider range is a result of a sample size of just 5 shots and a desired confidence level of 95%. The narrower range was based on 20 shots and I lowered the confidence level to 90%. Adding more data points and compromising to a lower confidence level in the results will both effectively narrow the range. Hopefully, this illustrates some of the basics of how Adam’s tool can help.

Calculated Values For Our Sample vs. Predicted Ranges for Our Population

The key point here is that we can know the SD of both samples we fired is precisely 9.2 fps. Our chronograph tells us that number with absolute certainty. But, let’s say those samples came from a batch of 200 rounds of ammo we loaded, and we want to predict what the SD of the remaining rounds will be. In that case, we must switch to speaking in terms of ranges and confidence levels. We can’t say for certain that the entire population has an SD of 9.2 fps, only our sample. What this calculator tells us is that based on the 20-shot sample we collected, 9 times out of 10 the SD of the remaining 180 rounds would land between 7.3 and 12.6 fps. If we want more certainty or a more precise range, we have to fire more rounds. There is no free lunch!

Summary & Key Points

We’ve covered a lot of ground, so let’s recap some key points from this article:

  • Often the ammo performance we measure at the range isn’t what we actually care about. Whether we realize it or not, we’re often collecting a sample of data and using that to predict the future performance of our rifle or ammo. That’s when statistics can help!
  • Descriptive statistics (like average, median, extreme spread, standard deviation) are very good at summing up a bunch of data points into a single number. They provide a manageable and meaningful summary, but because they exist to simplify that implies some loss of detail or nuance. An over-reliance on any descriptive statistic can lead to misleading conclusions.
  • Average and median are both measures of the “middle” of a set of numbers. Neither is wrong. The key is determining what is more accurate for a particular situation: Average is more affected by outliers. Median is less sensitive to outliers.
  • Standard deviation (SD) is a number that communicates how spread out values are from the average.
  • Because there are so many independent factors that play into how a rifle and ammo performs, it can be an ideal application for a normal distribution. The power of a normal distribution comes from the fact that we know how the data will be spread out by only having to know one stat: standard deviation. In a normal distribution, we know precisely what proportion of the observations will lie within one standard deviation of the average (68%), within two standard deviations (95%), and within three standard deviations (99.7%).
  • How many shots do we need to fire to have a “good” sample size? The answer depends on how minor the differences are that we’re trying to detect and how much confidence we want to have in the results being predictive of the future.
  • The more samples used in a calculation, the more confidence we can have in the results.
  • A very important key is to understand that average and SD have confidence levels associated with them in the first place. Just because we fire 10 shots and measure an SD doesn’t mean we will get the same SD next time. In fact, it’s unlikely we’d get the same number. Just because we can measure or calculate something to the 2nd decimal place doesn’t mean we have that level of accuracy or insight into the future! We can only speak in terms of absolute, precise values about shots fired in the past. When we’re trying to predict the future, we can only speak in terms of ranges and probabilities.
  • The confidence level you are comfortable with is a personal trade-off between accepting some risk that your results are not accurate vs. investing more time and money to keep testing.

“The rifle talks to us by generating samples at $1 a pop,” MacDonald explains. “If we want to know how it truly works, we need to play its game. With enough samples, we can try to measure the population, but it can be expensive.” Engleman, another author on the subject of statistics and shooting, joked that if he measured 1,000 shots he’d be able to have a ton of confidence in the results, but “I would also burn out my barrel, do a lot of reloading and never make it to a match!” 😉

I’m a practical guy. I realize we can’t shoot a sample size of 100 or even 30 shots for every powder charge and seating depth we try in our load development. I’m not suggesting that. In fact, over the next two articles, I specifically want to share how we can get the most out of the shots we do fire – and how to leverage those to make more informed decisions.

Other Articles In This Series

Stay tuned for the next two articles in this series, which will dive into how we can get the most out of the shots we fire, and make more informed decisions.

  1. How To Predict The Future: Fundamentals of statistics for shooters (this article)
  2. Quantifying Muzzle Velocity Consistency: Gaining insight to minimize our shot-to-shot variation in velocity
  3. Quantifying Group Dispersion: Making better decisions when it comes to precision and how small our groups are
  4. Executive Summary: This article recaps the key points from all 3 articles in a brief bullet-point list

You can also view my full list of works cited if you’re interested in diving deeper into this topic.

© Copyright 2023 PrecisionRifleBlog.com, All Rights Reserved.
