Suppose you have the task of adding up a long list of numbers – perhaps your daily expenditures over a month. You do your sum and get a particular result. But you’re not sure whether you got it right. You may have made a mistake in adding, or in punching the numbers into a calculator. What do you do? You do the sum again. And if you’re a cautious accountant you might even do it a third time. If you get the same result every time, you feel you have got it right.
Lesson: When in doubt, repeat. Repeatability of the result generates confidence in it. Repeatability is reliability.
Actually, our example of adding up a list of numbers is not a good one, because in that case there is only one true answer, and we shall get it every time we do our sum correctly. The real-life situations that we are interested in involve results obtained by measuring a sample of people from some universe. Again, we are not sure if the results are true. So, in line with our commonsense philosophy, we should repeat the sampling exercise. If we did, it is highly unlikely that we would get exactly the same result, because different people would be included each time. In fact, if we repeated the sampling exercise many times and measured the same thing on different samples of people, we would find that most of the results fall within a range.
We would be entitled to conclude that, most probably, the truth we are trying to estimate lies somewhere in that range. If we had a method of being more precise – if we could say, for example, that after repeating the sampling exercise many times, 95 percent of the results would fall within a certain range – then there would be a 95 percent chance that the truth lies in that range.
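To make this concrete, here is a small simulation sketch in Python. Every number in it – the universe of 100,000 people, the gamma-shaped spending distribution, the sample size of 200, the 1,000 repetitions – is an illustrative assumption, not something from the discussion above. It draws many samples from one invented universe, computes the mean of each, and reports the range containing the middle 95 percent of those results.

```python
# A hypothetical simulation: all numbers here are made-up, for illustration only.
import numpy as np

rng = np.random.default_rng(42)

# Invented universe: daily expenditures of 100,000 people (right-skewed, like spending).
universe = rng.gamma(shape=2.0, scale=50.0, size=100_000)

# Repeat the sampling exercise 1,000 times, with a sample of 200 people each time.
sample_means = np.array([
    rng.choice(universe, size=200, replace=False).mean()
    for _ in range(1_000)
])

# The middle 95% of the repeated results (2.5th to 97.5th percentile).
low, high = np.percentile(sample_means, [2.5, 97.5])
print(f"True universe mean       : {universe.mean():.2f}")
print(f"95% of sample means fell : between {low:.2f} and {high:.2f}")
```

Each individual sample mean is a little off, but 95 percent of them land inside a fairly tight band around the true universe mean – exactly the range described above.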
The width of this range is a measure of the precision of our estimate – the narrower the range, the higher the precision. Our objective is to narrow this range as much as possible, because that would bring us closer to the elusive truth. Precision replaces the concept of accuracy: we will never be able to say how accurate our estimate of the truth is, but we can say how precise it is.
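The most direct lever on that width is the size of each sample – a standard statistical fact, not something stated above. A sketch of the effect, again with made-up numbers: rerun the simulated sampling exercise at two sample sizes and compare the widths of the middle-95-percent ranges.

```python
# A sketch of sample size driving precision (all numbers are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
universe = rng.gamma(shape=2.0, scale=50.0, size=100_000)  # hypothetical universe

for sample_size in (50, 500):
    means = np.array([
        rng.choice(universe, size=sample_size, replace=False).mean()
        for _ in range(1_000)
    ])
    low, high = np.percentile(means, [2.5, 97.5])
    print(f"n={sample_size:4d}: width of the 95% range = {high - low:.2f}")
```

The larger sample gives a markedly narrower range – a more precise estimate.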
But how do we get a fix on this range? Taking just one sample in real life is problematic and costly enough. Repeating the exercise many times may be conceptually brilliant, but completely undoable in practice.
Actually, you don’t have to repeat the sampling exercise. This is where the science of inferential statistics comes in. By analysing the data in the one sample you have taken – specifically the variation contained in it – and by making some assumptions about the pattern of variation in the total universe, it can calculate the 95 percent, 99 percent, or any other precision range that would actually come to pass if you did take the repeated samples. The whole purpose of inferential statistics is to save you the trouble of repeating the sampling exercise, by inferring what would happen if you did.
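Here is a minimal sketch of that inference step, using the textbook normal-approximation interval (the 1.96 multiplier for a 95 percent range is a standard statistical value, not something from this discussion, and the universe and sample size are the same invented numbers as before). From the variation inside a single sample, it estimates the very range that the repeated-sampling simulations above produced.

```python
# A minimal sketch of the inference step: one sample, no repetition.
import numpy as np

rng = np.random.default_rng(1)
universe = rng.gamma(shape=2.0, scale=50.0, size=100_000)  # hypothetical universe

# Take just ONE sample of 200 people.
one_sample = rng.choice(universe, size=200, replace=False)

mean = one_sample.mean()
# Standard error: the variation within the sample, scaled down by sample size,
# estimates how much the sample means would spread if we DID repeat the exercise.
std_error = one_sample.std(ddof=1) / np.sqrt(len(one_sample))

# 95% range inferred from the single sample (normal approximation, 1.96 multiplier).
print(f"Inferred 95% range : {mean - 1.96 * std_error:.2f} to {mean + 1.96 * std_error:.2f}")
print(f"True universe mean : {universe.mean():.2f}")
```

Compare this inferred range with the one the 1,000-repetition simulation produced: they come out very close, even though this version needed only a single sample.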
It sounds like magic, but it is only logic. This logic depends entirely on a crucial aspect of reality, namely the ‘Laws of Chance’, more commonly known as ‘Probability’.