## Chi^2 test

The most frequently used, and most frequently misused, method is probably the chi^2 test. It was originally designed by Pearson for binned data drawn from a multinomial distribution. The statistic is defined as the summation of (O_i - E_i)^2 / E_i over N bins (i = 1, ..., N), where O_i is the observed count in the i-th bin and E_i is the expected count in the i-th bin.
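As a minimal sketch of Pearson's form, consider a hypothetical example of 600 die rolls binned into 6 faces, compared against a fair-die expectation (the counts below are made up for illustration):

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical observed counts for 600 rolls of a die, one bin per face.
observed = np.array([95, 110, 102, 98, 105, 90])
# Expected count E_i under a fair die: 600 / 6 = 100 per bin.
expected = np.full(6, observed.sum() / 6)

# Pearson statistic: sum over bins of (O_i - E_i)^2 / E_i
stat = np.sum((observed - expected) ** 2 / expected)

# scipy.stats.chisquare computes the same statistic plus a p-value.
stat_scipy, pvalue = chisquare(observed, expected)
print(stat, pvalue)
```

Here `scipy.stats.chisquare` reproduces the hand-computed sum, which for these counts is 2.58 on 5 degrees of freedom.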

However, the more widely used form is the summation of (O_i - E_i)^2 / (sigma_i)^2. We are often confused about which sigma_i to use in a particular case. In principle, (sigma_i)^2 is the variance predicted by the model, not a variance estimated from the data. The idea behind the statistic is to compare the deviation of the data from the model against the variance expected under the model, and thereby assess, on average, how far the data deviate from the model relative to that variance. When the model is correct, the reduced chi^2, obtained by dividing chi^2 by the number of degrees of freedom, should therefore be close to 1.
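The behavior of the reduced chi^2 can be sketched with a toy example. Everything below (the linear model, the noise level sigma = 0.5, the number of points) is an assumption chosen for illustration, not anything from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a known linear model y = 2x + 1, with Gaussian
# measurement noise whose standard deviation sigma is predicted by the
# model of the experiment (not estimated from the data).
x = np.linspace(0.0, 10.0, 50)
sigma = 0.5
y_model = 2.0 * x + 1.0
# Simulated observations: model plus noise at the model-predicted level.
y_obs = y_model + rng.normal(0.0, sigma, size=x.size)

# chi^2 = sum_i (O_i - E_i)^2 / sigma_i^2
chi2 = np.sum((y_obs - y_model) ** 2 / sigma ** 2)

# Reduced chi^2: divide by the degrees of freedom. No parameters were
# fitted here, so dof is simply the number of data points.
dof = x.size
chi2_red = chi2 / dof
print(chi2_red)
```

Because the data were generated with exactly the model's variance, `chi2_red` scatters around 1; values far above 1 would signal a bad model or underestimated variance, values far below 1 an overestimated variance.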

To know how to estimate our model variance, we have to know how our observational data are produced. For example, when we analyze and model a CCD image, we have to be aware that several hidden processing steps go into the final product. We then have to mimic that image processing with our model. Only then are we safe in comparing apples to apples, not apples to oranges.
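One way to sketch this "mimic the processing" idea is a toy forward model. The specific steps below (PSF blurring followed by pixel binning) are assumptions standing in for whatever pipeline actually produced the data; the point is only that the model image passes through the same steps before any chi^2 comparison:

```python
import numpy as np

def gaussian_psf(size=5, sigma=1.0):
    """Small normalized Gaussian kernel standing in for the instrument PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def convolve2d(img, kernel):
    """Naive 2-D convolution with zero padding (adequate for a sketch)."""
    ks = kernel.shape[0]
    pad = ks // 2
    padded = np.pad(img, pad)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + ks, j:j + ks] * kernel)
    return out

def forward_model(true_image, kernel, bin_factor=2):
    """Mimic the observation: blur with the PSF, then bin pixels."""
    blurred = convolve2d(true_image, kernel)
    h, w = blurred.shape
    return blurred.reshape(h // bin_factor, bin_factor,
                           w // bin_factor, bin_factor).sum(axis=(1, 3))

# A point source of 100 counts on an otherwise empty 8x8 "sky".
true_image = np.zeros((8, 8))
true_image[4, 4] = 100.0

# model_image is now directly comparable, pixel by pixel, to a CCD
# image that went through the same blurring and binning.
model_image = forward_model(true_image, gaussian_psf())
print(model_image.sum())
```

Because the PSF kernel is normalized and the binning sums counts, the total flux of 100 is conserved through the forward model, which is one easy sanity check that the processing steps were applied consistently.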