This section contains statistical details for selected options and reports in the Contingency platform.
Viewing the two response variables as two independent ratings of the n subjects, the Kappa coefficient equals +1 when there is complete agreement of the raters. When the observed agreement exceeds chance agreement, the Kappa coefficient is positive, with its magnitude reflecting the strength of agreement. Although unusual in practice, Kappa is negative when the observed agreement is less than chance agreement. The minimum value of Kappa is between -1 and 0, depending on the marginal proportions.
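The Kappa coefficient described above can be sketched in a few lines of code. This is a minimal illustration, not the platform's implementation; the 3x3 table of counts is hypothetical, with rows representing the first rater's categories and columns the second rater's.

```python
# Hypothetical 3x3 agreement table: counts[i][j] = number of subjects
# that rater A placed in category i and rater B placed in category j.
counts = [
    [20,  5,  1],
    [ 4, 15,  3],
    [ 2,  3, 12],
]

n = sum(sum(row) for row in counts)
k = len(counts)

# Observed agreement: proportion of subjects on the diagonal.
p_observed = sum(counts[i][i] for i in range(k)) / n

# Chance agreement: product of matching marginal proportions, summed.
row_props = [sum(counts[i]) / n for i in range(k)]
col_props = [sum(counts[i][j] for i in range(k)) / n for j in range(k)]
p_expected = sum(r * c for r, c in zip(row_props, col_props))

kappa = (p_observed - p_expected) / (1 - p_expected)
print(round(kappa, 4))
```

With this table the observed agreement exceeds chance agreement, so Kappa comes out positive; swapping in a table where raters disagree more often than chance would make it negative.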
For Bowker’s test of symmetry, the null hypothesis is that the probabilities in the square contingency table satisfy symmetry (pij = pji for all pairs of cells).
where pij is the probability that an observation falls in the ith row and jth column of the square table.
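Bowker's statistic sums, over the cell pairs above the diagonal, the squared difference of the two mirrored counts divided by their total, and is compared against a Chi-square distribution with one degree of freedom per contributing pair. The following is a minimal sketch with a hypothetical 3x3 table of counts, not the platform's implementation.

```python
# Hypothetical 3x3 square table: n[i][j] = subjects in row category i
# and column category j (e.g., ratings at two time points).
n = [
    [30,  8,  2],
    [ 4, 25,  6],
    [ 1,  5, 20],
]
k = len(n)

stat = 0.0
df = 0
for i in range(k):
    for j in range(i + 1, k):
        pair_total = n[i][j] + n[j][i]
        if pair_total > 0:
            # Each off-diagonal pair contributes (nij - nji)^2 / (nij + nji).
            stat += (n[i][j] - n[j][i]) ** 2 / pair_total
            df += 1

print(stat, df)  # compare stat against a Chi-square with df degrees of freedom
```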
The Likelihood Ratio Chi-square test is computed as twice the negative log-likelihood for Model in the Tests table. Some books use the notation G² for this statistic. The difference of two negative log-likelihoods, one computed with whole-population response probabilities and one with each-population response probabilities, is written as follows:
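As a minimal sketch of the statistic (assuming a hypothetical 2x3 table and expected counts formed from the marginal totals, i.e., the whole-population model), G² can be computed as twice the sum of observed counts times the log of the observed-to-expected ratio:

```python
import math

# Hypothetical 2x3 table: rows = populations, columns = response levels.
observed = [
    [10, 20, 30],
    [20, 20, 20],
]
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

g2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        # Expected count under whole-population response probabilities.
        e = row_totals[i] * col_totals[j] / n
        if o > 0:
            g2 += 2 * o * math.log(o / e)

print(round(g2, 4))
```

Cells with an observed count of zero contribute nothing to the sum, since the limit of O·log(O/E) as O approaches zero is zero.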
The Pearson Chi-square is calculated by summing the squared differences between the observed and expected cell counts, each divided by the expected count. The Pearson Chi-square exploits the property that frequency counts are approximately normally distributed in very large samples. The familiar form of this Chi-square statistic is as follows:
where O denotes the observed cell count and E the expected cell count. The summation is over all cells. No continuity correction is applied here, as is sometimes done for 2-by-2 tables.
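A minimal sketch of the Pearson statistic, again with a hypothetical 2x3 table and expected counts formed from the marginal totals (this is an illustration, not the platform's implementation):

```python
# Hypothetical 2x3 table of observed counts.
observed = [
    [10, 20, 30],
    [20, 20, 20],
]
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi_sq = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / n   # expected cell count
        chi_sq += (o - e) ** 2 / e              # no continuity correction

print(round(chi_sq, 4))
```

For large samples this statistic and the Likelihood Ratio G² computed on the same table are usually close in value; both are referred to a Chi-square distribution with (rows − 1)(columns − 1) degrees of freedom.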
where r and c are the row and column sums of the matrix P of observed proportions.