For the first chart in the Attribute Gauge Chart, which plots the X, Grouping variables on the x-axis, and for the second chart, which plots the Y, Response variables on the x-axis, the % Agreement is calculated from the following quantities:
 • n = number of subjects (grouping variables)
 • ri = number of reps for subject i (i = 1,...,n)
 • m = number of raters
 • k = number of levels
 • Ni = m x ri, the number of ratings on subject i (i = 1,...,n). This includes the responses from all raters and any repeat ratings of the part. For example, if subject i is measured 3 times by each of 3 raters, then Ni is 3 x 3 = 9.
For example, suppose that raters A, B, and C each rate three subjects once, with possible ratings 0 and 1:

        A   B   C
   1    1   1   1
   2    1   1   0
   3    0   0   0
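A common way to compute % Agreement from the quantities defined above counts matched pairs of ratings: for subject i, the agreeing pairs sum_j C(xij, 2) are divided by all C(Ni, 2) possible pairs. The exact formula is not reproduced in this section, so the sketch below assumes that pairwise formulation; the `percent_agreement` helper and the sample calls are illustrative only.

```python
from math import comb

def percent_agreement(level_counts):
    """% Agreement for one subject via matched pairs of ratings.

    level_counts : x_ij for each level j, i.e. how many of the
                   subject's N_i ratings fell into each level.
    """
    n_total = sum(level_counts)                      # N_i
    matched = sum(comb(x, 2) for x in level_counts)  # pairs that agree
    return 100 * matched / comb(n_total, 2)          # all possible pairs

# Rows of the table above (three ratings per subject):
print(percent_agreement([3, 0]))  # subject 1: all three raters agree -> 100.0
print(percent_agreement([2, 1]))  # subject 2: one agreeing pair out of three
print(percent_agreement([0, 3]))  # subject 3: all agree on 0 -> 100.0
```

Subject 2 illustrates why pairs are counted rather than raters: two of the three possible rating pairs disagree, so its agreement is 33.3% even though two raters match.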
If you view the two response variables as two independent ratings of the n subjects, the Kappa coefficient equals +1 when there is complete agreement between the raters. When the observed agreement exceeds chance agreement, the Kappa coefficient is positive, and its magnitude reflects the strength of agreement. Although unusual in practice, Kappa is negative when the observed agreement is less than chance agreement. The minimum value of Kappa lies between -1 and 0, depending on the marginal proportions.
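These properties can be checked concretely with the simple two-rater Kappa (observed agreement minus chance agreement, scaled by one minus chance agreement). The sketch below is illustrative, not the platform's implementation; the function name and the sample ratings are assumptions.

```python
def cohens_kappa(r1, r2):
    """Two-rater Kappa for paired lists of categorical ratings."""
    n = len(r1)
    levels = set(r1) | set(r2)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n       # observed agreement
    p_chance = sum((r1.count(lev) / n) * (r2.count(lev) / n)
                   for lev in levels)                     # chance agreement
    return (p_obs - p_chance) / (1 - p_chance)

print(cohens_kappa([1, 0, 1, 0], [1, 0, 1, 1]))  # partial agreement: 0.5
print(cohens_kappa([1, 0, 1, 0], [1, 0, 1, 0]))  # complete agreement: 1.0
```

With complete agreement the observed agreement is 1, so Kappa is +1 regardless of the chance agreement, matching the statement above.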
 • n = number of subjects (grouping variables)
 • m = number of raters
 • k = number of levels
 • ri = number of reps for subject i (i = 1,...,n)
 • Ni = m x ri, the number of ratings on subject i (i = 1, 2,...,n). This includes the responses from all raters and any repeat ratings of the part. For example, if subject i is measured 3 times by each of 2 raters, then Ni is 3 x 2 = 6.
 • xij = number of ratings on subject i (i = 1, 2,...,n) into level j (j = 1, 2,...,k)

The individual category Kappa is as follows:
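The formula itself is not reproduced in this section. One standard form that matches the quantities defined here is Fleiss's individual category Kappa, kappa_j = 1 - sum_i xij(Ni - xij) / [sum_i Ni(Ni - 1) pj qj], where pj = sum_i xij / sum_i Ni and qj = 1 - pj. The sketch below assumes that formulation; the function name is hypothetical.

```python
def category_kappa(x, N):
    """Individual category Kappa for one level j (Fleiss's form, assumed).

    x : x_ij for each subject i -- ratings of subject i placed in level j
    N : N_i  for each subject i -- total ratings on subject i
    """
    p = sum(x) / sum(N)   # overall proportion of ratings in level j
    q = 1 - p
    disagree = sum(x_i * (N_i - x_i) for x_i, N_i in zip(x, N))
    return 1 - disagree / (sum(N_i * (N_i - 1) for N_i in N) * p * q)

# Perfect agreement on every subject (each x_ij is 0 or N_i) gives Kappa = 1:
print(category_kappa([3, 0, 3], [3, 3, 3]))  # 1.0
```

The numerator counts within-subject disagreements for level j, so it vanishes exactly when every subject's ratings are unanimous, giving Kappa its maximum of +1.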