The attribute gauge chart plots the % Agreement, a measure of rater agreement for each part in the study. The agreement for a part is calculated by comparing the ratings of every pair of raters across all ratings of that part. See Statistical Details for Attribute Gauge Charts.
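As a rough illustration of the pairwise calculation (the exact formula is given in Statistical Details for Attribute Gauge Charts), the sketch below computes a part's % Agreement as the fraction of matching rating pairs; the data are hypothetical:

```python
from itertools import combinations

def percent_agreement(ratings):
    # % Agreement for one part: the fraction of all pairs of ratings
    # of that part (across raters and repeat inspections) that match,
    # expressed as a percentage.
    pairs = list(combinations(ratings, 2))
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs)

# Hypothetical data: three raters each rated the same part twice.
ratings = ["pass", "pass", "pass", "fail", "pass", "pass"]
agreement = percent_agreement(ratings)  # 10 matching pairs out of 15
```

Whether the platform pools repeat inspections in exactly this way may differ; the sketch only shows the pairwise-matching idea.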
Follow the instructions in Example of an Attribute Gauge Chart to produce the results shown in Attribute Gauge Chart.
The first chart in Attribute Gauge Chart uses all X grouping variables (in this case, the Part) on the x-axis. The second chart uses all Y variables on the x-axis (typically, and in this case, the Rater).
Note: Kappa is a statistic that expresses agreement beyond chance. The closer the Kappa value is to 1, the stronger the agreement. A Kappa value closer to 0 indicates little agreement beyond chance.
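For two raters, the standard Cohen's kappa illustrates how such a statistic is built (the platform's exact formula may differ; this is a sketch of the general idea):

```python
def cohens_kappa(r1, r2):
    # Kappa = (observed agreement - chance agreement) / (1 - chance agreement).
    # Chance agreement is estimated from each rater's marginal rates.
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    p_chance = sum((r1.count(c) / n) * (r2.count(c) / n)
                   for c in set(r1) | set(r2))
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical pass(1)/fail(0) ratings of eight parts by two raters:
kappa = cohens_kappa([1, 1, 0, 1, 0, 0, 1, 1],
                     [1, 1, 0, 1, 0, 1, 1, 0])
```

A kappa of 1 means the raters always agree; a kappa near 0 means they agree no more often than their marginal rating rates would predict by chance.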
The Agreement Report shows agreement summarized for each rater and overall agreement. This report is a numeric form of the data presented in the second chart in the Attribute Gauge Chart report. See Attribute Gauge Chart.
The Agreement Comparisons report shows each rater compared with all other raters, using Kappa statistics. The rater is compared with the standard only if you have specified a Standard variable in the launch window.
The Agreement within Raters report shows the number of items that were inspected. The confidence intervals are score confidence intervals (as suggested by Agresti and Coull, 1998). The Number Matched is the number of inspected items for which the rater agreed with himself or herself on every inspection of that item. The Rater Score is the Number Matched divided by the Number Inspected.
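The Number Matched and Rater Score calculations can be sketched as follows; the data layout (one inner list of repeat ratings per item) and the values are hypothetical:

```python
def within_rater_agreement(inspections):
    # inspections: one inner list per item, holding a single rater's
    # repeat ratings of that item.  An item counts as "matched" only
    # when all of its repeat ratings agree.
    inspected = len(inspections)
    matched = sum(1 for item in inspections if len(set(item)) == 1)
    return inspected, matched, matched / inspected

# Hypothetical: one rater inspected four items twice each.
inspected, matched, score = within_rater_agreement(
    [["pass", "pass"], ["pass", "fail"], ["fail", "fail"], ["pass", "pass"]]
)
```

Here the rater contradicted himself or herself on the second item only, so three of the four items are matched.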
The Agreement across Categories report shows the agreement in classification beyond what would be expected by chance. It assesses the agreement among a fixed number of raters when classifying items.
The Effectiveness Report appears only if you have specified a Standard variable in the launch window. For a description of a Standard variable, see Launch the Variability/Attribute Gauge Chart Platform. This report compares every rater with the standard.
The Agreement Counts table shows cell counts of correct and incorrect ratings for every level of the standard. In the Effectiveness Report, the standard variable has two levels, 0 and 1. Rater A had 45 correct and 3 incorrect responses for level 0, and 97 correct and 5 incorrect responses for level 1.
Effectiveness is defined as the number of correct decisions divided by the total number of opportunities for a decision. For example, suppose that rater A inspected every part three times and that, on the sixth part, one of the three decisions did not agree with the standard (for example, pass, pass, fail). The other two decisions would still be counted as correct. This definition of effectiveness differs from the one in the MSA 3rd edition, according to which all three opportunities for rater A on part six would be counted as incorrect. Counting each inspection separately gives you more information about the overall inspection process.
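The counting rule above can be sketched as follows; the part-six example from the text is reused with hypothetical labels:

```python
def effectiveness(decisions, standard):
    # Each inspection is a separate decision opportunity, so one wrong
    # call on a part does not invalidate that part's other inspections
    # (unlike the MSA 3rd edition counting rule described above).
    correct = sum(d == s for d, s in zip(decisions, standard))
    return correct / len(decisions)

# Part six, inspected three times against a "pass" standard:
rate = effectiveness(["pass", "pass", "fail"], ["pass", "pass", "pass"])
# two of the three decisions count as correct
```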
In the Effectiveness table, 95% confidence intervals are given for the effectiveness values. These are score confidence intervals, which have been shown to provide improved coverage probability, particularly when observations lie near the boundaries (see Agresti and Coull, 1998).
The Misclassifications table shows the incorrect labeling. The rows represent the levels of the standard or accepted reference value. The columns contain the levels given by the raters.
The Conformance Report shows the probability of false alarms and the probability of misses. The Conformance Report appears only when the rating has two levels (such as pass or fail, or 0 or 1).
Probability of false alarms: the number of parts that have been incorrectly judged to be nonconforming divided by the total number of parts that are actually conforming.
Probability of misses: the number of parts that have been incorrectly judged to be conforming divided by the total number of parts that are actually nonconforming.
Calculates the Escape Rate, which is the probability that a nonconforming part is produced and not detected. The Escape Rate is calculated as the probability that the process produces a nonconforming part multiplied by the probability of a miss. You specify the probability that the process produces a nonconforming part, also called the Probability of Nonconformance.
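Under the definitions above, the conformance quantities can be sketched as follows; the pass/fail labels and the 0.1 Probability of Nonconformance are hypothetical:

```python
def conformance_metrics(decisions, standard, p_nonconform,
                        conforming="pass", nonconforming="fail"):
    # P(false alarm): conforming parts judged nonconforming, over all
    # parts that are actually conforming.
    # P(miss): nonconforming parts judged conforming, over all parts
    # that are actually nonconforming.
    # Escape rate = Probability of Nonconformance * P(miss).
    pairs = list(zip(decisions, standard))
    n_conf = sum(s == conforming for s in standard)
    n_nonconf = len(standard) - n_conf
    p_false_alarm = sum(d == nonconforming and s == conforming
                        for d, s in pairs) / n_conf
    p_miss = sum(d == conforming and s == nonconforming
                 for d, s in pairs) / n_nonconf
    return p_false_alarm, p_miss, p_nonconform * p_miss

# Hypothetical ratings of four parts against the standard:
fa, miss, escape = conformance_metrics(
    ["pass", "fail", "pass", "fail"],     # rater's decisions
    ["pass", "pass", "fail", "fail"],     # standard values
    p_nonconform=0.1,
)
```

With one false alarm out of two conforming parts and one miss out of two nonconforming parts, both rates are 0.5, and the escape rate is 0.1 × 0.5 = 0.05.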