=== Proportional agreement of positive and negative results ===
In some circumstances, particularly where the marginal totals of the 2-by-2 table are not balanced, ''kappa'' is not always a good measure of the true level of agreement between two tests (Feinstein and Cicchetti, 1990). For example, in the first example above, kappa was only 0.74, compared to an overall proportion of agreement of 0.94. In these situations, the proportions of positive and negative agreement have been proposed as useful alternatives to ''kappa'' (Cicchetti and Feinstein, 1990). For this example, the proportion of positive agreement was 0.78, compared to 0.96 for the proportion of negative agreement, suggesting that the main area of disagreement between the tests is in positive results and that agreement among negatives is very high.
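The quantities above can be sketched in a few lines of Python. For a 2-by-2 table with cells a (both tests positive), b and c (discordant), and d (both negative), the positive agreement is 2a/(2a+b+c) and the negative agreement is 2d/(2d+b+c), computed here alongside Cohen's kappa. The counts used below are hypothetical, chosen only to illustrate unbalanced marginals, not the example from the text:

```python
# Proportions of positive and negative agreement for a 2x2 table,
# alongside Cohen's kappa. Cell counts are hypothetical.

def agreement_stats(a, b, c, d):
    """a = both positive, b and c = discordant cells, d = both negative."""
    n = a + b + c + d
    p_obs = (a + d) / n  # overall proportion of agreement
    # chance-expected agreement, from the marginal totals
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    p_pos = 2 * a / (2 * a + b + c)  # proportion of positive agreement
    p_neg = 2 * d / (2 * d + b + c)  # proportion of negative agreement
    return p_obs, kappa, p_pos, p_neg

# Unbalanced marginals: negatives dominate the table
p_obs, kappa, p_pos, p_neg = agreement_stats(20, 5, 5, 70)
print(f"overall={p_obs:.2f} kappa={kappa:.2f} "
      f"p_pos={p_pos:.2f} p_neg={p_neg:.2f}")
```

With these counts, kappa falls well below the overall proportion of agreement, and the positive agreement is noticeably lower than the negative agreement, mirroring the pattern described above.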
Latest revision as of 10 May 2015, 14:10