Translations:Advanced Field Epi:Manual 2 - Diagnostic Tests/137/en: Revision difference

(Importing a new version from external source)
=== Proportional agreement of positive and negative results ===
In some circumstances, particularly where the marginal totals of the 2-by-2 table are not balanced, ''kappa'' is not always a good measure of the true level of agreement between two tests ([#6 Feinstein and Cicchetti, 1990]). For example, in the first example above, kappa was only 0.74, compared to an overall proportion of agreement of 0.94. In these situations, the proportions of positive and negative agreement have been proposed as useful alternatives to ''kappa'' ([#3 Cicchetti and Feinstein, 1990]). For this example, the proportion of positive agreement was 0.78, compared to 0.96 for the proportion of negative agreement, suggesting that the main area of disagreement between the tests is in positive results and that agreement among negatives is very high.
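The quantities discussed above can all be computed directly from the four cells of a 2-by-2 agreement table. The sketch below uses hypothetical counts (a, b, c, d are not the table from the example above) and the standard formulas: overall agreement (a + d)/n, kappa from observed versus marginally expected agreement, and the Cicchetti–Feinstein proportions of positive agreement 2a/(2a + b + c) and negative agreement 2d/(2d + b + c).

```python
# Hypothetical 2x2 agreement table (illustrative only, not the
# manual's example):
#               Test B +   Test B -
# Test A +        a = 40     b = 10
# Test A -        c = 5      d = 145
a, b, c, d = 40, 10, 5, 145
n = a + b + c + d

# Overall proportion of agreement (both tests positive or both negative)
p_obs = (a + d) / n

# Agreement expected by chance, from the marginal totals
p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2

# Cohen's kappa: chance-corrected agreement
kappa = (p_obs - p_exp) / (1 - p_exp)

# Proportions of positive and negative agreement
p_pos = 2 * a / (2 * a + b + c)
p_neg = 2 * d / (2 * d + b + c)

print(f"overall agreement  = {p_obs:.2f}")
print(f"kappa              = {kappa:.2f}")
print(f"positive agreement = {p_pos:.2f}")
print(f"negative agreement = {p_neg:.2f}")
```

With these counts the positive agreement is noticeably lower than the negative agreement, and kappa is lower than the overall proportion of agreement, mirroring the pattern described in the text: the tests disagree mainly on positive results.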

Latest revision as of 10 May 2015, 14:10

