In the study of agreement measures, we consider the case in which a sample of n individuals or subjects is rated independently by two or more raters. Various measures of agreement exist, among them Cohen's kappa, the intraclass kappa, weighted kappa, raw agreement, and the tau (τ) indices. Several authors have drawn attention to the so-called chance-agreement effect: when, for example, two raters A and B apply different sets of criteria to classify the same objects, the observed agreement may be attributable primarily to chance. This paper examines agreement beyond chance using the τ statistic alongside other measures of agreement. The asymptotic distribution of the estimated τ is derived, and its mean and variance are obtained, allowing a confidence bound for τ to be proposed. Practical examples are used to determine the confidence bounds across different degrees of freedom.
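The abstract describes deriving the asymptotic distribution of the estimated τ and using its mean and variance to form a confidence bound. As an illustrative sketch only, the code below computes a generic chance-corrected agreement index from a k × k two-rater contingency table and attaches a Wald-type large-sample interval; the paper's exact τ formula and its derived asymptotic variance are not reproduced here, so the kappa-like index and the crude variance used below are assumptions, not the authors' result.

```python
import numpy as np
from scipy import stats

def agreement_confidence_bound(table, alpha=0.05):
    """Wald-type interval for a chance-corrected agreement index.

    NOTE: this is a hedged illustration. The paper's tau statistic and its
    asymptotic variance are not given here; we use a generic kappa-like
    index (p_o - p_e) / (1 - p_e) and a crude large-sample variance
    (treating p_e as fixed) purely to show the shape of the calculation.
    """
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p = table / n                       # joint proportions
    row, col = p.sum(axis=1), p.sum(axis=0)
    p_o = np.trace(p)                   # observed agreement (diagonal mass)
    p_e = row @ col                     # agreement expected by chance
    est = (p_o - p_e) / (1.0 - p_e)     # chance-corrected index

    # Crude large-sample variance; the paper derives the exact asymptotic
    # mean and variance of the estimated tau.
    var = p_o * (1.0 - p_o) / (n * (1.0 - p_e) ** 2)
    z = stats.norm.ppf(1.0 - alpha / 2.0)
    half = z * np.sqrt(var)
    return est, (est - half, est + half)

# Hypothetical counts: two raters, three categories
table = [[45, 5, 2],
         [6, 38, 4],
         [3, 5, 42]]
est, (lo, hi) = agreement_confidence_bound(table)
print(f"estimate = {est:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

The same two-step pattern (point estimate from the table, then estimate ± z·SE from the asymptotic variance) carries over once the paper's derived variance for τ is substituted for the crude one above.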