The Effects of Random Error in Content Analysis: What Does Intercoder Reliability Really Mean? Hester, Joe Bob
Using thematic analysis in psychology. Qual Res Psychol. Routledge; 2006;3:77–101. O’Connor C, Joffe H. Intercoder reliability in qualitative research: debates and practical guidelines. Int J Qual Methods. SAGE Publications Inc; 2020;19:1609406919899220. Dedoose Version 9.0...
Comprehending spoken language plays a crucial role in many forms of human interaction by providing access to a shared understanding of manifold aspects of social life and cooperative work. However, as research has shown, understanding spoken utterances is far more complex than simply mapping heard sou...
Intercoder reliability was 0.96 for subthemes and 0.98 for valences. The remaining discussion posts and interview transcripts were coded by one of the authors. Participants’ interview responses were analyzed in light of each theme and subtheme, and additional diagrams (yielding six in total)...
Intercoder agreement and Cohen’s kappa were used to estimate inter-rater agreement. To assist with interpretation of the data, the qualitative comments and the application of the coding framework were considered for categories defined according to their responses to the quantitative questions. For...
This is one of a series of cross-cultural studies comprising the International Self-Identity Research Project (ISIRT) designed to investigate adolescents' self-identity process in a variety of cultures. Intercoder reliability was established among the researcher and three other ISIRT team members. ...
In this research, the three reliability aspects have been met, as described in the methodology section. Further, it involves the traceability of all key decision and development points along the research implementation, to corroborate the findings and the conclusions. This has been achieved using ...
Superti, Chiara. Grimmer J, King G, Superti C. 2016. The unreliability of measures of intercoder reliability, and what to do about it. Unpublished manuscript, Dep. Polit. Sci., Stanford Univ. http://web.stanford.edu/~jgrimmer/Handbib.pdf...
7.8. Intercoder reliability. All studies were coded by two independent raters (SLK and a trained research assistant), a practice known as “double coding” (Cooper, 2016). The trained research assistant was blind to the research hypotheses. To evaluate intercoder reliability, we used the kappa coeffic...
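Several of the excerpts above report Cohen’s kappa as their intercoder reliability statistic. As a reference point, here is a minimal sketch of the standard kappa computation, κ = (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e is chance agreement derived from each rater’s marginal label frequencies. The function name and example labels are illustrative, not drawn from any of the cited studies.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement rate and p_e is the agreement expected by chance
    from each rater's marginal label distribution.
    """
    assert len(rater_a) == len(rater_b) and len(rater_a) > 0
    n = len(rater_a)
    # Observed agreement: fraction of items given identical codes.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over labels of the product of marginals.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[lab] / n) * (counts_b[lab] / n) for lab in labels)
    if p_e == 1.0:
        # Both raters used one identical code throughout; agreement is perfect.
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two coders assign valences to five posts.
a = ["pos", "pos", "neg", "neg", "pos"]
b = ["pos", "neg", "neg", "neg", "pos"]
print(round(cohens_kappa(a, b), 3))
```

Note that kappa corrects raw percent agreement for chance: here the coders agree on 4 of 5 items (0.80 raw agreement), but kappa is lower because some of that agreement is expected by chance alone.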