Hello all,
In our project we are analyzing responses to two questions. The data was set up as follows: each document represents one user, and within that document are the user's responses to both questions.
We had three coders independently code all of the documents (individual users' responses), each in a separate project. I then merged the three coders' projects into one.
Now I am trying to calculate a kappa to determine interrater reliability for each question (not for each document, which is one user's full response).
I realize now that we probably should have created one document per question rather than per user to make this easier, but is there any way to get a kappa for each question?
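In case it helps anyone with the same setup: if the built-in intercoder agreement tool doesn't give you per-question values directly, one workaround is to export the coded segments and compute the statistic outside MAXQDA. With three coders, Fleiss' kappa is the usual choice (Cohen's kappa only handles two raters). Below is a minimal sketch of the standard Fleiss' kappa formula; the example data (users, codes A/B, coder counts) is entirely hypothetical and just illustrates the input shape: for one question, a row per user giving how many of the three coders applied each code.

```python
from typing import List

def fleiss_kappa(counts: List[List[int]]) -> float:
    """Fleiss' kappa for a subjects-by-categories count matrix.

    counts[i][j] = number of raters who assigned subject i to category j.
    Every row must sum to the same number of raters n.
    """
    N = len(counts)          # number of subjects (here: users)
    n = sum(counts[0])       # number of raters (here: 3 coders)
    k = len(counts[0])       # number of categories (codes)
    # Overall proportion of assignments falling in each category
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    # Observed agreement for each subject
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P) / N                 # mean observed agreement
    P_e = sum(pj * pj for pj in p)     # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example for one question: 5 users, codes A and B,
# each row = how many of the 3 coders chose each code for that user.
q1 = [
    [3, 0],  # all three coders chose code A for user 1
    [2, 1],
    [0, 3],
    [3, 0],
    [1, 2],
]
print(round(fleiss_kappa(q1), 3))  # 0.444
```

Running this once per question (one count matrix per question) gives a separate kappa for each, regardless of how the documents were organized.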
Version: MAXQDA 2020
System: Windows 10