Coding comparison query

The coding comparison query compares coding by two users to measure 'inter-rater reliability'—the degree of coding agreement between them. Agreement is measured by two statistical methods:

  • Percentage agreement: the number of content units on which coders agree (to code or not to code), divided by the total number of units—reported as a percentage.
  • Cohen's kappa coefficient: a statistical measure that takes into account the amount of agreement expected by chance—expressed as a decimal in the range –1 to 1 (where values ≤ 0 indicate no agreement, and 1 indicates perfect agreement).

You can select specific nodes or cases for comparison, in selected data files, datasets, externals and/or memos. You can also select groups of nodes or files by identifying search folders, sets or classifications they belong to.

It is possible to compare coding between groups of users; however, comparisons are usually made between individual coders. If you compare groups, content is treated as coded if at least one member of the group coded it. Coders are identified by their NVivo user profiles (see User profiles).

Comparisons can be run for text coding in data files (including PDF documents), datasets, externals, memos, codes and cases. Coding in picture, audio and video files cannot be compared, nor can region coding in PDFs.

You can select characters, sentences or paragraphs as the unit of comparison (a sketch of how coding is rolled up to these units follows the list):

  • Characters: Individual text characters in each coding reference are used for comparison. This option provides the most fine-grained comparison, because agreement is measured on exactly the text that was coded.

  • Sentences: Coding of whole sentences is compared. If any characters in a sentence are coded, then the entire sentence is treated as coded. The query treats all sentences equally, regardless of length.

    This option may be best if you know that all, or most, coding references in a project are complete sentences. If each sentence contains one complete concept, claim, opinion or decision, then comparing sentences rather than characters may give more meaningful results, because those concepts or claims are likely to be expressed in different numbers of words.

  • Paragraphs: Coding of whole paragraphs is compared. If any characters in a paragraph are coded, then the entire paragraph is treated as coded. The query treats all paragraphs equally, regardless of their length.

    This option may be best if you know that all, or most, coding references in a project are complete paragraphs. If each paragraph contains one complete concept, claim, opinion or decision, then comparing paragraphs rather than characters or sentences may give more meaningful results, because those concepts or claims are likely to be expressed in different numbers of words.
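
The roll-up behaviour described above can be pictured in a few lines of Python. This is an illustration only, not NVivo's implementation; the sentence offsets and coded range below are made up for the example.

  # A minimal sketch (not NVivo's implementation) of how character-level coding
  # is rolled up to sentence units: if any character of a sentence falls inside
  # a coded range, the whole sentence counts as coded.
  def sentences_coded(sentence_spans, coded_ranges):
      """Return one True/False flag per sentence.

      sentence_spans: (start, end) character offsets of each sentence.
      coded_ranges:   (start, end) character offsets coded by one coder.
      """
      flags = []
      for s_start, s_end in sentence_spans:
          coded = any(c_start < s_end and c_end > s_start
                      for c_start, c_end in coded_ranges)
          flags.append(coded)
      return flags

  # Example: three sentences; the coder coded only part of the second one.
  sentences = [(0, 40), (40, 95), (95, 130)]
  coded_by_user = [(50, 60)]
  print(sentences_coded(sentences, coded_by_user))  # [False, True, False]

The same logic applies when paragraphs are the unit of comparison, with paragraph offsets in place of sentence offsets.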

You can save query configuration settings to run the same query at a later time, when more coding has been done. Queries are saved under Search / Queries in the Navigation View.

You cannot save query results in NVivo; however, you can export them to other applications (for example, Excel) to save them. See Export query results.

Run a coding comparison

  1. On the ribbon Query tab, select Coding Comparison.
  2. For Search in, select the files within which you want to compare the coding:
    • Files and Externals: all data files and externals, but not memos
    • Selected Items: selected data files, externals and/or memos
    • Selected Folders or Sets: all data files, externals and/or memos in selected folders and/or static sets
    • Files with Classifications: all data files, externals and memos with selected file classifications
  3. For Coded At, select the nodes or cases that you want to compare:
    • All Nodes: coding to all nodes and cases in the selected files
    • Selected Nodes: coding to specific nodes and/or cases in the selected files
    • Nodes in Selected Sets: all the nodes and cases included in selected sets
    • Cases with Classifications: all the cases with selected case classifications

    NOTE: If you include an aggregate node in the scope of a query, content coded to it and its direct children is included in the results. See Aggregate nodes (gather all content in a parent node).

  4. For User Group A and User Group B, click the arrow icon to select the users whose coding you want to compare (comparing individual coders is recommended). If you include more than one user in a group, coding carried out by any user in the group counts towards the comparison.
  5. Select whether you want the calculations to be based on characters, sentences or paragraphs as the unit of comparison (see above).
  6. To save the query settings, check Save Query at the top of the dialog box. Name the query and optionally provide a description.
  7. Click Run Query at the top of the Detail View.

Query results are displayed in the Detail View (see below).

Coding comparison results

Coding comparison query results.

Coding comparison results are displayed in two views in the Detail View: statistical results (this section) and a copy of all the content in the query showing the coded sections highlighted—see View coding agreement or disagreement below.

1 The overall, unweighted kappa coefficient for all the codes and files queried. ('Unweighted' means that each file in the query contributes equally to the score, regardless of file size. A weighted version of this overall value is not provided.)

2 The nodes and cases being compared—expand to see the files that had content coded to the node/case.

3 Files where the node or case was coded, with itemized statistics.

4 The file size, measured as the number of characters, sentences or paragraphs, depending on the unit of comparison selected.

5 The kappa coefficient for each node or case over the full scope of the query, or for each node or case per file (for expanded nodes/cases).

6 These columns show percentage agreement:

  • Agreement = the sum of the A and B column and the Not A and Not B column
  • A and B = the percentage of content coded to the selected node by both Group A and Group B
  • Not A and Not B = the percentage of content coded by neither Group A nor Group B

7 These columns show percentage disagreement:

  • Disagreement = the sum of the A and Not B column and the B and Not A column
  • A and Not B = the percentage of content coded by Group A and not Group B
  • B and Not A = the percentage of content coded by Group B and not Group A

8 Display results with either:

  • Unweighted Values  Files are treated equally (regardless of size) when calculating the overall results for each node/case.
  • Weighted Values  File size is taken into account when calculating the overall results for each node/case. For example, if using paragraphs as the unit of comparison, a document with 10 paragraphs would contribute five times as much as one with 2 paragraphs (illustrated in the sketch below).
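
The difference between the two options can be sketched in Python. The file sizes and agreement values are assumed for illustration, and the simple averaging shown is only an approximation of the weighting idea, not necessarily NVivo's exact aggregation formula.

  # A sketch (assumed values, not NVivo output) of unweighted vs. weighted
  # overall agreement for one code across two files, measured in paragraphs.
  files = [
      {"size": 10, "agreement": 0.90},  # 10 paragraphs, 90% agreement
      {"size": 2,  "agreement": 0.50},  # 2 paragraphs, 50% agreement
  ]

  # Unweighted: every file contributes equally to the overall value.
  unweighted = sum(f["agreement"] for f in files) / len(files)

  # Weighted: each file contributes in proportion to its size, so the
  # 10-paragraph file counts five times as much as the 2-paragraph file.
  total_size = sum(f["size"] for f in files)
  weighted = sum(f["agreement"] * f["size"] for f in files) / total_size

  print(round(unweighted, 3))  # 0.7
  print(round(weighted, 3))    # 0.833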

9 Select the Show coding comparison content check box if you want to see the data content where there is agreement or disagreement (see below).

View coding agreement or disagreement

This view shows the text in the scope of the query. Highlighting distinguishes text that was coded by both coders from text that was coded by one coder but not the other.

Check Show coding comparison content to view it.

Reviewing the agreement and disagreement in Coding Comparison results.

1 Select a code or file in the query results and then check Show coding comparison content to display the content pane. Alternatively, double-click the code or file.

Once the coding comparison content pane is open, click on another code or file in the left-hand pane to see its content.

2 Colored highlighting indicates text coded to the currently selected code or case:

  • green: coded by both groups (Agreement)
  • yellow: coded by Group A only (Disagreement)
  • blue: coded by Group B only (Disagreement)

Text that was not coded by either group is displayed as grey text (Agreement).

NOTE  Highlighted areas may include text that was not actually coded if the query used sentences or paragraphs as the unit of comparison. With these settings, a partially coded sentence or paragraph is treated as wholly coded for the comparison.

3 Show or hide highlighting.

4 Use the colored bars in the scroll bar to quickly see where there is agreement and disagreement. Click in the bar or scroll to navigate.

How is percentage agreement calculated?

NVivo calculates percentage agreement for each combination of node or case and file. It also calculates values for each node or case across all the files in a query.

Percentage agreement is the percentage of file content (measured in characters, sentences or paragraphs) on which the two users agree, that is, content that both users coded to a specific node or case plus content that neither of them coded.

For example, in a document with 1000 characters where:

  • 800 characters have not been coded by either user
  • 50 characters have been coded by both users, and
  • 150 characters have been coded by only one user

the percentage agreement is (800 + 50) ÷ 1000 = 0.85, or 85%, because both users 'agree' about 850 of the characters. (Replace 'characters' with 'sentences' or 'paragraphs' if these are selected for comparison.)
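
The same arithmetic as a short Python sketch, using the numbers from the example above:

  # The worked example above as a short sketch (not NVivo code).
  total    = 1000   # characters in the document
  neither  = 800    # coded by neither user
  both     = 50     # coded by both users
  one_only = 150    # coded by only one user

  agreement = (neither + both) / total
  print(f"{agreement:.0%}")   # 85%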

How is the Cohen kappa coefficient calculated?

Cohen’s kappa is widely used to quantify the level of inter-rater agreement between two raters (i.e. coders). The formula calculates the agreement between two coders and then adjusts for agreement that would happen by chance.

The formula is: κ = (P0 − Pe) / (1 − Pe)
where P0 is the observed proportion of agreement between the two coders (equivalent to the 'percentage agreement' calculated by NVivo, expressed as a proportion) and Pe is the probability of chance agreement.

The formula can be illustrated by the following table, where:

  • Pyy is the proportion of content that both coders assigned to a node
  • Pyn is the proportion that Coder A assigned to the node and Coder B did not
  • Pny is the proportion that Coder B assigned to the node and Coder A did not
  • Pnn is the proportion that neither coder coded to the node.

The sum of these proportions is 1: Pyy + Pyn + Pny + Pnn = 1

                              Coder B
                              Assigned node      Did not assign node
Coder A   Assigned node       Pyy                Pyn
          Did not assign node Pny                Pnn

 

The observed agreement is P0 = Pyy + Pnn.

The probability of chance agreement is Pe = (Pyy + Pyn) × (Pyy + Pny) + (Pny + Pnn) × (Pyn + Pnn).
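
The calculation can be sketched in Python. The four proportions below are assumed values chosen to match the percentage-agreement example above (85% agreement); the split of the 150 characters coded by only one user into 100 for Coder A and 50 for Coder B is an assumption made for illustration.

  # A minimal sketch of the kappa calculation above (not NVivo code).
  p_yy = 0.05   # coded by both coders
  p_yn = 0.10   # coded by Coder A only (assumed split)
  p_ny = 0.05   # coded by Coder B only (assumed split)
  p_nn = 0.80   # coded by neither coder

  p0 = p_yy + p_nn                                # observed agreement (0.85)
  pe = ((p_yy + p_yn) * (p_yy + p_ny)
        + (p_ny + p_nn) * (p_yn + p_nn))          # chance agreement (0.78)
  kappa = (p0 - pe) / (1 - pe)

  print(round(kappa, 3))   # 0.318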

 

Interpreting kappa coefficients

If two users are in complete agreement about which content to code in a file, then the kappa coefficient is 1. If there is no agreement other than what could be expected by chance, the kappa coefficient is ≤ 0. Values between 0 and 1 indicate partial agreement.

Different authors have suggested different guidelines for interpreting kappa values, for example (from Xie, 2013):

Landis & Koch (1977)               Altman, DG (1991)                  Fleiss et al (2003)
κ          Strength of agreement   κ          Strength of agreement   κ          Strength of agreement
0.81–1.00  excellent               0.81–1.00  very good               0.75–1.00  very good
0.61–0.80  substantial             0.61–0.80  good                    0.41–0.75  fair to good
0.41–0.60  moderate                0.41–0.60  moderate                < 0.40     poor
0.21–0.40  fair                    0.21–0.40  fair
0.00–0.20  slight                  < 0.20     poor
< 0.00     poor

Kappa vs. percent agreement

Kappa values can be low when percentage agreement is high. For example, if two users code different small sections of a file, leaving most content uncoded, the percentage agreement is high, because there is high agreement on content that should not be coded. But this level of agreement is close to what would be expected by chance (i.e. if the coding had been random), so the kappa coefficient is low.

Conversely, if most of a file is not coded but there is agreement on the content that is coded, then percentage agreement is again high, but now the kappa value, too, is high, because this situation is unlikely to occur by chance.
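
A numeric sketch makes the contrast concrete. The values are assumed for illustration, not NVivo output: in a 1000-character file, suppose each coder codes 50 characters.

  # A sketch of kappa vs. percentage agreement (assumed values, not NVivo code).
  def kappa(p_yy, p_yn, p_ny, p_nn):
      p0 = p_yy + p_nn
      pe = (p_yy + p_yn) * (p_yy + p_ny) + (p_ny + p_nn) * (p_yn + p_nn)
      return (p0 - pe) / (1 - pe)

  # Case 1: the coders code two different 50-character sections (no overlap).
  # Percentage agreement is still 90%, but kappa is roughly zero.
  print(round(kappa(0.00, 0.05, 0.05, 0.90), 3))   # -0.053

  # Case 2: the coders code the same 50 characters.
  # Percentage agreement is 100% and kappa is 1.
  print(round(kappa(0.05, 0.00, 0.00, 0.95), 3))   # 1.0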

All kappa coefficients are 0 or 1

If all the kappa values in a query are 0 or 1, it may indicate that one of the two users being compared has not coded any of the selected files to the selected nodes; that is, you may have selected the wrong files, codes or coders for the query.