Coding comparison query

The coding comparison query compares coding by two users to measure 'inter-rater reliability'—the degree of coding agreement between them. Agreement is measured by two statistical methods:

  • Percentage agreement: the number of content units on which coders agree (to code or not to code), divided by the total number of units—reported as a percentage.
  • Cohen's kappa coefficient: a statistical measure that takes into account the amount of agreement expected by chance—expressed as a decimal in the range –1 to 1 (where values ≤ 0 indicate no agreement, and 1 indicates perfect agreement).

You can select specific nodes (or relationships or sentiments) or cases for comparison, in selected data files, datasets, externals and/or memos. You can also select groups of nodes or files by identifying search folders, sets or classifications they belong to.

It is possible to compare coding between groups of users; however, comparisons are usually made between individual coders. If you compare groups, content counts as coded if at least one member of the group coded it. Coders are identified by their NVivo user profiles (see User profiles).

Both text coding and region coding can be compared. These are treated separately, producing separate results:

  • All coding in documents, datasets, externals, memos, codes and cases is text coding.
  • All coding in picture, audio and video files is region coding, using pixel ranges (rectangles defined by their top-left and bottom-right corners) or timespans. Note: 
    • If you code transcript text in an audio or video file, the timespan of the entire transcript row is treated as having been coded. So, for example, two coders who code different sentences in a row will be reported as in agreement for the entire row timespan.
    • If you code text in the description/notes of a picture file, all the pixels in the picture are treated as having been coded.
  • PDFs can have both region (pixel range) and text coding.

Text coding uses text characters as the unit of comparison.
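
If you want to see how character-level comparison works outside NVivo, the following Python sketch may help. It is a minimal illustration rather than NVivo's own code, and the range format and function names are hypothetical: it reduces two coders' coded character ranges in one document to the four counts that the agreement measures described below are built on.

    # Hypothetical sketch: reduce two coders' coded character ranges in one
    # document to the four per-character counts used by the agreement measures.

    def coded_characters(ranges):
        """Return the set of character positions covered by (start, end) ranges.

        Ranges are half-open: (start, end) covers positions start..end-1.
        """
        positions = set()
        for start, end in ranges:
            positions.update(range(start, end))
        return positions

    def agreement_counts(doc_length, ranges_a, ranges_b):
        """Count characters coded by both, by one coder only, or by neither."""
        coded_a = coded_characters(ranges_a)
        coded_b = coded_characters(ranges_b)
        both = len(coded_a & coded_b)                  # A and B
        only_a = len(coded_a - coded_b)                # A and Not B
        only_b = len(coded_b - coded_a)                # B and Not A
        neither = doc_length - both - only_a - only_b  # Not A and Not B
        return both, only_a, only_b, neither

    # Example: a 1000-character document
    print(agreement_counts(1000, [(0, 120)], [(100, 180)]))  # (20, 100, 60, 820)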

You can save query configuration settings to run the same query at a later time, when more coding has been done. Queries are saved under Search / Queries in the Navigation View.

You cannot save query results in NVivo; however, you can export them to other applications (for example, Excel) and save them there (see Export query results).

Run a coding comparison

  1. On the Explore tab, select Coding Comparison. The Coding Comparison Query dialog box opens.
  2. For User Group A and User Group B, select the users whose coding you want to compare (comparing individual coders is recommended). If you include more than one user in a group, coding carried out by any user in the group counts towards the comparison.
  3. Use the At field to select the nodes and/or cases you want to compare coding for:
    • All Nodes: coding to all nodes and cases in the selected files
    • Selected Nodes: coding to selected nodes and/or cases in the selected files
    • Codes and Cases in Selected Static Sets: all the nodes and cases included in selected sets
    • Cases Assigned to Selected Classifications: all the cases with selected case classifications
    • Codes and Cases in Selected Search Folders: all the nodes and cases included in selected dynamic sets

    NOTE: If you include an aggregate node in the scope of a query, content coded to it or its direct children is included in the results (see Aggregate nodes (gather all content in a parent node)).

  4. In the Scope field, select the files in which you want to compare coding:
    • Files & Externals: all data files and externals, but not memos
    • Selected Items: selected data files, externals and/or memos
    • Items in Selected Folders: all data files, externals and/or memos in selected folders
    • Files, Externals & Memos in Selected Sets: all data files, externals and memos included in selected sets
    • Files Assigned to Selected Classifications: all data files, externals and memos with selected file classifications
    • Files, Externals & Memos in Selected Search Folders: all the data files, externals and memos included in selected search folders
  5. Select Display kappa coefficient and/or percentage agreement to include in the results (you must select at least one).
  6. Select Text coding and/or Region coding for the type(s) of coding you want to compare:
    • All coding in documents, datasets, externals, memos, codes and cases is text coding
    • All coding in picture, audio and video files is region coding, using pixel ranges or timespans
    • PDFs can have both region (pixel range) and text coding.
  7. To save the query settings, check Add to project at the top of the dialog box. Name the query and optionally provide a description.
  8. Click Run.

Query results are displayed in the Detail View (see below).

Coding comparison results

Coding comparison query results.

Results for text and region coding are shown on different tabs in the Detail View.

Each row shows data for one node or case in one file. Results are not shown for a single node or case across all files in the query, nor as an overall value for all nodes or cases and files. You can calculate these yourself by exporting the results data (see Calculating across multiple files and/or nodes).

To view the content that a row in the results table refers to, right-click in the row and select Open Node/Sentiment/Relationship/Case or Open File.

1 The node, sentiment, relationship or case being compared.

2 The name and location of the file in which the node or case was coded.

3 The file size, measured as follows:

  • Documents, datasets, memos and externals = number of characters
  • PDFs = number of pages and number of characters
  • Media files = duration in minutes/seconds/tenths of seconds
  • Pictures = the total number of pixels, expressed as height multiplied by width

4 The kappa coefficient—shown only if you selected Display Kappa Coefficient.

5 The green columns show percentage agreement (shown only if you selected Display percentage agreement):

  • Agreement = the sum of the A and B column and the Not A and Not B column
  • A and B = the percentage of content coded to the selected node by both Group A and Group B
  • Not A and Not B = the percentage of content coded by neither Group A nor Group B

6 The red columns show percentage disagreement (shown only if you selected Display percentage agreement):

  • Disagreement = the sum of the A and Not B column and the B and Not A column
  • A and Not B = the percentage of content coded by Group A and not by Group B
  • B and Not A = the percentage of content coded by Group B and not by Group A

How is percentage agreement calculated?

NVivo calculates percentage agreement for each combination of node or case and file.

Percentage agreement is the percentage of file content (measured in characters, pixels or tenths of seconds) on which the two users agree about whether or not it should be coded to a specific node or case.

For example, in a document with 1000 characters where:

  • 800 characters have not been coded by either user
  • 50 characters have been coded by both users, and
  • 150 characters have been coded by only one user

the percentage agreement is (800 + 50) ÷ 1000 = 85%, because both users 'agree' about 850 of the characters. (For pictures or PDF regions, pixel ranges are used instead of characters, and for media files, tenths of seconds.)
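
As a cross-check, here is the same calculation as a short Python sketch. It is a minimal illustration that assumes the counts above are already known; the function name is hypothetical.

    # Percentage agreement from the worked example above: 50 characters coded
    # by both users, 150 coded by only one user, 800 coded by neither.

    def percentage_agreement(both, only_one, neither):
        total = both + only_one + neither
        return 100.0 * (both + neither) / total

    print(percentage_agreement(both=50, only_one=150, neither=800))  # 85.0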

How is Cohen's kappa coefficient calculated?

Cohen’s kappa is widely used to quantify the level of inter-rater agreement between two raters (i.e. coders). The formula calculates the agreement between two coders and then adjusts for agreement that would happen by chance.

The formula is: κ = (P0 − Pe) / (1 − Pe)
where P0 is the proportion of agreement observed between the two coders (equivalent to the 'percentage agreement' calculated by NVivo, expressed as a proportion) and Pe is the probability of chance agreement.

The formula can be illustrated by the following table, where:

  • Pyy is the proportion of content that both coders assigned to the node
  • Pyn is the proportion that coder A assigned to the node and coder B did not
  • Pny is the proportion that coder B assigned to the node and coder A did not
  • Pnn is the proportion that neither coder assigned to the node.

The sum of these proportions is 1: Pyy + Pyn + Pny + Pnn = 1

                                     Coder B
                                     Assigned node       Did not assign node
Coder A     Assigned node            Pyy                 Pyn
            Did not assign node      Pny                 Pnn

The observed agreement is: P0 = Pyy + Pnn

The probability of chance agreement is: Pe = (Pyy + Pyn) × (Pyy + Pny) + (Pny + Pnn) × (Pyn + Pnn)

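Putting the two formulas together, the short Python sketch below computes kappa from the four proportions. It is again a minimal illustration rather than NVivo's own code, and the example values continue the 1000-character document above under the assumption that, say, 100 of the 150 singly-coded characters were coded only by Group A and 50 only by Group B.

    # Cohen's kappa from the four proportions defined above (they sum to 1).

    def cohens_kappa(p_yy, p_yn, p_ny, p_nn):
        p0 = p_yy + p_nn                          # observed agreement
        pe = ((p_yy + p_yn) * (p_yy + p_ny)
              + (p_ny + p_nn) * (p_yn + p_nn))    # chance agreement
        if pe == 1:   # degenerate case: chance agreement is certain (e.g. nothing coded)
            return 1.0
        return (p0 - pe) / (1 - pe)

    # 1000-character example: 50 coded by both, 100 only by A, 50 only by B,
    # 800 by neither -> percentage agreement 85%, kappa about 0.32
    print(round(cohens_kappa(p_yy=0.05, p_yn=0.10, p_ny=0.05, p_nn=0.80), 3))  # 0.318
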
Interpreting kappa coefficients

If two users are in complete agreement about which content to code in a file, then the kappa coefficient is 1. If there is no agreement other than what could be expected by chance, the kappa coefficient is ≤ 0. Values between 0 and 1 indicate partial agreement.

Different authors have suggested different guidelines for interpreting kappa values, for example (from Xie, 2013):

Landis & Koch (1977)                Altman, DG (1991)                   Fleiss et al (2003)
κ           Strength of agreement   κ           Strength of agreement   κ           Strength of agreement
0.81–1.00   excellent               0.81–1.00   very good               0.75–1.00   very good
0.61–0.80   substantial             0.61–0.80   good                    0.41–0.75   fair to good
0.41–0.60   moderate                0.41–0.60   moderate                < 0.40      poor
0.21–0.40   fair                    0.21–0.40   fair
0.00–0.20   slight                  < 0.20      poor
< 0.00      poor

Kappa vs. percent agreement

Kappa values can be low even when percentage agreement is high. For example, if two users code different small sections of a file, leaving most content uncoded, percentage agreement is high, because there is high agreement on the content that should not be coded. But this much agreement is likely to occur by chance (i.e. even if the coding were random), so the kappa coefficient is low.

Conversely, if most of a file is not coded but there is agreement on the content that is coded, then percentage agreement is again high, but now the kappa value, too, is high, because this situation is unlikely to occur by chance.
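
The contrast can be checked numerically with the cohens_kappa sketch above; the proportions below are illustrative values, not taken from a real project.

    # Scenario 1: the coders code different small sections; most content is
    # uncoded. Percentage agreement is 90%, but kappa is at or below zero.
    print(round(cohens_kappa(p_yy=0.00, p_yn=0.05, p_ny=0.05, p_nn=0.90), 3))  # -0.053

    # Scenario 2: the coders largely agree on the small coded section; most
    # content is uncoded. Percentage agreement is 98% and kappa is also high.
    print(round(cohens_kappa(p_yy=0.05, p_yn=0.01, p_ny=0.01, p_nn=0.93), 3))  # 0.823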

All kappa coefficients are 0 or 1

If all the kappa values in a query are 0 or 1 it may indicate that one of the two users being compared has not coded any of the selected files to the selected nodes, i.e. you may have selected the wrong files, codes, or coders for the query.

If one user’s work has been imported from another project, it may indicate that their coding was not imported. When merging projects with the intention of running coding comparisons, ensure that all documents and codes in the projects (including coding structures) match properly. When configuring the import:

  • select to import 'All' project items, or 'Selected (including content)' with 'Coding' selected
  • select 'Merge into existing item' for duplicate items

See Merge projects or import items from another project

Calculating across multiple files and/or nodes

NVivo calculates percentage agreement and kappa coefficients for each combination of node or case and file. It does not calculate values for a single node or case across all the files in a query's scope, nor overall values for all the nodes/cases in all the files. To get these values, export the results into another application, such as Excel, and calculate them yourself (see Export query results).

Before calculating the additional values, decide how you want to weight the files: treat each file equally, or weight each file according to the amount of codable content it contains (both approaches are illustrated in the sketch after the examples below).

To help understand the calculations, download the Coding Comparison Calculation Examples spreadsheet, which has four worked examples using spreadsheet formulas:

  • Average figures for a single node across 3 files (weighting each file equally)
  • Average figures for a single node across 3 files (weighting each file according to its size)
  • Average figures for 5 nodes across 3 files (weighting each file equally)
  • Average figures for 5 nodes across 3 files (weighting each file according to its size)
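
As a companion to the spreadsheet, the Python sketch below shows one way to compute the equal-weight and size-weighted averages from exported results for a single node across several files. The figures are illustrative, and the spreadsheet's exact formulas may differ.

    # Averaging exported results for one node across several files, weighting
    # each file equally or by its size (amount of codable content).
    # Illustrative values: (file size in characters, percentage agreement, kappa)
    rows = [
        (1000, 85.0, 0.32),
        (4000, 92.0, 0.55),
        (500,  70.0, 0.10),
    ]

    # Equal weighting: a simple mean across files
    n = len(rows)
    equal_pct = sum(pct for _, pct, _ in rows) / n
    equal_kappa = sum(k for _, _, k in rows) / n

    # Size weighting: weight each file by its codable content
    total_size = sum(size for size, _, _ in rows)
    weighted_pct = sum(size * pct for size, pct, _ in rows) / total_size
    weighted_kappa = sum(size * k for size, _, k in rows) / total_size

    print(round(equal_pct, 1), round(equal_kappa, 2))       # 82.3 0.32
    print(round(weighted_pct, 1), round(weighted_kappa, 2)) # 88.7 0.47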