Wednesday, July 29, 2020

Converting a continuous variable into a binary variable, i.e. dichotomising


If you Google "dichotomising data" you will find lots of warnings that this is basically a bad idea. Why so? Because you will lose information: all the fine detail of the differences between observations will be lost.

But what if you are dealing with something like responses to an attitude survey? Typically these use five-point scales ranging from disagree through neutral to agree, or the like. Quite a few of the fine differences in ratings on such a scale may well be nothing more than "noise", i.e. variation unconnected with the phenomenon you are trying to measure. They are more likely to reflect differences in respondents' "response styles", or something more random still.

Aggregation or "binning" of observations into two classes (higher and lower) can be done in different ways. You could simply find the median value and split the observations at that point. Or you could look for a "natural" gap in the frequency distribution and make the split there. Or you may have a prior theoretical reason to split the range of observations at some other specific point.
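For the simplest of these options, the median split, a couple of lines of code are enough. A minimal sketch in Python (pandas assumed; the data and the column name "score" are purely illustrative):

```python
import pandas as pd

# Illustrative data; the column name "score" is just an example
df = pd.DataFrame({"score": [12, 15, 9, 22, 18, 7, 30, 14]})

# Median split: observations above the median are coded 1 ("higher"), the rest 0 ("lower")
median = df["score"].median()
df["score_binary"] = (df["score"] > median).astype(int)
print(df)
```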

I have been trying out a different approach. This involves looking not only at the continuous variable I want to dichotomise, but also at its relationship with an outcome variable that will be of interest in subsequent analyses. That outcome could itself be a continuous variable or a binary measure.

There are two ways of doing this. The first is a relatively simple manual approach, used when the cut-off point for the outcome variable has already been decided, by one means or another. We then vary the cut-off point within the range of values of the independent variable and see what effect this has on the numbers of observations falling above and below the outcome variable's cut-off value. For any specific cut-off value of the independent variable, an Excel spreadsheet can be used to calculate the following:
  1. # of True Positives - where the independent variable value was high and so was the outcome variable value
  2. # of False Positives - where the independent variable value was high but the outcome variable value was low
  3. # of False Negatives - where the independent variable value was low but the outcome variable value was high
  4. # of True Negatives - where the independent variable value was low and the outcome variable value was low
When doing this we are in effect treating the cut-off criterion for the independent variable as a predictor of the dependent variable. Or, more precisely, as a predictor of whether observations have values above a specified cut-off point on the dependent variable.
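The same four counts can be reproduced outside Excel. Below is a minimal Python sketch, with illustrative data and cut-off values; I assume "high" means at or above the cut-off, though a strict inequality would work just as well:

```python
def confusion_counts(x, y, x_cutoff, y_cutoff):
    """Count TP, FP, FN, TN for a given pair of cut-off points.

    'High' is taken to mean at-or-above the cut-off; a strict inequality
    could equally be used.
    """
    tp = fp = fn = tn = 0
    for xi, yi in zip(x, y):
        x_high = xi >= x_cutoff   # independent variable above its cut-off?
        y_high = yi >= y_cutoff   # outcome variable above its cut-off?
        if x_high and y_high:
            tp += 1               # True Positive
        elif x_high and not y_high:
            fp += 1               # False Positive
        elif not x_high and y_high:
            fn += 1               # False Negative
        else:
            tn += 1               # True Negative
    return tp, fp, fn, tn

# Illustrative data: attitude score (independent) and outcome score
x = [3, 5, 2, 4, 5, 1, 4, 2]
y = [10, 40, 15, 35, 50, 5, 30, 20]
print(confusion_counts(x, y, x_cutoff=4, y_cutoff=30))  # -> (4, 0, 0, 4)
```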

In Excel, I constructed the following:
  • Cells for entering the raw data - the values of each variable for each observation
  • Cells for entering the cut-off points
  • Cells for defining the status of each observation (True Positive, False Positive, False Negative, or True Negative)
  • A Confusion Matrix, to summarise the total number of observations with each of the four possible types described above.
  • A set of 6 widely used performance measures, calculated using the number of observations in each cell of the Confusion Matrix.
    • These performance measures tell me how good the chosen cut-off point is as a predictor of the outcome as specified. At best, all observations fitting the cut-off criterion will be in the True Positive group and all those not fitting it will be in the True Negative group. In reality, there are also likely to be observations in the False Positive and False Negative groups.
By varying the cut-off points it is possible to find the best possible predictor, i.e. one with very few False Positives and very few False Negatives. This can be done manually when the cut-off point for the outcome variable has already been decided.
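The post above does not pin down which six performance measures are used, so the sketch below assumes a common set (accuracy, sensitivity, specificity, precision, negative predictive value and F1) and, as an illustration of the manual search, tries every observed value of the independent variable as a candidate cut-off, keeping the one with the highest balanced accuracy. Both the choice of measures and the objective are my assumptions, not the spreadsheet's.

```python
def measures(tp, fp, fn, tn):
    """Six commonly used performance measures from a 2x2 confusion matrix.
    (An assumed set; the original spreadsheet may use a different six.)"""
    def safe(num, den):
        return num / den if den else 0.0
    return {
        "accuracy":    safe(tp + tn, tp + fp + fn + tn),
        "sensitivity": safe(tp, tp + fn),              # true positive rate
        "specificity": safe(tn, tn + fp),              # true negative rate
        "precision":   safe(tp, tp + fp),              # positive predictive value
        "npv":         safe(tn, tn + fn),              # negative predictive value
        "f1":          safe(2 * tp, 2 * tp + fp + fn),
    }

def best_x_cutoff(x, y, y_cutoff):
    """Sweep every observed x value as a candidate cut-off and keep the one
    with the highest balanced accuracy (mean of sensitivity and specificity)."""
    best = None
    for cand in sorted(set(x)):
        tp = sum(1 for xi, yi in zip(x, y) if xi >= cand and yi >= y_cutoff)
        fp = sum(1 for xi, yi in zip(x, y) if xi >= cand and yi < y_cutoff)
        fn = sum(1 for xi, yi in zip(x, y) if xi < cand and yi >= y_cutoff)
        tn = sum(1 for xi, yi in zip(x, y) if xi < cand and yi < y_cutoff)
        m = measures(tp, fp, fn, tn)
        score = (m["sensitivity"] + m["specificity"]) / 2
        if best is None or score > best[0]:
            best = (score, cand, m)
    return best   # (balanced accuracy, cut-off, measures at that cut-off)

# Illustrative use, with the outcome cut-off already fixed at 30
x = [3, 5, 2, 4, 5, 1, 4, 2]
y = [10, 40, 15, 35, 50, 5, 30, 20]
print(best_x_cutoff(x, y, y_cutoff=30))
```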

Alternatively, if the cut-off point for the outcome variable has not been decided, a search algorithm can be used to find the best combination of two cut-off points (one for the independent and one for the dependent variable). Within Excel, there is an add-in called Solver, which uses an evolutionary algorithm to carry out such a search and find the optimal combination.
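For small data sets, the same search can be approximated by brute force rather than an evolutionary algorithm. The sketch below tries every pair of observed values as joint cut-offs; balanced accuracy is again an assumed objective, standing in for whichever performance measure Solver is asked to optimise.

```python
from itertools import product

def best_cutoff_pair(x, y):
    """Exhaustive stand-in for Solver's evolutionary search: try every pair of
    observed (x, y) values as joint cut-offs and keep the pair with the
    highest balanced accuracy (an assumed objective)."""
    best = None
    for x_cut, y_cut in product(sorted(set(x)), sorted(set(y))):
        tp = sum(1 for xi, yi in zip(x, y) if xi >= x_cut and yi >= y_cut)
        fp = sum(1 for xi, yi in zip(x, y) if xi >= x_cut and yi < y_cut)
        fn = sum(1 for xi, yi in zip(x, y) if xi < x_cut and yi >= y_cut)
        tn = sum(1 for xi, yi in zip(x, y) if xi < x_cut and yi < y_cut)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        score = (sens + spec) / 2
        # Several pairs may tie on the score; the first one found is kept
        if best is None or score > best[0]:
            best = (score, x_cut, y_cut)
    return best   # (balanced accuracy, x cut-off, y cut-off)

x = [3, 5, 2, 4, 5, 1, 4, 2]
y = [10, 40, 15, 35, 50, 5, 30, 20]
print(best_cutoff_pair(x, y))
```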

Postscript 2020 11 05: An Excel file with the dichotomisation formula and a built-in example data set is available here.

  
2020 08 13: Also relevant: 

Hofstad, T. (2019). QCA and the Robustness Range of Calibration Thresholds: How Sensitive are Solution Terms to Changing Calibrations? COMPASSS Working Papers, 2019–92. http://www.compasss.org/wpseries/Hofstad2019.pdf  

This paper emphasises the importance of declaring the range of original (pre-dichotomised) values over which the performance of a predictive model remains stable.