Results

Logistic Regression

The behaviour of drivers has been used to claim that people of a higher social class are more unpleasant (Piff et al., 2012). Piff and colleagues classified drivers' social class on a five-point scale based on the status of the car they drove (vehicle) and observed whether the drivers cut in front of other cars at a busy intersection (vehicle_cut). Do a logistic regression to see whether social class predicts whether a driver cut in front of other vehicles (piff_2012_vehicle.jasp).

Model Summary - vehicle_cut

Model   Deviance   AIC       BIC       df    Δχ²     p       McFadden R²   Nagelkerke R²   Tjur R²   Cox & Snell R²
M₀      205.495    207.495   211.108   273                   0.000         0.000           0.000     0.000
M₁      201.334    205.334   212.560   272   4.161   0.041   0.020         0.029           0.016     0.015

Note. M₁ includes vehicle.
Coefficients

Model   Term          Estimate   Standard Error   Odds Ratio   z         Wald Statistic   df   p        95% CI Lower   95% CI Upper
M₀      (Intercept)   -1.954     0.183            0.142        -10.665   113.741          1    < .001   0.099          0.203
M₁      (Intercept)   -3.163     0.662            0.042        -4.779    22.835           1    < .001   0.012          0.155
        vehicle       0.365      0.184            1.441        1.985     3.938            1    0.047    1.005          2.067

Note. z, Wald statistic, df, and p together form the Wald test; the 95% confidence interval is given on the odds ratio scale. vehicle_cut level 'Yes' coded as class 1.

Performance Diagnostics

Confusion matrix

                      Predicted
Observed              No      Yes     % Correct
No                    240     0       100.000
Yes                   34      0       0.000
Overall % Correct                     87.591

Note. The cut-off value is set to 0.5.

In the output above, M₀ refers to the model when only the constant (intercept) is included. In this example, 34 participants cut off other vehicles at the intersection and 240 did not. Of the two available options, it is therefore better to predict that no participant cut off other vehicles, because this yields the greater number of correct predictions. This baseline model makes correct predictions for 240/274 = 87.6% of the observations, and incorrect predictions for 34/274 = 12.4% of the observations.
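
As a quick sanity check, the baseline accuracy can be reproduced from the raw counts; a minimal Python sketch using the frequencies reported above:

```python
# Baseline (intercept-only) accuracy: predict the majority outcome ("No")
# for every driver, using the counts from the confusion matrix above.
n_no, n_yes = 240, 34
n = n_no + n_yes                   # 274 drivers in total
baseline_accuracy = n_no / n       # predict "No" for everyone
print(f"{baseline_accuracy:.1%}")  # 87.6%
```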

At this stage the table labelled Coefficients contains only the constant/intercept for M₀, which has a value of b₀ = -1.954. Because this model contains no predictors, the intercept is simply the natural log of the observed odds of cutting off another vehicle.
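
A short Python check, using the counts from the confusion matrix, reproduces both the estimate and the Odds Ratio column reported for M₀'s intercept:

```python
import math

# With no predictors, the intercept is the log-odds of cutting off a vehicle.
n_no, n_yes = 240, 34
b0 = math.log(n_yes / n_no)   # ln(34/240) ≈ -1.954, matching the table
odds = math.exp(b0)           # ≈ 0.142, the Odds Ratio reported for M0's intercept
print(round(b0, 3), round(odds, 3))
```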


The second row of the Model Summary table above deals with the model (M₁) after the predictor variable has been added. A person is now classified as either cutting off other vehicles at the intersection or not based on the type of vehicle they were driving (as a measure of social status). The output shows summary statistics for this new model. The model fit has improved significantly because the change in deviance is a significant chi-square, χ²(1) = 4.16, p = .041. Therefore, the model that includes the variable vehicle predicted whether or not participants cut off other vehicles at the intersection better than the model that includes only the constant.
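
The Δχ² value is just the drop in deviance from M₀ to M₁, tested on df = 273 − 272 = 1. A minimal sketch, taking the deviances from the Model Summary table and assuming scipy is available:

```python
from scipy.stats import chi2

# Likelihood-ratio test: drop in deviance from M0 to M1 on 1 df.
dev_m0, dev_m1 = 205.495, 201.334   # deviances from the Model Summary table
delta_chi2 = dev_m0 - dev_m1        # 4.161
p = chi2.sf(delta_chi2, df=1)       # ≈ .041, matching the table
print(round(delta_chi2, 3), round(p, 3))
```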


The Confusion matrix indicates how well M₁ predicts group membership. The model correctly classifies all 240 participants who did not cut off other vehicles and misclassifies none of them (i.e. it correctly classifies 100% of these cases). For participants who did cut off other vehicles, the model correctly classifies 0 and misclassifies all 34 cases (i.e. it correctly classifies 0% of these cases). The overall accuracy of classification is, therefore, the weighted average of these two values (87.6%). The accuracy is no different than when only the constant was included in the model.
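
For readers who want to reproduce this table outside JASP, a hedged sketch is below. It assumes the data have been exported to a CSV (the file name here is hypothetical) with a numeric vehicle column and a 0/1 vehicle_cut column:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Fit the same logistic regression and rebuild the confusion matrix
# at the 0.5 cut-off. File name and column coding are assumptions.
df = pd.read_csv("piff_2012_vehicle.csv")              # hypothetical export
m1 = smf.logit("vehicle_cut ~ vehicle", data=df).fit()
predicted = (m1.predict(df) >= 0.5).astype(int)        # classify at cut-off 0.5
print(pd.crosstab(df["vehicle_cut"], predicted,
                  rownames=["Observed"], colnames=["Predicted"]))
```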


The Coefficients table shows that the p-value for the Wald test of vehicle is .047 (Wald statistic = 3.938), which is less than .05. Therefore, we can conclude that the status of the vehicle the participant was driving significantly predicted whether or not they cut off another vehicle at the intersection. However, I'd interpret this significance in the context of the confusion matrix, which showed us that adding vehicle as a predictor did not result in any additional cases being correctly classified.
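
The pieces of the Wald test fit together as follows (coefficient and standard error taken from the Coefficients table; small discrepancies arise because the table is computed from unrounded values):

```python
from scipy.stats import norm

# Wald test for vehicle: z = b / SE, and the Wald statistic is z squared.
b, se = 0.365, 0.184
z = b / se               # ≈ 1.98 (table: 1.985)
wald = z ** 2            # ≈ 3.94 (table: 3.938)
p = 2 * norm.sf(abs(z))  # two-sided p ≈ .047
print(round(z, 2), round(wald, 2), round(p, 3))
```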


The odds ratio is the change in the odds of the outcome resulting from a unit change in the predictor. In this example, the odds ratio for vehicle in M₁ is 1.441. Because this is greater than 1, it indicates that as the predictor (vehicle status) increases, the odds of the outcome increase; that is, drivers of higher-status vehicles were more likely to cut off another vehicle (coded 1) than not (coded 0). The 95% confidence interval ranges from 1.005 to 2.067, which only just excludes 1, mirroring the marginal significance of the Wald test.
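
The odds ratio and its confidence interval come from exponentiating the coefficient and its normal-theory interval b ± 1.96 × SE; a sketch using the table values (tiny discrepancies are rounding):

```python
import math

# Odds ratio and 95% CI on the odds ratio scale.
b, se = 0.365, 0.184
odds_ratio = math.exp(b)          # ≈ 1.441
lower = math.exp(b - 1.96 * se)   # ≈ 1.004 (table: 1.005, rounding)
upper = math.exp(b + 1.96 * se)   # ≈ 2.066 (table: 2.067, rounding)
print(round(odds_ratio, 3), round(lower, 3), round(upper, 3))
```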


While the inclusion of vehicle seems to have significantly improved the model fit, the model has not improved in classification accuracy, and if we look at the BIC (which penalizes additional predictors more heavily than the deviance does), it has actually increased as a result of adding the vehicle that someone is driving. This example highlights why it is a good idea to look at a variety of model fit metrics (χ², BIC, odds ratio, confusion matrix): they do not agree here, which makes the inference trickier, and I would be hesitant to make any strong claims based on these data.
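
To see why the BIC moves in the opposite direction to the deviance, both information criteria can be rebuilt from the table values (k is the number of estimated parameters, n = 274 drivers):

```python
import math

# AIC = deviance + 2k; BIC = deviance + k * ln(n).
n = 274
for label, deviance, k in [("M0", 205.495, 1), ("M1", 201.334, 2)]:
    aic = deviance + 2 * k
    bic = deviance + k * math.log(n)
    print(label, round(aic, 3), round(bic, 3))
# M1's BIC (212.560) exceeds M0's (211.108): the ln(274) ≈ 5.6 penalty for the
# extra parameter outweighs the 4.161 drop in deviance.
```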