WHY DO REGRESSION COEFFICIENTS HAVE THE "WRONG" SIGN?

Often, a coefficient in a multiple regression model has a sign that is contrary to our expectations. Here are some reasons why this can happen. 

1. PARTIAL RELATIONSHIPS ARE DIFFERENT FROM MARGINAL RELATIONSHIPS

    The interpretation of a parameter is entirely dependent upon the model in which the parameter appears.  If you have the "wrong" sign, you may not be thinking clearly about the "held fixed" meaning of the parameters.  Here is an example, taken directly from

Rinott, Y. and Tam, M. (2003), "Monotone Regrouping, Regression, and Simpson's Paradox," The American Statistician, 57, 139-141.

Here Y = SAT score, X1 = High School GPA, X2 = Time (1992 - 2002).  The data (exaggerated) look like this:


SAT     |               3    4 4
        |                3      4  4
        |      1       2  3  3       4
        |              2  2       4 3
        |     1   1    1  2        3
        |             1      22
        |        1 1
        |       1      1
         ------------------------------------------
             1990            2000              Time


(The plotting symbol is the value of X1, in grade points: 1 = D, 2 = C, 3 = B, 4 = A.)

While SATs are generally increasing over time, they are decreasing within each GPA stratum, as evidenced by the decreasing pattern within each of the groups plotted as 1, 2, 3, and 4.  Rinott and Tam argued that this discrepancy is caused by grade inflation.

The sign of the estimate of b2 in the multiple regression model SAT = b0 + b1GPA + b2Time + e will be negative, and might seem "wrong", but it is actually correct when you think about the "held fixed" meaning (specifically, holding GPA fixed).  In other words, the "partial" relationship between SAT and Time is a decreasing relationship.

On the other hand, the sign of the estimate of b1 in the simple regression model SAT = b0 + b1Time + e will be positive, reflecting the generally increasing trend.  (Note: This regression model depicts the "marginal" relationship between SAT and Time, and shows it to be an increasing relationship.)

"Simpson's paradox" refers to the reversal of signs of directional associations that sometimes occurs when data are aggregated.  Here, in the GPA-defined subgroups, we see negative trends.  However, in the aggregate data, we see a positive trend.


Here is some SAS code that generates and graphs data that align more closely with Rinott and Tam's findings than the exaggerated picture above.

 

data plot;
   do t = 1990 to 2010;
      do student = 1 to 1000;
         /* SAT drifts upward slowly over time, with a lot of noise        */
         sat = round(1000 + (t-1990)/1.6 + 150*rannor(12321), 10);
         /* GPA depends on both SAT and time: later cohorts get higher
            grades for the same SAT score (grade inflation)                */
         chk = (sat/1600) + (t-1990)/10 + .3*rannor(0);
         if chk > 1.6      then grade = 4;
         else if chk > 1.2 then grade = 3;
         else if chk > 1   then grade = 2;
         else                   grade = 1;
         year = t;
         output;
      end;
   end;
run;
proc reg data=plot;
   one: model sat = year;          /* marginal: slope on year is positive  */
   two: model sat = year grade;    /* partial:  slope on year is negative  */
run; quit;

proc sgplot data=plot;
   scatter y=sat x=year;
   reg y=sat x=year;               /* marginal fit: increasing             */
run;

proc sgpanel data=plot;
   panelby grade / rows=4 columns=1;
   scatter y=sat x=year;
   reg y=sat x=year;               /* within-grade fits: decreasing        */
run;




2. THE VARIABLE IN QUESTION IS A PROXY FOR ANOTHER VARIABLE

    In some cases the variable in question may be highly correlated with another variable that has been excluded from the analysis, and it may be this excluded variable that is causing the unexpected sign.  Think of the ice cream/drowning example, where both quantities rise with temperature, and temperature is the excluded variable.

Such an excluded variable is also known as a "confounding variable."  The wrong sign might be attributable to an excluded confounding variable.
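
Here is a minimal SAS sketch of this situation (the variable names and numbers are hypothetical, not taken from any real data).  The true effect of x on y is positive, but x is largely a proxy for an excluded variable z that has a strong negative effect on y, so the estimated coefficient on x comes out negative when z is left out of the model.

data proxy;
   call streaminit(2718);
   do i = 1 to 2000;
      z = rand('normal');                /* excluded confounding variable  */
      x = 0.9*z + 0.3*rand('normal');    /* x is largely a proxy for z     */
      y = 2*x - 5*z + rand('normal');    /* true effect of x on y is +2    */
      output;
   end;
run;

proc reg data=proxy;
   omitz:    model y = x;        /* coefficient on x comes out near -3     */
   includez: model y = x z;      /* coefficient on x comes out near +2     */
run; quit;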


3. MULTICOLLINEARITY

    Multicollinearity causes inflated standard errors, which in turn make it more likely that an estimated coefficient lands on the "wrong" side of zero.  However, I would not suspect multicollinearity if the wrong-signed coefficient were significantly different from zero, since inflated standard errors make significance harder, not easier, to achieve.
    Recall also that multicollinearity strains the interpretation of the parameters.  Perhaps one should not emphasize parameter interpretation in this case, much in the same way that we should not attempt to interpret an intercept when X = 0 lies outside the range of the data.
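
As a rough illustration (made-up data, not from any real study), the following SAS sketch simulates two nearly collinear predictors, both of which truly have positive effects.  The VIF option in PROC REG shows the variance inflation; with standard errors this large, an estimated coefficient can easily land on the wrong side of zero.

data collinear;
   call streaminit(1123);
   do i = 1 to 100;
      x1 = rand('normal');
      x2 = x1 + 0.05*rand('normal');               /* corr with x1 ~ 0.999 */
      y  = 1 + 0.5*x1 + 0.5*x2 + rand('normal');   /* both true slopes +0.5 */
      output;
   end;
run;

proc reg data=collinear;
   model y = x1 x2 / vif;     /* huge VIFs, very wide confidence intervals */
run; quit;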


4. NONLINEARITY

If the true relationship is nonlinear, fitting a straight line can bias the estimated coefficients enough to change their signs.  You see this most often with the intercept term, as in the sketch below.
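
Here is a small SAS sketch (hypothetical numbers) in which the true relationship is quadratic and y is positive everywhere, yet the straight-line fit produces a clearly negative intercept; adding the squared term removes the problem.

data curved;
   call streaminit(42);
   do i = 1 to 500;
      x  = 2 + 8*rand('uniform');        /* x between 2 and 10             */
      y  = 0.5*x**2 + rand('normal');    /* true curve, y always positive  */
      x2 = x**2;                         /* squared term for PROC REG      */
      output;
   end;
run;

proc reg data=curved;
   linear:    model y = x;      /* intercept estimate is strongly negative */
   quadratic: model y = x x2;   /* intercept estimate is near zero         */
run; quit;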

 

5. IMPROPER INTERPRETATION OF PARAMETERS

The interpretation of a parameter is entirely dependent upon the model in which the parameter appears.  For example:

In the model E(Y) = b0 + b1(1/X), where X has been transformed to 1/X, b0 is the limiting mean of Y as X goes to infinity, not the mean of Y when X is 0.

In the interaction model E(Y) = b0 + b1X1 + b2X2 + b3X1X2, the coefficient b1 is NOT the overall effect of X1; it is the effect of X1 when X2 = 0 (see the sketch below).

If you think the sign is "wrong," it might be that you have simply misinterpreted the meaning of the parameter. 
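
To illustrate the interaction point, here is a SAS sketch with made-up coefficients in which the slope of x1 is positive at x2 = 0 but negative at typical values of x2.  Simply centering x2 changes the sign of the estimated b1 without changing the fitted model, because b1 always refers to the slope of x1 at the point where the other variable equals zero.

data interact;
   call streaminit(99);
   do i = 1 to 500;
      x1    = rand('normal');
      x2    = 50 + 10*rand('normal');    /* x2 = 0 is far outside the data */
      y     = 1 + 2*x1 + 0.1*x2 - 0.06*x1*x2 + rand('normal');
      x2c   = x2 - 50;                   /* centered version of x2         */
      x1x2  = x1*x2;                     /* interaction terms for PROC REG */
      x1x2c = x1*x2c;
      output;
   end;
run;

proc reg data=interact;
   raw:      model y = x1 x2  x1x2;    /* b1 near +2: slope of x1 at x2=0  */
   centered: model y = x1 x2c x1x2c;   /* b1 near -1: slope of x1 at x2=50 */
run; quit;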

=======================================================

Note: even with "incorrect" signs, the model may still be useful for prediction in the region of X-values from which it was built; that is, it is still useful as a predictive model as long as you don't extrapolate beyond the region of the data.  This comment applies if you don't care whether the model reflects the underlying science, and you only want a decent prediction of Y.  In other words, if your goal is simply to get a high R2, and not to reveal the underlying structure of the data-generating process, then you don't care about the signs of the parameters.  On the other hand, some past students who construct predictive models for a living in the "real world" have told me that if the model can't be explained to management, then management won't use it, so they wanted the "right" signs.