Often, the impact of an assumption violation on the multiple linear regression result depends on the extent of the violation (such as how inconstant the variance of Y is, or how skewed the Y population distribution is). Some small violations may have little practical effect on the analysis, while other violations may render the multiple linear regression result uselessly incorrect or uninterpretable.
A "new" X variable might be derived from one or more X variables already in the equation, such as using the square of X1 along with X1 to handle curvature in X1, or adding X1*X2 as a new variable to handle interaction between X1 and X2.
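As a minimal sketch of this idea (with simulated data, since no data set accompanies this page), the derived columns X1^2 and X1*X2 can simply be appended to the design matrix before fitting by least squares:

```python
import numpy as np

# Hypothetical data: Y has curvature in X1 and an X1*X2 interaction
# (noise-free here, so the fit should recover the coefficients exactly).
rng = np.random.default_rng(0)
x1 = rng.uniform(0, 5, 50)
x2 = rng.uniform(0, 5, 50)
y = 1.0 + 2.0 * x1 + 0.5 * x1**2 - 1.5 * x2 + 0.8 * x1 * x2

# Design matrix with the "new" derived columns: X1^2 and X1*X2.
X = np.column_stack([np.ones_like(x1), x1, x1**2, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 3))  # recovers [1.0, 2.0, 0.5, -1.5, 0.8]
```

The fit is still an ordinary multiple linear regression; "linear" refers to linearity in the coefficients, not in the X variables themselves.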
If an implicit X variable is not included in the fitted model, the fitted estimates for the coefficients may be biased, and not very meaningful, and the fitted Y values may not be accurate.
Another possible cause of apparent dependence between the Y observations is the presence of an implicit block effect. (The block effect can be considered another type of implicit X variable, albeit a discrete one.) If a blocking variable is suspected, an analysis of covariance can be performed, essentially dividing the data into different regression equations based on the value of the blocking variable.
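One simple version of this approach can be sketched by including the blocking variable as a 0/1 indicator column, which gives each block its own intercept while sharing a common slope (the data below are simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 60)
block = np.repeat([0, 1], 30)      # hypothetical two-level blocking variable
y = 1.0 + 0.5 * x + 3.0 * block    # the block shifts the intercept by 3

# Include the block as an indicator column: a simple covariance analysis
# with a common slope and block-specific intercepts.
X = np.column_stack([np.ones_like(x), x, block])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 3))  # recovers [1.0, 0.5, 3.0]
```

If the slope is also suspected of differing between blocks, a block-by-X interaction column can be added in the same way.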
If multiple values of Y are collected at the same values of X, this can act as another type of blocking, with the unique combinations of values of the Xs acting as blocks. These multiple Y measurements may be less variable than the overall variation in Y, and, given their common values of the Xs, they are not truly independent of each other. If there are many replicated X values, and if the variation between Y at replicated values is much smaller than the overall residual variance, then the variance of the estimate of the coefficients may be too small, making the test of whether they are 0 (and the test of the goodness of the overall fit) anticonservative (more likely than the stated significance level to reject the null hypothesis, even when it is true). In this case, an alternative method is to replace each replicated unique combination of X values by a single data point with the average Y value, and then perform the regression analysis with the new data set. A possible drawback to this method is that by reducing the number of data points, the degrees of freedom associated with the residual error are reduced, thus potentially reducing the power of the test.
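The averaging step can be sketched as follows (with a small hypothetical data set in which three X combinations are replicated):

```python
import numpy as np

# Hypothetical data: replicated Y measurements at repeated (X1, X2) points.
X = np.array([[1, 2], [1, 2], [1, 2], [3, 4], [3, 4], [5, 6]])
y = np.array([2.0, 2.2, 1.8, 4.1, 3.9, 6.0])

# Collapse each unique X combination to a single point with the mean Y.
uniq, inverse = np.unique(X, axis=0, return_inverse=True)
y_mean = np.array([y[inverse == i].mean() for i in range(len(uniq))])
print(uniq)     # the 3 unique X rows
print(y_mean)   # [2.0, 4.0, 6.0]
```

The regression would then be run on `uniq` and `y_mean` in place of the original six data points.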
For serially correlated error terms, the estimates of the coefficients will be unbiased, but the estimates of their variances will not be reliable. If they are positively serially correlated, the estimate of residual variance and the estimates of the variances of the coefficients may all be too small, making the tests and confidence intervals that involve them unreliable. This kind of serial correlation may appear when there are one or more implicit X variables.
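A common check for serial correlation in the residuals is the Durbin-Watson statistic; a minimal sketch (with simulated residual series, since no data accompany this page) is:

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic: values near 2 suggest no serial
    correlation; values well below 2 suggest positive serial correlation."""
    d = np.diff(residuals)
    return np.sum(d**2) / np.sum(residuals**2)

# Independent noise vs. a strongly positively correlated series (random walk).
rng = np.random.default_rng(1)
independent = rng.normal(size=200)
correlated = np.cumsum(rng.normal(size=200))
print(round(durbin_watson(independent), 2))  # close to 2
print(round(durbin_watson(correlated), 2))   # well below 2
```

In practice the statistic would be computed on the residuals from the fitted regression, and compared against tabled Durbin-Watson bounds for a formal test.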
If you are unsure whether your Y values are independent, you may wish to consult a statistician or someone who is knowledgeable about the data collection scheme you are using.
In cases of severe multicollinearity, it may not be possible to calculate some of the diagnostic measures of influence or leverage, or even to perform the fit itself. In such cases, the data are said to be ill-conditioned.
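One numerical symptom of ill-conditioning is a very large condition number for the design matrix, as this sketch with simulated predictors illustrates (the near-duplicate column is contrived to force the problem):

```python
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.normal(size=100)
x2 = x1 + 1e-8 * rng.normal(size=100)  # nearly an exact copy of x1

X_ok = np.column_stack([np.ones(100), x1, rng.normal(size=100)])
X_bad = np.column_stack([np.ones(100), x1, x2])

# A huge condition number flags a nearly collinear (ill-conditioned) design.
print(f"{np.linalg.cond(X_ok):.1f}")   # modest
print(f"{np.linalg.cond(X_bad):.2e}")  # astronomically large
```

With a design like `X_bad`, coefficient estimates for x1 and x2 become numerically unstable, and some diagnostic quantities may fail to compute at all.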
Once the regression line has been fitted, the boxplot and normal probability plot (normal Q-Q plot) for residuals may suggest the presence of outliers in the data. After the fit, outliers are usually detected by examining the residuals or the high-leverage points.
The method of least squares involves minimizing the sum of the squared vertical distances between each data point and the fitted line. Because of this, the fitted line can be highly sensitive to outliers. (In other words, least squares regression is not resistant to outliers, and thus, neither are the fitted coefficient estimates.) An outlier may act as a high-leverage point, distorting the fitted equation and perhaps fitting the main body of the data poorly.
If you find outliers in your data that are not due to correctable errors, you may wish to consult a statistician as to how to proceed.
For data from a normal distribution, normal probability plots should approximate straight lines, and boxplots should be symmetric (median and mean together, in the middle of the box) with no outliers. Except for substantial nonnormality that leads to outliers in the X-Y data, if the number of data points is not too small, then the multiple linear regression statistic will not be much affected even if the population distributions are skewed.
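The "straight line" criterion for a normal probability plot can be quantified with the correlation between the residuals and their normal scores, which `scipy.stats.probplot` reports; a sketch with simulated residuals (one normal set, one skewed set) is:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
normal_resid = rng.normal(size=200)
skewed_resid = rng.exponential(size=200)  # right-skewed, non-normal

# probplot's r statistic is the correlation between the ordered residuals
# and the corresponding normal quantiles; values near 1 indicate an
# approximately straight normal probability plot.
(_, (_, _, r_norm)) = stats.probplot(normal_resid)
(_, (_, _, r_skew)) = stats.probplot(skewed_resid)
print(round(r_norm, 3), round(r_skew, 3))
```

In practice this would be applied to the residuals from the fitted regression, alongside the visual inspection of the plot itself.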
Robust statistical tests operate well across a wide variety of distributions. A test can be robust for validity, meaning that it provides P values close to the true ones in the presence of (slight) departures from its assumptions. It may also be robust for efficiency, meaning that it maintains its statistical power (the probability that a true violation of the null hypothesis will be detected by the test) in the presence of those departures. Linear regression is fairly robust for validity against nonnormality, but it may not be the most powerful test available for a given nonnormal distribution, although it is the most powerful test available when its test assumptions are met. In the case of nonnormality, a non-least-squares regression method, or employing a transformation of one or more X variables may result in a more powerful test.
Unless the heteroscedasticity of the Y is pronounced, its effect will not be severe: the least squares estimates will still be unbiased, and the estimates of the coefficients will either be normally distributed if the errors are normally distributed, or at least normally distributed asymptotically (as the number of data points becomes large) if the errors are not normally distributed. The estimate for the variance of the coefficients will be inaccurate, but the inaccuracy is not likely to be substantial if the X values are symmetric about their means.
Heteroscedasticity of Y is usually detected informally by examining the X-Y scatterplots of the data before performing the regression. If both nonlinearity and unequal variances are present, employing a transformation of Y may have the effect of simultaneously improving the linearity and promoting equality of the variances. Otherwise, a weighted least squares multiple linear regression may be the preferred method of dealing with nonconstant variance of Y.
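A minimal sketch of weighted least squares (with simulated data whose error standard deviation grows with X, and assuming weights of 1/variance are known or can be modeled) is:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(1, 10, 200)
# Heteroscedastic errors: the error standard deviation grows with x.
y = 3.0 + 2.0 * x + rng.normal(scale=0.5 * x)

X = np.column_stack([np.ones_like(x), x])
# Weight each point by 1/variance (here 1/x^2, on the assumption that
# the error SD is proportional to x), via the usual sqrt-weight transform.
w = np.sqrt(1.0 / x**2)
coef, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
print(np.round(coef, 2))  # near [3.0, 2.0]
```

The choice of weights is itself an assumption about how the variance changes with the Xs, and in practice is often checked against plots of the residuals.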
If the linear model is not correct, the shape of the general trend of the X-Y plot may suggest the appropriate function to fit (e.g., a polynomial, exponential, or logistic function). Alternatively, the plot may suggest a reasonable transformation to apply. For example, if the X-Y plot arcs from lower left to upper right so that data points either very low or very high in X lie below the straight line suggested by the data, while the data points with middling X values lie on or above that straight line, taking square roots or logarithms of the X values may promote linearity.
If the assumption of equal variances for the Y is correct, the plot of the observed Y values against X should suggest a band across the graph with roughly equal vertical width for all values of X. (That is, the shape of the graph should suggest a tilted cigar and not a wedge or a megaphone.)
A fan pattern like the profile of a megaphone, with a noticeable flare either to the right or to the left, suggests that the variance in the values increases in the direction the fan pattern widens (usually as the sample mean increases), and this in turn suggests that a transformation of the Y values may be needed.
Unfortunately, simple X-Y plots may not be as useful in multiple regression as they are for simple linear regression. If there is multicollinearity, then that can cause the plots of Y against individual X values to be misleading. For example, the apparent increase in variance for Y as X1 increases might be due to the effect of other X variables on Y.
If the ratio of the total number of coefficients (including the intercept) to the total number of data points is greater than 0.4, it will often be difficult to fit a reliable model. Many of the individual data points may become influential points, because there is so little information (data) available for each coefficient to be fitted.
A rule of thumb is to aim to have the number of data points be at least 6 times, and ideally at least 10 times, the number of X variables.
Even if none of the test assumptions are violated, a linear regression on a small number of data points may not have sufficient power to detect a significant difference between a coefficient and 0, even if the coefficient is non-zero. The power depends on the residual error, the observed variation in X, the selected significance (alpha-) level of the test, and the number of data points. Power decreases as the residual variance increases, decreases as the significance level is decreased (i.e., as the test is made more stringent), increases as the variation in observed X increases, and increases as the number of data points increases. If a statistical significance test with a small number of data values produces a surprisingly non-significant P value, then lack of power may be the reason. The best time to avoid such problems is in the design stage of an experiment, when appropriate minimum sample sizes can be determined, perhaps in consultation with a statistician, before data collection begins.
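These qualitative statements about power can be checked by simulation; the sketch below (entirely hypothetical: a single X, a true slope of 0.5, unit error SD, and a rough |t| > 2 rejection rule standing in for an exact two-sided 0.05-level test) estimates how often the slope test rejects at two sample sizes:

```python
import numpy as np

def sim_power(n, beta=0.5, sigma=1.0, reps=500, seed=5):
    """Estimated power: fraction of simulated data sets in which the
    test of H0: slope = 0 rejects, for a true slope of beta."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = rng.uniform(0, 1, n)
        y = beta * x + rng.normal(scale=sigma, size=n)
        # slope estimate and its standard error for simple linear regression
        xc = x - x.mean()
        b = np.sum(xc * y) / np.sum(xc**2)
        resid = y - y.mean() - b * xc
        se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum(xc**2))
        # |t| > 2 roughly approximates a two-sided test at the 0.05 level
        if abs(b / se) > 2:
            rejections += 1
    return rejections / reps

p_small = sim_power(10)
p_large = sim_power(200)
print(p_small, p_large)  # small sample: low power; large sample: much higher
```

The same simulation approach, with the planned design and a plausible effect size plugged in, is one way to choose a minimum sample size at the design stage.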
©1997 BBN Corporation All rights reserved.