# 4.4.9: Regression


From the menu bar, click on “Analyze,” then on “Regression,” and on “Linear….” In the left window, select the dependent variable you wish to analyze, and click on the top right arrow.  Then select your independent variables, and click on the second right arrow.
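The calculation behind that dialog can be sketched in Python. This is a minimal illustration of ordinary least squares for one independent variable, not SPSS itself; the variable names (`hours_studied`, `exam_score`) and the data are invented for the example.

```python
# Fit y = a + b*x by ordinary least squares, the same model the
# "Analyze > Regression > Linear" dialog estimates for one predictor.
# (Hypothetical variables and data, for illustration only.)

def ols_simple(x, y):
    """Return (intercept, slope) for a simple linear regression."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Sum of cross-products and sum of squares around the means
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

hours_studied = [1, 2, 3, 4, 5]      # independent variable
exam_score = [52, 55, 61, 64, 68]    # dependent variable
a, b = ols_simple(hours_studied, exam_score)
print(f"score = {a:.2f} + {b:.2f} * hours")  # → score = 47.70 + 4.10 * hours
```

The intercept printed here is what the SPSS output labels “Constant.”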

You can use “Selection Variable” in much the same way as “Select Cases.”  If you wish to include only some cases in the analysis, select the variable you will use as a filter, click on the third right arrow, click on “Rule…,” define the rule you wish to use to select cases for analysis, and click on “Continue.”
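The effect of a selection rule can be sketched as a simple filter: only cases whose filter variable satisfies the rule enter the analysis. The field names and rule below (`gender == 1`) are hypothetical.

```python
# A sketch of "Selection Variable": keep only the cases that satisfy
# the rule defined in the "Rule..." dialog. (Invented data and rule.)

cases = [
    {"gender": 1, "income": 42000},
    {"gender": 2, "income": 39000},
    {"gender": 1, "income": 51000},
]

selected = [c for c in cases if c["gender"] == 1]  # the "Rule"
print(len(selected))  # 2 cases enter the analysis
```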

REGRESSION provides three alternatives for handling missing data.  Listwise deletion means that if a case has missing data for any of the variables in the correlation matrix, it is deleted from all calculations.  This ensures that all coefficients are based on the same cases, but it eliminates a case from all calculations even if the case is missing data for only one or two variables in the correlation matrix.  Another option is pairwise deletion, in which each correlation is based on all cases with non-missing values for the two variables in question.  This has the advantage of using as much information as possible for the calculation of each coefficient.  The disadvantage is that the coefficients may not be based on the same subset of cases, since different cases may be missing data for different variables.  A final alternative is to substitute the mean for any missing value, which may or may not make sense depending on how your data are structured.  If you have only a small proportion of missing data, the choice of option makes little difference.  If you have a lot of missing data, no option is very satisfactory.  The default option for REGRESSION is listwise deletion.  If you wish to use either of the alternatives just described, click on “Options” and select the option you prefer.
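The difference between listwise and pairwise deletion can be shown on a toy dataset. This Python sketch (invented variable names; `None` marks a missing value) counts how many cases each option would use.

```python
# Listwise vs. pairwise deletion on a toy dataset with missing values.
# (Hypothetical variables x, y, z; None marks a missing value.)
data = {
    "x": [1.0, 2.0, None, 4.0, 5.0],
    "y": [2.0, None, 3.0, 5.0, 6.0],
    "z": [1.0, 1.0, 2.0, 2.0, 3.0],
}

# Listwise deletion: a case missing *any* variable is dropped everywhere.
complete = [i for i in range(5)
            if all(data[v][i] is not None for v in data)]

# Pairwise deletion: each correlation uses every case that is complete
# for *that pair*, so different coefficients may rest on different cases.
def pair_cases(a, b):
    return [i for i in range(5)
            if data[a][i] is not None and data[b][i] is not None]

print(len(complete))              # 3 cases survive listwise deletion
print(len(pair_cases("x", "z")))  # but 4 cases are usable for the x-z pair
```

Note that the x–z correlation gains a case under pairwise deletion, which is exactly why its coefficient may rest on a different subset of cases than the x–y coefficient.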

“Options” can also be used to save residual scores as a new variable.

Click on “Continue” (if you have not already done so) and on “OK.”

Output will include the statistical significance (“Sig.”) of the overall equation and of each term on the right-hand side of the equation.  If the significance level is given as “.000,” this does not mean that there is a zero probability of the relationship occurring by chance; rather, it means that the probability is less than .0005.  Note also that the significance levels given are for “two-tailed” tests, that is, for hypotheses that predict a relationship but do not specify whether the relationship is positive or negative.  When a hypothesis correctly predicts the direction of the relationship, a “one-tailed” test is appropriate.  The significance level (the risk that the relationship is due to chance) for a one-tailed test is half that of a two-tailed test.  For example, if the two-tailed probability that the relationship is due to chance is .04, the one-tailed probability is only .02.
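The two-tailed-to-one-tailed conversion above amounts to a single division, sketched here in Python (applicable only when the observed relationship is in the predicted direction):

```python
# Convert SPSS's two-tailed "Sig." to a one-tailed significance level,
# valid when the hypothesis correctly predicted the direction.
def one_tailed(p_two_tailed):
    return p_two_tailed / 2

print(one_tailed(0.04))  # 0.02, matching the example in the text
```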

The “Constant” in the output is the “a” coefficient of the regression equation (the Y-intercept).

4.4.9: Regression is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.