Describe the solution for regression weights for raw scores using matrix algebra.
Describe the solution for standardized regression weights from a correlation matrix using matrix algebra.
Describe the sampling distributions of the b and beta weights.
What is the meaning of the covariance or correlation matrix of the b weights? How is it used?

In raw score form the regression equation is:

$Y = a + b_1 X_1 + b_2 X_2 + \dots + b_k X_k + e$

This says that Y, our dependent variable, is composed of a linear part and error. The linear part is composed of an intercept, $a$, and $k$ independent variables, $X_1, \dots, X_k$, along with their associated raw score regression weights $b_1, \dots, b_k$.

In matrix terms, the same equation can be written:

$y = Xb + e$

This says that to get Y for each person, we multiply each $X_i$ by the appropriate $b_i$, add them up, and then add error.

With raw scores, we create an augmented design matrix X, which has an extra column of 1s in it for the intercept. For each person, that 1 multiplies the intercept, which sits in the first row of the column vector b.

If we solve for the b weights, we find that

$b = (X'X)^{-1} X'y$

To give you an idea why the solution looks like that, first remember the regression equation:

$y = Xb + e$

To sketch a proof, let's assume that error will equal zero on average and drop it:

$y = Xb$

Now we want to solve for b, so we need to get rid of X. Because X is not square it has no inverse, so we premultiply both sides by $X'$ to get the square matrix $X'X$, and then premultiply by $(X'X)^{-1}$ to isolate b.
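To make the matrix solution concrete, here is a minimal NumPy sketch (not part of the original notes) that builds the augmented design matrix with its leading column of 1s and computes $b = (X'X)^{-1} X'y$; the data values and variable names are made up for illustration.

```python
import numpy as np

# Made-up data: n = 6 people, k = 2 raw-score predictors.
X_raw = np.array([
    [2.0, 1.0],
    [3.0, 4.0],
    [5.0, 2.0],
    [7.0, 5.0],
    [8.0, 7.0],
    [9.0, 6.0],
])
y = np.array([4.0, 7.0, 8.0, 12.0, 15.0, 16.0])

# Augmented design matrix: an extra column of 1s so that the first
# element of b is the intercept a.
X = np.column_stack([np.ones(X_raw.shape[0]), X_raw])

# Normal-equation solution b = (X'X)^{-1} X'y.
b = np.linalg.inv(X.T @ X) @ X.T @ y
print("intercept and raw-score b weights:", b)

# A numerically safer cross-check that avoids forming the inverse.
b_check, *_ = np.linalg.lstsq(X, y, rcond=None)
print("lstsq check:", b_check)
```

The first element of the printed b is the intercept a and the remaining elements are the raw-score regression weights; in practice np.linalg.lstsq (or np.linalg.solve applied to X'X) is preferred to explicitly inverting X'X.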