Welcome. Econometrics uses vectors and matrices a lot. In this lecture, I tell you why and show how you can make calculations with matrices.

In econometrics, we examine relations between economic variables. For example, we relate the stock returns of companies to their size and their growth ratio. We can collect these observations in a spreadsheet or a table. The columns correspond with the variables and the rows with the companies. In mathematics, we call such an ordered set of numbers a matrix. We denote a matrix with a single capital letter, for example A. The matrix A on the slide contains the observations of the explanatory variables in the table. A single column is called a vector or column vector. We use a small letter to denote it. Here the vector y contains all returns. A single row is called a row vector. The vector c is a row vector with the observations for company PQR. In the sequel, the term vector always means a column vector; for a row vector, the term row is explicitly mentioned.

Vectors and matrices have dimensions. The matrix A has dimensions five by two, the vector y has dimensions five by one, and the row vector c has dimensions one by three. A matrix with dimensions p by q has p rows and q columns. A specific number in a matrix is called an element. The element in row three, column two of matrix A is denoted by a lower case a with three and two as subscripts. It has value 1.55 in our example. To refer to the second row of matrix A, we write a capital A with subscript two followed by a bullet. To refer to a column, the bullet comes first, followed by the column number: A with subscript bullet two gives the second column of A.

We can multiply any matrix or vector by a scalar, that is, a single number. If A is a p by q matrix and c is a scalar, then the product B of c and A is also a p by q matrix. Every element in B is equal to c times the corresponding element in A. Mathematically, we say that for all i and j, b_ij is equal to c times a_ij.
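The notation above can be tried out in code. Below is a minimal sketch using Python with NumPy (my choice of tool; the lecture itself prescribes no software). The numbers are illustrative, except element a_32 = 1.55, which matches the example in the lecture. Note that NumPy counts rows and columns from zero, while the lecture counts from one.

```python
import numpy as np

# A hypothetical 5-by-2 matrix of explanatory variables (size, growth ratio);
# illustrative values, except a_32 = 1.55 from the lecture's example.
A = np.array([[0.8, 1.20],
              [1.1, 0.90],
              [0.5, 1.55],
              [1.4, 1.00],
              [0.7, 1.30]])

print(A.shape)   # dimensions: (5, 2), i.e. five rows and two columns
print(A[2, 1])   # element a_32: row 3, column 2 (zero-based indices 2, 1)
print(A[1, :])   # the second row, A with subscript 2 followed by a bullet
print(A[:, 1])   # the second column, A with subscript bullet 2

# Scalar multiplication: every element of B equals c times the element of A.
c = 2.0
B = c * A
print(B[2, 1])   # b_32 = 2 * 1.55 = 3.1
```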
The letter i indicates the row and j the column. We can sum matrices and vectors if they have the same dimensions. Consider two p by q matrices A and B. Their sum C is a new p by q matrix. Every element in this matrix is equal to the sum of the corresponding elements in A and B. I invite you to answer the following question: does the order of the summation matter?

We prove that the order of summation does not matter as follows. Let C denote the sum of A and B. For each element c_ij, we have that c_ij = a_ij + b_ij, which is the same as b_ij + a_ij. Because this applies to each element, it applies to the matrix summation as a whole.

It is also possible to multiply two matrices A and B with each other when the number of columns in A is equal to the number of rows in B. Let A be a p by q matrix and B a q by r matrix. Their product C is a p by r matrix. Often, we do not include a dot and write the product as AB. The element c_ij is given by the sum of the multiplication of each element in row i of matrix A with the corresponding element of column j in matrix B. We can also use the bullets to show that we use row i of A and column j of B, and we can also write the result in sigma notation.

When A and B are both p by p, we can calculate A times B as well as B times A. But is the result the same? I invite you to think about this in the following question. To find the answer, we work out the matrix multiplication. You can see on the slide that the elements of the product AB differ from those of the product BA. I have highlighted this for element (2,1) on the slide. However, when the restrictions on the slide are met, the products are the same. You can easily check this result yourself. This result carries over to p by p matrices A and B: the products AB and BA are unequal, except when specific conditions are met. The product AB combines the rows of A with the columns of B, whereas the product BA combines the rows of B with the columns of A.
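The commutativity of addition and the general non-commutativity of multiplication can be checked numerically. A minimal NumPy sketch, with two illustrative 2 by 2 matrices of my own choosing:

```python
import numpy as np

# Two illustrative 2-by-2 matrices (not the lecture's slide example).
A = np.array([[1., 2.],
              [3., 4.]])
B = np.array([[0., 1.],
              [5., 2.]])

# Addition: c_ij = a_ij + b_ij, so the order of summation does not matter.
C = A + B
print(np.array_equal(A + B, B + A))   # True

# Multiplication: (AB)_ij sums a_ik * b_kj over k; AB and BA differ in general.
AB = A @ B
BA = B @ A
print(AB)
print(BA)
print(np.array_equal(AB, BA))         # False
```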
In a sequence of additions or a sequence of multiplications, it does not matter which addition or multiplication we do first. Let's consider q by r matrices B, C and D. I can first sum B and C and then add D, or first sum C and D and then add B, and even first sum B and D and then add C. For multiplication, I introduce a p by q matrix A and an r by s matrix E. I can calculate the product of A and B and then E by first taking the product of A and B and post-multiplying the result by E. I can also first take the product of B and E and then pre-multiply by A. Of course, we cannot change the position of A, B and E in a multiplication.

When we combine addition and multiplication, the usual order of operations for scalar numbers applies. Multiplication takes precedence over addition. As usual, parentheses can be used to indicate the order of operation. In the expression A(B + C) on the slide, B and C are inside parentheses, so their addition comes first. The result is then pre-multiplied by A. Because the multiplication operates on both B and C, we can also write the result as AB + AC. We also see now that this result is different from AB + C.

Now suppose that A and B are two p by p matrices, and here is the next question for you: please consider the square of A + B. In the answer on the slide, I use the rules for addition and multiplication. We cannot simplify the answer any further, because AB and BA are generally not the same.

These calculation rules also apply to column and row vectors, since they can be seen as matrices with one of their dimensions equal to one. If A is a p by q matrix and b is a vector of size q, then A times b results in a vector d of size p. Element i of d is the sum of the multiplication of each element of row i of A with the corresponding element in b, so it combines row i of A with the whole vector b. If c is a row vector of size p, then c times A returns a row vector e of size q. Element j of e combines c with column j of A.
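The associativity and distributivity rules above, and the expansion of the square of A + B, can be verified with random matrices. A sketch in NumPy; the dimensions and the random seed are arbitrary choices, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))   # p by q
B = rng.standard_normal((3, 4))   # q by r
C = rng.standard_normal((3, 4))   # q by r, for the distributive rule
E = rng.standard_normal((4, 2))   # r by s

# Associativity: (AB)E equals A(BE); the positions of A, B, E cannot change.
print(np.allclose((A @ B) @ E, A @ (B @ E)))    # True

# Distributivity: A(B + C) equals AB + AC.
print(np.allclose(A @ (B + C), A @ B + A @ C))  # True

# The square of P + Q expands to PP + PQ + QP + QQ, not PP + 2PQ + QQ,
# because PQ and QP are generally not the same.
P = rng.standard_normal((3, 3))
Q = rng.standard_normal((3, 3))
lhs = (P + Q) @ (P + Q)
print(np.allclose(lhs, P @ P + P @ Q + Q @ P + Q @ Q))  # True
print(np.allclose(lhs, P @ P + 2 * (P @ Q) + Q @ Q))    # False in general
```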
The multiplication of a row vector u with a column vector v of the same size yields a scalar, say w. The scalar w is the sum of the multiplication of the corresponding elements in u and v. It is called the dot product or inner product. When w is equal to zero, u and v are said to be orthogonal. The product of the column vector v of size p with the row vector x of size q yields the p by q matrix Y. Element y_ij is equal to v_i times x_j. This operation is called the outer product of the vectors v and x.

Some matrices and vectors are special. A square matrix has equal numbers of rows and columns. A diagonal matrix is a square matrix whose off-diagonal elements are all equal to zero. The identity matrix, often denoted by a capital I, is a diagonal matrix with ones on the diagonal. A multiplication of a matrix A with the identity matrix (with the correct dimensions, of course) results in the original matrix A. The unit vector, often denoted by the Greek letter iota, has all elements equal to one.

Matrices and vectors are useful in econometrics, because they enable concise notation and straightforward computations. Research in finance has found a relation between the stock returns of a company and its size and its growth ratio. If this relation is linear, the stock return of company i is a constant b1 plus coefficient b2 times its size plus b3 times its growth ratio plus an unexplained part e_i. We now use the data from the first slide. We collect the observed stock returns in the vector y, and the observed size and growth ratios in the matrix X, with an additional column that always has the value one to capture the constant. Further, we collect the coefficients b1, b2 and b3 in the vector b and the unexplained parts in the vector e. We can then write our model in matrix form as y = X times b + e. When we know the values of the elements in b, we can calculate the elements of e as y minus X times b. Econometrics deals with finding values for b.
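The inner product, the outer product, the identity matrix and the model y = Xb + e can all be illustrated in a few lines. A NumPy sketch with made-up numbers; the lecture's actual data live on its slides and are not reproduced here:

```python
import numpy as np

# Inner product: a row vector times a column vector of the same size is a scalar.
u = np.array([1., 2., 3.])
v = np.array([4., -5., 2.])
w = u @ v                    # 1*4 + 2*(-5) + 3*2 = 0
print(w)                     # 0.0, so u and v are orthogonal

# Outer product: column vector v times row vector u gives a matrix, y_ij = v_i * u_j.
Y = np.outer(v, u)
print(Y.shape)               # (3, 3)

# Identity matrix: A times I (of the correct dimensions) returns A.
A = np.array([[1., 2.], [3., 4.]])
print(np.array_equal(A @ np.eye(2), A))   # True

# The linear model in matrix form: y = X b + e, so e = y - X b once b is known.
# The constant is captured by a first column of ones (the unit vector iota).
X = np.column_stack([np.ones(5),
                     [0.8, 1.1, 0.5, 1.4, 0.7],     # size (illustrative)
                     [1.2, 0.9, 1.55, 1.0, 1.3]])   # growth ratio (illustrative)
b = np.array([0.10, 0.05, -0.02])
y = np.array([0.12, 0.15, 0.09, 0.16, 0.11])
e = y - X @ b
print(e.shape)               # (5,): one unexplained part per company
```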
Now, I invite you to make the training exercise to train yourself with the topics of this lecture. You can find this training exercise on the website, and this concludes our introduction to vectors and matrices.