---
layout: default
---
The magnitude or norm of a vector measures its length: $\|\mathbf{v}\| = \sqrt{\mathbf{v} \cdot \mathbf{v}}$.
An orthonormal basis consists of vectors that are both orthogonal to each other and of unit length (norm 1). For example, in the Euclidean space $\mathbb{R}^3$ the standard basis vectors are
$$ \mathbf{e}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad \mathbf{e}_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad \mathbf{e}_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} $$
Each vector is orthogonal to the others and has a norm of 1:
$$ \mathbf{e}_i \cdot \mathbf{e}_j = 0 \ \text{for} \ i \neq j, \qquad \|\mathbf{e}_i\| = 1 $$
Any vector in $\mathbb{R}^3$ can be written as a linear combination of these basis vectors: $\mathbf{v} = v_1 \mathbf{e}_1 + v_2 \mathbf{e}_2 + v_3 \mathbf{e}_3$.
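A quick numerical check of these facts, as a sketch in NumPy (the specific vector $\mathbf{v}$ is just an illustration):

```python
import numpy as np

# Standard basis of R^3: the rows of the identity matrix
e1, e2, e3 = np.eye(3)

# Pairwise dot products are 0 (orthogonality), each norm is 1 (unit length)
print(np.dot(e1, e2), np.dot(e1, e3), np.dot(e2, e3))              # 0.0 0.0 0.0
print(np.linalg.norm(e1), np.linalg.norm(e2), np.linalg.norm(e3))  # 1.0 1.0 1.0

# Any vector decomposes into its coordinates along the basis
v = np.array([2.0, -1.0, 3.0])
print(v @ e1, v @ e2, v @ e3)  # 2.0 -1.0 3.0
```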
Two vectors in a vector space are orthogonal if their dot (inner) product is zero: $\mathbf{v} \cdot \mathbf{w} = 0$.
A vector space (or linear space) is a collection of vectors where two operations are defined: vector addition and scalar multiplication. These operations must satisfy certain properties, such as associativity, commutativity, and distributivity.
Formally, a vector space is a set $V$ over a field $F$ (for example, the real numbers $\mathbb{R}$) that satisfies the following axioms:
- Closure under addition: If $\mathbf{v}, \mathbf{w} \in V$, then $\mathbf{v} + \mathbf{w} \in V$.
- Closure under scalar multiplication: If $\mathbf{v} \in V$ and $a \in F$, then $a \mathbf{v} \in V$.
- Associativity of addition: $(\mathbf{v} + \mathbf{w}) + \mathbf{u} = \mathbf{v} + (\mathbf{w} + \mathbf{u})$ for all $\mathbf{v}, \mathbf{w}, \mathbf{u} \in V$.
- Commutativity of addition: $\mathbf{v} + \mathbf{w} = \mathbf{w} + \mathbf{v}$ for all $\mathbf{v}, \mathbf{w} \in V$.
- Additive identity: There exists a vector $\mathbf{0} \in V$ such that $\mathbf{v} + \mathbf{0} = \mathbf{v}$ for all $\mathbf{v} \in V$.
- Additive inverse: For every $\mathbf{v} \in V$, there exists a vector $-\mathbf{v} \in V$ such that $\mathbf{v} + (-\mathbf{v}) = \mathbf{0}$.
- Associativity of scalar multiplication: $a(b \mathbf{v}) = (ab) \mathbf{v}$ for all $a, b \in F$ and $\mathbf{v} \in V$.
- Distributivity of scalar multiplication with respect to vector addition: $a(\mathbf{v} + \mathbf{w}) = a \mathbf{v} + a \mathbf{w}$ for all $a \in F$ and $\mathbf{v}, \mathbf{w} \in V$.
- Distributivity of scalar multiplication with respect to scalar addition: $(a + b) \mathbf{v} = a \mathbf{v} + b \mathbf{v}$ for all $a, b \in F$ and $\mathbf{v} \in V$.
- Scalar multiplication identity: $1 \mathbf{v} = \mathbf{v}$ for all $\mathbf{v} \in V$, where $1$ is the multiplicative identity in the field $F$.
One of the most familiar examples of a vector space is Euclidean space, such as $\mathbb{R}^2$ or $\mathbb{R}^3$, where vectors are tuples of real numbers and addition and scalar multiplication are performed component-wise.
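A few of the axioms can be spot-checked numerically for $\mathbb{R}^3$; this is only a sketch, and the vectors and scalars below are arbitrary choices:

```python
import numpy as np

# Arbitrary vectors and scalars in R^3 for illustration
v = np.array([1.0, 2.0, 3.0])
w = np.array([-4.0, 0.5, 2.0])
a, b = 2.0, -3.0

# Commutativity of addition: v + w == w + v
print(np.allclose(v + w, w + v))                 # True
# Distributivity over vector addition: a(v + w) == a v + a w
print(np.allclose(a * (v + w), a * v + a * w))   # True
# Distributivity over scalar addition: (a + b) v == a v + b v
print(np.allclose((a + b) * v, a * v + b * v))   # True
```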
Another example of a vector space is the space of functions, where the vectors are functions and addition and scalar multiplication are defined pointwise.
A subspace is a subset of a vector space that is also a vector space under the same operations. For example, the set of all vectors in $\mathbb{R}^3$ whose third component is zero is a subspace of $\mathbb{R}^3$ (a copy of the $xy$-plane).
For example, if we have a set of vectors, the span of that set (all linear combinations of its elements) is a subspace.
Or, if we have a vector $\mathbf{v}$, the set of all scalar multiples $\{ a \mathbf{v} : a \in F \}$ is a subspace: a line through the origin.
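As a sketch (with an arbitrarily chosen $\mathbf{v}$), we can verify numerically that sums and scalar multiples of elements of the line $\{ a \mathbf{v} \}$ stay on that line, i.e. the subset is closed under both operations:

```python
import numpy as np

# Illustrative subspace of R^3: all scalar multiples of a fixed vector v
v = np.array([1.0, -2.0, 0.5])
u1, u2 = 3.0 * v, -1.5 * v     # two elements of the subspace

# Closure: the sum and any scalar multiple are again multiples of v
s = u1 + u2                    # equals 1.5 * v
t = 4.0 * u1                   # equals 12.0 * v
print(np.allclose(s, 1.5 * v), np.allclose(t, 12.0 * v))  # True True
```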
The magnitude or norm of a vector squared is:
$$ \| \mathbf{v} \|^2 = \mathbf{v} \cdot \mathbf{v} $$
This represents the dot product of the vector with itself.
For example, if we have $\mathbf{v} = \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}$,
then: $$ \| \mathbf{v} \|^2 = \mathbf{v} \cdot \mathbf{v} = \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} \cdot \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = v_1^2 + v_2^2 + v_3^2 $$
In vector algebra it doesn't matter whether we represent vectors as row vectors or column vectors; they are essentially the same objects.
However, in matrix algebra we must be mindful of the shape of a vector (column or row). Specifically, the dot product between two vectors in matrix algebra requires that one of them be transposed so that the dimensions align.
To compute the norm squared in matrix algebra, we use the transpose of the vector:
$$ \| \mathbf{v} \|^2 = \mathbf{v}^T \mathbf{v} $$
Taking the example above:
$$ \mathbf{v}^T \mathbf{v} = \begin{pmatrix} v_1 & v_2 & v_3 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = v_1^2 + v_2^2 + v_3^2 $$
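Both forms can be checked numerically; here is a NumPy sketch with an arbitrary example vector:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])

# Vector algebra: the squared norm is the dot product of v with itself
print(np.dot(v, v))               # 14.0
print(np.linalg.norm(v) ** 2)     # 14.0 (up to floating-point rounding)

# Matrix algebra: shape matters, so make v a 3x1 column and use its transpose
v_col = v.reshape(3, 1)
print(v_col.T @ v_col)            # [[14.]]  (a 1x1 matrix)
```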
The transpose of a matrix $\mathbf{A}$, written $\mathbf{A}^T$, is obtained by swapping its rows and columns.
This can also be represented as:
$$ (\mathbf{A}^T)_{ij} = \mathbf{A}_{ji} $$
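In NumPy the transpose is `.T`; a small sketch with an illustrative matrix shows the index swap:

```python
import numpy as np

# An illustrative 2x3 matrix
A = np.array([[1, 2, 3],
              [4, 5, 6]])

# Transposing swaps rows and columns: (A^T)_{ij} = A_{ji}
print(A.T)                      # shape (3, 2)
print(A.T[2, 1] == A[1, 2])     # True
```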
The transpose of a product of two matrices is equal to the product of the matrices' transposes in reverse order.
Definitions:
- Let $ \mathbf{A} $ be an $ m \times n $ matrix.
- Let $ \mathbf{B} $ be an $ n \times p $ matrix.
We know that:
$$ ((\mathbf{A} \mathbf{B})^T)_{ij} = (\mathbf{A} \mathbf{B})_{ji} $$
By the definition of matrix multiplication, the $ (j,i) $-th entry of $ \mathbf{A} \mathbf{B} $ is:
$$ (\mathbf{A} \mathbf{B})_{ji} = \sum_{k=1}^n \mathbf{A}_{jk} \mathbf{B}_{ki} $$
This represents the dot product of the $ j $-th row of $ \mathbf{A} $ and the $ i $-th column of $ \mathbf{B} $, where $ k $ runs over the shared dimension $ n $.
Now, let's write the same summation for $ \mathbf{B}^T \mathbf{A}^T $:
$$ (\mathbf{B}^T \mathbf{A}^T)_{ij} = \sum_{k=1}^n (\mathbf{B}^T)_{ik} (\mathbf{A}^T)_{kj} = \sum_{k=1}^n \mathbf{B}_{ki} \mathbf{A}_{jk} $$
Thus, we find that:
$$ ((\mathbf{A} \mathbf{B})^T)_{ij} = (\mathbf{B}^T \mathbf{A}^T)_{ij} $$
Hence:
$$ (\mathbf{A} \mathbf{B})^T = \mathbf{B}^T \mathbf{A}^T $$
We can extend this property to a product of an arbitrary number of matrices $\mathbf{A}_1 \mathbf{A}_2 \cdots \mathbf{A}_k$.
By applying the transpose to the first matrix and the rest of the product, we get:
$$ (\mathbf{A}_1 \mathbf{A}_2 \cdots \mathbf{A}_k)^T = (\mathbf{A}_1 \mathbf{M})^T = \mathbf{M}^T \mathbf{A}_1^T $$
Where:
$$ \mathbf{M} = \mathbf{A}_2 \mathbf{A}_3 \cdots \mathbf{A}_k $$
Now applying the transpose:
$$ \mathbf{M}^T = (\mathbf{A}_2 \mathbf{A}_3 \cdots \mathbf{A}_k)^T = (\mathbf{A}_3 \cdots \mathbf{A}_k)^T \mathbf{A}_2^T $$
Continuing this process recursively for each matrix, we eventually get:
$$ (\mathbf{A}_1 \mathbf{A}_2 \cdots \mathbf{A}_k)^T = \mathbf{A}_k^T \cdots \mathbf{A}_2^T \mathbf{A}_1^T $$
For 3 matrices:
$$ (\mathbf{A} \mathbf{B} \mathbf{C})^T = \mathbf{C}^T \mathbf{B}^T \mathbf{A}^T $$
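A numerical sanity check of both identities, as a sketch with randomly generated matrices of compatible shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 5))

# Two matrices: (AB)^T == B^T A^T
print(np.allclose((A @ B).T, B.T @ A.T))             # True
# Three matrices: (ABC)^T == C^T B^T A^T
print(np.allclose((A @ B @ C).T, C.T @ B.T @ A.T))   # True
```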
Derivative of the Quadratic Form $\frac {\partial} {\partial \mathbf{x}} (\mathbf{x}^T \mathbf{A} \mathbf{x})$
If $\mathbf{A}$ is an $n \times n$ matrix and $\mathbf{x} \in \mathbb{R}^n$, then
$$ \frac{\partial}{\partial \mathbf{x}} (\mathbf{x}^T \mathbf{A} \mathbf{x}) = (\mathbf{A} + \mathbf{A}^T) \mathbf{x} $$
If $\mathbf{A}$ is symmetric, this simplifies to $2 \mathbf{A} \mathbf{x}$.
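A numerical sketch comparing the closed-form gradient with central finite differences (the matrix, vector, and step size below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))   # not necessarily symmetric
x = rng.standard_normal(n)

f = lambda y: y @ A @ y           # the quadratic form y^T A y

# Closed-form gradient: (A + A^T) x  (reduces to 2 A x when A is symmetric)
grad_exact = (A + A.T) @ x

# Central finite differences as an independent check
eps = 1e-6
grad_fd = np.array([
    (f(x + eps * np.eye(n)[i]) - f(x - eps * np.eye(n)[i])) / (2 * eps)
    for i in range(n)
])
print(np.allclose(grad_exact, grad_fd, atol=1e-5))    # True
```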
A singular matrix is a square matrix that does not have an inverse. In other words, for a matrix $\mathbf{A}$, if there is no matrix $\mathbf{A}^{-1}$ such that
$$ \mathbf{A} \mathbf{A}^{-1} = \mathbf{A}^{-1} \mathbf{A} = \mathbf{I} $$
then $\mathbf{A}$ is singular.
Properties of singular matrices:
- Determinant is zero: $\det(\mathbf{A}) = 0$. This indicates that the matrix compresses space in such a way that the volume of the transformed space collapses to zero, implying no unique inverse exists.
- Linearly dependent rows or columns
- Rank deficiency (i.e. rank smaller than the number of rows or columns)
- No unique solution to $\mathbf{A} \mathbf{x} = \mathbf{b}$
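These properties are easy to observe numerically; here is a sketch using an illustrative matrix whose rows are linearly dependent:

```python
import numpy as np

# Illustrative singular matrix: the second row is 2x the first
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(np.linalg.det(A))            # ~0.0 (up to floating-point rounding)
print(np.linalg.matrix_rank(A))    # 1: rank deficient, since rank < 2

try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print("not invertible:", err)  # "Singular matrix"
```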
The determinant is the scaling factor by which the matrix transforms the volume of an object in space. For a $2 \times 2$ matrix it tells us how areas scale; for a $3 \times 3$ matrix, how volumes scale.
If the determinant is non-zero, the matrix transforms space by stretching or compressing it, but still preserves some volume (or area in 2D). A non-zero determinant indicates that the transformation is invertible — you can "undo" the transformation and recover the original space.
However, when the determinant is zero, the matrix collapses the space into a lower dimension (e.g., turning a 3D space into a 2D plane, or a 2D plane into a line). In this case, the matrix has compressed all the volume to zero, effectively squashing the space into a flat, lower-dimensional subspace. Information is lost: points that were distinct before the transformation may overlap after it, so the transformation can no longer be undone (i.e. inverted).
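As a sketch, we can watch the determinant act as an area scaling factor on the unit square (the matrices below are arbitrary illustrations):

```python
import numpy as np

# Corners of the unit square in 2D, stored as columns
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]], dtype=float)

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # det = 6: areas are scaled by a factor of 6
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # det = 0: the square collapses onto a line

print(np.linalg.det(A))      # 6.0
print(A @ square)            # corners of a parallelogram with area 6
print(np.linalg.det(B))      # 0.0
print(B @ square)            # every corner lands on the line y = 2x
```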
A square matrix maps between spaces of the same dimension ($\mathbb{R}^n \to \mathbb{R}^n$), so it can, in principle, be inverted.
On the other hand, a rectangular $m \times n$ matrix maps between spaces of different dimensions ($\mathbb{R}^n \to \mathbb{R}^m$ with $n \neq m$).
If:
- $n > m$: dimensions are reduced, information is lost, and we cannot invert the map to span $\mathbb{R}^n$ again.
- $n < m$: dimensions are expanded, but the output cannot span the full space $\mathbb{R}^m$; instead, it lies in a subspace of dimension $n$ inside $\mathbb{R}^m$.
Therefore, rectangular matrix transformations are not invertible, and an inverse is not defined for them.
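A short sketch with an illustrative $2 \times 3$ matrix ($n = 3$ inputs, $m = 2$ outputs) shows the rank bound, the loss of information, and the lack of an inverse:

```python
import numpy as np

# Illustrative 2x3 matrix: maps R^3 -> R^2
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

print(np.linalg.matrix_rank(A))    # 2: at most min(m, n) dimensions survive

# Two different inputs can map to the same output, so the map cannot be undone
x1 = np.array([1.0, 1.0, 0.0])
x2 = np.array([0.0, 0.0, 1.0])
print(A @ x1, A @ x2)              # both equal [1. 1.]

# np.linalg.inv is only defined for square matrices
try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print("no inverse:", err)
```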