LINEAR ALGEBRA · Vectors, matrices, eigenstuff, and applications
Midterm & Final Reference · Ultra-Dense A4
VECTORS & VECTOR SPACES
Vector basics in ℝⁿ

A vector v ∈ ℝⁿ is an ordered list of n numbers. Operations:

u + v = (u₁+v₁, ..., uₙ+vₙ)
c·u = (cu₁, ..., cuₙ) (scalar multiplication)
u·v = u₁v₁ + ... + uₙvₙ (dot product, returns scalar)

Dot product gives geometry:

u·v = |u|·|v|·cos θ → for nonzero u, v: u·v = 0 ⟺ θ = 90° (orthogonal)
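A minimal NumPy sketch of all four operations (NumPy assumed; the vectors are arbitrary examples):

import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 0.0, 1.0])

print(u + v)        # addition: [3. 2. 3.]
print(3 * u)        # scalar multiplication: [3. 6. 6.]
dot = u @ v         # dot product: 1*2 + 2*0 + 2*1 = 4
cos_theta = dot / (np.linalg.norm(u) * np.linalg.norm(v))
print(np.degrees(np.arccos(cos_theta)))  # angle θ between u and v, ≈ 53.4°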
Span and linear independence

Span(v₁, ..., vₖ) = the set of all linear combinations c₁v₁ + ... + cₖvₖ. The vectors are linearly independent iff the only combination equal to 0 is the one with every cᵢ = 0.

Subspace
Closed under addition and scalar mult, contains 0. Examples: lines/planes through origin in ℝ³.
Basis
A linearly independent spanning set. Every vector in the space has a unique expression in the basis. # of basis vectors = dimension.
Concept | Test
Span ℝⁿ | RREF (vectors as columns) has a pivot in every row
Linearly indep. | RREF has a pivot in every column
Basis of ℝⁿ | both of the above: exactly n vectors, pivot in every row and column
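All three tests reduce to one rank computation; a NumPy sketch (the example vectors are invented):

import numpy as np

V = np.column_stack([[1, 0, 1], [0, 1, 1], [1, 1, 0]])  # each list is one vector (a column)
n, k = V.shape                                          # n = dim of ambient space, k = # vectors
r = np.linalg.matrix_rank(V)                            # rank = # pivots in RREF
print("linearly independent:", r == k)                  # pivot in every column
print("spans R^n:           ", r == n)                  # pivot in every row
print("basis of R^n:        ", r == k and k == n)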
⚡ EXAM TRAP — VECTORS vs POINTS

A vector is a direction + magnitude (no fixed location). A point is a position. They look the same in coordinates but live in different geometric worlds. Vector spaces always include 0; affine point sets don't.

ORTHOGONALITY & PROJECTIONS
Orthogonal sets

Orthogonal: u·v = 0. Orthonormal: orthogonal AND each |v| = 1.

If {q₁, ..., qₖ} is orthonormal and x lies in its span:
x = (x·q₁)q₁ + (x·q₂)q₂ + ... + (x·qₖ)qₖ

Coefficients come straight from dot products — no system to solve. This is why orthonormal bases are gold.

Projection onto a subspace
proj_v(u) = (u·v / v·v) · v (project u onto line spanned by v)
proj_W(x) = Σ (x·qᵢ)·qᵢ (project x onto W with onb q₁,...,qₖ)
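Both formulas as a NumPy sketch (vectors invented; Q's columns are an orthonormal basis of the xy-plane in ℝ³):

import numpy as np

def proj_line(u, v):
    # project u onto the line spanned by v
    return (u @ v) / (v @ v) * v

print(proj_line(np.array([3.0, 4.0]), np.array([1.0, 0.0])))  # [3. 0.]

Q = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])          # columns q₁, q₂: ONB of the xy-plane
x = np.array([1.0, 2.0, 5.0])
print(Q @ (Q.T @ x))                # Σ (x·qᵢ)·qᵢ in matrix form: [1. 2. 0.]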
Gram-Schmidt
Turn any basis {v₁,...,vₖ} into orthogonal {u₁,...,uₖ}: u₁=v₁; uⱼ = vⱼ − Σ_{i<j} proj_{uᵢ}(vⱼ). Normalize for orthonormal.
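A classical Gram-Schmidt sketch in NumPy. It assumes the input list is linearly independent (a dependent vector would make the normalization divide by zero):

import numpy as np

def gram_schmidt(vectors):
    # returns an orthonormal list spanning the same space
    basis = []
    for v in vectors:
        u = v - sum((v @ q) * q for q in basis)  # subtract projections onto earlier qᵢ
        basis.append(u / np.linalg.norm(u))      # normalize
    return basis

q1, q2 = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                       np.array([1.0, 0.0, 1.0])])
print(round(q1 @ q2, 12))  # 0.0: orthogonal, as promised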
Least squares
To solve Ax=b when no exact solution: solve AᵀA·x̂ = Aᵀ·b (normal equations). x̂ minimizes |Ax − b|².
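The normal equations in NumPy, cross-checked against the library's least-squares routine (the three data points are invented; the model is a best-fit line c + d·t):

import numpy as np

# fit c + d·t through (0,6), (1,0), (2,0): Ax = b has no exact solution
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

x_hat = np.linalg.solve(A.T @ A, A.T @ b)      # normal equations: AᵀA·x̂ = Aᵀ·b
print(x_hat)                                   # [ 5. -3.] → best line is 5 − 3t
print(np.linalg.lstsq(A, b, rcond=None)[0])    # library answer, same x̂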
QR decomposition
A = QR where Q has orthonormal columns, R upper triangular

Comes from Gram-Schmidt on columns of A. Used to solve least squares numerically.
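A NumPy sketch: np.linalg.qr returns the reduced factorization, and solving R·x̂ = Qᵀb is the numerically preferred route to least squares (same invented data as above):

import numpy as np

A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

Q, R = np.linalg.qr(A)                   # Q: orthonormal columns, R: upper triangular
print(np.allclose(Q @ R, A))             # True
print(np.allclose(Q.T @ Q, np.eye(2)))   # True: QᵀQ = I

x_hat = np.linalg.solve(R, Q.T @ b)      # R·x̂ = Qᵀb
print(x_hat)                             # [ 5. -3.], matching the normal equations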

⚡ EXAM TRAP — ORTHOGONAL ≠ INDEPENDENT (sort of)

Nonzero orthogonal vectors are automatically linearly independent — but the converse fails. Independent vectors can have any angle between them; orthogonality is a stronger condition.

MATRICES & OPERATIONS
Multiplication
(AB)ᵢⱼ = Σ_k Aᵢₖ · Bₖⱼ (row of A · column of B)

Need # cols of A = # rows of B: (m×n)(n×p) = m×p. The product keeps A's row count and B's column count.

Property | Form
Associative | A(BC) = (AB)C
Distributive | A(B+C) = AB + AC
NOT commutative | AB ≠ BA in general
Transpose | (AB)ᵀ = BᵀAᵀ (order flips!)
Identity | AI = IA = A
Matrix as transformation

An m×n matrix A maps ℝⁿ → ℝᵐ. Each column of A is the image of a standard basis vector.

A·eᵢ = i-th column of A
Common 2D maps
Rotation by θ: [[cos θ, −sin θ],[sin θ, cos θ]]. Reflection over y=x: [[0,1],[1,0]]. Scaling: diag(a, b).
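The three maps as NumPy arrays (a sketch; the angle and test vectors are arbitrary):

import numpy as np

theta = np.pi / 2                                  # rotate 90° counterclockwise
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.round(R @ np.array([1.0, 0.0])))          # e₁ ↦ [0. 1.] = e₂

F = np.array([[0.0, 1.0], [1.0, 0.0]])             # reflection over y = x
print(F @ np.array([2.0, 5.0]))                    # [5. 2.]

S = np.diag([2.0, 3.0])                            # scale x by 2, y by 3
print(S @ np.array([1.0, 1.0]))                    # [2. 3.]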
Block multiplication
If you can split A and B into compatible blocks, multiply blocks like scalars. Saves work in big matrices.
⚡ EXAM TRAP — TRANSPOSE OF A PRODUCT

(AB)ᵀ = BᵀAᵀ, not AᵀBᵀ. The order flips. Same rule for inverses: (AB)⁻¹ = B⁻¹A⁻¹. Memorize this — it shows up everywhere.

EIGENVALUES & EIGENVECTORS
The defining equation
A·v = λ·v (v ≠ 0)

v is an eigenvector, λ is its eigenvalue. A acts on v as pure scaling: no rotation, no shearing, just a stretch by factor λ.

How to find them
▼ THE 3-STEP RECIPE

1. Solve det(A − λI) = 0 → characteristic polynomial → λ values

2. For each λ, solve (A − λI)v = 0 → null space gives eigenvectors

3. Stack the independent eigenvectors from all λ: A is diagonalizable iff you collect n of them
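The recipe in NumPy, where one library call covers steps 1–2 (the 2×2 matrix is a made-up example):

import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(np.poly(A))              # char. poly coefficients [1. -7. 10.] → λ² − 7λ + 10
lam, V = np.linalg.eig(A)      # eigenvalues + eigenvectors (columns of V)
print(lam)                     # [5. 2.]
for l, v in zip(lam, V.T):
    print(np.allclose(A @ v, l * v))   # True: A·v = λ·v for each pair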

Trace + det shortcut
For 2×2: λ₁ + λ₂ = trace(A), λ₁λ₂ = det(A). Useful sanity check.
Geometric meaning
Eigenvectors are the special directions A doesn't rotate — only stretches. |λ| > 1 expands, |λ| < 1 shrinks, λ < 0 flips direction.
Diagonalization
A = PDP⁻¹ where P = [v₁ | v₂ | ... | vₙ], D = diag(λ₁, ..., λₙ)

Lets you compute Aᵏ easily: Aᵏ = PDᵏP⁻¹. Powers of diagonal D are trivial.
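A sketch of Aᵏ = PDᵏP⁻¹ in NumPy, assuming A is diagonalizable (same made-up matrix as in the recipe above):

import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
lam, P = np.linalg.eig(A)                       # D = diag(lam), P = eigenvector matrix
A10 = P @ np.diag(lam**10) @ np.linalg.inv(P)   # A¹⁰ = P·D¹⁰·P⁻¹: powers hit only the diagonal
print(np.allclose(A10, np.linalg.matrix_power(A, 10)))  # True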

⚡ EXAM TRAP — λ=0 IS A REAL EIGENVALUE

If det(A) = 0, then λ = 0 is an eigenvalue (with eigenvectors = null space). Don't dismiss it because it 'doesn't stretch'. Zero eigenvalues correspond to non-invertibility.

DIAGONALIZATION & APPLICATIONS
When can we diagonalize?

A is diagonalizable iff it has n linearly independent eigenvectors.

Sufficient condition | Why
n distinct eigenvalues | distinct λ → independent eigenvectors
A symmetric | spectral theorem: orthonormal eigenbasis
A normal (AAᵀ = AᵀA) | diagonalizable by a unitary matrix
Computing matrix powers
Aᵏ = P · Dᵏ · P⁻¹ where Dᵏ = diag(λ₁ᵏ, ..., λₙᵏ)

Massive speed-up. Computing A¹⁰⁰ by repeated multiplication is hopeless by hand; via diagonalization it's instant.

Applications
Markov chains
Steady-state vector = eigenvector for λ=1. PageRank, weather modeling, customer flows.
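A minimal sketch of extracting the steady state, assuming a column-stochastic transition matrix (the probabilities are invented):

import numpy as np

P = np.array([[0.9, 0.5],          # columns sum to 1: P[i, j] = Pr(j → i)
              [0.1, 0.5]])
lam, V = np.linalg.eig(P)
v = V[:, np.argmax(np.isclose(lam, 1.0))]   # eigenvector for λ = 1
print(v / v.sum())                          # normalize to probabilities: [0.833 0.167]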
SVD
A = UΣVᵀ where Σ has singular values σᵢ ≥ 0. Works for ANY matrix (rectangular too). Used in PCA, image compression, recommendation systems.

PCA in 1 sentence: diagonalize the covariance matrix; eigenvectors point along axes of max variance.
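A toy PCA-via-SVD sketch in NumPy (random data standing in for a real dataset):

import numpy as np

X = np.random.default_rng(0).normal(size=(100, 3))  # 100 samples, 3 features
Xc = X - X.mean(axis=0)                             # center first: PCA needs mean-zero data

U, S, Vt = np.linalg.svd(Xc, full_matrices=False)   # Xc = UΣVᵀ
print(Vt[0])                  # first principal component: direction of max variance
print(S**2 / (len(Xc) - 1))   # variance captured along each principal axis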

⚡ EXAM TRAP — DEFECTIVE MATRICES

Some matrices have repeated eigenvalues but too few independent eigenvectors. e.g. [[2,1],[0,2]] has λ=2 (twice) but only one independent eigenvector. NOT diagonalizable. You need Jordan form.

SYSTEMS, RREF & RANK
Solving Ax = b
▼ ROW-REDUCE TO RREF

1. Form augmented matrix [A | b]

2. Use the 3 row operations: swap two rows, scale a row by a nonzero constant, add a multiple of one row to another

3. Reduce until RREF: every pivot = 1, all other entries in pivot columns = 0

4. Read solutions off RREF
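A sketch with SymPy's exact row reduction (SymPy assumed; the 2×2 system is invented):

from sympy import Matrix

# augmented matrix [A | b] for: x + 2y = 5, 3x + 4y = 6
M = Matrix([[1, 2, 5],
            [3, 4, 6]])
R, pivots = M.rref()     # exact RREF, no rounding
print(R)                 # Matrix([[1, 0, -4], [0, 1, 9/2]]) → x = −4, y = 9/2
print(pivots)            # (0, 1): pivot columns, so rank 2, unique solution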

Three outcomes
RREF shape | Solutions
Pivot in every variable column, no contradictory row | unique solution
Free variable(s), no contradictory row | infinite solutions
A row reading 0 = k with k ≠ 0 | NO solution (inconsistent)
Key invariants
Rank
# pivots = # linearly independent rows = # linearly independent columns. Same number — that's a theorem.
Nullity
# free variables. Rank-Nullity: rank(A) + nullity(A) = # columns.
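Rank-nullity in one NumPy check (the matrix is chosen rank-deficient on purpose):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])      # row 2 = 2 × row 1, so only one pivot
rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank          # rank + nullity = # columns
print(rank, nullity)                 # 1 2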
⚡ EXAM TRAP — REF vs RREF

REF (echelon) is just stair-step shape. RREF (reduced) requires pivots = 1 AND columns above pivots = 0. RREF is unique; many REFs exist for the same A. Most problems want RREF.

DETERMINANTS & INVERSES
Determinant — what it tells you
det(A) ≠ 0 ⟺ A invertible ⟺ Ax=0 has only trivial solution
Computing det
2×2: |a b; c d| = ad − bc
3×3: cofactor expansion along any row/col
n×n: row-reduce and track sign flips + scalings
Property | Effect on det
Swap two rows | det negates
Scale a row by c | det multiplies by c
Add k·(one row) to another | det unchanged
Transpose | det(Aᵀ) = det(A)
Product | det(AB) = det(A)·det(B)
Inverse | det(A⁻¹) = 1/det(A)
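Spot-checking the product, transpose, and inverse rows numerically (random matrices; NumPy assumed):

import numpy as np

rng = np.random.default_rng(1)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
det = np.linalg.det

print(np.allclose(det(A @ B), det(A) * det(B)))        # product rule
print(np.allclose(det(A.T), det(A)))                   # transpose rule
print(np.allclose(det(np.linalg.inv(A)), 1 / det(A)))  # inverse rule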
Geometric meaning

|det(A)| = volume scaling factor. det(A) = 2 means A doubles areas (2D) or volumes (3D). det < 0 means orientation flipped.

Cramer's rule
For Ax=b with det(A)≠0: xᵢ = det(Aᵢ)/det(A) where Aᵢ replaces col i with b. Slow for big systems.
Inverse formula
A⁻¹ = (1/det(A)) · adj(A). For 2×2: swap diagonal, negate off-diagonal, divide by det.
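Cramer's rule as a NumPy sketch (invented 2×2 system; fine by hand at this size, slow for big systems as noted):

import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
detA = np.linalg.det(A)          # must be nonzero for Cramer's rule

x = np.empty(2)
for i in range(2):
    Ai = A.copy()
    Ai[:, i] = b                 # replace column i with b
    x[i] = np.linalg.det(Ai) / detA
print(x)                         # [0.8 1.4], matches np.linalg.solve(A, b)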
⚡ EXAM TRAP — det(A+B) ≠ det(A) + det(B)

The determinant is multilinear, not additive. There's no nice rule for det of a sum. Don't make this up.

DECISION BOX — pick the technique
Read the question. Find the trigger.
If you see… | Use §
'span', 'linear combination' | §1 vector spaces
'linearly independent' | §1 + §3 (RREF check)
'basis', 'dimension' | §1 + §3
'compute AB' | §2 multiplication
'rotation', 'reflection' | §2 transformation matrix
'solve Ax = b' | §3 RREF
'rank', 'pivot', 'free var' | §3
'invertible', 'det', 'singular' | §4
'volume', 'area scaling' | §4 |det|
'eigenvalue', 'eigenvector' | §5 char polynomial
'diagonalize', 'Aᵏ', 'PDP⁻¹' | §5 + §7
'orthogonal', 'projection' | §6
'Gram-Schmidt' | §6
'least squares' | §6 normal equations
'Markov chain', 'steady state' | §7 (eigenvector for λ=1)
'PCA', 'principal component' | §7 SVD / spectral
▼ LAST-MINUTE PROCEDURE

Question about Ax=b? RREF the augmented matrix.

Question about A alone? Eigenvalues, det, rank.

Geometry word ('rotation', 'project')? Build the matrix, then apply.

'Find a basis for…'? Find spanning set, RREF, keep pivot columns.

⚡ EXAM TRAP — INVERSES vs SOLUTIONS

To solve Ax = b, you almost never compute A⁻¹ explicitly. Just RREF [A | b]. Computing A⁻¹ is wasteful and numerically unstable. Inverses matter for theory, not for numerics.

⚡ FINAL EXAM TRAP — DIMENSION COUNTING

Always sanity-check dimensions before computing. A 4×3 matrix can't be inverted. A·B with mismatched inner dims is undefined. Half the lost points come from skipping this check.
