A Brief History of Solving Simultaneous Equations via Matrices
I was flipping through a Linear Algebra textbook today - spurred by my increasing efforts to pick up more AI/ML-related topics.
Somehow - given that the textbook presented these techniques stripped of most of their historical context - I couldn't resist doing some digging on my own.
300 BC - 200 BC: The Nine Chapters on the Mathematical Art
Introduces the idea of a “counting board”, which behaves like an “augmented matrix”.
Input System of Equations:
Augmented Matrix:
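The original figures don't survive here, so here is a minimal sketch in NumPy. The system is the grain problem commonly cited from Chapter 8 of the Nine Chapters (used purely as an illustration - any 3×3 system works the same way):

```python
import numpy as np

# Illustrative system (the grain problem often quoted from Chapter 8):
#   3x + 2y +  z = 39
#   2x + 3y +  z = 34
#    x + 2y + 3z = 26
A = np.array([[3, 2, 1],
              [2, 3, 1],
              [1, 2, 3]], dtype=float)
b = np.array([39, 34, 26], dtype=float)

# The augmented matrix [A | b] is essentially what the counting
# board laid out (arranged in columns rather than rows).
augmented = np.hstack([A, b.reshape(-1, 1)])
print(augmented)
```

The counting-board procedure then eliminated entries column by column - the same mechanical steps we now call row reduction.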
1750: Gabriel Cramer publishes Cramer’s Rule in Introduction to the Analysis of Algebraic Curves
He uses determinants to solve systems with unique solutions, expressing each unknown as a ratio of two determinants.
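A hedged sketch of the rule in NumPy: each unknown xᵢ equals det(Aᵢ) / det(A), where Aᵢ is A with its i-th column replaced by the right-hand side b (the function name and example system below are mine, chosen only for illustration):

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b via Cramer's Rule (requires det(A) != 0)."""
    det_A = np.linalg.det(A)
    if abs(det_A) < 1e-12:
        raise ValueError("Cramer's Rule needs a unique solution (det != 0)")
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b          # replace column i with the right-hand side
        x[i] = np.linalg.det(A_i) / det_A
    return x

# Example: 2x + y = 5, x + 3y = 10  →  x = 1, y = 3
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
print(cramer(A, b))
```

Elegant, but the cost grows with the number of determinants, which is why elimination - not Cramer's Rule - became the workhorse method.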
1810: Carl Friedrich Gauss devises Gaussian Elimination to solve least-squares problems
A detailed account of how Gauss's method came to be is given in this page: How ordinary elimination became Gaussian elimination. Interestingly, there is a connection to Newton's notebooks, and a lot of backstory behind how the method got published.
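The method itself can be sketched in a few lines - this is a minimal, assumption-laden version (with partial pivoting, a later refinement, added for numerical stability), not a reconstruction of Gauss's own procedure:

```python
import numpy as np

def gaussian_solve(A, b):
    """Solve Ax = b by forward elimination, then back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest remaining pivot to row k.
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        # Eliminate column k below the pivot.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper-triangular system.
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

In practice you would call `np.linalg.solve`, which wraps the same idea (an LU factorization) in optimized LAPACK routines.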
1850: James Joseph Sylvester coins the word “Matrix”
In the following page from Sylvester's “Additions to the Articles on a New Class of Theorems” (link), we find the first use of the word “Matrix”. It derives from the Latin “mater”, meaning mother. Sylvester visualized a matrix as a mother's womb that gives birth to many determinants - those of the smaller matrices obtained by deleting rows and columns from the original.
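Sylvester's picture maps directly onto what we now call minors. A small sketch (the helper function is my own, for illustration):

```python
import numpy as np

def minor(A, i, j):
    """The submatrix of A with row i and column j deleted -
    one of the 'offspring' Sylvester had in mind."""
    return np.delete(np.delete(A, i, axis=0), j, axis=1)

A = np.arange(1.0, 10.0).reshape(3, 3)   # [[1,2,3],[4,5,6],[7,8,9]]
M = minor(A, 0, 0)                       # [[5,6],[8,9]]
print(np.linalg.det(M))                  # mathematically 5*9 - 6*8 = -3
```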
1858: Arthur Cayley formalizes Matrix Algebra in Memoir on the Theory of Matrices
His work directly influenced the development of the Ax = b notation we use today for representing systems of equations.
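A small sketch of the algebra this formalization buys us: once matrix multiplication is defined, an entire system collapses into the single equation A @ x = b, and (when A is invertible) x = A⁻¹b. The example system is mine, chosen for illustration:

```python
import numpy as np

# 2x + y = 5, x + 3y = 10, written as one matrix equation.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x = np.linalg.inv(A) @ b       # x = A^{-1} b
assert np.allclose(A @ x, b)   # Ax really does equal b
```

(Explicitly inverting A is shown for the algebraic point; numerically, `np.linalg.solve(A, b)` is the better choice.)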
So there we have it: roughly 2,000 years of accumulated human thought culminating in linear algebra — powering modern GPUs, AI, and much of today’s computational world.
It’s hard not to pause at the scale of the intellectual structure we inherit, and how casually we now build on top of it.