
Applications

Maybe this whole discussion hints at why HSC Physics is kind of lacking. The teachers cannot even depend on students knowing calculus (maths is not a co-requisite), and even Extension 2 maths gives only a taste of the language required to build modern physics. But it is not just Physics. The mathematical structures I will give an extremely abbreviated overview of are fundamental to analysis of all kinds, not just engineering, and to computation and design.

Matrices are everywhere. They can be operators transforming vectors or shapes in cartesian space. GPUs, the ‘graphics cards’ that run the screens on your devices, are just very parallel matrix calculators: they do matrix operations fast and do many at the same time. Photoshop or whatever graphics applications you use do matrix operations constantly. Solving a set of linear simultaneous equations, like we do in year 8 in high school, is basically the original motivation for matrices. That problem has been an inspiration for maths going way, way back, one of the earliest pieces of mathematics — but not in Europe; Europe focussed on formal geometry instead and eventually imported this stuff. The coefficients form a matrix where each row is an equation, and the steps we do by hand are matrix operations. A wide range of practical problems in business and planning are, if you frame them the right way, combinations of those kinds of relationships where we are looking for the best outcome, the highest or lowest overall value. This is called optimisation, and in its simplest form it is something we study in high school.

A supercomputer, with thousands of those matrix-calculating GPUs repurposed for more general matrix work, runs simulations with varied assumptions using matrix operations, and finds the best or average of the ones it tries by iterating, by stepping in little jumps through time. But more detail makes this a very much longer calculation, and at some level of detail even a supercomputer cannot cope. Now we use trained Neural Nets and are just now getting Quantum Computers going. But both of these are much more sophisticated and intense … matrix calculations! This time much further abstracted from step-by-step logic.

working with matrices

So far we know a matrix is a table of scalars (often Real Numbers), rows and columns sometimes representing data or perhaps representing a list of vectors. But perhaps it is just a matrix in its own right. I have mentioned we can add them and multiply them by a value from their scalars, and that they can represent the transformations we have studied in cartesian space (rotation, stretching etc). I also mentioned that Complex Numbers can fit in here in an intriguing way. So here are some details.

Any matrix can be multiplied by a scalar of its type: multiply each element individually:

`3\ [ [1, 2, 3],[4, 5, 6] ]=[ [3, 6, 9],[12, 15, 18] ]`

Two matrices of the same type (same scalar and dimensions) can be added: add each position individually:

`[ [1, 2, 3],[4, 5, 6] ]+[ [2, 2, 1],[1, 1, 3] ]=[ [3, 4, 4],[5, 6, 9] ]`

This is the same as the vector operations. Dividing is just multiplying by the reciprocal. The negative is just times negative 1. Subtraction is just adding the negative. All as with numbers.
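These element-wise operations are easy to sketch in code. A minimal illustration in plain Python (a real application would use a matrix library such as NumPy):

```python
# Element-wise matrix operations on plain Python lists of lists.

def scalar_multiply(k, A):
    """Multiply every element of matrix A by the scalar k."""
    return [[k * a for a in row] for row in A]

def matrix_add(A, B):
    """Add two matrices of the same dimensions, position by position."""
    return [[a + b for a, b in zip(rowA, rowB)] for rowA, rowB in zip(A, B)]

A = [[1, 2, 3], [4, 5, 6]]

print(scalar_multiply(3, A))                   # [[3, 6, 9], [12, 15, 18]]
print(matrix_add(A, [[2, 2, 1], [1, 1, 3]]))   # [[3, 4, 4], [5, 6, 9]]
```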

We often use capital letters for matrices, and the lower case letter with an index for its elements:

`A=[ [a_{11}, a_{12}, a_{13}],[a_{21}, a_{22}, a_{23}] ]`

That is row first, column second, starting top left — the order we write in English. If we need a comma in the indices we will use one! We call this a \(2{\times}3\) Real Matrix (assuming those scalars are Real Numbers).

We can switch the rows and columns of any matrix; this is called the Transpose, written `A^T`:

`[ [1, 2, 3],[4, 5, 6] ]^T=[ [1, 4],[2, 5],[3, 6] ]`
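The transpose is a one-liner sketch in plain Python:

```python
def transpose(A):
    """Switch rows and columns: element (i, j) becomes element (j, i)."""
    return [list(col) for col in zip(*A)]

print(transpose([[1, 2, 3], [4, 5, 6]]))  # [[1, 4], [2, 5], [3, 6]]
```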

Matrix multiplication

is only valid when the number of columns in the left matrix matches the number of rows in the right. It is written adjacently, with no dot, cross or other operator symbol.

Each element in the product is the Dot Product of its row from the left and its column from the right (which then must thus be the same length, hence the restriction above).

For \(\ AB=C\ \) we have \(\quad c_{ij}=\texttt{A-row}_i\bullet\texttt{B-col}_j\ \) or:

\(\displaystyle A:m{\Tiny\times}n,\quad B:n{\Tiny\times}p,\quad C:m{\Tiny\times}p\qquad AB=C:c_{ij}=\sum_{k=1}^n a_{ik}b_{kj}.\)
\(\left[\matrix{3 & 0 & 2 \cr 0 & \frac13 & 1 }\right]\left[\matrix{3 & 0 \cr 0 & \frac13 \cr 1 & 1 }\right] =\left[\matrix{11 & 2 \cr 1 & \frac{10}9 }\right].\)
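The sum formula translates directly into code. A sketch in plain Python, using exact fractions so the \(\frac13\) example above comes out exactly:

```python
from fractions import Fraction

def matmul(A, B):
    """c_ij = sum over k of a_ik * b_kj; columns of A must match rows of B."""
    n = len(B)  # rows of B, which must equal the columns of A
    assert all(len(row) == n for row in A)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))]
            for i in range(len(A))]

third = Fraction(1, 3)
A = [[3, 0, 2], [0, third, 1]]
B = [[3, 0], [0, third], [1, 1]]

result = matmul(A, B)
assert result == [[11, 2], [1, Fraction(10, 9)]]  # matches the worked example
```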

This is well defined for an \(m{\Tiny\times}n\) matrix `A` and an `m` dimensional row vector \(\bf m\) or an `n` dimensional column vector \(\bf n\). So \({\bf m}A\) gives a new `n` dimensional row vector: `A` maps any row vector in `m`-space to a new vector in `n`-space. Or the vector might represent a point in cartesian space. \(\ A\bf n\ \) likewise maps a column vector in `n`-space to `m`-space.

We are starting to see here how Space, Scalar, Vectors and Matrices fit together. We have a vector in `n`-space \(\bf v\); in one interpretation it is a list of scalars `v_i`. In another interpretation it is a matrix `V` with a single row (a \(1{\times}n\) matrix) or a single column (an \(n{\times}1\) matrix).

Put another way, multiplying by `A` is a function: any point (or vector) in `m`-space is the domain, the codomain is in `n`-space. Multiplying matrices is composing these functions. So if `B` is an \(n{\Tiny\times}p\) matrix and \(\bf p\) a vector in `p`-space then consider:

using row vectors, so right-multiplying thus\(\quad A\!:m{\mapsto}n,\ B\!:n{\mapsto}p,\ C\!:m{\mapsto}p\)
\({\rm f}({\bf m})={\bf m}A,\quad{\rm g}({\bf n})={\bf n}B,\quad{\rm h}={\rm g}\circ{\rm f}, \quad C=AB\)
then:\(\quad{\bf m}C={\rm h}({\bf m})={\rm g}({\rm f}({\bf m}) )\)

or using column vectors, left multiplying (flipping domain and codomain)
\({\rm a}({\bf n})=A{\bf n},\quad{\rm b}({\bf p})=B{\bf p},\quad{\rm c}={\rm a}\circ{\rm b}, \quad C=AB\)
then:\(\quad C{\bf p}={\rm c}({\bf p})={\rm a}({\rm b}({\bf p}) )=AB{\bf p}\)

The second might seem a more natural notation, fitting better with our function notation. The first fits more with the mapping notation.
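A quick numerical check of the composition idea in the column-vector convention, \(C{\bf p}=A(B{\bf p})\). The matrices and vector here are arbitrary illustrative values:

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6]]    # 2x3: maps column vectors in 3-space to 2-space
B = [[1, 0], [0, 2], [3, 1]]  # 3x2: maps column vectors in 2-space to 3-space
p = [[5], [7]]                # a column vector in 2-space

left  = matmul(matmul(A, B), p)  # (AB)p : apply the product matrix
right = matmul(A, matmul(B, p))  # A(Bp) : apply B, then A
print(left == right)             # True: multiplying matrices composes the maps
```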

square matrices

Square matrices behave in a more straightforward way, especially considered with a vector or cartesian space of the same dimension. Two square matrices of the same dimension multiply to make another of the same dimension: the set of \(n{\Tiny\times}n\) matrices is closed under multiplication. There is an Identity under this multiplication, the square matrix with all ones on the main diagonal and zeros elsewhere. This multiplication is not commutative. A square matrix can be multiplied by a vector of the same dimension, considered as a single row matrix on the left or a single column matrix on the right; the result is the same kind of vector. This means square matrices define transformations in the corresponding cartesian or vector space. They tie together several very important mathematical structures mentioned in the discussion here.
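A small sketch checking the identity and the failure of commutativity (the \(2{\times}2\) example matrices are arbitrary):

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

I = [[1, 0], [0, 1]]  # the 2x2 identity matrix
A = [[0, 1], [1, 0]]  # arbitrary 2x2 examples
B = [[1, 2], [0, 1]]

print(matmul(A, I) == A == matmul(I, A))  # True: I is the identity
print(matmul(A, B) == matmul(B, A))       # False: multiplication is not commutative
```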

Consider first \(2{\times}2\) matrices.

Planning

An extremely simplified example, yet one giving an idea of using matrices (tables of numbers) very explicitly in a business context, is answering a simple logistics question.

We have some materials, we make some intermediate products then use them to make a range of final products. How much do we need of the original materials to satisfy an order for some combination of final products?

Adding some constraints and a goal to this would inform some business decisions. Maybe we have limits on our input supplies and a revenue per unit of each final product, and we want maximum revenue. This question very rapidly becomes computationally intense, so let's keep it extremely simple as an example.

Table 1 has a row for each input material, with the quantities needed for each intermediate product. So inputs are rows, products are columns. It is a matrix, `A`.

`[ [5, 2, 0, 1], [2, 0, 0, 3], [1, 1, 5, 4] ]=A`

Table 2 is the same for the final products with intermediate products as inputs (so rows). Matrix `B`.

`[ [3, 4], [2, 5], [6, 1], [3, 2] ]=B`

Then matrix multiplication applies, the left matrix having the same number of columns as the right has rows. `T=AB` is a useful new \(3{\times}2\) ‘Totals’ table: its rows are the amounts of original materials required for each final product (the columns). These matrices summarise steps or processes; inputs are rows, outputs are columns.

For the answer to our question, a list of materials to produce 20 of the first product and 50 of the second, this ‘Order’ is one column, two rows, an output from two inputs:

\(AB\left[\matrix{20\cr50}\right]=\left[\matrix{2040\cr1000\cr2040}\right].\)
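The whole calculation, using the tables `A` and `B` above, as a plain Python sketch:

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[5, 2, 0, 1], [2, 0, 0, 3], [1, 1, 5, 4]]  # materials x intermediates
B = [[3, 4], [2, 5], [6, 1], [3, 2]]            # intermediates x finals

T = matmul(A, B)            # 'Totals': materials needed per final product
order = [[20], [50]]        # 20 of the first product, 50 of the second
print(matmul(T, order))     # [[2040], [1000], [2040]]
```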

Assuming the data for your operations is there in matrix form, many questions are quick to enter into a calculator on your phone. More likely someone in IT has made a web form with the most likely questions, spitting out the answers, and if you are lucky your question is there. They are the ones with the maths.

Nevertheless that three-number list was not hard — I just went to matrix mode, entered the two matrices, then the equation, and read the answer. Then copy–paste into the wiki and a little formatting put it here. I can generally copy–paste a table from text into a matrix in my phone calculator (or back), but without a program in between I need to edit it into the format below. There are plenty of programs, running on my phone, that will read a table in many different formats and save in these standard ones. The calculator can copy or paste the LaTeX formulas from the wiki as matrices, or can copy or paste like
{{5,2,0,1},{2,0,0,3},{1,1,5,4}}
which I could make after copying a table pretty quickly if the tables were big enough to be worth avoiding typing them in.

A matrix maths package is a very important tool dealing with more complicated algorithms giving answers to more complicated business, engineering or planning questions. Again specific non-maths user apps often present some query interface for anticipated questions and maintain a database of tables, then use the matrix package for the replies. Matrix arithmetic is a tool many sophisticated analysts use directly, a lot more powerful than a spreadsheet but without the advantage that spreadsheets are a simple extension of familiar account books.

Optimisation

This involves lots of calculation and modelling of systems. Mostly these are not simple enough for a closed algebraic solution, so we try by running simulations with different options. The first app that sold personal computers was the spreadsheet, VisiCalc. It put them on office desks for the very first time and set up Bill Gates, Steve Jobs and their companies. A spreadsheet is a mathematical model, often of a business or project: you put in numbers and equations, then you can play around trying different possibilities and it calculates the outcome live. You settle on something that works for you. It was so useful and effective that it has hardly changed since, except that you can make the cells look pretty with nice formatting and there is a display option, called a Pivot Table, that summarises a matrix of number cells very usefully. This is a semi-manual, semi-automated system for optimisation and it completely transformed business and the office. You can also put in a big table of data (a matrix) and use formulas for statistics.

I was 19 the year it was released, doing 2nd year Computer Science at uni, not quite the first cohort to have a 1st year Computer Science option but almost. I didn’t go that way then; I ran the student radio station and did not study much. I actually did essentially the same degree starting 2010, about when you were in kindy and when I was 50. Big computers (price maybe a million, and a whole floor of an office building) in banks, navy, physics departments, census bureaus and meteorology offices had been usual since I was in kindy, but only a very few in very wealthy departments for the decade before that.

A spreadsheet is a 2D table; each cell has a value which might be an equation. It is a kind of messy matrix with internal relationships. The maths systems that became available on a personal computer, now on your phone if you want, started using matrix language and operations as the interface. It is a very natural engineering, business and data language. Those systems are pushing into much more natural language interfaces now, because they can: we have finally got to the level of computational sophistication and scale of machine that can do it, so we do. It was always the goal, the dream. Now the app translates your questions into matrix operations for you. But if you do not understand the underlying operations it is very hard to ask the right question or understand the implications of the reply!

Because we have powerful computers in our pocket, extremely powerful ones almost always immediately available via them on the cloud, and then much more powerful ones again available in any workplace that could use them, we have come to rely on this maths. We do things every day that would have been utterly impossible when you were starting kindy. To understand and interact with these things properly we need serious maths language.

At the same time we always have a calculator nearby, probably an app, or a few for different contexts, on your phone. I just gave a cafe a $20 note for a $14.90 lunch and the change needed a calculator. No cash registers here, just a drawer with some cash and of course a device to pay by card. Most of us do not do arithmetic in our head at all. But we very certainly need the language to talk to the computer, the mathematical model in our head to frame the question, and the ability to interpret the reply. And that does need number and quantity, and you are pretty shaky on that if you cannot do arithmetic quickly and accurately in your head. We need it even more because those questions and answers are more ubiquitous in life.

Linear Algebra

A linear relationship \(ax+by+cz+d=0\) between three variables can be expressed as \(\left[\matrix{a&b&c&d}\right]{\Tiny\left[\matrix{x\cr y\cr z\cr1}\right]}=\left[0\right].\)
Every linear relationship between these variables can be represented by a 4-vector, and every 4-vector represents a relationship: the vector space is isomorphic to the set of such relationships. This was the original motivation for matrix algebra. A set of simultaneous equations can be represented by the rows of a matrix. The school method called ‘elimination’ is then a series of row operations on that matrix, aiming to make all the entries below and to the left of the main diagonal (from the top left) zero. We can multiply a row, add rows and swap rows, leaving the problem the same. Then with all those zeros we can easily substitute to find the values of the variables, if there is a solution. Each of these row operations is left-multiplying by an elementary matrix, a square matrix whose dimension is the number of equations. These can be combined by multiplication.

`[ [1,0,0], [0,1,0], [0,0,4] ]` is multiply last equation by 4

`[ [0,1,0], [1,0,0], [0,0,1] ]` is swap the first two equations

`[ [1,0,0], [-2,1,0], [0,0,1] ]` is subtract twice the first from the second.

This is a very easily automated procedure. A modern notation for one of the most ancient mathematical methods.
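A sketch applying the three elementary matrices above by left-multiplication to an arbitrary coefficient matrix:

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

M = [[1, 2, 3], [2, 5, 7], [0, 1, 2]]  # arbitrary coefficient matrix

scale = [[1, 0, 0], [0, 1, 0], [0, 0, 4]]   # multiply last row by 4
swap  = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]   # swap the first two rows
elim  = [[1, 0, 0], [-2, 1, 0], [0, 0, 1]]  # subtract twice row 1 from row 2

print(matmul(scale, M))  # last row becomes [0, 4, 8]
print(matmul(swap, M))   # first two rows exchanged
print(matmul(elim, M))   # second row becomes [0, 1, 1]
```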

Cosmology

We slowly came to understand that the classical dynamics that Newton described was an approximation, a very good one in most ordinary circumstances. At the time it was formulated, 340 years ago, there were serious questions about frames of reference, but that was a purely theoretical, philosophical consideration. Then about 160 years ago our description of light as an electromagnetic process was formed. Maxwell’s equations describe light as waves, the interaction between a changing electric field and a changing magnetic field. This was a radical shift from the idea of light as some kind of particle bouncing around. But familiar kinds of waves, say sound, are in a medium. Sound can be thought of as pressure waves in air, or as air particles vibrating. The particle picture is somewhat more concrete; waves are much, much easier to work with.

structures/matrix/applications.txt · Last modified: 2025/02/16 19:58 by simon