Linear algebra is the branch of mathematics covering vectors, vector spaces, linear mappings, and systems of linear equations. It is foundational across science and engineering because of its wide range of applications. In this article, we explore key ways linear algebra is applied to real-life scenarios in different fields.
Economists around the globe use linear algebra to solve optimization problems and to build input-output models of how different sectors interact. These models help explain the behavior of an economy and the impact of a change in one sector on the others. We will look at both ideas in detail here.
Linear algebra helps economists solve real-life problems such as profit maximization, cost minimization, and resource allocation, typically through linear programming and linear transformation techniques.
For example, let us work through a profit-maximization problem.
Imagine a company, Buzzytunes, that produces two products: 1) wireless headsets (a) and 2) wired headsets (b). The profit is $40 per wireless headset and $30 per wired headset. Buzzytunes has limited resources: 100 labor hours and 80 units of raw material. Each wireless headset requires 2 hours of labor and 1 unit of raw material; each wired headset requires 1 hour of labor and 2 units of raw material. With the objective of maximizing profit, we formulate this as a linear programming problem:
Maximize P = 40a + 30b
Constraints:
2a + b ≤ 100 (labor)
a + 2b ≤ 80 (raw material)
a, b ≥ 0 (non-negativity)
Using the simplex method, we convert the inequalities to equalities by adding slack variables s1 and s2 for the two constraints:
Constraints:
2a + b + s1 = 100 (labor)
a + 2b + s2 = 80 (raw material)
a, b, s1, s2 ≥ 0 (non-negativity)
Initial simplex tableau:

Basis |   a     b    s1   s2 |  RHS
s1    |   2     1     1    0 |  100
s2    |   1     2     0    1 |   80
P     | -40   -30     0    0 |    0
The simplex algorithm iteratively improves the solution by choosing entering and leaving variables until the optimum is reached.
Iteration 1:
Pivot column: the most negative value in the objective row is -40, in column a, so a is the entering variable.
Pivot row: the smallest positive ratio of RHS to pivot-column entry is 100/2 = 50, in row 1, so s1 is the leaving variable.
Perform row operations to make the pivot element 1 and the other entries in the pivot column 0: pivot on the 2 in row 1, column a.
Iteration 2:
Pivot column: the most negative value in the objective row is now -10, in column b, so b is the entering variable.
Pivot row: the smallest positive ratio is 30/(3/2) = 20, in row 2, so s2 is the leaving variable.
Perform row operations to make the pivot element 1 and the other entries in the pivot column 0: pivot on the 3/2 in row 2, column b.
When all the coefficients in the objective row are non-negative, the optimal solution has been found:
a = 40
b = 20
P = 40(40) + 30(20) = 2200
Buzzytunes should produce 40 wireless and 20 wired headsets to maximize profit at $2,200. This is a classic real-life application for economists.
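As a quick check, here is a minimal sketch that solves the same LP with SciPy's linprog (the use of SciPy is our assumption; the article works the tableau by hand). Note that linprog minimizes, so we negate the profit coefficients:

```python
# Sketch: verifying the Buzzytunes LP with SciPy.
from scipy.optimize import linprog

c = [-40, -30]                 # maximize 40a + 30b  ->  minimize -40a - 30b
A_ub = [[2, 1],                # 2a + 1b <= 100  (labor hours)
        [1, 2]]                # 1a + 2b <= 80   (raw-material units)
b_ub = [100, 80]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
a, b = res.x
print(f"a = {a:.0f}, b = {b:.0f}, max profit = {-res.fun:.0f}")
# Expected: a = 40, b = 20, max profit = 2200
```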
Input-output models use matrices to represent the relationships between sectors, showing how the output of one sector becomes an input to another.
Imagine an economy with three sectors: 1) fisheries, 2) manufacturing, and 3) services. The input-output matrix shows the flow of goods and services between the sectors; each entry records how much output from one sector is used as input by another.
Let A be the input-output matrix:
A =
[ a11  a12  a13 ]
[ a21  a22  a23 ]
[ a31  a32  a33 ]
Here, element aij is the amount of sector i's output required to produce one unit of sector j's output.
Let x be the vector of total output for each sector:
x = [x1, x2, x3]^T
Let d be the vector of final demand for each sector's output:
d = [d1, d2, d3]^T
The relationship between total output and final demand is:
x = Ax + d
(I - A)x = d
x = (I - A)^-1 d
where I is the identity matrix. Solving this equation gives the total output required to meet the final demand, accounting for inter-sector dependencies.
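Here is a minimal NumPy sketch of solving the Leontief system; the coefficient and demand values below are illustrative assumptions, not figures from the article:

```python
# Sketch: solving the Leontief model (I - A)x = d with NumPy.
import numpy as np

A = np.array([[0.1, 0.2, 0.1],   # inputs per unit of output:
              [0.3, 0.1, 0.2],   # rows = supplying sector,
              [0.2, 0.3, 0.1]])  # columns = producing sector
d = np.array([50, 30, 20])       # final demand (fisheries, manufacturing, services)

x = np.linalg.solve(np.eye(3) - A, d)   # more stable than forming (I - A)^-1
print(x)   # total output each sector must produce to satisfy d
```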
Economists use these models to predict the impact of changes in one sector on the others. For example, if demand for fisheries products increases, the input-output model shows how this change propagates to the manufacturing and services sectors through their interdependencies.
By using linear algebra, economists can handle large-scale models with many variables and constraints, enabling efficient analysis of complex economic systems. This approach is essential for understanding and predicting economic behavior.
Linear algebra is used in cryptographic systems to transform a plaintext message into ciphertext (an encrypted message) and back. A basic example is the Hill cipher, which uses matrix multiplication and matrix inverses for encryption and decryption and has plenty of real-world applications.
As a practical example of this abstract concept, let us walk through the Hill cipher. Text is converted to numbers with A = 0, B = 1, ..., Z = 25.
First, choose a key matrix K. For simplicity, we use a 2x2 matrix:
K =
[ 3  3 ]
[ 2  5 ]
The plaintext message is 'DDAY'. With D = 3, A = 0, Y = 24, we split it into two column vectors:
P1 = [3, 3]^T (for 'DD')
P2 = [0, 24]^T (for 'AY')
Encrypt the message by multiplying the key matrix K by each plaintext vector:
C1 = K·P1 = [3·3 + 3·3, 2·3 + 5·3]^T = [18, 21]^T
C2 = K·P2 = [3·0 + 3·24, 2·0 + 5·24]^T = [72, 120]^T
To keep all values in the range 0 to 25, we reduce the resulting vectors modulo 26. C1 is unchanged: [18, 21]^T. For C2, 72 mod 26 = 20 and 120 mod 26 = 16, giving [20, 16]^T. The ciphertext values map back to letters as 18 = S, 21 = V, 20 = U, 16 = Q, so the ciphertext is 'SVUQ'.
To decrypt, we reverse the process. First, find the inverse of K modulo 26. The determinant is
det(K) = (3·5) - (3·2) = 9
The modular inverse of 9 modulo 26 is 3 (since 9·3 mod 26 = 1). Multiplying the adjugate adj(K) = [5, -3; -2, 3] by 3 and reducing modulo 26 gives
K^-1 mod 26 =
[ 15  17 ]
[ 20   9 ]
Multiply the inverse key by each ciphertext vector:
K^-1·C1 = [15·18 + 17·21, 20·18 + 9·21]^T = [627, 549]^T
K^-1·C2 = [15·20 + 17·16, 20·20 + 9·16]^T = [572, 544]^T
Reducing modulo 26 recovers the original values: 627 mod 26 = 3, 549 mod 26 = 3, 572 mod 26 = 0, and 544 mod 26 = 24, i.e. (3, 3) and (0, 24).
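Putting both directions together, here is a minimal NumPy sketch of this exact example (the helper function apply is ours, for illustration):

```python
# Sketch: the Hill-cipher example end to end with NumPy (A=0, ..., Z=25).
import numpy as np

K = np.array([[3, 3],
              [2, 5]])
K_inv = np.array([[15, 17],          # 3 * adj(K) mod 26, as derived above
                  [20,  9]])

def apply(key, text):
    nums = [ord(ch) - ord('A') for ch in text]
    out = []
    for i in range(0, len(nums), 2):                 # process 2-letter blocks
        block = np.array(nums[i:i + 2])
        out.extend((key @ block) % 26)
    return ''.join(chr(int(n) + ord('A')) for n in out)

print(apply(K, 'DDAY'))       # -> SVUQ
print(apply(K_inv, 'SVUQ'))   # -> DDAY
```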
That takes 'SVUQ' back to 'DDAY'. Modular arithmetic is used throughout cryptography alongside linear algebra; in the Hill cipher, it keeps the matrix operations within a fixed range, preserving the integrity of the encrypted message. Linear algebra is also closely related to quantum computing, where it plays a vital role in developing quantum algorithms for secure communications.
Linear codes are used to detect and correct errors in data transmission. One popular linear code is the Hamming Code, which can detect up to two-bit errors and correct one-bit errors.
As an example, consider the Hamming(7,4) code.
Start with a 4-bit data vector D = (1, 0, 1, 1).
Encode the data vector into a 7-bit codeword C using a generator matrix G. In systematic form, G = [I4 | P]; one choice of parity block P consistent with the codeword below is:
G =
[ 1 0 0 0 | 1 0 1 ]
[ 0 1 0 0 | 1 1 0 ]
[ 0 0 1 0 | 0 1 1 ]
[ 0 0 0 1 | 1 1 1 ]
The codeword C is obtained by multiplying D by G (mod 2):
C = D × G
C = (1, 0, 1, 1, 0, 0, 1)
Suppose the received codeword is C' = (1, 0, 1, 1, 0, 0, 0): the last bit was flipped in transit.
Calculate the syndrome S by multiplying C' with the transpose of the parity-check matrix H = [P^T | I3]:
H =
[ 1 1 0 1 | 1 0 0 ]
[ 0 1 1 1 | 0 1 0 ]
[ 1 0 1 1 | 0 0 1 ]
The syndrome S = C' × H^T (mod 2) = (0, 0, 1).
A nonzero syndrome equals the column of H at the error position; (0, 0, 1) is the 7th column of H, so the 7th bit is erroneous.
Correct the error by flipping the 7th bit of C', recovering (1, 0, 1, 1, 0, 0, 1).
Finally, extract the original data D = (1, 0, 1, 1) from the first four bits of the corrected codeword C.
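Here is a minimal NumPy sketch of the whole round trip, using the systematic G and H given above (one consistent choice, since the article does not reproduce the original matrices; all arithmetic is mod 2):

```python
# Sketch: the Hamming(7,4) example with NumPy.
import numpy as np

P = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])     # G = [I4 | P]
H = np.hstack([P.T, np.eye(3, dtype=int)])   # H = [P^T | I3]

D = np.array([1, 0, 1, 1])
C = D @ G % 2
print(C)                           # [1 0 1 1 0 0 1]

C_rx = np.array([1, 0, 1, 1, 0, 0, 0])       # bit 7 corrupted in transit
S = C_rx @ H.T % 2                 # syndrome
print(S)                           # [0 0 1] = 7th column of H
err = next(i for i in range(7) if np.array_equal(H[:, i], S))
C_rx[err] ^= 1                     # flip the erroneous bit
print(C_rx[:4])                    # recovered data: [1 0 1 1]
```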
To summarize, linear algebra provides essential tools for cryptographic systems, enabling secure encryption and decryption as well as reliable error detection and correction. Together, these ensure the integrity and confidentiality of data.
We can call linear algebra the lifeblood of machine learning! It plays a very important role in representing and manipulating data through linear functions and linear transformations. Geometry provides a visual understanding of these concepts, especially in high-dimensional spaces and neural networks (facial recognition systems, deep learning models). Linear algebra techniques are also crucial in signal analysis and signal processing, where signals are represented as vectors or matrices; this allows efficient manipulation and transformation of signals for noise reduction, feature extraction, and signal compression. Linear algebra likewise plays a foundational role in search engines.
Vectors represent data points and matrices represent datasets or tables. How does this relate to machine learning, you may ask? Computers understand numbers, and vectors and matrices are how we represent them: each feature in a dataset is a vector, a one-dimensional array (although geometrically it has magnitude and direction), and the dataset as a whole is a matrix. It is therefore crucial to understand operations such as addition, multiplication, transposition, and matrix decomposition; mastering them on a simple 3x3 matrix equips an engineer to perform the same operations on a large dataset.
For example, transposition flips a matrix around its diagonal. This is useful in image processing and data cleaning: an image can be represented as a matrix in which each entry is a pixel value, and transposing that matrix is a building block for applying filters or performing image recognition.
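As a quick illustration, a minimal NumPy sketch with made-up pixel values:

```python
# Sketch: transposing an image-like matrix with NumPy (toy 2x3 "image").
import numpy as np

img = np.array([[10, 20, 30],
                [40, 50, 60]])   # 2 rows x 3 columns of pixel values
print(img.T)                     # 3 rows x 2 columns: flipped about the diagonal
```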
An eigenvalue, denoted λ, of a square matrix A is a scalar (a single number), and an eigenvector, denoted v, is a nonzero vector such that multiplying A by v yields a scaled version of v, scaled by λ. That is, for a square matrix A, an eigenvector v and its eigenvalue λ satisfy Av = λv.
In the machine learning context, eigenvalues and eigenvectors let us decompose data into principal components: the "directions" of maximum variance in the data. This is very useful for dimensionality reduction (reducing the number of features in a dataset while retaining the essential information). The resulting technique, principal component analysis (PCA), is used in feature selection, where the most important features are chosen before model training, as sketched below.
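A minimal PCA sketch built directly on the eigendecomposition Av = λv of the covariance matrix (the synthetic dataset and the choice of two components are assumptions for illustration):

```python
# Sketch: PCA via the eigendecomposition of the covariance matrix.
import numpy as np

X = np.random.default_rng(0).normal(size=(200, 3))   # 200 samples, 3 features
X[:, 2] = 2 * X[:, 0] + 0.1 * X[:, 1]                # make one feature redundant

Xc = X - X.mean(axis=0)                  # center the data
cov = Xc.T @ Xc / (len(X) - 1)           # 3x3 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: covariance is symmetric

order = np.argsort(eigvals)[::-1]        # sort by explained variance
components = eigvecs[:, order[:2]]       # keep 2 principal directions
X_reduced = Xc @ components              # project 3-D data down to 2-D
print(X_reduced.shape)                   # (200, 2)
```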
Singular value decomposition (SVD): SVD is a matrix factorization method that breaks a matrix into three simpler matrices. It is important for dimensionality reduction and data compression, helping identify patterns, reduce noise, and extract important features from data, which is useful in techniques like collaborative filtering and recommender systems.
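A minimal sketch of a rank-k approximation via SVD, the core step behind the collaborative-filtering use mentioned above (the toy ratings matrix is assumed):

```python
# Sketch: best rank-2 approximation of a small ratings matrix with SVD.
import numpy as np

R = np.array([[5, 4, 0, 1],     # rows = users, columns = items
              [4, 5, 1, 0],
              [0, 1, 5, 4],
              [1, 0, 4, 5]], dtype=float)

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2                                           # keep the 2 largest singular values
R_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]     # best rank-2 approximation
print(np.round(R_k, 1))
```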
Linear independence: in simple words, a set of vectors is linearly independent if none of them can be formed by scaling and adding the others. In a dataset, linear independence ensures that each feature provides unique information. For example, if a population dataset has a gender column in which every record holds the same value, that feature is of little use for any prediction. Independence prevents redundancy and improves model performance. It also determines the rank of a matrix, which affects invertibility and the stability of algorithms such as linear regression and matrix factorization.
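A minimal sketch of how a dependent row shows up as reduced rank (toy numbers, assumed for illustration):

```python
# Sketch: a linearly dependent row lowers the matrix rank.
import numpy as np

X = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # this row is 2x the first: dependent
              [1.0, 0.0, 1.0]])
print(np.linalg.matrix_rank(X))   # 2, not 3: X is singular, hence not invertible
```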
Matrix equations: solving matrix equations is important for understanding transformations of spaces, covariance matrices, and systems of linear equations.
Many algorithms, like linear regression, can be expressed with matrix operations. Linear regression fits a line to a set of data points such that it minimizes the sum of squared differences between the observed values and the values predicted by the line, usually called the best-fit line. This involves solving linear equations of the form y = Xβ + ε, with the coefficient estimate β̂ = (X^T X)^-1 X^T y.
The matrix representation of linear regression uses these linear algebra concepts for efficient computation, which is especially important when working with large real-world datasets; a sketch follows.
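A minimal sketch of the normal-equation estimate on synthetic data (in practice, np.linalg.lstsq or a dedicated solver is preferred for numerical stability):

```python
# Sketch: beta_hat = (X^T X)^-1 X^T y with NumPy on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])  # intercept + 2 features
beta_true = np.array([1.0, 2.0, -3.0])
y = X @ beta_true + 0.1 * rng.normal(size=100)                  # y = X beta + eps

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # solve the normal equations
print(beta_hat)                                # close to [1, 2, -3]
```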
Linear algebra is very important in analyzing electrical circuits. Kirchhoff's laws can be expressed as a system of linear equations, which is then solved using matrix operations. In complex circuits, this matrix formulation simplifies the calculations needed to find the current through and voltage across each circuit element.
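As an illustration, here is a minimal sketch of mesh analysis on a hypothetical two-loop circuit (a 10 V source with resistors R1 = 2 Ω, R2 = 3 Ω, and a shared R3 = 4 Ω; all values are assumed):

```python
# Sketch: Kirchhoff's voltage law as a linear system, solved for mesh currents.
import numpy as np

A = np.array([[6.0, -4.0],    # (R1 + R3) i1 - R3 i2 = 10
              [-4.0, 7.0]])   # -R3 i1 + (R2 + R3) i2 = 0
b = np.array([10.0, 0.0])

i1, i2 = np.linalg.solve(A, b)
print(f"i1 = {i1:.3f} A, i2 = {i2:.3f} A")   # ~2.692 A and ~1.538 A
```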
Linear algebra is also used to describe the states and transformations of quantum systems. Quantum states are represented by vectors, and quantum operations by matrices; this makes it possible to model complex systems, analyze their behavior, and predict measurement outcomes in quantum mechanics.
Quantum state:
A quantum state can be represented as a vector in a complex vector space. For example, imagine a simple quantum system with two possible states, ∣0⟩ and ∣1⟩. A general state can be written as a superposition of these states:
∣ψ⟩=α∣0⟩+β∣1⟩
Here α and β are complex numbers with ∣α∣^2 + ∣β∣^2 = 1.
Operators and observables:
Observables, such as the spin or energy of a particle, are represented by Hermitian matrices (operators). For a two-state system, an observable is a 2x2 matrix. Consider the Pauli-Z matrix:
σz =
[ 1   0 ]
[ 0  -1 ]
To measure the observable σz in the state ∣ψ⟩, we calculate the expected value ⟨ψ∣σz∣ψ⟩.
If ∣ψ⟩ = [α, β]^T, then the expected value is
⟨ψ∣σz∣ψ⟩ = [α*, β*] · σz · [α, β]^T = ∣α∣^2 − ∣β∣^2
This is the expected measurement result for the observable σz; a numerical sketch follows.
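A minimal numerical sketch with assumed amplitudes α = 1/√3 and β = √(2/3):

```python
# Sketch: the expectation value <psi|sigma_z|psi> with NumPy.
import numpy as np

alpha, beta = 1 / np.sqrt(3), np.sqrt(2 / 3)
psi = np.array([alpha, beta], dtype=complex)      # |psi> = alpha|0> + beta|1>
sigma_z = np.array([[1, 0],
                    [0, -1]], dtype=complex)

expect = np.vdot(psi, sigma_z @ psi).real         # vdot conjugates the first arg
print(expect)                                     # |alpha|^2 - |beta|^2 = -1/3
```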
Balancing a chemical equation ensures that the number of atoms of each element is the same on both sides of the equation. Linear algebra provides an organized way to balance such equations using matrices.
For example, consider balancing C2H6 + O2 → CO2 + H2O, the combustion of ethane (C2H6) with oxygen (O2) to form carbon dioxide (CO2) and water (H2O).
The unbalanced equation
C2H6+O2→CO2+H2O
Setting up a system of equations, one per element:
For carbon C : 2x1=x3
For hydrogen (H): 6x1=2x4
For oxygen (O): 2x2=2x3+x4
Here, x1, x2, x3, and x4 are the coefficients of C2H6, O2, CO2, and H2O, respectively.
Now let us convert this to matrix form. Moving every term to one side gives the homogeneous system Mx = 0, where
M =
[ 2  0  -1   0 ]
[ 6  0   0  -2 ]
[ 0  2  -2  -1 ]
Gaussian elimination leaves one free variable; choosing it so that all coefficients are whole numbers gives
x1 = 2, x2 = 7, x3 = 4, x4 = 6
This yields the final balanced equation:
2C2H6+7O2→4CO2+6H2O
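A minimal sketch that finds the same coefficients from the element-balance matrix, here via the null space computed with SVD rather than hand elimination:

```python
# Sketch: balancing 2 C2H6 + 7 O2 -> 4 CO2 + 6 H2O from the null space of M.
import numpy as np

# columns: x1 (C2H6), x2 (O2), x3 (CO2), x4 (H2O); rows: C, H, O balance
M = np.array([[2, 0, -1, 0],
              [6, 0, 0, -2],
              [0, 2, -2, -1]], dtype=float)

_, _, Vt = np.linalg.svd(M)
null = Vt[-1]                                   # basis of the 1-D null space
coeffs = null / null[np.argmin(np.abs(null))]   # normalize smallest entry to 1
print(coeffs * 2)                               # -> [2. 7. 4. 6.]
```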
Route optimization finds the most efficient path through a network. Networks are commonly modeled as graphs, where linear algebra techniques can be applied to adjacency matrices.
We represent the network as an adjacency matrix, where each element aij is the weight (e.g., distance or time) of the edge from node i to node j.
Imagine we have a small network of four cities connected by roads of different lengths, and we want to find the shortest path between every pair of cities. We'll use the Floyd-Warshall algorithm and represent the network with an adjacency matrix: cities are nodes and roads are edges weighted by the distances between the cities. Direct roads with given distances connect city 1 and city 2, city 1 and city 3, city 2 and city 3, and city 3 and city 4. From these distances we build an adjacency matrix A, where aij is the distance from city i to city j:
A =
We'll use the Floyd-Warshall algorithm to find the shortest paths between all pairs of nodes. Set the distance matrix D equal to the adjacency matrix A. Then, for each intermediate node k, update the distance for every pair of cities (i, j) as follows:
dij = min(dij, dik + dkj)
Taking k = 1:
D =
Taking k = 2:
D =
Taking k = 3:
D =
After these Floyd-Warshall updates, this matrix is the final distance matrix D, containing the shortest paths between all pairs of cities.
The shortest path from city 1 to city 4 is 6 units, from city 2 to city 3 is 2 units, and from city 3 to city 4 is 1 unit, and so on for all pairs of cities; the sketch below reproduces these values.
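A minimal Floyd-Warshall sketch. The article's original distance matrix is not reproduced above, so the road lengths below are assumed values chosen to be consistent with the shortest paths just quoted:

```python
# Sketch: Floyd-Warshall on a 4-city network (assumed edge weights).
import numpy as np

INF = np.inf
D = np.array([[0,   3,   5,   INF],
              [3,   0,   2,   INF],
              [5,   2,   0,   1],
              [INF, INF, 1,   0]])

n = len(D)
for k in range(n):                 # allow paths through intermediate node k
    for i in range(n):
        for j in range(n):
            D[i, j] = min(D[i, j], D[i, k] + D[k, j])

print(D)   # D[0, 3] = 6, D[1, 2] = 2, D[2, 3] = 1
```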
Linear algebra is also very important in computer graphics and game development, especially for transforming images and 3D models. We'll see this with a simple 2D example showing how scaling, rotation, and translation can each be represented as matrix operations.
Additionally, the concepts of linear spaces and linear operators play crucial roles in these transformations. For example, 3D modeling and animation in video games and simulations depend on transformation matrices to manipulate objects in three-dimensional space, creating realistic visual effects.
Let's imagine a 2D shape, such as a triangle, defined by its vertices. We'll use matrices to apply various transformations to this shape.
Initial Triangle
Let's say our triangle has vertices at points A(1, 1), B(3, 1), and C(2, 4).
Translation
To translate (move) a shape, we add a translation vector (tx, ty) to each vertex. In homogeneous coordinates, the translation matrix is:
T =
[ 1  0  tx ]
[ 0  1  ty ]
[ 0  0   1 ]
For example, if we want to move the triangle by (2, 3):
T =
[ 1  0  2 ]
[ 0  1  3 ]
[ 0  0  1 ]
The vertices of the triangle are extended with a 1 in the third coordinate:
V =
[ 1  3  2 ]
[ 1  1  4 ]
[ 1  1  1 ]
Multiplying the translation matrix by this vertex matrix gives the translated vertices A'(3, 4), B'(5, 4), C'(4, 7).
Scaling
To scale a shape, we multiply by a scaling matrix:
S =
[ sx   0 ]
[  0  sy ]
For example, to scale by 2 in both directions:
S =
[ 2  0 ]
[ 0  2 ]
Applying this to the x and y coordinates of the translated vertices gives the scaled vertices A''(6, 8), B''(10, 8), C''(8, 14).
Rotation
To rotate a shape counterclockwise by an angle θ, we use the rotation matrix:
R =
[ cos θ  -sin θ ]
[ sin θ   cos θ ]
For example, to rotate by 90 degrees (θ = 90°):
R =
[ 0  -1 ]
[ 1   0 ]
Applying this to the scaled vertices gives the rotated vertices A'''(-8, 6), B'''(-8, 10), C'''(-14, 8).
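A minimal NumPy sketch that reproduces the translate, scale, and rotate steps above (homogeneous coordinates are used for the translation step):

```python
# Sketch: the translate -> scale -> rotate pipeline from this section.
import numpy as np

V = np.array([[1, 3, 2],     # x-coordinates of A, B, C
              [1, 1, 4],     # y-coordinates
              [1, 1, 1]])    # homogeneous 1s

T = np.array([[1, 0, 2],
              [0, 1, 3],
              [0, 0, 1]])            # translate by (2, 3)
V1 = T @ V                           # A'(3,4), B'(5,4), C'(4,7)

S = np.array([[2, 0], [0, 2]])       # scale by 2 in both directions
V2 = S @ V1[:2]                      # A''(6,8), B''(10,8), C''(8,14)

theta = np.deg2rad(90)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
V3 = R @ V2                          # A'''(-8,6), B'''(-8,10), C'''(-14,8)
print(np.round(V3))
```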
This article was written by Kartikey Vyas and edited by our writing team.
🚀 "Build ML Pipelines Like a Pro!" 🔥 From data collection to model deployment, this guide breaks down every step of creating machine learning pipelines with top resources
Explore top AI tools transforming industries—from smart assistants like Alexa to creative powerhouses like ChatGPT and Aiva. Unlock the future of work, creativity, and business today!
Master the art of model selection to supercharge your machine-learning projects! Discover top strategies to pick the perfect model for flawless predictions!