When I started learning Machine Learning, there were plenty of courses online. I began watching them without refreshing my math concepts; initially I felt there was no need for a mathematical basis, but as I explored more advanced material I realized that one cannot master the subject, and certainly cannot apply it to real-world problems, without a strong mathematical background. I have observed that this is true for most students eager to learn Machine Learning. They jump straight to the meat of it: classification, clustering and so on. That is a totally wrong approach.

In this post I’ll introduce the basic building blocks that are essential for mastering this subject and being able to apply Machine Learning to real-world problems.

**Linear Algebra 101**

This is one of the few basics that need to be covered before getting into machine learning.

Most of you might have covered this material in high school, so this will be a refresher plus some of the more advanced things needed for Machine Learning.

Linear algebra is the branch of Mathematics that deals with a general coordinate system, the interaction of planes within such a generalized coordinate system (I’ll talk about what I mean by a generalized coordinate system), and the operations applied to them.

I assume you are all from a mathematical background, so you have probably studied matrices at some point in your education, but have you ever thought about what a matrix actually is? Think for a minute and see if you come up with an answer.

A matrix basically denotes a linear mapping between two spaces. What I mean by that is: consider the matrix [cos(theta) -sin(theta); sin(theta) cos(theta)]. If you multiply any vector in a simple 2-D space by this matrix, the vector will be rotated by an angle theta (try it out). So this matrix maps any vector in a space to the corresponding vector in the original space rotated by the angle theta.
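A minimal sketch of that rotation, written out in plain Python (the helper name `rotate` is my own, not from any library):

```python
import math

def rotate(v, theta):
    """Rotate a 2-D vector (x, y) by angle theta (radians) using the
    rotation matrix [[cos t, -sin t], [sin t, cos t]]."""
    x, y = v
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

# Rotating the x-axis vector (1, 0) by 90 degrees should land it on the y-axis:
print(rotate((1, 0), math.pi / 2))  # approximately (0.0, 1.0)
```

Multiplying by the matrix is exactly the two dot products computed in the return statement, one per row.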

**Vector**

You probably studied vectors in your high school maths course, and you might wonder why I am writing about a concept that seemed so abstract back then. For your information, this is the most common term you’ll hear in a Machine Learning course, or in many CS courses.

So a vector is nothing but an ordered collection of numbers. If you remember arrays from your data structures course, an array is how a vector is represented in a computer. A vector has components; in an array, the i-th component is the i-th element.
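To make the array analogy concrete, here is a tiny sketch (the values are arbitrary, just for illustration):

```python
# A vector is an ordered collection of numbers; in code, a plain array/list.
v = [2.0, -1.0, 3.5]  # a vector in 3-D space

# The i-th component of the vector is the i-th element of the array:
print(v[0])    # first component: 2.0
print(len(v))  # dimensionality of the vector: 3
```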

**Basis vectors**

This is another very common term you’ll see in the machine learning literature. Basis vectors are nothing but a set of vectors that correspond to the axes of the input space you are talking about. That brings up another term: space. A space is simply the set of all possible combinations of numbers. For example, a 2-dimensional input space is simply all combinations (x, y) where x and y can be any numbers from their domains.

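A quick sketch of the idea, assuming the standard basis of 2-D space (the helper `from_basis` is hypothetical, just to show the decomposition):

```python
# The standard basis of 2-D space: one unit vector per axis.
e1 = (1, 0)  # x-axis
e2 = (0, 1)  # y-axis

def from_basis(x, y):
    """Any point (x, y) in the space is the combination x*e1 + y*e2."""
    return (x * e1[0] + y * e2[0], x * e1[1] + y * e2[1])

print(from_basis(3, 4))  # (3, 4): 3 steps along e1 plus 4 steps along e2
```

Every vector in the space is reachable as some weighted mix of the basis vectors; that is exactly what makes them a basis.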

**Linear independence**

A set of vectors is called linearly independent when no vector in the set can be written as a linear combination of the others. For example, (1, 0) and (0, 1) are independent, while (1, 2) and (2, 4) are dependent, since the second is just twice the first. That sums up the linear independence and dependence concept.
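For two vectors in 2-D there is a simple check, sketched below: they are independent exactly when the determinant of the matrix formed from them is non-zero (the function name `independent_2d` is my own):

```python
def independent_2d(u, v, eps=1e-12):
    """Two 2-D vectors are linearly independent iff neither is a scalar
    multiple of the other, i.e. the determinant |u v| is non-zero."""
    det = u[0] * v[1] - u[1] * v[0]
    return abs(det) > eps

print(independent_2d((1, 0), (0, 1)))  # True  (the standard basis)
print(independent_2d((1, 2), (2, 4)))  # False (second is 2x the first)
```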

**Norm of a vector**

This is just a fancy word for the length of a vector. One way to measure length is the sum of the absolute values of the individual components; this is called the L1 norm. If we instead use the Euclidean distance, i.e. the square root of the sum of squared components, it is called the L2 norm.
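Both norms are one-liners in plain Python; a small sketch (the helper names are my own):

```python
import math

def l1_norm(v):
    """L1 norm: the sum of the absolute values of the components."""
    return sum(abs(x) for x in v)

def l2_norm(v):
    """L2 (Euclidean) norm: the square root of the sum of squares."""
    return math.sqrt(sum(x * x for x in v))

v = [3, -4]
print(l1_norm(v))  # 7
print(l2_norm(v))  # 5.0
```

The classic 3-4-5 right triangle makes the difference easy to see: walking along the axes costs 7, the straight-line distance is 5.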

**Eigenvector**

This one is a bit difficult to explain. If you google it, you’ll mostly find methods for how to calculate eigenvectors, but not what they actually are or why they are calculated. I’ll talk about them in detail when I discuss dimensionality reduction techniques.
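For now, the one fact worth holding onto is the defining property: v is an eigenvector of A with eigenvalue lambda when A*v = lambda*v, i.e. the matrix only stretches v, never rotates it. A minimal sketch with a hand-picked 2x2 example (the matrix and helper are mine, chosen so the result is easy to verify by eye):

```python
def matvec(A, v):
    """Multiply a 2x2 matrix A by a 2-D vector v."""
    return (A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1])

# For the diagonal matrix A = [[2, 0], [0, 3]], the axis vectors are eigenvectors:
A = [[2, 0], [0, 3]]
v = (1, 0)   # candidate eigenvector along the x-axis
lam = 2      # its eigenvalue

print(matvec(A, v))              # (2, 0)
print((lam * v[0], lam * v[1]))  # (2, 0) -- A only scales v, it never rotates it
```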

These are the most basic terms used in ML. I’ve kept this post short; I’ll give more details in the next one.

If you want to learn about these topics in detail, I recommend these:

https://www.khanacademy.org/math/linear-algebra

http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/

Till next time 🙂

Cheers !