These days, terms like big data, machine learning, predictive analytics, data mining and pattern recognition pop up frequently. That's a lot of terms, and there is a lot of confusion about what they mean and how they relate to each other. I thought for the first post I'd explain the meaning of all these terms and the relations between these fields. I intend to write this post for a wide range of readers, who may or may not have a background in computer science, so I'll explain these terms in a way most people can understand.
The internet is a wealth of data and information. With the advent of social media and the cloud, there is an overflow of data everywhere. It is often claimed that every two days we generate as much data as was generated from the beginning of time (of computers, that is) up to 2003, and that over 90% of the world's data was generated in the last two years. Facebook alone generates petabytes of data every single day. Phew, that's a lot of data.
But why am I telling you these facts? You might have already seen these figures. The reason is that while data is abundant, only a fraction of it yields usable knowledge. To make use of this data we need computer algorithms that can mine this haystack for the needle, transforming raw data into actionable knowledge.
The term big data was initially coined to denote an amount of data that cannot be handled by a single computer. Let me explain. A single computer has a fixed storage size, a fixed memory size and fixed processing power. Assuming a standard 1 GHz processor with 1 GB of memory, it would take this computer about one second to perform 10^8 operations. OK, that's a lot of computer jargon. Imagine a computer sitting in a post office letter-sorting room. The computer's job is to sort the letters by address, so that letters with the same address are grouped together. In computer science this is the famous problem of sorting a bunch of numbers in ascending or descending order.
The best algorithms for sorting a list of numbers take time roughly proportional to the size of the list (for computer people, this means an O(N log N) algorithm; the log N factor is a small multiplier, around 30 for the sizes below). Assuming each operation takes equal, unit time, sorting 100 million (10^8) letters takes roughly 10^9 steps. We assumed the computer runs 10^8 operations per second, so this takes about 10 seconds. Which is OK, no big deal, right? Now one day the post office sees 100 billion letters to sort (they got lazy and stacked up letters for the past month, or it's the apocalypse (the end of the internet) and everyone is sending letters). Now 100 billion is 10^11, and with the log N factor that works out to a few times 10^12 steps, so on the order of 10^4 seconds: roughly half a day. That is quite a long time. You get it, right? As the data grows, the computer takes longer and longer to produce the output. And these are in fact small numbers; in machine learning, datasets can reach 10^15 or 10^20 items, so a single computer would take practically forever to produce results.
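You can see this growth for yourself. Here is a minimal sketch (list sizes chosen arbitrarily for illustration) that times Python's built-in sort, an O(N log N) algorithm, on lists of increasing size:

```python
import random
import time

# Time the built-in sort on lists of growing size and watch
# how the cost climbs with the amount of data.
for n in (10**4, 10**5, 10**6):
    data = [random.random() for _ in range(n)]
    start = time.perf_counter()
    data.sort()
    elapsed = time.perf_counter() - start
    print(f"{n:>9,} items sorted in {elapsed:.4f} s")
```

On typical hardware each tenfold jump in size costs a bit more than tenfold in time, exactly the N log N behaviour described above.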
So computer scientists gave the name big data to an amount of data that a single computer or algorithm cannot handle. For algorithms dealing with data such as videos, images and text from across the internet, one computer is not enough, so such data is big data.
So big data is not a technology or a mere buzzword. These days big data is often characterized by the 3 V's: Volume, Variety and Velocity. I'll leave the meaning of these words for you to figure out ;).
Ever wondered how Gmail automatically identifies whether a mail is spam or useful? How your digital camera recognizes faces in an image? How Facebook suggests tags for a person in a photo? Is it magic? What is the mystery? Yes, you are right, my friend: it's machine learning. Machine learning has a long history. It is tied to the invention of artificial intelligence, whose origins date back to the start of computer science. It is only now, with powerful enough computers, that it is being used so frequently. The question is: what is machine learning?
I think the term is self-explanatory: machines learning something from data is machine learning.
You might have seen this definition (or some variant of it) on the net. But one natural question is: how can a machine learn anything? It has no brain of its own. Then how the heck does a machine learn? Surprisingly, the answer is quite simple (at least at an abstract level): machines learn much the way humans learn. Many computer systems, like compilers, have been designed with some analogous human process in mind. Compilers work much the way humans interpret a language: parsing the grammar, understanding the context, then finally the meaning. Machine learning, too, works in some sense the way humans learn a new thing or skill.
For example, when a person learns to drive, he has no idea how much gas to give for a particular speed, when to press the brakes, or when to switch gears. He learns by example. A teacher tells him when to change gear, when and how hard to press the gas, when to brake. And of course some things he learns on his own, by executing these actions and observing the results.
Machines learn the same way. They are presented with examples: if you want to teach a machine to drive a car, you provide examples of combinations of gas, brake and gear, along with the result of applying each combination. The machine trains on these examples, and when training is done it is an expert in that subject: it can go out into the real world and drive a car on its own.
Machine learning is often divided into three main classes:
Supervised learning. In this class the machine is presented with data that has features and, alongside them, the result (like the driving example above).
It learns from this data which combinations of features produce which result, and applies that in the real world.
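To make this concrete, here is a toy supervised-learning sketch in the spirit of the driving example. The data, feature names and reward labels are all invented for illustration; the "learner" is a simple 1-nearest-neighbour rule, which memorises the labelled examples and answers with the label of the closest one:

```python
import math

# Each training example pairs features (speed, distance to the car
# ahead) with the action a teacher took ("gas" or "brake").
training_data = [
    ((20, 50), "gas"),    # slow, lots of room -> accelerate
    ((30, 40), "gas"),
    ((60, 5),  "brake"),  # fast, car right ahead -> brake
    ((55, 8),  "brake"),
]

def predict(features):
    # Answer with the label of the nearest training example.
    _, label = min(training_data,
                   key=lambda ex: math.dist(ex[0], features))
    return label

print(predict((25, 45)))  # near the "gas" examples -> "gas"
print(predict((58, 6)))   # near the "brake" examples -> "brake"
```

The machine was never told a rule; it inferred the behaviour from labelled examples, which is exactly what "supervised" means.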
Unsupervised learning. This is an interesting class of learning. In this setting the machine is presented only with the data; no label or result comes with it. In the driving example, this would mean the machine gets no previous training: it is handed a car directly and has to figure out how to drive it. This is a challenging class of machine learning problems, as it requires some cleverness.
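A classic unsupervised task is clustering: given raw numbers with no labels at all, group the similar ones together. Here is a minimal sketch of 1-D k-means with two clusters (the data and the centre initialisation are arbitrary toy choices):

```python
def two_means_1d(points, iters=10):
    # Start the two centres at the extremes (a toy initialisation).
    centres = [min(points), max(points)]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            # Assign each point to its nearest centre.
            i = 0 if abs(p - centres[0]) <= abs(p - centres[1]) else 1
            clusters[i].append(p)
        # Move each centre to the mean of its cluster.
        centres = [sum(c) / len(c) for c in clusters]
    return clusters

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
print(two_means_1d(points))  # two groups emerge: ~1s and ~9s
```

Nobody told the algorithm which point belongs where; the structure emerges from the data alone.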
Reinforcement learning. This class of learning is mostly used to build bots: bots that work in some environment, have some goal to fulfil, and learn from the environment the most effective way to reach that goal. In the driving setting, the machine is given the car and allowed to train itself. It tries out combinations of gas and brake, and through penalties (hitting something) and rewards (successfully crossing a street) it learns how to drive.
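The penalty/reward loop can be sketched in a few lines. This is not a real driving simulator; it is a tiny trial-and-error learner with made-up reward numbers, just to show the shape of reinforcement learning:

```python
import random

random.seed(0)
actions = ["gas", "brake"]
values = {a: 0.0 for a in actions}   # learned value of each action
counts = {a: 0 for a in actions}

def reward(action):
    # Hypothetical environment: in this situation braking avoids a
    # crash (+1), flooring the gas causes one (-1).
    return 1.0 if action == "brake" else -1.0

for step in range(200):
    # Explore 10% of the time, otherwise exploit the best known action.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=values.get)
    counts[a] += 1
    # Incrementally average the observed rewards into the value.
    values[a] += (reward(a) - values[a]) / counts[a]

print(max(actions, key=values.get))  # prints "brake"
```

No example was ever labelled; the agent discovered the good action purely from the rewards and penalties it experienced.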
Obviously this is a simplified explanation of machine learning, and the details behind it are quite scary (don't worry, I'll explain the details starting from the next post).
What picture could better represent the use of machine learning than this: Page and Brin, two guys who literally started the revolution of machine learning in real life, and Google's self-driving car, a pinnacle of machine learning itself.
If you search Google for "data mining vs machine learning" you'll find a different answer at each link. I'll keep the technical detail light here (in coming posts I'll discuss this debate in depth). Data mining is mining data to find useful insights in it. It uses techniques from machine learning and statistics to do so.
Some people argue that data mining is simply an application of unsupervised learning. Others say it is machine learning with some statistics applied. I believe it is mostly an application of unsupervised learning.
Think of pattern recognition in a large dataset, for instance mining web data. A good example is ranking web pages by relevance: that is mining a huge dataset (the entire web), and this is data mining. It can be seen as the practical application of machine learning to large real-world datasets.
The definition of data mining is subjective. As you learn more about machine learning and data mining, you'll form your own definition.
Put in simple words, data science is the intersection of machine learning, data mining, statistics and computer science.
This is the famous Venn diagram that explains it. A data scientist is someone who is an expert in machine learning and computer science and has domain knowledge of the field where he is applying data science. He is also a hacker. Hacking here means problem-solving skill; don't confuse it with the hackers shown in fancy sci-fi movies.
Simply put, a data scientist is someone who is good at programming, has a solid grasp of both descriptive and inferential statistics, knows machine learning algorithms well, and has some knowledge of parallel and distributed computing (phew, I know, it takes a lot to be a data scientist!).
That's a lot of introduction 😉. Thanks for reading. Please comment if you find any discrepancy in the writing or in the facts.
From the next post on, every week or so, I'll write about one algorithm related to machine learning or big data. You might be wondering why you should read my blog when there are hundreds of blogs about data science. But there is a lot of information out there, and as a beginner it is difficult to find blogs that are comprehensive and describe data science in layman's terms. I'll try to write posts that explain all the algorithms and tricks you need to be a data scientist, and I'll also include code snippets so that you can see things in action.
Till next time 🙂