Machine Learning – A Primer

What is Machine Learning?
The common belief is that we use computer programs to perform routine, repetitive tasks. We often perceive such tasks as “boring”. These tasks might involve heavy numerical calculations that must be performed in a short span of time, well beyond the capacity of an ordinary human being. Such a belief assumes that computer programs cannot “think” and adapt to cases which are not already built into the code. Machine learning changes this paradigm. Arthur Samuel, an employee of IBM, coined the term “machine learning” in 1959. While Samuel used the phrase extensively in his work, he did not provide a formal definition. Tom Mitchell, a professor at Carnegie Mellon University, defines machine learning as building computer programs that automatically improve through experience. More formally, he defines machine learning as the study of algorithms that improve their performance (P) at some task (T) with experience (E). The key words here are “improve performance” and “experience”.

Some Examples
Machine learning (henceforth abbreviated as ML) applications have become ubiquitous. Our e-mail app automatically classifies mail as spam or not-spam. The task here is to classify an e-mail, the performance is measured by how accurate the classification is, and the experience comes from studying which specific words do or do not feature in each category of mail. Computer programs can also recognize handwritten text. The task is to recognize text, and the performance is measured by how accurately the text has been recognized. Here, the program is “taught” by labelling the text and allowing the program to recognize patterns that correspond to a particular letter or numeral. X-ray scans classified by expert radiologists are processed by ML programs to identify patterns. Based on these patterns, the program analyzes hitherto unclassified X-rays and automatically assigns a classification, as would be done by a human radiologist. Programming by example is another way to think about ML.
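To make the spam example concrete, here is a minimal sketch using the scikit-learn library. The tiny in-line dataset and the choice of a naive Bayes classifier over word counts are illustrative assumptions, not something the example above prescribes.

```python
# A minimal sketch of the spam/not-spam example using scikit-learn.
# The tiny in-line dataset and the model choice are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Experience (E): e-mails already labelled by humans (1 = spam, 0 = not spam)
emails = [
    "win a free prize now",
    "meeting agenda for monday",
    "claim your free lottery winnings",
    "project status report attached",
]
labels = [1, 0, 1, 0]

# Turn each e-mail into counts of the words it contains
vectorizer = CountVectorizer()
X_counts = vectorizer.fit_transform(emails)

# Task (T): classify e-mails; the model "learns" which words mark each class
classifier = MultinomialNB()
classifier.fit(X_counts, labels)

# Performance (P): how accurately a new, unseen e-mail is classified
new_email = vectorizer.transform(["free prize waiting for you"])
print(classifier.predict(new_email))  # expected: [1], i.e. spam
```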

How does it work?
The reader will note that ML is about making predictions. As a quip often attributed to the American writer Mark Twain goes, “making predictions is difficult, especially about the future”. The good news is that predictions made using machine learning techniques are getting better with time. The following images will help illustrate the concept.

We feed these images of cats and dogs to an ML program. The images are converted to matrices of pixel values, and we assign the label “0” for a cat and “1” for a dog, as appropriate for each image. The ML program looks for features common to the images labelled “0”. Similarly, it tries to identify features common to the images labelled “1”.
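Here is one way this conversion and labelling step might look in Python. The file names, the 64x64 greyscale size, and the use of the Pillow and NumPy libraries are assumptions made purely for illustration.

```python
# One way the pixel-matrix conversion and 0/1 labelling might look.
# The file names and the 64x64 greyscale size are hypothetical.
import numpy as np
from PIL import Image

def image_to_pixels(path, size=(64, 64)):
    """Load an image, convert it to greyscale, resize it, and flatten it
    into a single row of pixel values scaled to the range 0..1."""
    img = Image.open(path).convert("L").resize(size)
    return np.asarray(img, dtype=np.float32).flatten() / 255.0

# Label 0 for cats and 1 for dogs, as described above (hypothetical files)
cat_files = ["cat1.jpg", "cat2.jpg"]
dog_files = ["dog1.jpg", "dog2.jpg"]

X = np.stack([image_to_pixels(f) for f in cat_files + dog_files])
y = np.array([0] * len(cat_files) + [1] * len(dog_files))

print(X.shape)  # (4, 4096): one row of pixel values per image
print(y)        # [0 0 1 1]
```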
 
Thus, the ML program “learns” the features that are unique to cats. When fed a new, unclassified image, the ML program compares the features of the new image with those it has learned earlier and assigns the new image a classification based on its past “training”.
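Continuing the sketch above, a classifier can be “trained” on the labelled pixel matrices and then asked to classify a new, unseen image. Logistic regression is just one possible choice of model here; the post does not prescribe a particular algorithm, and the file name is hypothetical.

```python
# Continuing the sketch above: "train" a classifier on the labelled pixel
# matrices X and y, then classify a new, unseen image.
from sklearn.linear_model import LogisticRegression

model = LogisticRegression(max_iter=1000)
model.fit(X, y)  # learn the features that separate label 0 (cat) from 1 (dog)

# A new, unclassified image (hypothetical file name)
new_image = image_to_pixels("mystery_pet.jpg").reshape(1, -1)
print(model.predict(new_image))  # [0] means cat, [1] means dog
```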

Such learning was hitherto considered an exclusive attribute of animals in general and humans in particular. Cognition means using our brain to make sense of the environment, and “machines” were not expected to display cognitive behavior. Since ML is in the business of predictions, there is an element of uncertainty: no ML algorithm can claim to have perfected the art of prediction. Prediction and uncertainty are studied using probability, which is closely tied to the broader subject of statistics. Thus, when analyzing a new image, the ML program compares it with the images it has “seen” earlier and assigns the “most likely” label to the new image. ML practitioners must therefore be well versed in computer programming as well as statistics.
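Continuing the same sketch, the “most likely” label comes from the probabilities that most classifiers can report for each class; the numbers printed are purely illustrative.

```python
# The "most likely" label is simply the class with the highest probability.
probs = model.predict_proba(new_image)[0]
print(f"P(cat) = {probs[0]:.2f}, P(dog) = {probs[1]:.2f}")
```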

Getting the predictions right
We need to feed a lot of data to the ML program so that it can make its predictions accurately. This is called “training the model”. We feed lots of data already classified by humans and let the ML program “learn” the features common to each type of classification. Once we have “trained” the model, we test it with data that the model has not seen so far and check whether it gets its predictions right. We might need to “tune” the model to improve its performance. Once we have tested the model and are satisfied with its accuracy, we are good to go.
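A minimal sketch of this train/test cycle, reusing the labelled data X and y from the earlier sketch (a real dataset would be far larger):

```python
# Split the labelled data, train on one part, and test on the held-back part.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hold back a portion of the labelled data that the model never sees
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)          # "training the model"

predictions = model.predict(X_test)  # predictions on unseen data
print("accuracy:", accuracy_score(y_test, predictions))

# If the accuracy is unsatisfactory, "tune" the model (try different settings
# or a different algorithm) and test again before putting it to use.
```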

Can I learn ML?
Is ML a subject suited only to people who are experts in computer programming and statistics? The answer is yes and no. One must have knowledge of both subjects to understand and apply ML techniques, but those who are novices in one or both need not feel dismayed, as there are plenty of opportunities available for beginners. The biggest hurdle is within us: over the years, as we grow within the organization, we become uncomfortable declaring ourselves novices. There are plenty of resources available for picking up ML skills, some free and others paid. I’ve listed a couple of links at the end of this post that point to some of these resources. The bottom line is: if machines can learn, then why can’t we?

https://medium.freecodecamp.org/every-single-machine-learning-course-on-the-internet-ranked-by-your-reviews-3c4a7b8026c0
https://analyticsindiamag.com/udacity-or-coursera-which-mooc-is-the-best-for-artificial-intelligence-and-machine-learning-upskilling/

 
