Machine learning is the use of algorithms that let computers learn a solution to a problem from data rather than being explicitly programmed. These algorithms allow computers to learn from data and to predict trends such as the variation of stock markets or the prices of goods. Machine learning is implemented using different techniques, which can be classified according to the amount and type of supervision they receive during training. Following this classification, there are four major categories: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.
The majority of machine learning systems use supervised learning. Supervised learning is comparable to a math teacher showing students how to solve addition problems: the teacher gives the students enough information that they are able to solve similar problems by themselves in the future.
Supervised learning consists of feeding the machine learning algorithm labelled data and letting the machine learn from it. An example of labelled data is a tagged picture, in which the elements of the picture are clearly indicated and labelled.
After learning from this data, the algorithm should be able to identify similar values in a new set of data. Imagine you want to train a program to recognize pigs in pictures. You feed the program labelled pictures of pigs and labelled pictures of other animals (so that the program learns the differences between pigs and those animals). If you feed your algorithm enough data, it will be able to recognize pigs relatively well in new pictures.
Supervised learning problems can be further divided into two categories:
- Classification problems: In these problems, the output variable is a category. We are asking the software to identify an animal, a colour, an emotion, etc. in the data.
- Regression problems: The output of these problems is a numerical value. For example, we can train an algorithm to predict the price of a car given attributes (called predictors) such as the age of the car, the mileage, etc.
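The regression idea above can be sketched in a few lines. This is a minimal illustration, not a production approach: it fits a straight line by ordinary least squares to made-up car prices using a single predictor (age), whereas a real model would use many predictors and a library such as scikit-learn.

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical training data: age of the car (years) -> price (dollars).
ages = [1, 2, 3, 5, 8]
prices = [20000, 18000, 16000, 12000, 6000]

slope, intercept = fit_line(ages, prices)
predicted = slope * 4 + intercept  # predict the price of a 4-year-old car
print(predicted)  # → 14000.0 (the toy data happens to be perfectly linear)
```

Here the "training" step is just computing the line's coefficients; prediction is plugging a new age into the fitted line.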
Spam filters are common examples of supervised learning tasks. The software is trained with lots of emails labelled spam or not spam. The software learns from these emails so that it is able to identify spam emails in the future.
Some of the most important supervised learning algorithms are k-Nearest Neighbors, Linear Regression, Logistic Regression, Support Vector Machines (SVMs), Decision Trees and Random Forests, and Neural networks.
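Of the algorithms listed, k-Nearest Neighbors is the simplest to sketch. The snippet below classifies a new point by majority vote among its closest labelled neighbours; the 2-D points and the "pig" / "not pig" labels are invented to echo the earlier example.

```python
import math
from collections import Counter

def knn_predict(train, labels, point, k=3):
    """Label a new point by majority vote among its k closest training points."""
    dists = sorted((math.dist(p, point), lab) for p, lab in zip(train, labels))
    nearest = [lab for _, lab in dists[:k]]
    return Counter(nearest).most_common(1)[0][0]

# Toy 2-D data: two well-separated clusters of labelled examples.
train = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
labels = ["pig", "pig", "pig", "not pig", "not pig", "not pig"]

print(knn_predict(train, labels, (1.5, 1.5)))  # → pig
```

Note that k-NN does no training at all: it simply stores the labelled data and compares new points against it at prediction time.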
Supervised learning has drawbacks. Labelling the data requires people, which can be expensive and time-consuming. To circumvent this necessity, we have unsupervised learning.
Unsupervised learning is like a baby learning to recognize dogs. The baby may not know the name of the animal but will spot its characteristics: two eyes, two ears, four legs, etc. Once the baby knows these characteristics, it can recognize pretty much every dog in its environment.
In unsupervised learning, for example, the software might be given unlabelled pictures of dogs. The machine will identify the characteristics of dogs and will be able to identify dogs in the future.
Because unsupervised learning gives the software the freedom to identify any characteristic in the data, it allows us to discover unsuspected patterns in the data.
Unsupervised learning problems can be divided into:
- Clustering problems: These problems aim to uncover natural groups in the data. For example, the algorithm can group customers according to their purchasing behaviour.
- Association problems: These problems can help uncover relationships between information in the data. For example, the algorithm can discover that people who purchase barbecue sauce and potato chips also tend to buy steak.
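The customer-grouping example can be sketched with k-means, one of the simplest clustering algorithms. The monthly spend figures below are invented, and real customer segmentation would use many behavioural features, but the mechanism is the same: assign each point to the nearest centroid, then move each centroid to the mean of its cluster, and repeat.

```python
def kmeans(points, centroids, iters=10):
    """Cluster 1-D points: assign each to the nearest centroid, then move
    each centroid to the mean of its cluster; repeat for a few iterations."""
    for _ in range(iters):
        clusters = {c: [] for c in range(len(centroids))}
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda c: abs(p - centroids[c]))
            clusters[nearest].append(p)
        centroids = [sum(v) / len(v) if v else centroids[c]
                     for c, v in clusters.items()]
    return centroids, clusters

# Monthly spend of eight customers: two natural groups (low vs high spenders).
spend = [10, 12, 11, 13, 95, 100, 98, 102]
centroids, clusters = kmeans(spend, centroids=[0.0, 50.0])
print(centroids)  # → [11.5, 98.75]
```

Nobody told the algorithm what "low spender" or "high spender" means; the two groups emerge from the data alone, which is exactly the point of unsupervised learning.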
Unsupervised learning is sometimes used in medical settings to analyze genetic sequences. The system can group seemingly random nucleotide combinations (based on their similarity) and identify new patterns that can advance research on gene therapies or medicines.
Unsupervised learning can also be used for anomaly detection. It can spot unusual credit card transactions, catch manufacturing defects, etc.
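A minimal sketch of the anomaly-detection idea is a z-score test: flag any value that lies far from the mean of the data. The transaction amounts below are invented, and real fraud detection uses far richer features and models, but the principle of "unusual means far from the bulk of the data" carries over.

```python
import statistics

def anomalies(values, threshold=2.5):
    """Return the values more than `threshold` population standard
    deviations away from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:  # all values identical: nothing can be anomalous
        return []
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Seven ordinary card payments and one unusually large one.
transactions = [20, 25, 22, 19, 24, 21, 23, 500]
print(anomalies(transactions))  # → [500]
```

As with clustering, no labels are involved: the method only needs the shape of the data itself to decide what counts as unusual.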
Some of the most important unsupervised learning algorithms are:
- Clustering: k-Means, Hierarchical Cluster Analysis (HCA), Expectation-Maximization
- Visualization and dimensionality reduction: Principal Component Analysis (PCA), Kernel PCA, Locally-Linear Embedding (LLE), t-distributed Stochastic Neighbor Embedding (t-SNE)
- Association rule learning: Apriori, Eclat
III - Semi-supervised learning
Semi-supervised learning consists in using both labelled and unlabelled data. The algorithm will learn what the unlabelled data represents with a higher accuracy than in unsupervised learning.
Semi-supervised learning does not have the time constraints and expenses associated with supervised learning, as only part of the data is labelled.
Google Photos is a common example of a semi-supervised algorithm. In Google Photos, pictures are grouped depending on the people in them. This is the unsupervised part of the algorithm (the pictures are grouped based on similarities in facial traits). When you assign a name to a person in a picture, the software is then able to explicitly name this person in other pictures. This is the supervised part of the algorithm.
Semi-supervised learning can also be found in fields such as speech analysis and protein classification. The algorithm classifies similar audio clips or DNA strands and is able to name those groups thanks to the labelled data.
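One simple way to combine the two kinds of data is to propagate labels: group the unlabelled points around the few labelled ones. The sketch below, with invented 1-D "monthly spend" values and hypothetical category names, gives each unlabelled point the label of its nearest labelled example.

```python
def propagate_labels(labelled, unlabelled):
    """Give each unlabelled point the label of its closest labelled point."""
    result = dict(labelled)
    for p in unlabelled:
        nearest = min(labelled, key=lambda q: abs(p - q))
        result[p] = labelled[nearest]
    return result

# Only two points carry labels; the rest are raw, unlabelled measurements.
labelled = {1.0: "low spender", 100.0: "high spender"}
unlabelled = [2.0, 3.0, 97.0, 99.0]

print(propagate_labels(labelled, unlabelled))
```

Two hand-labelled examples were enough to name every point, which is the economic argument for semi-supervised learning: a little labelling effort goes a long way.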
Reinforcement learning aims to make an algorithm solve a task using a reward/penalty system. The system is rewarded when it takes steps that bring it closer to a goal and penalized when it takes steps that do not. The algorithm then chooses the best path to the goal, based on how much reward it received from different paths. No data is given to the model; it learns from experience.
Imagine a robot whose goal is to reach a diamond while avoiding fire. The robot will learn by trying all possible paths and will then choose the path that leads it to the diamond with the fewest hurdles.
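The robot-and-diamond setup can be sketched with value iteration, a basic algorithm from the reinforcement learning family. Everything here is invented for illustration: a 2x3 grid where the diamond is worth +10, the fire costs -10, and every step costs 1, so the best policy steps around the fire.

```python
# Grid layout (for reference): S . F
#                              . . D
ROWS, COLS = 2, 3
TERMINAL = {(0, 2): -10, (1, 2): +10}  # fire and diamond end the episode
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def value_iteration(sweeps=50, gamma=0.9):
    """Repeatedly back up each cell's value from its best neighbouring move:
    a step costs 1, and future value is discounted by gamma."""
    V = {(r, c): 0.0 for r in range(ROWS) for c in range(COLS)}
    for _ in range(sweeps):
        for s in V:
            if s in TERMINAL:
                V[s] = TERMINAL[s]
                continue
            best = float("-inf")
            for dr, dc in MOVES.values():
                r, c = s[0] + dr, s[1] + dc
                if 0 <= r < ROWS and 0 <= c < COLS:
                    best = max(best, -1 + gamma * V[(r, c)])
            V[s] = best
    return V

def best_move(s, V):
    """Greedy policy: step to the reachable neighbour with the highest value."""
    options = []
    for name, (dr, dc) in MOVES.items():
        r, c = s[0] + dr, s[1] + dc
        if 0 <= r < ROWS and 0 <= c < COLS:
            options.append((V[(r, c)], name))
    return max(options)[1]

V = value_iteration()
print(best_move((0, 1), V))  # → down: the robot steps away from the fire
```

After the values converge, the cell next to the fire prefers moving down toward the diamond rather than right into the fire, because the penalty has lowered the fire cell's value: the reward/penalty signal alone shapes the path, with no labelled examples anywhere.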
Reinforcement learning is generally used in robotics and autonomous driving, among other fields.