September 1, 2022
An AI model can be trained in several ways. In this article, we explore unsupervised learning for image classification. Read on to learn everything you need to get started.
Machine learning and deep learning are frequently employed for classification and regression problems in practice. Both techniques use different algorithms to train models for prediction tasks in businesses and organizations. Image classification is a very common task in healthcare computer vision, fraud detection in financial institutions, customer behavior analysis, and many other areas. The two well-known approaches to machine learning are supervised and unsupervised learning, and many different algorithms are employed depending on the data and the nature of the problem. Supervised learning is relatively simple to apply, while training a model with unsupervised learning is more complex and generally less accurate, though still reliable. Unsupervised learning takes an entirely different approach because labels are not available in the data, yet it has its own advantages in image classification.
In this article, we compare the two learning approaches as applied to image classification, focusing mainly on the latter and covering its different algorithms briefly. We will also cover in detail the concept and process of one algorithm, namely clustering, which is popular for unsupervised image classification.
Supervised learning is similar to a classroom where students learn with the guidance of a teacher. The model learns with the help of output labels made available to it and, using the observed features, can readily predict the output for similar objects it has not seen before. It is therefore not complex, and the more detail provided, the more easily and quickly it can be trained. In unsupervised learning, the scenario is different: the students must learn on their own, without a teacher's guidance. Likewise, the model learns by itself, grouping objects with similar patterns into one class and different ones into other classes. Naturally, this process is somewhat more complex and relatively less accurate.
Supervised and Unsupervised Machine Learning Approaches
The following table shows a comparison of the two techniques.
Although unsupervised learning generates moderately accurate results, there are many reasons why it could be the better choice -
A few advantages offered by unsupervised learning are -
Now, let us understand some challenges in unsupervised learning -
The algorithms in unsupervised learning fall into two categories, namely clustering and association. We will show them in schematic form below and then cover each briefly, except clustering, which will be covered in detail with an example.
Clustering is employed when we want to discover the groups inherently present in the data. For example, we can form customer groups based on purchasing behavior, such as preference for particular products or brands. An association algorithm applies when we can establish a relationship rule, such as a customer who buys product x also being likely to buy product y. For example, a customer who buys milk is likely to buy bread or toast with it. Different algorithms have been developed for both options, a few of which are shown below.
The process of grouping similar entities together is known as 'clustering' in unsupervised machine learning. This technique aims to identify patterns in the input data and then group similar data points together. Grouping similar entities together helps to profile the attributes of different groups. Using this technique, we can gain insights into underlying patterns of different groups in the input dataset. There are many algorithms developed to implement this technique, but for this article, we will discuss 'K-Means clustering,' one popular and frequently used clustering algorithm in unsupervised machine learning.
This unsupervised learning algorithm groups unlabelled data into a chosen number of logical groups called clusters, denoted 'k.' The value of k is fixed before the actual clusters are formed: simply put, if k = 3 or 5, the number of clusters will be 3 or 5, respectively. Data points with similar properties are placed in one group, while those with different properties are assigned to other groups according to their own shared properties. Say we have data on different species; they might be grouped into clusters of birds, fish, and dogs, with k equal to 3.
This method thus divides the data into different groups of images even though it was never labeled. It achieves this on the basis of shared characteristics of the different species, such as size, shape, and other features.
The sequence is as follows -
We will understand this clustering process of unsupervised learning in detail with an example in the later part of this article.
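Before the full image example, the K-Means idea can be sketched on synthetic 2-D data. This is a minimal illustration only; the data, variable names, and scikit-learn usage here are assumptions for demonstration, not part of the article's flower example:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two synthetic "species" groups in a 2-D feature space
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=0.5, size=(20, 2))
group_b = rng.normal(loc=5.0, scale=0.5, size=(20, 2))
X = np.vstack([group_a, group_b])

# k is chosen up front; here k = 2 because we built two groups
kmeans = KMeans(n_clusters=2, init='k-means++', n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

print(labels[:20])  # labels assigned to the first group
print(labels[20:])  # labels assigned to the second group
```

Because the two groups are well separated, all points from one group end up sharing a cluster label, which is exactly the grouping behavior described above.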
There are also two types of Hierarchical clustering - Agglomerative and Divisive.
i) Agglomerative: This follows a bottom-up approach. To begin with, every data point is a cluster by itself, so many clusters exist; these are then progressively merged, reducing the number of clusters. Merging continues until a single cluster remains.
ii) Divisive: This is the opposite of the agglomerative approach, following a top-down pattern. The complete dataset starts as a single cluster and is gradually broken down into more clusters until no further division is possible.
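The agglomerative variant can be sketched with scikit-learn, which stops the bottom-up merging once the requested number of clusters remains. The synthetic data and parameter choices below are assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(1)
# Two well-separated blobs of points
X = np.vstack([
    rng.normal(0.0, 0.3, size=(15, 2)),
    rng.normal(4.0, 0.3, size=(15, 2)),
])

# Agglomerative (bottom-up) clustering: start from 30 singleton
# clusters and merge until only n_clusters remain
agg = AgglomerativeClustering(n_clusters=2, linkage='ward')
labels = agg.fit_predict(X)
print(labels)
```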
This is a technique used in unsupervised learning when large datasets with many features must be analyzed. Because a large number of features demands more computational power and significant time, the idea is to reduce the number of features, hence 'dimensionality reduction.' The dimensions are reduced to a smaller number while preserving the structure of the data and preventing loss of information. It is used in movie recommendations, stock purchasing, and similar applications.
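One common dimensionality-reduction algorithm is principal component analysis (PCA); the article does not name a specific method, so the sketch below uses PCA as one possible choice, with invented synthetic data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# 100 samples with 50 features, but almost all the variance
# lives in just 3 underlying directions
base = rng.normal(size=(100, 3))
X = base @ rng.normal(size=(3, 50)) + 0.01 * rng.normal(size=(100, 50))

# Reduce 50 features down to 3 while keeping most of the variance
pca = PCA(n_components=3)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)                       # (100, 3)
print(pca.explained_variance_ratio_.sum())   # close to 1.0
```

The reduced data keeps nearly all of the original variance, which is the "smaller number of dimensions without loss of information" idea described above.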
This technique is used to find relations, associations, and dependencies between data items. Various rules are used to find useful relations between the datasets in a database. Association rule learning benefits product-based businesses because mapping relations between different products can improve their sales and profits. Web usage mining, market basket analysis, and continuous production analysis are other important applications of association rule mining.
This algorithm uses the principle of frequent itemset mining in transactions: it tries to find the groups of items that most frequently occur together. It does this in a two-stage process: first, items that occur together are grouped; then groups that do not occur together frequently are pruned. Repeating these steps eventually yields the groups whose support and confidence values meet the preset thresholds of the association rule, where 'support' is the frequency of occurrence of an itemset and 'confidence' is a conditional probability.
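The group-then-prune process can be shown with a short pure-Python sketch. The transactions and the minimum-support threshold are invented for illustration:

```python
from itertools import combinations

# Hypothetical shopping transactions
transactions = [
    {'milk', 'bread'},
    {'milk', 'bread', 'butter'},
    {'milk', 'toast'},
    {'bread', 'butter'},
    {'milk', 'bread', 'toast'},
]
min_support = 0.4  # itemset must appear in at least 40% of transactions

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Stage 1: keep frequent single items, prune the rest
items = {i for t in transactions for i in t}
frequent_1 = {frozenset([i]) for i in items if support(frozenset([i])) >= min_support}

# Stage 2: combine surviving items into pairs and prune again
candidates = {a | b for a, b in combinations(frequent_1, 2) if len(a | b) == 2}
frequent_2 = {c for c in candidates if support(c) >= min_support}

# Confidence of the rule milk -> bread is the conditional
# probability P(bread | milk) = support({milk, bread}) / support({milk})
conf = support(frozenset({'milk', 'bread'})) / support(frozenset({'milk'}))
print(sorted(tuple(sorted(s)) for s in frequent_2))
print(conf)  # 0.75
```

Here {milk, bread} survives pruning while rarely co-occurring pairs are dropped, mirroring the Apriori group-and-prune loop described above.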
FP stands for 'Frequent Pattern.' The algorithm is similar to Apriori, but it represents the data as a tree structure, hence the name frequent pattern tree. The algorithm finds the most frequent patterns in the given database. This approach is faster than Apriori because it reduces the time needed to search for frequent itemsets in the data.
Eclat stands for 'Equivalence Class Transformation.' It is an improved version of the Apriori algorithm and follows the association rule mining approach: a rule is found that predicts the occurrence of an item based on how often other items occur in the same transaction.
With this overview of unsupervised algorithms, let us now look at an example of unsupervised image classification using K-Means clustering in the next section.
The image classification task is usually performed with supervised learning, where the model gets trained on known labels. However, when an unsupervised learning approach is used in image classification, it requires the following steps -
Let us understand what feature extraction means and why it is necessary for image classification in unsupervised learning. For describing an image, we can use parameters like the presence of objects in the image, the color, the brightness or the sharpness, etc. But for a machine, the features for an image will be the presence of edges and ridges, any corners or regions of interest, color intensity, etc., as these can be used by the algorithm to segment the image and assess the similarity for grouping or classification.
Thus, feature extraction is a technique of converting the raw data into numerical features for applying the algorithm while maintaining a true representation of the input. A machine learning algorithm cannot work on an image directly, and the image needs to be converted into an array of numbers for further processing. A well-known technique in this regard is called 'transfer learning,' where we can re-use an already trained model for extracting features from unseen images. These pre-trained models for transfer learning are models that have been trained on millions of images for computer vision. Some popular high-performing transfer learning models for image classification are VGG, Inception, and ResNet. It is possible to access these pre-trained models using the Keras framework.
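Transfer-learning features come from a pre-trained network, but the core idea of feature extraction (turning raw pixels into a fixed-length numeric vector) can be sketched with something much simpler. The color-histogram extractor below is a deliberately simplified stand-in for illustration only, not what VGG, Inception, or ResNet compute:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Convert an H x W x 3 RGB array into a fixed-length feature vector
    by histogramming each color channel and concatenating the results."""
    features = []
    for channel in range(3):
        hist, _ = np.histogram(image[..., channel], bins=bins, range=(0, 256))
        features.append(hist / image[..., channel].size)  # normalize
    return np.concatenate(features)

# Two synthetic "images": one mostly red, one mostly blue
red_img = np.zeros((32, 32, 3)); red_img[..., 0] = 200
blue_img = np.zeros((32, 32, 3)); blue_img[..., 2] = 200

f_red = color_histogram(red_img)
f_blue = color_histogram(blue_img)
print(f_red.shape)  # (24,) -> vectors like these can be stacked for clustering
```

Stacking such vectors for all images yields the numeric feature matrix that a clustering algorithm can operate on; pre-trained CNN features play the same role but capture far richer structure.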
Here, we will use the flower image dataset containing 3,670 images in five categories - daisy, dandelion, rose, sunflower, and tulip. Since it is a large dataset, for demonstration we will randomly take only 15 images from two categories - daisy and dandelion - and add them to a single folder. These are some sample images from the dataset -
Using one of the pre-trained image classification models for transfer learning mentioned earlier, we can extract the features of these images. Once the features are extracted, we will apply the K-Means algorithm using the following code -
# Creating clusters
from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=2, init='k-means++', random_state=0)
# Predict a cluster label for each image's feature vector
Y = kmeans.fit_predict(img_features)
print(Y)
This is the output of the model which indicates the category of all the input images.
Since we have only two categories of flowers, we set the number of clusters to 2 (n_clusters=2). For larger datasets with unknown labels, however, it is essential to experiment with different values of k to obtain acceptable results.
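A common way to experiment with k is the "elbow" heuristic: fit K-Means for several values of k and watch how the inertia (within-cluster sum of squares) falls. The sketch below uses synthetic features standing in for the extracted image features; the data and names are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# Synthetic features with two natural groups, standing in for img_features
X = np.vstack([rng.normal(0, 0.4, (25, 5)), rng.normal(3, 0.4, (25, 5))])

inertias = []
for k in range(1, 6):
    km = KMeans(n_clusters=k, init='k-means++', n_init=10, random_state=0).fit(X)
    inertias.append(km.inertia_)

# Inertia always shrinks as k grows; the "elbow" (sharpest drop)
# here appears at k = 2, matching the two true groups
for k, inertia in zip(range(1, 6), inertias):
    print(k, round(inertia, 1))
```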
Here are a few images from each cluster -
#cluster 1
#cluster 2
It is clear that the model did a good job of grouping the images into two categories: all but one image were correctly clustered. This shows that we can perform image classification using an unsupervised learning approach combined with transfer learning.
In this article, we discussed unsupervised learning for image classification and different algorithms for the unsupervised learning approach. In summary-
About us: VisionERA is an Intelligent Document Processing (IDP) platform capable of handling various types of documents and images for classification. It can extract and validate data for bulk volumes with minimal intervention. Also, the platform can be molded as per requirements for any industry and use case because of its custom DIY workflow feature. It is a scalable and flexible platform providing end-to-end document automation for any organization.
Looking for a document processing solution that uses deep learning enhanced image classification capabilities? Set up a demo today by clicking the CTA below or simply send us a query through the contact us page!