My algorithm now compares contours of the same size and returns 1 if they belong to the same class and -1 if they do not. To achieve that, I need to train the algorithm so it can build an SVM for the class.
The problem is that the contours need to be the same size, and I need to identify the center of the image in order to classify it as belonging to the same class.
The (not) solution
I will resize the image that represents the contour to the size of the template. Then I will compare them.
But this solution does not work as I would expect. Here’s why:
When resizing a contour, what resizing actually does does not match our intuition of what it should do FOR CONTOURS.
For example (in MATLAB):
Suppose you have the following matrix that represents the contour:
>> A = [1,0; 1,1]

A =

     1     0
     1     1
When using imresize, you get the following:
>> B = imresize(A, 2)

B =

    1.0753    0.7826    0.1471   -0.1456
    1.0560    0.8381    0.3650    0.1471
    1.0143    0.9587    0.8381    0.7826
    0.9951    1.0143    1.0560    1.0753
This is not the resized contour that we expected. First of all, the entries are no longer only 0s and 1s. That is because the default method for resizing the image is bicubic interpolation, in which each output pixel is a weighted average of the pixels in the nearest 4-by-4 neighborhood.
Therefore, even if the input image contains only 0s and 1s, the output image will not necessarily contain only 0s and 1s.
One can tweak this by using other interpolation methods, but none of them gives a satisfactory answer. For example, using nearest-neighbor interpolation gives the following:
>> B = imresize(A,2,'nearest')

B =

     1     1     0     0
     1     1     0     0
     1     1     1     1
     1     1     1     1
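The same effect can be reproduced outside MATLAB. Below is a small NumPy sketch (my own illustration, not the original code) using bilinear interpolation, a simpler relative of bicubic, next to nearest-neighbor: interpolation produces fractional values, while nearest-neighbor keeps the matrix strictly binary.

```python
import numpy as np

def resize_bilinear(img, new_h, new_w):
    """Resize a 2-D array with bilinear interpolation (align-corners style)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    out = np.empty((new_h, new_w))
    for i, y in enumerate(ys):
        y0 = int(np.floor(y)); y1 = min(y0 + 1, h - 1); fy = y - y0
        for j, x in enumerate(xs):
            x0 = int(np.floor(x)); x1 = min(x0 + 1, w - 1); fx = x - x0
            top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
            bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
            out[i, j] = top * (1 - fy) + bot * fy
    return out

def resize_nearest(img, new_h, new_w):
    """Resize a 2-D array by picking the nearest source pixel."""
    h, w = img.shape
    ys = np.round(np.linspace(0, h - 1, new_h)).astype(int)
    xs = np.round(np.linspace(0, w - 1, new_w)).astype(int)
    return img[np.ix_(ys, xs)]

A = np.array([[1, 0],
              [1, 1]], dtype=float)

B_lin = resize_bilinear(A, 4, 4)  # fractional values appear
B_nn  = resize_nearest(A, 4, 4)   # stays strictly 0/1
```

Nearest-neighbor preserves binary values, but as the MATLAB example above shows, it still does not preserve the shape of a one-pixel-wide contour.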
The REAL solution:
The real solution is to take the input (RGB) image, resize it to the size of the template model, and only then find the contour of the resized image and compare it to the template.
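As a sketch of that pipeline, here is a NumPy illustration of my own (the helper names are hypothetical, and a binary mask stands in for a real RGB image for brevity): resize the image to the template's size first, and only then extract and compare the contours.

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbor resize of a 2-D array (hypothetical helper)."""
    h, w = img.shape
    ys = np.round(np.linspace(0, h - 1, new_h)).astype(int)
    xs = np.round(np.linspace(0, w - 1, new_w)).astype(int)
    return img[np.ix_(ys, xs)]

def extract_contour(mask):
    """Foreground pixels that touch at least one background 4-neighbor."""
    b = mask.astype(bool)
    p = np.pad(b, 1, constant_values=False)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    return b & ~interior

def contour_similarity(c1, c2):
    """Intersection-over-union of two same-size binary contour masks."""
    union = np.logical_or(c1, c2).sum()
    return np.logical_and(c1, c2).sum() / union if union else 1.0

# Toy stand-ins for the input image and the template model.
image = np.zeros((8, 8), dtype=np.uint8)
image[2:6, 2:6] = 1                      # a 4x4 square in an 8x8 image
template = np.zeros((4, 4), dtype=np.uint8)
template[1:3, 1:3] = 1                   # a 2x2 square in a 4x4 template

# Resize the *image* to the template's size first...
resized = resize_nearest(image, *template.shape)
# ...then extract contours and compare them.
score = contour_similarity(extract_contour(resized),
                           extract_contour(template))
```

In practice these steps would be done with an image library, e.g. OpenCV's cv2.resize and cv2.findContours; the helpers above only illustrate the order of operations.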