How do you focus a CNN on an image pattern? - image-processing

I have an image of segmented black and white dots. (The white dots in this image form a certain pattern.) I want to feed this image to a CNN, but I want the CNN to focus on these white points. How can I do this?
Note: I'm building a CNN for classification.
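One possible approach (a sketch only, assuming an 8-bit grayscale input and a hypothetical threshold of 200 for "white") is to threshold the image into a binary mask of the white dots and feed the masked image to the CNN, so the background contributes nothing:

```python
import numpy as np

def white_dot_input(image, thresh=200):
    """Turn an 8-bit grayscale image into a CNN input keeping only the white dots."""
    # Binary mask: 1 where the pixel is bright enough to count as a white dot
    mask = (image >= thresh).astype(np.float32)
    # Zero out everything that is not a white dot, then add a channel axis
    focused = image.astype(np.float32) / 255.0 * mask
    return focused[..., None]
```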

Related

Artificial Neural Networks for Image Colorization

At the moment, I'm learning about artificial neural networks for colorizing black and white images.
After reading enough literature and considering many examples, I had one, maybe stupid, question: how can a convolutional neural network, such as a U-net, colorize images? From many articles, I understand that this architecture can extract features from an image by applying convolution operations and subsequently combining them. But where do the a and b color channels come from (we are talking about the Lab color space, of course)? The input is the L channel of the image, which is a black and white image whose pixels contain brightness (intensity) values, but how are the a and b channels obtained at the output? I would be grateful if someone could explain this to me mathematically and in simple terms.
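A minimal sketch of where the a and b channels come from, assuming a Keras-style model (a toy stand-in for a real U-net, not the architecture from any particular article): the final convolution simply has two output feature maps, and training them with a regression loss against the ground-truth a and b channels is what makes those maps mean "color".

```python
import tensorflow as tf
from tensorflow.keras import layers

# Toy colorization network: L channel in, a and b channels out.
inputs = tf.keras.Input(shape=(256, 256, 1))                  # L channel
x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
# Nothing is special about these two feature maps until the loss
# forces them to approximate the true a and b channels.
outputs = layers.Conv2D(2, 1, padding="same")(x)              # a and b channels
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")                   # regress against true a/b
```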

Can a Keras CNN model built with 2-channel 28x28 images predict real-world images (RGB)?

I'm building a CNN model with TensorFlow Keras, and the dataset available is in black and white.
I'm using ImageDataGenerator from the keras.preprocessing.image API to convert images to arrays. By default it converts every image to a 3-channel input. So will my model be able to predict real-world (colored) images if the training images are black and white rather than color?
Also, ImageDataGenerator has a parameter named "color_mode" that can take the value "grayscale" and gives us a 2D array to use in the model. If I go with this approach, do I need to convert real-world images into grayscale as well?
The color space of the images you train on should be the same as the color space of the images your application will see.
If luminance carries the important information, e.g. for OCR, then training on grayscale images should produce a more efficient model. But if you need to recognize things that can appear in different colors, it may be worth using a color input.
If color is not important and you train using 3-channel images, e.g. RGB, you will have to provide examples in enough colors to keep the model from overfitting to color. For example, if you want to distinguish a car from a tree, you may end up with a model that maps any green object to a tree and everything else to a car.
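A minimal sketch of keeping the two sides consistent, assuming Keras and a hypothetical data/train directory layout: if training uses color_mode="grayscale", real-world RGB images must be converted to grayscale the same way before prediction.

```python
import numpy as np
from PIL import Image
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Training: grayscale, 28x28, one channel (hypothetical directory layout).
train_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "data/train", target_size=(28, 28), color_mode="grayscale")

def prepare_real_world(path):
    # Inference: convert the RGB photo to the same grayscale 28x28 input
    # the model was trained on, with a batch axis and a channel axis.
    img = Image.open(path).convert("L").resize((28, 28))
    return np.asarray(img, dtype="float32")[None, ..., None] / 255.0
```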

Shape Detection using Machine Learning

I would like to detect shapes, namely circles, squares, rectangles, triangles, etc., using machine learning techniques.
The specifications for shape detection are as follows:
A convolutional neural network (CNN) is used.
For training, the dataset contains 1000 images in each category, for 10 shapes.
For testing, the dataset contains 100 images in each category, for 10 shapes.
All images are resized to 28x28 with one channel (grayscale).
All images in the dataset are edge-detected images.
Questions
Is it possible for a machine learning algorithm to differentiate between a square and a rectangle? Between a square and a rhombus?
How can I improve the dataset for shape detection?
Thanks in Advance...!!!
Yes, and it is not a very hard task for a CNN to do.
One way to improve the dataset is to use image augmentation. You can apply both horizontal and vertical flips, since all these figures remain the same kind of figure under those transformations; a sketch follows below. You can think of other transformations too, as long as they don't scale the axes independently, because stretching one axis turns a square into a rectangle, and vice versa.
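A minimal sketch of that augmentation, assuming Keras's ImageDataGenerator, with only class-preserving transformations enabled:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Flips never turn one shape class into another; rotation also preserves
# side lengths, so a rotated square is still a square. Anisotropic zooms
# and shears are deliberately left out, since stretching one axis would
# turn squares into rectangles.
augmenter = ImageDataGenerator(
    horizontal_flip=True,
    vertical_flip=True,
    rotation_range=90,
)
```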

Finding shapes using OpenCV Haar cascaded classifier

I am looking for parabolas in some radar data. I am using the OpenCV Haar cascaded classifier. My positive images are 20x20 PNGs where all of the pixels are black, except for those that trace a parabolic shape (one parabola per positive image).
My question is this: will these positives train a classifier to look for black boxes with parabolas in them, or will they train a classifier to look for parabolic shapes?
Should I add a layer of medium value noise to my positive images, or should they be unrealistically crisp and high contrast?
Here is an example of the original data.
Here is an example of my data after I have performed simple edge detection using GIMP. The parabolic shapes are highlighted in the white boxes.
Here is one of my positive images.
I figured out a way to detect parabolas using the MatchTemplate method from OpenCV. At first I was using the Python cv library, and later cv2, but I had to make sure that my input images were 8-bit unsigned integer arrays. I eventually obtained a similar effect with less fuss using scipy.signal.correlate2d(image, template, mode='same'). The mode='same' makes the output the same size as image. When I was done, I performed thresholding using the numpy.where() function, followed by opening and closing with the scipy.ndimage module to eliminate salt-and-pepper noise.
Here's the output, before thresholding.
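A sketch of that pipeline, assuming the radar image and the 20x20 parabola template are 2-D NumPy arrays and the threshold is chosen by hand:

```python
import numpy as np
from scipy import ndimage
from scipy.signal import correlate2d

def find_parabolas(image, template, threshold):
    # Cross-correlate the template with the image; mode='same' keeps the
    # response map the same size as the input image.
    response = correlate2d(image.astype(np.float64),
                           template.astype(np.float64), mode='same')
    # Threshold the response map, then clean up salt-and-pepper noise
    # with morphological opening and closing.
    mask = np.where(response > threshold, 1, 0)
    mask = ndimage.binary_opening(mask)
    mask = ndimage.binary_closing(mask)
    return mask
```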

How to size-normalize images in handwriting recognition?

Handwritten digit recognition problem: how can I normalize the handwritten digit images? Can someone help?
Check out how the MNIST dataset is curated here:
http://yann.lecun.com/exdb/mnist/index.html
To quote the relevant section:
The original black and white (bilevel) images from NIST were size normalized to fit in a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels as a result of the anti-aliasing technique used by the normalization algorithm. The images were centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field.
With some classification methods (particularly template-based methods, such as SVM and K-nearest neighbors), the error rate improves when the digits are centered by bounding box rather than center of mass. If you do this kind of pre-processing, you should report it in your publications.
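A sketch of that normalization, assuming the input is a 2-D uint8 array with ink as high values (the exact resampling filter used for MNIST is not stated, so LANCZOS here is an assumption):

```python
import numpy as np
from PIL import Image
from scipy import ndimage

def normalize_digit(img):
    # Fit the digit into a 20x20 box while preserving aspect ratio; the
    # anti-aliased resampling is what introduces the grey levels.
    h, w = img.shape
    scale = 20.0 / max(h, w)
    new_size = (max(1, int(round(w * scale))), max(1, int(round(h * scale))))
    small = np.asarray(Image.fromarray(img).resize(new_size, Image.LANCZOS))
    # Paste into a 28x28 field, then shift so the center of mass of the
    # pixels sits at the center of the field.
    canvas = np.zeros((28, 28), dtype=np.float64)
    top = (28 - small.shape[0]) // 2
    left = (28 - small.shape[1]) // 2
    canvas[top:top + small.shape[0], left:left + small.shape[1]] = small
    cy, cx = ndimage.center_of_mass(canvas)
    return ndimage.shift(canvas, (13.5 - cy, 13.5 - cx))
```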
