Extracting Features from the image manually - machine-learning

I am working on an image classification problem. How do I manually find specific features in an image that would help build a DNN? Consider an image of a man talking on the phone while driving, to be classified as "distracted".

You don't do this. Having a good learned feature extractor is the reason we use DNNs in the first place.
Also: don't forget to look at https://www.kaggle.com/c/state-farm-distracted-driver-detection
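To make the point concrete, here is a minimal, hypothetical PyTorch sketch of the usual alternative: transfer learning, where a pretrained network's convolutional layers act as the feature extractor and only the final layer is replaced. The two-class setup ("distracted" vs. "attentive") is an assumption for illustration; the linked Kaggle competition frames this as a multi-class version of the same problem.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# A pretrained backbone already extracts generic visual features;
# we only swap the final layer for a hypothetical 2-class problem
# ("distracted" vs. "attentive").
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)

# The convolutional layers are the learned feature extractor --
# no manual feature engineering is needed; training just tunes them.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```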

Related

Tesseract for License Plate (especially Korean version)

I'm working on my project for License Plate Recognition using OpenCV & Tesseract.
I use OpenCV to convert the original image into a processed image so that Tesseract can read it well.
For example: [original image] → [processed image]
But the result shows "38다9502", and it recognized the 3 as a 5.
This happens frequently, especially when the digit is a 3 or a 5.
Is there any suggestion or solution for this?
You can try retraining Tesseract with some of your own data. It looks like a good candidate for simply fine-tuning the model. You may not even need much data; just give it several examples of the digits it is having trouble with.
Instructions for retraining are here: https://github.com/tesseract-ocr/tesseract/wiki/TrainingTesseract-4.00
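Once fine-tuned, using the new model is just a matter of pointing Tesseract at it. A minimal sketch via pytesseract, where kor_plate is a placeholder name for your retrained traineddata file:

```python
import cv2
import pytesseract

# Load the preprocessed plate image.
img = cv2.imread("plate_processed.png")

# "kor_plate" is a placeholder for your fine-tuned traineddata file;
# --psm 7 tells Tesseract to treat the image as a single text line,
# which matches a license plate.
text = pytesseract.image_to_string(img, lang="kor_plate", config="--psm 7")
print(text.strip())
```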
1) First, try a few image-processing techniques, as described in this link: https://cvisiondemy.com/license-plate-detection-with-opencv-and-python/
2) Next, if that doesn't show any improvement, try image thresholding (see the sketch after this list), which you can read about in this link: https://docs.opencv.org/master/d7/d4d/tutorial_py_thresholding.html
3) If the above steps don't help, try enlarging your image.
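As a minimal sketch of steps 2 and 3, assuming a grayscale crop of the plate (file names are placeholders):

```python
import cv2

# Read the plate crop in grayscale.
gray = cv2.imread("plate.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method picks the threshold automatically, which often
# separates dark characters cleanly from a light plate background.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Step 3: enlarging can help Tesseract with small glyphs.
binary = cv2.resize(binary, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

cv2.imwrite("plate_processed.png", binary)
```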
I solved this by using multiple models supported by Tesseract.
With the Hangul model, I only got accurate recognition of the Hangul characters, not the numbers.
However, with the English model, I got accurate recognition of the numbers.
So I used both models in parallel, and it resulted in 99% accuracy for LPR.
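A minimal sketch of the parallel-model idea with pytesseract; how the two passes are merged into one plate string is left out, and the digit whitelist is an assumption that can help force the English pass onto numerals:

```python
import cv2
import pytesseract

img = cv2.imread("plate_processed.png")

# Korean pass: accurate for the Hangul character.
hangul = pytesseract.image_to_string(img, lang="kor", config="--psm 7")

# English pass: accurate for the digits; the whitelist restricts
# the recognizer to numerals only.
digits = pytesseract.image_to_string(
    img,
    lang="eng",
    config="--psm 7 -c tessedit_char_whitelist=0123456789",
)

print("Hangul pass:", hangul.strip())
print("Digit pass:", digits.strip())
```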

How to recognize or match two images?

I have one image stored in my bundle or in the application.
Now I want to scan images with the camera and compare them with my locally stored image. When the image matches, I want to play a video, and when the user moves the camera away from that particular image, I want to stop the video.
For that I have tried the Wikitude SDK for iOS, but it is not working properly; it keeps crashing because of memory issues or other reasons.
Core ML and ARKit also came to mind, but Core ML detects an image's properties (name, type, colors, etc.) while I want to match the image itself, and ARKit does not support all devices and iOS versions; I also don't know whether image matching of this kind is even possible with it.
If anybody has an idea of how to achieve this, please share. Every bit of help will be appreciated. Thanks :)
The easiest way is ARKit's image detection. You know the limitations on the devices it supports, but the results it gives are good and it is really easy to implement. Here is an example.
Next is Core ML, which is the hardest way. You need to understand machine learning, at least in brief. Then comes the tough part: training with your dataset. The biggest drawback is that you have a single image, so I would discard this method.
Finally, a middle-ground solution is to use OpenCV. It might be hard, but it suits your need. You can find different feature-matching methods to locate your image in the camera feed (example here; a sketch follows below). You can use Objective-C++ to write the C++ parts for iOS.
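A minimal sketch of the OpenCV feature-matching idea, shown in Python for brevity (the same API is available from C++/Objective-C++ on iOS); the distance cutoff and match-count threshold are assumptions to tune:

```python
import cv2

# The image shipped with the app and one frame from the camera feed.
reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

# ORB is a patent-free keypoint detector/descriptor.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(reference, None)
kp2, des2 = orb.detectAndCompute(frame, None)

# Hamming distance suits ORB's binary descriptors; crossCheck keeps
# only mutually-best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# Heuristic: enough close matches means the stored image is in view,
# so play the video; otherwise stop it.
good = [m for m in matches if m.distance < 40]
print("Image visible:", len(good) > 25)
```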
Your task is image similarity, and you can do it simply and with more reliable results using machine learning. Since your task involves camera scanning, the better option is Core ML. You can refer to this link by Apple for image similarity. You can optimize your results by training with your own datasets. If you need any more clarification, comment.
Another approach is to use a so-called "siamese network", which really means that you run a model such as Inception-v3 or MobileNet on both images and compare their outputs.
However, these models usually give a classification output, i.e. "this is a cat". But if you remove that classification layer from the model, it gives an output that is just a bunch of numbers that describe what sort of things are in the image but in a very abstract sense.
If these numbers for two images are very similar -- if the "distance" between them is very small -- then the two images are very similar too.
So you can take an existing Core ML model, remove the classification layer, run it twice (once on each image), which gives you two sets of numbers, and then compute the distance between these numbers. If this distance is lower than some kind of threshold, then the images are similar enough.
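The answer describes doing this with Core ML on iOS; as an illustration of the same idea, here is a minimal PyTorch sketch, where removing the classification layer is simulated by replacing it with an identity, and the 0.8 similarity threshold is an assumption:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load MobileNetV2 and drop the classification layer so the model
# outputs a feature vector ("a bunch of numbers") instead of a label.
model = models.mobilenet_v2(weights="IMAGENET1K_V1")
model.classifier = torch.nn.Identity()
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path):
    # Run the headless model once on one image.
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(img).squeeze(0)

a, b = embed("stored.png"), embed("camera_frame.png")

# Cosine similarity near 1 means a small "distance": similar images.
similarity = torch.nn.functional.cosine_similarity(a, b, dim=0)
print("Similar:", similarity.item() > 0.8)  # threshold is an assumption
```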

Haar training - where to obtain eyeglasses images?

I want to train a new Haar cascade for glasses, as I'm not satisfied with the results I'm getting from the cascade that is included in OpenCV.
My main problem is that I'm not sure where to get eyeglasses images. I can manually search and download them, but that's not practical for the number of images I really need. I'm specifically looking for images of people wearing eyeglasses.
As this forum contains many experienced computer vision experts, I hope someone here can offer guidance on how to obtain images for training.
I'll also be happy to hear other approaches for detecting eyeglasses (on people).
Thanks in advance,
Gil
If you simply want images, it looks like @herhuyongtao pointed you to a good place. Then you can follow OpenCV's tutorial on training (a sketch of using the resulting cascade follows below).
Another option is to see what others have trained:
There's a trained data set found here that might be of use; it states simply that it is "better", which I assume means better than OpenCV's bundled cascade.
I didn't immediately see any other places for trained or labeled data.
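For completeness, once you have a trained cascade (your own or a downloaded one), running it from OpenCV takes only a few lines. A minimal sketch, where glasses_cascade.xml is a placeholder for whichever cascade file you end up with:

```python
import cv2

# Load a trained Haar cascade; "glasses_cascade.xml" is a placeholder
# for your own or a downloaded cascade file.
cascade = cv2.CascadeClassifier("glasses_cascade.xml")

img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns one bounding box per detection.
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in detections:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", img)
```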

How to classify images using Apache Mahout?

How do I perform image classification with Mahout? How do I convert an image into a form accepted by Mahout's classification algorithms? Is there any starter code to begin with? Please share some starter tutorials. Is Mahout a good library for image classification?
There are two answers to your question:
The simple answer is that from a Mahout point of view classifying images is no different than classifying any other type of data. You find a suitable set of features to describe your data, and then: train, validate, test, and deploy.
The second answer is a bit more involved, and I'm going to summarize. In the case of images, the step in which you compute a suitable set of features spans a whole research area (called computer vision). There are many methods: HOG (histograms of oriented gradients), SURF, SIFT, etc. Depending on the images and on your expectations, you may obtain reasonable results just by using an existing method, or maybe not. It is impossible to say without looking at your images and knowing your objectives.
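Mahout itself is JVM-based, but to illustrate the feature-computation step the answer describes, here is a minimal OpenCV sketch (in Python) that turns an image into a fixed-length HOG vector; a generic classifier, Mahout's included, can then consume one such row per image:

```python
import cv2

# HOG converts an image into a fixed-length numeric feature vector.
hog = cv2.HOGDescriptor()  # default 64x128 detection window

img = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (64, 128))  # match the descriptor's window size

features = hog.compute(img)  # 3780 values with the default parameters
print(features.shape)

# Stacking one such vector per image (plus a label) gives the
# train/validate/test matrix that the classifier works on.
```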

How to check images for custom characters?

I have a set of image files that I can identify. Rather than using full OCR, I'd like to search only for matches within that set. What's the ideal platform to quickly find matches?
OpenCV is an advanced computer vision library. It can recognize text blocks, colors, shapes, etc., so it might be of use.
Tesseract can be trained to handle languages, but I can't see a reason why you couldn't train it with shapes. Here's a really confusing training guide.
ImageMagick can also be useful. Using it means pretty hardcore, endless parameter chaining, but you can get it to find images. It's not perfect for this application, but it's been done before. The documentation is insanely huge, yet about as complete and illustrated as I could wish for (I'm a frequent user, as it's handy for quick image operations via the CLI). Here's the image comparison documentation.
I would suggest OpenCV, but it's up to you. Good luck!
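If you do go with OpenCV, template matching is the most direct fit for "match only within a known set". A minimal sketch, where all file names and the 0.8 confidence threshold are placeholders:

```python
import cv2

# The known set of character templates, keyed by identifier.
templates = {
    "char_a": cv2.imread("char_a.png", cv2.IMREAD_GRAYSCALE),
    "char_b": cv2.imread("char_b.png", cv2.IMREAD_GRAYSCALE),
}

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

# Slide each template over the scene and keep the best match score.
best_name, best_score = None, -1.0
for name, tmpl in templates.items():
    result = cv2.matchTemplate(scene, tmpl, cv2.TM_CCOEFF_NORMED)
    _, score, _, _ = cv2.minMaxLoc(result)
    if score > best_score:
        best_name, best_score = name, score

# 0.8 is an assumed confidence threshold; tune it for your images.
print(best_name if best_score > 0.8 else "no match")
```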
