I am trying to create a personal project using 4-5 guns from PUBG Mobile along with their different skins. I want to build an image classifier that classifies all of these guns separately. Could you please help me with how to start and proceed? For example: how to create the dataset and how to take the images; what data augmentation to apply (scaling, shifting, rotating, etc.); which model to use (AlexNet? A VGG model?); key points to keep in mind; Python libraries; everything.
I want to make an application for counting game statistics automatically. For that purpose, I need some sort of computer vision for handling screenshots of the game.
There are a bunch of regions with different skills, always in the same place, that the app needs to recognize. I assume it should have a database of pictures or maybe some trained samples.
I've started to learn the OpenCV library, but I'm not sure what will be best for this purpose.
Would you please give me some hints or algorithms that I could use?
Here is an example game screenshot.
You can convert it to grayscale and then use a Haar cascade classifier to detect the regions (e.g. the words) in that image, and then save the results into a file format such as CSV. This way you can use your game screenshots to gather data for training your models.
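A rough sketch of that pipeline with OpenCV, assuming you already have a trained cascade XML (the file names below are only placeholders):

```python
import csv
import cv2

# Placeholder paths: substitute your own trained cascade and screenshot.
cascade = cv2.CascadeClassifier("my_trained_cascade.xml")
img = cv2.imread("screenshot.png")

# Convert to grayscale before running the detector.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Dump the detected regions to CSV so they can be used as training data later.
with open("detections.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "y", "width", "height"])
    for (x, y, w, h) in detections:
        writer.writerow([x, y, w, h])
```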
I am new to computer vision, but I am trying to code an Android/iOS app which does the following:
Get the live camera preview and try to detect one flat image (a logo or painting) in it, in real time. Draw a rect around the logo if found; if there is no match, don't draw the rectangle.
I found the TensorFlow Object Detection API to be a good starting point, and support was just announced for importing TensorFlow models into Core ML.
I followed a lot of tutorials to train my own object detector. The training data is the key. I found a pretty good library to generate augmented images, and I have created hundreds of variations of my source image (rotation, skew, etc.).
But it failed! This dataset is probably good for image classification (with my image in full screen), but not in context (the room).
I think transfer learning is the key. In my case, I used the ssd_mobilenet_v1_coco model as a base. I tried to fake the context of my augmented images with the Random Erasing data augmentation technique, without success.
What are my available options? Am I tackling the problem the right way? I need to make the model training as fast as possible.
Should I use some indoor/outdoor image datasets and place my image randomly on top of them? How important is perspective?
Thank you!
I have created hundreds of variations of my source image (rotation, skew, etc.). But it failed!
Does that mean your model did not converge, or that the final performance was bad? If your model did not converge, then add more data. Hundreds of samples is very few, so use more images, make more samples, and make your samples as dispersed as possible.
I think transfer learning is the key. In my case, I used the ssd_mobilenet_v1_coco model as a base. I tried to fake the context of my augmented images with the Random Erasing data augmentation technique, without success.
You mean fine-tuning. Did you reduce the labels to 2 (your image and background) and then do fine-tuning? If you didn't, that is surely why it failed. You should at least show us your model definition.
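As an aside, the Random Erasing technique quoted above is easy to reproduce if you want to inspect what your augmentation actually produced. A minimal numpy sketch, with illustrative parameter values and assuming a uint8 image:

```python
import numpy as np

def random_erase(img, p=0.5, area=(0.02, 0.2), rng=np.random.default_rng()):
    """Replace a random rectangle of the image with noise (Random Erasing)."""
    if rng.random() > p:
        return img
    h, w = img.shape[:2]
    target = rng.uniform(*area) * h * w            # target patch area in pixels
    aspect = rng.uniform(0.3, 3.3)                 # random aspect ratio
    eh, ew = int(np.sqrt(target / aspect)), int(np.sqrt(target * aspect))
    if eh == 0 or ew == 0 or eh >= h or ew >= w:
        return img
    y, x = rng.integers(0, h - eh), rng.integers(0, w - ew)
    out = img.copy()
    noise_shape = (eh, ew) + img.shape[2:]
    # Fill the patch with random pixel values (assumes a uint8 image).
    out[y:y + eh, x:x + ew] = rng.integers(0, 256, size=noise_shape, dtype=img.dtype)
    return out
```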
What are my available options? Am I tackling the problem the right way? I need to make the model training as fast as possible.
To make training converge faster, add more GPUs and train on multiple GPUs. If you don't have the money, rent a GPU cluster on Azure; believe me, it is not that expensive.
Hope that helps.
I have a database with almost 20k 3D files; they are drawings of machine parts designed in CAD software (SolidWorks). I'm trying to build a trained model from all of these 3D models, so I can build a 3D object recognition app where someone can take a picture of one of these parts (in the real world) and the app can provide useful information about material, size, treatment, and so on.
If anyone has already done something similar, any information you can provide would be greatly appreciated!
Some ideas:
1) Several pictures, instead of only one: as Rodrigo commented, and as Brad Larson tried to circumvent with his method, the problem with the user taking only one picture as input is that you necessarily lack the information needed to triangulate and form a 3D point cloud. With 4 pictures taken from slightly different angles, you can already reconstruct parts of the object. Comparing point clouds would make the endeavor much easier for any ML algorithm, whether neural networks (NN), support vector machines (SVM), or others. A common standard for creating point clouds is ASTM E2807, which uses the e57 file format.
On the downside, a 3D vision algorithm might be heavy on the user's device, and it is not the easiest to implement.
2) Artificial picture training: by training on pre-computed artificial pictures, as Brad Larson suggested, you take over much of the computation, to the user's benefit. Be aware that you should probably use "features" extracted from the pictures, not the complete picture, both to train and to classify. The problem with this method is that you might be very sensitive to lighting and background context. You should take care to produce CAD pictures that have the same lighting conditions for all objects, so that the classifier doesn't overfit aspects of the "pictures" that do not belong to the object.
This is where solution 1) is much more stable: it is less sensitive to the visual context.
3) Scale: the size of your object is an important descriptor. You should thus add scale information to your object descriptor before training. You could ask the user to take pictures with a reference object. Alternatively, you can ask the user to make a rule-of-thumb estimate of the object size ("What are the approximate dimensions of the object, in [cm]?"). Providing the size could make your algorithm significantly faster and more accurate.
If your test data in production is mainly images of the 3D object, then the method Brad Larson described in the comments is the better approach; it is also easier to implement and takes a lot less effort and fewer resources to get up and running.
However, if you want to classify between 3D models, there are existing networks for classifying 3D point clouds. You will have to convert your models to point clouds and use them as training samples. One that I have used is VoxNet. I also suggest adding more variation to the training data, such as different rotations of the 3D model.
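As a rough illustration of that conversion step, assuming the CAD models can be exported to a mesh format the trimesh library can read (e.g. STL); the function and parameter names are only illustrative:

```python
import numpy as np
import trimesh  # assumes models exported from SolidWorks to STL or similar

def mesh_to_voxels(path, n_points=2048, grid=32):
    """Sample a point cloud from a mesh and bin it into a VoxNet-style occupancy grid."""
    mesh = trimesh.load(path)
    points = np.asarray(mesh.sample(n_points))   # (n_points, 3) surface samples
    points -= points.min(axis=0)                 # move to the positive octant
    points /= points.max() + 1e-9                # normalize the longest axis to [0, 1]
    idx = np.clip((points * (grid - 1)).astype(int), 0, grid - 1)
    voxels = np.zeros((grid, grid, grid), dtype=np.float32)
    voxels[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return voxels

# For augmentation, rotate the sampled points about the vertical axis
# before voxelizing, so each model yields several training samples.
```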
You can also use pre-trained 3D deep neural networks; there are many that could help you in your work and would produce high accuracy.
I need to compare two images in a project.
The images would be two fruits of the same kind, let's say two different images of two different apples.
To be more clear, the database will have images of the stages an apple goes through, from the day it was picked from the tree until it gets rotten.
The user would upload an image of the apple they have, and the software should compare it to all the images in the database, retrieve the data of the matching image, and tell the user which stage it is at.
I have compared images before using OpenCV (Emgu CV), but I don't really know whether that is the best way.
I need expert advice: is what I described even possible, or will all of the database images match the user's image?
And is this "image processing" or something else?
And are there any suggested tutorials to learn how to do this?
I know it's not totally clear yet, but it's just a crazy idea, and I wish I could find a way to learn how to bring it to life!
N.B. The project will be an Android application.
This is an example of a supervised image classification problem, which is a pretty broad field. You can read up on image classification here.
The way that you would approach this problem would be to define a few stages of decay (fresh, starting to rot, half rotten, completely rotten), put together a dataset of many images of the fruit in each stage, and train an image classifier to distinguish those stages. The sample dataset should contain images of many different pieces of fruit in many different settings. If you want to support different types of fruit, you would need to train a separate classifier for each fruit.
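As a rough illustration of that workflow (this sketch uses Keras rather than any particular tool from the list below, and the folder names are hypothetical):

```python
import tensorflow as tf

# Hypothetical layout: dataset/fresh, dataset/starting_to_rot,
# dataset/half_rotten, dataset/rotten (one folder of images per decay stage).
train_data = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255
).flow_from_directory("dataset", target_size=(128, 128))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),   # one output per decay stage
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_data, epochs=10)
```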
There are many image classification tools out there. To name a few:
OpenCV's Haar cascade classifier
dlib's HOG classifier
Matlab's Computer Vision System Toolbox
VLFeat
It would be up to you to look into which approach would work best for your situation.
Given that this is a fairly broad problem, I wouldn't expect to come up with a solid solution quickly unless you've had experience with image classification. If you are trying to develop a product, I would recommend getting in touch with a computer vision expert that you could contract to solve it.
If you are just looking to learn more about image classification, however, this could be a fun way to play around with different tools and get a feel for what's out there. You may want to start by learning about Machine Learning in general. Caltech offers a free online course that gives a pretty good intro to the subject.
Can anyone advise me on a way to build an effective face classifier that can classify many different faces (~1000)?
And I have only 1-5 examples of each face.
I know about the OpenCV face classifier, but it works poorly for my task (many classes, few samples).
It works all right for single-face classification with a small number of samples, but I think that 1,000 separate classifiers is not a good idea.
I read a few articles about face recognition, but the methods from these articles require a lot of samples of each class to work.
P.S. Sorry for my writing mistakes. English is not my native language.
Actually, to give you a proper answer, I'd be happy to know some details of your task and your data. Face recognition is a non-trivial problem, and there is no general solution for all sorts of image acquisition.
First of all, you should define how many sources of variation (pose, emotion, illumination, occlusion, or time lapse) you have in your sample and testing sets. Then you should choose an appropriate algorithm and, very importantly, preprocessing steps suited to those types.
If you don't have any significant variation, then for a small training set it is a good idea to consider one of the discrete orthogonal moments as a feature extraction method. They have a very strong ability to extract features without redundancy. Some of them (Hahn and Racah moments) can also work in two modes: local and global feature extraction. The topic is relatively new and there are still few articles about it, although they are thought to become a very powerful tool in image recognition. They can be computed in near real time using recurrence relations. For more information, have a look here and here.
If the pose of the individuals varies significantly, you may first try to perform pose correction with an Active Appearance Model.
If there are lots of occlusions (glasses, hats), then using one of the local feature extractors may help.
If there is a significant time lapse between the training and probe images, the local features of the faces could change with age; in that case, it's a good option to try one of the algorithms which use graphs for face representation, so as to preserve the face topology.
I believe that none of the above are implemented in OpenCV, but for some of them you can find MATLAB implementations.
I'm not a native speaker either, so sorry for the grammar.
Coming to your problem, it is quite unique in its way. Since, as you said, there are only a few images per class, the model we train should either have an architecture that can extract strong features from an image on its own, or we need a different approach to achieve this task.
I have four things I can share as of now:
Do data pre-processing to create a bigger dataset, and then ideally train a neural network on it. Here, we can apply pre-processing such as:
- image rotation
- image shearing
- image scaling
- image blurring
- image stretching
- image translation
and create at least 200 images per class (a short sketch of a few of these transforms follows below). Please check out the OpenCV documentation, which provides many more methods for increasing the size of your dataset. Once you do this, we can apply transfer learning, which is a better approach than training a neural network from scratch.
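To illustrate the augmentation step before moving on to transfer learning, here is a minimal OpenCV sketch of a few of the transforms listed above (file names and parameter values are only illustrative):

```python
import cv2
import numpy as np

img = cv2.imread("face_01.jpg")          # illustrative path
h, w = img.shape[:2]
augmented = []

# Rotation combined with a slight scale change.
for angle in (-15, 15):
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.1)
    augmented.append(cv2.warpAffine(img, M, (w, h)))

# Translation: shift the image by 10 pixels in x and y.
M_shift = np.float32([[1, 0, 10], [0, 1, 10]])
augmented.append(cv2.warpAffine(img, M_shift, (w, h)))

# Shearing along the x axis.
M_shear = np.float32([[1, 0.2, 0], [0, 1, 0]])
augmented.append(cv2.warpAffine(img, M_shear, (w, h)))

# Blurring.
augmented.append(cv2.GaussianBlur(img, (5, 5), 0))

for i, aug in enumerate(augmented):
    cv2.imwrite(f"face_01_aug{i}.jpg", aug)
```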
Transfer learning is a method where we train a network on our own custom classes, starting from a network that is already pre-trained on thousands of classes. Since our data here is very limited, I would prefer transfer learning. I have written a blog on how you can approach this with transfer learning once you have the required amount of data; it is linked here. Face recognition is itself a classification task, where each human is a separate class. So follow the instructions given in the blog; maybe it will help you create your own powerful classifier.
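A minimal transfer-learning sketch with Keras (the backbone choice, input size, and class count are only illustrative; the blog linked above goes into more detail):

```python
import tensorflow as tf

# Backbone pre-trained on ImageNet, with its classification head removed.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet", pooling="avg"
)
base.trainable = False   # freeze the pre-trained layers; train only the new head

num_people = 1000        # one class per person, as in the question
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(num_people, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=20)  # labels are integer person ids
```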
Another suggestion would be, after creating the dataset, to encode the images properly. This encoding helps preserve the features in an image and can help you train better networks. VLAD, Fisher vectors, and Bag of Words are a few encoding techniques. You can find repositories online that have already implemented these on the ORL database. Once you encode the images, train the network on the encodings and you will see better performance.
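A minimal Bag of Visual Words sketch using ORB features (the image paths and vocabulary size are hypothetical):

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

train_paths = ["person1_01.jpg", "person1_02.jpg", "person2_01.jpg"]  # hypothetical
orb = cv2.ORB_create()

def descriptors(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(gray, None)
    return desc   # may be None if no keypoints are found

# 1) Build a visual vocabulary by clustering all training descriptors.
all_desc = np.vstack([d for d in map(descriptors, train_paths) if d is not None])
vocab = KMeans(n_clusters=64, random_state=0).fit(all_desc.astype(np.float64))

# 2) Encode each image as a normalized histogram over the visual words.
def encode(path, k=64):
    desc = descriptors(path)
    hist = np.zeros(k)
    if desc is not None:
        for word in vocab.predict(desc.astype(np.float64)):
            hist[word] += 1
    return hist / (hist.sum() + 1e-9)
```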
Also do check out the Siamese network here, which I feel is meant exactly for this purpose. It compares two images by passing them through twin networks with shared weights and thereby achieves better classification accuracy. The Git repository is here.
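The core idea of a Siamese network, sketched with Keras (input size and layer sizes are illustrative):

```python
import tensorflow as tf

def encoder(shape=(96, 96, 1)):
    """Shared CNN that maps a face image to an embedding vector."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(64),
    ])

shared = encoder()
a = tf.keras.Input((96, 96, 1))
b = tf.keras.Input((96, 96, 1))
# Both inputs pass through the same weights; compare embeddings with an L1 distance.
dist = tf.keras.layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([shared(a), shared(b)])
out = tf.keras.layers.Dense(1, activation="sigmoid")(dist)  # 1 = same person, 0 = different

siamese = tf.keras.Model([a, b], out)
siamese.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Train on image pairs labelled same/different; even with 1-5 images per person
# you can generate many pairs, which is why this suits few-shot settings.
```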
Another standard approach would be using SVMs or random forests, since the data is limited. If you still prefer neural networks, the methods above will serve the purpose. If you intend to go with encodings, then I would suggest random forests, as they learn well on such features and are flexible too.
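For completeness, training a random forest or SVM on such encodings is only a few lines with scikit-learn (the feature matrix and labels below are stand-in data; replace them with your own encodings and person ids):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Stand-in data: one feature vector per image (e.g. the BoW histograms above)
# and an integer person id as the label.
X = np.random.rand(50, 64)
y = np.random.randint(0, 10, size=50)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))

# An SVM is a drop-in alternative:
# clf = SVC(kernel="linear").fit(X, y)
```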
Hopefully this answer helps you proceed in the right direction.
You might want to take a look at OpenFace, a Python and Torch implementation of face recognition with deep neural networks: https://cmusatyalab.github.io/openface/