Real Time Detection of Multiple Objects with Various Classes with YOLO - opencv

I am currently working on detecting whether motorbike riders are wearing a helmet or not, using YOLOv3. First I use a COCO-trained model to detect motorbikes, and then I send each detected motorbike image to the helmet-detection YOLO model.
Currently, YOLOv3 takes a lot of time to load its weights and to perform the detection for each frame.
Is there any way to reduce this time, as I need to perform the detections in real time?
Also, should I go for multiple YOLO models, or should I train a single model containing both the motorbike and helmet classes?

I see that you have two different questions:
Is there any way to reduce this time, as I need to perform the detections in real time?
Should I go for multiple YOLO models, or should I train a single model containing both the motorbike and helmet classes?
Q1: YOLO should work in real time with a GPU unless the image resolution is enormous or your hardware does not meet the requirements. If you cannot change those two things, try a smaller model, for example yolov3_tiny. From PyTorch Hub:

import torch
model = torch.hub.load('ultralytics/yolov3', 'yolov3_tiny')
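
For real-time use, the key point is to load the model once at startup and reuse it for every frame, rather than reloading the weights per frame. A minimal sketch under that assumption, feeding OpenCV frames to the Hub model (the results wrapper is the Ultralytics one):

import cv2
import torch

model = torch.hub.load('ultralytics/yolov3', 'yolov3_tiny')  # load weights once
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame[..., ::-1])  # BGR -> RGB; the wrapper accepts numpy arrays
    results.print()                    # per-frame summary of detected objects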
Q2: A single model, since YOLO is capable of detecting multiple classes without sacrificing much speed or accuracy.

Related

"Incremental" face training in OpenCV

I've been working with some face detection in OpenCV. I have a couple of projects I've done: one does face detection using a pre-built model, and some others do different things where I collect my own images and train my own models. When I do the latter, it's generally with much smaller datasets than what you'd use for face training.
On my face recognizer, many of the common faces I work with do not get detected properly (due to odd properties like masks, hats, goggles, glasses, etc.). So I want to re-train my own model, but grabbing the gigantic "stock" datasets and adding my images to them may take a VERY long time.
So the question is: is there a way to start with an existing model (XML file) and run the trainer in a way that would just add my images to it?
This is called "transfer learning". TensorFlow (Keras) has a lot of support for it. It basically consists of taking a pre-existing model with pre-existing weights, "freezing" the weights on certain layers, adding new layers on top of or below existing ones, and then retraining only the un-frozen layers.
It can't readily be used to just "continue" learning, but it can be used to add additional things into the training, for newer aspects (like, potentially, adding masked people to a model already trained on unmasked people, as in my original question).
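
A minimal sketch of that pattern in Keras, assuming an ImageNet-pretrained MobileNetV2 as the frozen base and a hypothetical two-class head (note this applies to neural networks; an OpenCV cascade XML is not a neural network and cannot be extended this way):

import tensorflow as tf

# Pre-existing model with pre-existing weights; drop its original classifier head
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights='imagenet')
base.trainable = False  # "freeze" the pre-trained layers

# New layers added on top; only these will be trained
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation='softmax'),  # e.g. masked / unmasked
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(small_dataset, epochs=5)  # retrain only the un-frozen head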

How to solve the "cold start" problem in computer vision based deep learning models?

By “Cold Start” I mean that computer vision models for object detection or semantic segmentation often require about 5,000 images per class. So suppose an idea is floated within the company, e.g. we want to use object detection to count the number of wood logs when the truck is dispatched, and then use the same app to count the number that is received.
The challenge is that you have only a few images of wood logs on a truck, but to train any model you need thousands. So what do practitioners typically do for these prototypes?
At this stage it is not even clear which model to try, and it is not very feasible to ask the business to invest in collecting and labeling thousands of images of logs.
That is why I am calling this “Cold Start”. How do you start?
What I have looked into is conditional GANs and Pix2Pix, but I am trying to understand the recommended way to start when you have very few images per object class.
I expect that when I drop a few images in a folder and call this library, I end up getting a lot more images per class, so I can then start my prototyping.
Note that asking for software libraries is specifically off-topic here.
No, there is no magic solution: if your data set doesn't have enough information in its images to train a model, no amount of software will change that fact. However, the first approach is to challenge that "fact": how do you know that you don't have enough images? What happened when you used what you have to train a model? You will need more epochs before the model converges, but you should be able to achieve far better than random accuracy with a comparable number of training iterations.
I seriously doubt that you'll need to collect and label thousands of images: you have a very restricted paradigm, photos of log trucks taken from a vantage point you control. Training a model to count non-overlapping near-circles requires much less differentiation than, say, distinguishing motor vehicles from postal boxes.
Experiment with the basic models you have at hand; you already have much more of the solution than you realize. If your data set is too small, go out to the yard with a digital camera and get twice as many images, three times, whatever you need. Flip the images left-right to get more input.
Does that get you moving?
Transfer learning solves the problem you are describing as "Cold Start". Basically, you can import the weights obtained after training on a big, open dataset and just fine-tune them using the smaller dataset you already have. Data augmentation, freezing some of the layers, etc. may help improve the results of the fine-tuned model.
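
To illustrate the augmentation part, a minimal sketch using Keras preprocessing layers (the specific transforms are placeholders; choose ones that preserve your labels):

import tensorflow as tf

# Random, label-preserving transforms applied on the fly during training
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip('horizontal'),  # the left-right flips suggested above
    tf.keras.layers.RandomRotation(0.05),      # small rotations, roughly +/- 18 degrees
    tf.keras.layers.RandomZoom(0.1),
])
# Place it at the front of the model so each epoch sees different variants:
# model = tf.keras.Sequential([augment, base_model, new_head])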

MobileNet vs SqueezeNet vs ResNet50 vs Inception v3 vs VGG16

I have recently been looking into incorporating the machine learning release for iOS developers into my app. Since this is my first time ever using anything ML related, I was very lost when I started reading the different model descriptions that Apple has made available. They have the same purpose/description, the only difference being the actual file size. What is the difference between these models, and how would you know which one is the best fit?
The models Apple makes available are just for simple demo purposes. Most of the time, these models are not sufficient for use in your own app.
The models on Apple's download page are trained for a very specific purpose: image classification on the ImageNet dataset. This means they can take an image and tell you what the "main" object is in the image, but only if it's one of the 1,000 categories from the ImageNet dataset.
Usually, this is not what you want to do in your own apps. If your app wants to do image classification, typically you want to train a model on your own categories (like food or cars or whatever). In that case you can take something like Inception-v3 (the original, not the Core ML version) and re-train it on your own data. That gives you a new model, which you then need to convert to Core ML again.
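
For the final conversion step, a minimal sketch with coremltools (assuming coremltools 4 or later and a hypothetical retrained Keras model; the file names are placeholders):

import coremltools as ct
import tensorflow as tf

retrained = tf.keras.models.load_model('my_retrained_classifier.h5')  # hypothetical
# Convert the retrained network into a Core ML model for the iOS app
mlmodel = ct.convert(retrained, inputs=[ct.ImageType(shape=(1, 224, 224, 3))])
mlmodel.save('MyClassifier.mlmodel')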
If your app wants to do something other than image classification, you can use these pretrained models as "feature extractors" in a larger neural network structure. But again this involves training your own model (usually from scratch) and then converting the result to Core ML.
So only in a very specific use case -- image classification using the 1,000 ImageNet categories -- are these Apple-provided models useful to your app.
If you do want to use any of these models, the difference between them is speed vs. accuracy. The smaller models are fastest but also least accurate. (In my opinion, VGG16 shouldn't be used on mobile. It's just too big and it's no more accurate than Inception or even MobileNet.)
SqueezeNets are fully convolutional and use Fire modules, which have a squeeze layer of 1x1 convolutions that vastly decreases the parameter count by restricting the number of input channels to each layer. This makes SqueezeNets extremely low-latency, in addition to the fact that they don't have dense layers.
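
A minimal sketch of a Fire module in Keras, with filter counts taken from the SqueezeNet paper's fire2 block (treat them as an illustration):

import tensorflow as tf

def fire_module(x, squeeze=16, expand=64):
    # Squeeze: 1x1 convs restrict the channels seen by the expand layer
    s = tf.keras.layers.Conv2D(squeeze, 1, activation='relu')(x)
    # Expand: parallel 1x1 and 3x3 convs, concatenated along the channel axis
    e1 = tf.keras.layers.Conv2D(expand, 1, activation='relu')(s)
    e3 = tf.keras.layers.Conv2D(expand, 3, padding='same', activation='relu')(s)
    return tf.keras.layers.Concatenate()([e1, e3])

inputs = tf.keras.Input(shape=(56, 56, 96))
outputs = fire_module(inputs)  # 96 channels squeezed to 16, expanded back to 128
model = tf.keras.Model(inputs, outputs)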
MobileNets utilise depth-wise separable convolutions, conceptually similar to the towers in Inception modules. These also reduce the number of parameters and hence the latency. MobileNets also have useful model-shrinking hyperparameters that you can set before training to make the network exactly the size you want. The Keras implementation can use ImageNet pre-trained weights too.
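
Both points show up directly in the Keras API; a short sketch (alpha is MobileNet's width multiplier, i.e. the model-shrinking knob mentioned above):

import tensorflow as tf

# A depthwise separable block: per-channel 3x3 filters, then a 1x1 pointwise mix
x = tf.keras.Input(shape=(224, 224, 3))
y = tf.keras.layers.SeparableConv2D(32, 3, padding='same', activation='relu')(x)

# Keras MobileNet at half width, with ImageNet pre-trained weights
model = tf.keras.applications.MobileNet(alpha=0.5, weights='imagenet')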
The other models are very deep, large models. In them, the reduced parameter count / convolution style is used not for low latency but essentially for the ability to train very deep models. ResNet introduced residual connections between layers, which were originally believed to be key to training very deep models; these aren't seen in the previously mentioned low-latency models.

Making a trained model (machine learning) from 3D models

I have a database with almost 20k 3D files; they are drawings of machine parts designed in CAD software (SolidWorks). I'm trying to build a trained model from all of these 3D models, so I can build a 3D object recognition app where someone can take a picture of one of these parts (in the real world) and the app can provide useful information about material, size, treatment and so on.
If anyone has already done something similar, any information you can provide would be greatly appreciated!
Some ideas:
1) Several pictures, instead of only one. As Rodrigo commented and Brad Larson tried to circumvent with his method, the problem with the user taking only one picture as input is that you necessarily lack the information needed to triangulate and form a 3D point cloud. With 4 pictures taken from slightly different angles, you can already reconstruct parts of the object. Comparing point clouds would make the endeavour much easier for any ML algorithm, neural networks (NN), support vector machines (SVM) or others. A common standard for exchanging point clouds is ASTM E2807, which uses the E57 file format.
On the downside, a 3D vision algorithm might be heavy on the user's device and is not the easiest to implement.
2) Artificial picture training: by training on pre-computed artificial pictures, as Brad Larson suggested, you take over much of the computation, to the user's benefit. Be aware that you should probably use "features" extracted from the pictures, not the complete pictures, both to train and to classify. The problem with this method is that you might be very sensitive to lighting and background context. You should take care to produce CAD renderings that have the same lighting conditions for all objects, so that the classifier doesn't overfit on aspects of the "pictures" that do not belong to the object.
This is where solution 1) is much more stable: it is less sensitive to the visual context.
3) Scale: The size of your object is an important descriptor. You should thus add scale information to your object descriptor before training. You could ask the user to take pictures with a reference object. Alternatively you can ask the user to make a rule-of-thumb estimate of the object size ("What are the approximate dimensions of the object, in [cm]?"). Providing size could make your algorithm significantly faster and more accurate.
If your test data in production is mainly images of the 3D object, then the method in the comment section by Brad Larson is the better approach; it is also easier to implement and takes a lot less effort and resources to get up and running.
However, if you want to classify between 3D models, there are existing networks designed to classify 3D point clouds. You will have to convert your models to point clouds and use them as training samples. One such network, which I have used, is VoxNet. I also suggest you add more variations to the training data, like different rotations of the 3D model.
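
A minimal sketch of that conversion, assuming the trimesh library and an STL export of one part (file name and grid resolution are placeholders; VoxNet-style networks consume occupancy grids rather than raw meshes):

import numpy as np
import trimesh

mesh = trimesh.load('part_0001.stl')                    # one CAD part as a mesh
points, _ = trimesh.sample.sample_surface(mesh, 2048)   # surface point cloud

# VoxNet-style input: a fixed-size binary occupancy grid
pitch = mesh.extents.max() / 32                         # fit the part into ~32^3 voxels
voxels = mesh.voxelized(pitch).matrix.astype(np.float32)

# Augmentation: rotate the mesh before voxelizing, as suggested above
rotated = mesh.copy()
rotated.apply_transform(trimesh.transformations.rotation_matrix(
    np.pi / 6, [0, 0, 1]))                              # 30 degrees about the z-axis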
You can use pre-trained 3D deep neural networks, as there are many networks that could help you in your work and would produce high accuracy.

Is it possible to do object detection (one-class) in images by retraining the Inception model?

Is there a way to do object detection by retraining the Inception model provided by Google in TensorFlow? The goal is to predict whether an image contains a defined category of objects (e.g. balls) or not. I can think of it as one-class classification, or as multi-class with only two categories (ball and not-ball images). However, for the latter I think it would be very difficult to create a good training set (how many and what kind of not-ball images would I need?).
Yes, there is a way to tell if something is a ball. However, it is better to use Google's TensorFlow Object Detection API. Instead of saying "ball/no ball," it will tell you it thinks something is a ball with XX% confidence.
To answer your other questions: with object detection, you don't need non-ball images for training. You should gather about 400-500 ball images (more is almost always better), split them into a training and an eval group, and label them with this. Then you should convert your labels and images into a .record file according to this. After that, you should set up TensorFlow and train.
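
For orientation, an abridged sketch of what one entry in that .record file looks like (field names follow the Object Detection API's standard tf.Example schema; the image path, sizes, and box are placeholders):

import tensorflow as tf

def make_example(jpeg_bytes, width, height, xmins, xmaxs, ymins, ymaxs, labels):
    # Box coordinates are normalized to [0, 1]; one list entry per object
    feature = {
        'image/encoded': tf.train.Feature(bytes_list=tf.train.BytesList(value=[jpeg_bytes])),
        'image/format': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'jpeg'])),
        'image/width': tf.train.Feature(int64_list=tf.train.Int64List(value=[width])),
        'image/height': tf.train.Feature(int64_list=tf.train.Int64List(value=[height])),
        'image/object/bbox/xmin': tf.train.Feature(float_list=tf.train.FloatList(value=xmins)),
        'image/object/bbox/xmax': tf.train.Feature(float_list=tf.train.FloatList(value=xmaxs)),
        'image/object/bbox/ymin': tf.train.Feature(float_list=tf.train.FloatList(value=ymins)),
        'image/object/bbox/ymax': tf.train.Feature(float_list=tf.train.FloatList(value=ymaxs)),
        'image/object/class/label': tf.train.Feature(int64_list=tf.train.Int64List(value=labels)),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

with tf.io.TFRecordWriter('train.record') as writer:
    ex = make_example(open('ball_001.jpg', 'rb').read(), 640, 480,
                      [0.2], [0.5], [0.3], [0.7], [1])  # one ball bounding box
    writer.write(ex.SerializeToString())

This is abridged: the API's full schema also expects fields such as image/filename, image/source_id, and image/object/class/text.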
This entire process is not easy. It took me a good couple of weeks with an iOS background to successfully train a single object detector. But it is worth it in the end, because now I can rapidly switch out images to train a different object detector whenever an app needs it.
Bonus: use this to convert your new TF model into a .mlmodel usable by iOS/Android.
