I am using the Google Object Detection API with my own dataset. After about 50K steps it begins to converge at around 60 percent accuracy, so I think it works fine in general. But if you look at the TotalLoss graph, or any of the loss graphs really, it fluctuates a lot. It looks like this:
What could be the reason for this? Is it normal or not? If not, what is the explanation?
Also, I occasionally see doubled bounding boxes over the same area in the example images. Why is that?
Yes, fluctuation in the loss is very normal, particularly because detection pipelines are usually trained with small batch sizes (batch size 1 in the case of Faster R-CNN), so you typically only see a meaningful decrease in the loss if you average over many steps.
Yes, as @Jonathan explained, 'wiggles' are mostly observed with smaller batch sizes. Unfortunately, you will need at least 16 GB of memory to handle batches larger than 1 when using Faster R-CNN on the TensorFlow API. Unless you have that much processing power, the only option is to be patient until some thousands of iterations have completed; in my case it was more than 100,000.
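Since the loss at batch size 1 is averaged over so few examples per step, it helps to look at a smoothed curve rather than individual points. Below is a minimal sketch (plain NumPy, not part of the Object Detection API) of the kind of exponential moving average that smoothing sliders in tools like TensorBoard apply; the synthetic loss curve is just an illustration:

```python
import numpy as np

def smooth_loss(losses, weight=0.9):
    """Exponential moving average of a loss curve.
    `weight` close to 1.0 means heavier smoothing."""
    smoothed = []
    last = losses[0]
    for value in losses:
        last = weight * last + (1.0 - weight) * value
        smoothed.append(last)
    return np.array(smoothed)

# Example: a noisy but slowly decreasing synthetic loss curve.
steps = np.arange(50000)
raw = 2.0 * np.exp(-steps / 20000.0) + np.random.normal(0.0, 0.5, steps.size)
print(smooth_loss(raw)[-1])  # the downward trend is much easier to read after smoothing
```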
I would like to use the Haar classifier to detect the presence of vehicles in a scene (trying with only cars so far). Since I have not found many trained XML files online, I decided to generate my own.
I found some image sets of vehicles that have been used for similar purposes (training computer vision algorithms) and used them to create my own XML files. It has been almost a week and some of them have finished training, so I tried them out, but the results were terrible. The classifiers I found online work decently: at least they appear to be trying to detect vehicles, and they run fast enough for a real-time application (maybe 5-10 FPS or so).
Mine, on the other hand, can take several minutes to analyze a frame with detectMultiScale() using the same parameters. If I pass different parameters (e.g. increase the minimum size, decrease the maximum size, increase the scale factor) it runs faster (maybe 1 FPS) but detects nothing of note: it never finds a vehicle and randomly flags patches of asphalt as one.
Where did I go wrong in generating my files? I have limited time to complete this task and these classifiers can take a whole week to train so I have very few attempts remaining. For reference, my methodology is (following this tutorial):
-Take all positive and negative images; if no negative images are supplied, take negatives from another data set, using at least as many negatives as positives
-Generate as many samples as the number of positives
-Use the same parameters as suggested, except the image size (set to the size of the images in the given data set) and nstages (set to 10 because 20 takes far too long)
-For the npos parameter, I use 1/10th the number of samples; using the full number of samples resulted in an "assertion failed" error after a few hours. Apparently npos cannot equal the number of samples (according to this), so I gave myself a safety margin.
TL;DR The Haar classifier I trained myself performs much worse (in both speed and accuracy) than the ones I found online; I need advice on how to improve it without wasting another week of training.
There are two problems here. One, the accuracy of the classifier is low. The other, the classifier runs too slowly.
There seems to be no problem with the reference that you used. The steps seem accurate, and I have personally tried them in that order and managed to get good results.
As @Micka mentions, an nPos of around 90% of the original sample count is good enough. minHitRate is a parameter that you can change. Did you observe the numbers displayed while training? How was the accuracy improving, and did your classifier finish training (or are you using parameters from a run that was cut short)?
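On the npos question specifically, a commonly cited rule of thumb (not stated in this thread, so treat it as an assumption) is that the vec file should contain at least numPos + (numStages - 1) * (1 - minHitRate) * numPos samples, because each stage consumes extra positives to replace the ones rejected by earlier stages. A quick back-of-the-envelope check:

```python
def max_num_pos(vec_samples, num_stages, min_hit_rate, safety=0.9):
    """Rough upper bound for -numPos in opencv_traincascade, based on the
    commonly cited rule of thumb:
        vec_samples >= numPos + (numStages - 1) * (1 - minHitRate) * numPos
    `safety` leaves a margin for positives rejected for other reasons."""
    bound = vec_samples / (1.0 + (num_stages - 1) * (1.0 - min_hit_rate))
    return int(bound * safety)

# Example: 2000 samples in the vec file, 10 stages, minHitRate of 0.995.
print(max_num_pos(2000, 10, 0.995))  # ~1722, far more than 1/10th of the samples
```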
As for the slow detection, the most likely reason is that your training data did not have simple features that could be learned quickly. Did you try detection on the data that you used for training? How were the results in that case? Compiler settings or a high image resolution can be a problem too, but since you tried the same inputs and settings with other classifiers, that is unlikely.
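For the detection side, this is roughly the loop I would use to compare your cascade against one of the downloaded ones. The file names and parameter values below are only placeholders, not taken from your setup:

```python
import cv2

# Paths are placeholders; substitute your own cascade and test frame.
cascade = cv2.CascadeClassifier("my_vehicle_cascade.xml")
frame = cv2.imread("test_frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# A larger scaleFactor and sensible minSize/maxSize drastically reduce the
# number of windows evaluated, which is usually where the minutes-per-frame
# cost comes from.
detections = cascade.detectMultiScale(
    gray,
    scaleFactor=1.2,     # coarser image pyramid -> faster, less precise
    minNeighbors=3,      # how many overlapping hits are required
    minSize=(48, 48),    # skip windows smaller than a plausible vehicle
    maxSize=(300, 300),  # and larger than one
)

for (x, y, w, h) in detections:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```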
If you would like to try a different approach (and have a GPU), YOLO v2 should be much faster and more accurate for this task.
I am using FCN (Fully Convolutional Networks) for image segmentation. During training, some areas are mislabeled, and further training doesn't do much to make them go away. I believe this is because the network learns some features which might not be entirely correct, but because there are enough correctly classified examples, it gets stuck in a local minimum and can't get out.
One solution I can think of is to train for an epoch, then run the network on the training images, and then adjust the loss weights so that mismatched regions are penalized more heavily in the next epoch.
Intuitively, this makes sense to me, but I haven't found anything written about it. Is this a known technique? If yes, what is it called? If not, what am I missing (what are the downsides)?
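For concreteness, here is a minimal sketch of the reweighting idea described above, written in a PyTorch style; the function and variable names are illustrative, not from any particular FCN implementation:

```python
import torch
import torch.nn.functional as F

def weighted_seg_loss(logits, target, prev_pred, boost=2.0):
    """Per-pixel cross-entropy, with pixels that were mislabeled in the
    previous epoch (prev_pred != target) weighted `boost` times heavier.

    logits:    (N, C, H, W) raw network outputs
    target:    (N, H, W)    ground-truth class indices
    prev_pred: (N, H, W)    argmax predictions from the previous epoch
    """
    per_pixel = F.cross_entropy(logits, target, reduction="none")  # (N, H, W)
    weights = torch.where(prev_pred != target,
                          torch.full_like(per_pixel, boost),
                          torch.ones_like(per_pixel))
    return (weights * per_pixel).mean()

# Toy shapes just to show it runs: 1 image, 3 classes, 4x4 pixels.
logits = torch.randn(1, 3, 4, 4, requires_grad=True)
target = torch.randint(0, 3, (1, 4, 4))
prev_pred = torch.randint(0, 3, (1, 4, 4))
loss = weighted_seg_loss(logits, target, prev_pred)
loss.backward()
```

The `boost` factor and how often `prev_pred` is refreshed (e.g. once per epoch) are exactly the knobs the question is asking about.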
It depends heavily on your network structure. If you are using the original FCN, the pooling operations degrade the segmentation performance on object boundaries. There have been quite a few variants of the original FCN for image segmentation, although they didn't go the route you're proposing.
To name a couple of examples: one approach is to use a Conditional Random Field (CRF) on top of the FCN output to refine the segmentation. You may search for the relevant papers to get a better idea of it. In some sense it is close to your idea, but the difference is that the CRF is separate from the network, as a post-processing step.
Another very interesting work is U-net. It borrows an idea from residual networks (ResNet): high-resolution features from lower levels are integrated into higher levels to achieve more accurate segmentation.
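To make that concrete, the core trick in U-net is to upsample the coarse decoder features and concatenate them with the matching encoder features before further convolutions. A minimal PyTorch-style sketch, with purely illustrative shapes:

```python
import torch
import torch.nn.functional as F

# Illustrative tensors: a coarse decoder feature map and the corresponding
# higher-resolution encoder feature map saved from the downsampling path.
decoder_feat = torch.randn(1, 128, 32, 32)   # low resolution, many channels
encoder_feat = torch.randn(1, 64, 64, 64)    # high resolution, fewer channels

# Upsample the decoder features to the encoder resolution and concatenate
# along the channel dimension; subsequent convolutions then see both the
# coarse semantic information and the fine spatial detail.
upsampled = F.interpolate(decoder_feat, size=encoder_feat.shape[2:],
                          mode="bilinear", align_corners=False)
fused = torch.cat([upsampled, encoder_feat], dim=1)   # (1, 192, 64, 64)
print(fused.shape)
```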
This is still a very active research area, so you may bring the next breakthrough with your own idea. Who knows! Have fun!
First, if I understand correctly, you want your network to overfit your training set? That is generally something you don't want, because it would mean that during training your network has found some "rules" that give it great results on the training set but that do not generalize, so it will probably perform poorly on new samples. Moreover, you never mention any test set; have you divided your dataset into training and test sets?
Secondly, to give you something to look into, the idea of penalizing more where the network performs poorly reminds me of something called "AdaBoost" (it might be unrelated). This short video might help you understand what it is:
https://www.youtube.com/watch?v=sjtSo-YWCjc
Hope it helps
I have been using a cascade classifier to train a detector for a certain kind of plant. Here is a sample image of what I want to detect:
I sampled the little green plants as positives, and made negatives out of images with a similar background and no green plants (as suggested by many sources). I used many images similar to this one for sampling.
I did not have a lot of training data, so of course I did not expect ideal classification results.
I set the usual parameters: min_hit_rate 0.95, max_false_alarm 0.5, etc. I have tried training with 5, 6, 7, 8, 9 and 10 stages. The strange thing is that during training I get a hit rate of 1 in every stage, and after 5 stages I get a good acceptance ratio of 0.004 (similar for the later stages 6, 7, 8, ...).
I tried testing my classifier on the same image I used for the training samples, and the behavior is very illogical:
-the classifier detects almost everything EXCEPT the positive samples I took from it (the very samples that had a HIT RATE of 1 during training);
-the classifier is really, really slow: it took over an hour for a single input image (down-sampled, scale factor 1.1).
I do not get how the same samples can be classified as positives during training (through all the stages) and then NONE of them as positives in the image (while there are a lot of false positives around them).
I checked everything a million times (I thought that I somehow mixed positives and negatives but I did not).
Can someone help me with this issue?
I can try and help but of course I can't train this thing for you unless you send me your images.
In my experience, if you aren't getting the desired results, you are simply giving traincascade the wrong images or not enough of them (positives, negatives, or both).
I did not get great results until I created an annotation file using the built-in opencv_annotation tool. Have you done that? How many positives?
Did your negatives contain the background that you are attempting to detect your object in? This is key and can't be overlooked.
Also, I would use LBP; it's much faster.
If you or anyone is still stuck and have some positives created, send them to me and I'll see if I can train this thing.
And also, I have written hopefully a one-stop tutorial about this stuff after my experiences with it:
http://johnallen.github.io/opencv-object-detection-tutorial/
I have written multi-view face detection code using the OpenCV face detector. I run five detectors (trained for different pose angles) over an image and combine their weights to detect faces in the image. I have parallelized the code using TBB parallel_for, but that improved performance by only a factor of 1.7. I would like to ask if there is a better way to run the five detectors in parallel.
I am running my code on a cluster with 16 cores. I think the number of threads (5 in my case) is too small to utilize the full power of the machine.
Any suggestions?
Thanks,
Some possible problems to look into:
One of the detectors takes longer than the other detectors to run. For example, if one detector takes 4 units of time, and the other four detectors each take 1 unit of time, the most possible speedup is 2x. Parallelizing the slow detector itself might help in this kind of situation.
The detectors run so fast that the parallel_for does not have time to spread the work. If the detectors each take at least 0.1 sec, this should not be a problem.
Memory bandwidth can be a limiting resource, particularly if working sets do not fit in outer-level cache.
A profiler such as Intel(R) VTune(TM) Amplifier can sometimes help to track down these problems. Both commercial and non-commercial licenses exist for Amplifier. [Disclaimer: I work for Intel.]
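To put a number on the load-imbalance point above, the achievable speedup is bounded by the slowest detector no matter how many cores are available. A quick sketch with made-up timings:

```python
# Hypothetical per-detector runtimes in arbitrary units; with 16 cores each
# detector gets its own core, so the parallel wall time is the slowest one.
detector_times = [4.0, 1.0, 1.0, 1.0, 1.0]

serial_time = sum(detector_times)        # 8.0
parallel_time = max(detector_times)      # 4.0 (one detector per core)
print(serial_time / parallel_time)       # 2.0 -> best possible speedup here

# With perfectly balanced detectors the bound improves to the task count:
balanced = [1.6] * 5
print(sum(balanced) / max(balanced))     # 5.0
```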
I have to apply a convolution filter to each row of many images. The classic case is 360 images of 1024x1024 pixels; in my use case it is 720 images of 560x600 pixels.
The problem is that my code is much slower than what is advertised in articles.
I implemented the naive convolution, and it takes 2m 30s. I then switched to an FFT using FFTW. I used a complex-to-complex transform, filtering two rows in each transform. I'm now at around 20s.
The thing is that the articles advertise around 10s, or even less, for the classic case.
So I'd like to ask the experts here if there could be a faster way to compute the convolution.
Numerical Recipes suggests avoiding the reordering done in the DFT and adapting the frequency-domain filter function accordingly, but there is no code example of how this could be done.
Maybe I am losing time copying data. With a real-to-real transform I wouldn't have to copy the data into complex values, but I would still have to pad with zeros anyway.
EDIT: see my own answer below for progress feedback and further information on solving this issue.
Question (precise reformulation):
I'm looking for an algorithm or piece of code to apply a very fast convolution to a discrete, non-periodic function (512 to 2048 values). Apparently the discrete Fourier transform is the way to go, but I'd like to avoid the data copy and conversion to complex values, and to avoid the butterfly reordering.
FFT is the fastest technique known for convolving signals, and FFTW is the fastest free library available for computing the FFT.
The key to getting maximum performance (outside of hardware; the GPU is a good suggestion) will be to pad your signals to a power of two. When using FFTW, use the 'patient' setting when creating your plan to get the best performance. It's highly unlikely that you will hand-roll a faster implementation than what FFTW provides (forget about N.R.). Also be sure to use the real version of the forward 1D FFT, not the complex version, and only use single (floating-point) precision if you can.
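To illustrate the real-FFT-plus-padding structure, here is a minimal sketch of row-wise frequency-domain filtering. It uses NumPy rather than FFTW purely to show the shape of the computation; note that the padding has to cover row length + kernel length - 1 to avoid circular wrap-around, rounded up to a power of two:

```python
import numpy as np

def fft_filter_rows(image, kernel):
    """Convolve each row of `image` with the 1-D `kernel` using a real FFT.

    Rows are zero-padded to the next power of two that is at least
    len(row) + len(kernel) - 1, so the circular convolution computed by the
    FFT equals the linear convolution on the part we keep.
    """
    rows, cols = image.shape
    full = cols + kernel.size - 1
    n = 1 << (full - 1).bit_length()          # next power of two >= full

    kernel_f = np.fft.rfft(kernel, n)          # transform the filter once
    rows_f = np.fft.rfft(image, n, axis=1)     # one real FFT per row
    filtered = np.fft.irfft(rows_f * kernel_f, n, axis=1)
    return filtered[:, :full].astype(np.float32)

# Example on the sizes from the question: a 560x600 image, small kernel.
img = np.random.rand(560, 600).astype(np.float32)
k = np.array([1.0, 4.0, 6.0, 4.0, 1.0], dtype=np.float32) / 16.0
out = fft_filter_rows(img, k)
print(out.shape)   # (560, 604)
```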
If FFTW is not cutting it for you, then I would look at Intel's (very affordable) IPP library. They have hand-tuned FFTs for Intel processors, optimized for images with various bit depths.
Paul
CenterSpace Software
You may want to add image processing as a tag.
But this article may be of interest, especially with the assumption that the image size is a power of 2. You can also see where they optimize the FFT. I expect that the articles you are looking at made some assumptions and then optimized the equations for those.
http://www.gamasutra.com/view/feature/3993/sponsored_feature_implementation_.php
If you want to go faster you may want to use the GPU to actually do the work.
This book may be helpful for you, if you go with the GPU:
http://www.springerlink.com/content/kd6qm361pq8mmlx2/
This answer is to collect progress report feedback on this issue.
Edit 11 oct.:
The execution time I measured does not reflect the effective time of the FFT. I noticed that when my program ends, the CPU is still busy with system time (up to 42%) for about 10s. If I wait until the CPU is back to 0% before restarting my program, I then get a 15.35s execution time, which comes from the GPU processing. I get the same time if I comment out the FFT filtering.
So the FFT is in fact currently faster than the GPU and was simply hindered by a competing system task. I don't know yet what this system task is. I suspect it results from the allocation of a huge heap block where I copy the processing result before writing it to disk. For the input data I use a memory map.
I'll now change my code to get an accurate measurement of the FFT processing time. Making it faster is still relevant, because there is room to optimize the GPU processing, for instance by pipelining the transfer of the data to be processed.