I am working with the BackgroundSubtractorMOG2 class in OpenCV (Python) and am trying to extract the individual components of the background model. As I understand it, each pixel is modeled by a mixture of a varying number of Gaussian distributions, each defined by a mean and a variance. So, how can I determine what all of these components (means and variances) are after feeding the background subtractor a given number of frames?
The documentation here:
https://docs.opencv.org/3.4.3/d7/d7b/classcv_1_1BackgroundSubtractorMOG2.html#adbb1d295befaff88a54a929e50aaf879
does not seem to discuss this.
This information must be contained somewhere in the background subtractor object. Does anyone know how to get to it?
Thanks!
Edit: A little more searching has led me to believe that the cv2.Algorithm class is required to read the parameters from the BackgroundSubtractorMOG2 object. I think the two questions posed here:
http://answers.opencv.org/question/28008/how-to-derive-from-algorithm/
Reading algorithm parameters from file in OpenCV
are similar to what I am asking, but I am unable to interpret the answers. I thought the solution would be something along the lines of:
Parameters = cv2.Algorithm.read('name_of_backgroundsubtractorMOG2_object')
but this returns the error: 'Required argument 'fn' (pos 1) not found'
Edit 2: Unfortunately I think this question has been answered here:
Save opencv BackgroundSubtractorMOG to file?
Short answer: It cannot be done! Sad!
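For completeness: the global parameters of the model can still be inspected through standard getters, even though the per-pixel means and variances have no Python accessor. A minimal sketch (the video path is a placeholder):

    import cv2

    cap = cv2.VideoCapture("input.avi")  # placeholder video
    subtractor = cv2.createBackgroundSubtractorMOG2()

    for _ in range(100):
        ok, frame = cap.read()
        if not ok:
            break
        # apply() updates the per-pixel mixture model and returns the foreground mask
        fg_mask = subtractor.apply(frame)

    # Only the global model parameters are exposed:
    print(subtractor.getNMixtures())     # maximum number of Gaussians per pixel
    print(subtractor.getHistory())       # number of frames used to build the model
    print(subtractor.getVarThreshold())  # squared Mahalanobis distance threshold
    # ...the per-pixel means and variances themselves are not reachable from Python.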
I am working on a project to detect FOD (Foreign Object Debris) found on runways. FOD includes anything like nuts, bolts, screws, locking wires, plastic debris, stones, etc. that has the potential to damage an aircraft. I have searched the Internet for an image dataset, but none related to FOD is available. My question: how can I make my own dataset of images that can then be used for training?
Kindly guide me in making an image dataset for both classification and detection purposes, and also in the data pre-processing that will be required. Thanks, and waiting for your reply!
Although the question is a bit vague regarding your requirements and the specs of your machine, I'll try to answer it. You'll need object detection for this task. There are many models available that you can use, like YOLO, SSD, etc.
To create your own dataset, you can follow these steps:
Take lots of images of your objects of interest in various conditions, from various viewpoints, and against various backgrounds (around 2000 per class should be good enough).
Now annotate (or mark) where your object is in each image. If you're using YOLO, use Yolo-mark for annotating; there should be similar tools for SSD and other models (see the label-format sketch after these steps).
Now you can begin training.
These steps should get you started or at least point you in the right direction.
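To make the annotation step concrete: YOLO-style tools produce one .txt label file per image, with one line per object of the form "class_id x_center y_center width height", all coordinates normalized to [0, 1]. A small sketch that writes such a line (the box and class values are made up for illustration):

    # Hypothetical bounding box for one object in a 1920x1080 image.
    img_w, img_h = 1920, 1080
    x, y, w, h = 800, 500, 120, 60  # top-left corner and size, in pixels
    class_id = 0                    # e.g. 0 = "bolt"

    # YOLO format: class_id, then center and size normalized by image dimensions.
    line = "{} {:.6f} {:.6f} {:.6f} {:.6f}".format(
        class_id,
        (x + w / 2) / img_w,
        (y + h / 2) / img_h,
        w / img_w,
        h / img_h,
    )
    with open("image_0001.txt", "w") as f:
        f.write(line + "\n")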
You can build your own dataset with this code. I wrote it, and it works correctly.
Just set DATADIR, CATEGORIES, and IMG_SIZE at the top to match your data.
import os
import pickle
import cv2

DATADIR = "path/to/your/data"        # root folder, one subfolder per class
CATEGORIES = ["class_a", "class_b"]  # subfolder names, used as class labels
IMG_SIZE = 100                       # images are resized to IMG_SIZE x IMG_SIZE

training_data = []
x_train = []
y_train = []

if __name__ == "__main__":
    # Walk each class folder, resize every image, and record (image, label) pairs.
    for category in CATEGORIES:
        path = os.path.join(DATADIR, category)
        class_num = CATEGORIES.index(category)
        for img in os.listdir(path):
            try:
                img_array = cv2.imread(os.path.join(path, img))
                new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
                training_data.append([new_array, class_num])
            except Exception:
                # Skip unreadable or corrupt image files.
                continue

    # Split the (image, label) pairs into feature and label lists.
    for features, label in training_data:
        x_train.append(features)
        y_train.append(label)

    # Serialize the dataset with pickle.
    pickle_out = open("x_train.pickle", "wb")
    pickle.dump(x_train, pickle_out)
    pickle_out.close()

    pickle_out = open("y_train.pickle", "wb")
    pickle.dump(y_train, pickle_out)
    pickle_out.close()
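A quick check that the pickles load back correctly (same file names as above):

    import pickle

    with open("x_train.pickle", "rb") as f:
        x_train = pickle.load(f)
    with open("y_train.pickle", "rb") as f:
        y_train = pickle.load(f)

    print(len(x_train), len(y_train))  # should match: one label per image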
If you're starting completely from scratch, you can use "Dataset Directory", available on the Play Store. The app helps you create custom datasets using your phone. You sign in to your Google Drive so that the dataset is stored in Drive rather than on the phone. It also supports labelling the entities for classification and regression predictive models.
Currently, the app supports binary image classification and image regression.
Hope this helps!
Download link:
https://play.google.com/store/apps/details?id=com.applaud.datasetdirectory
Could someone point me to examples of how to configure EncogModel with selectMethod? This is an overloaded method: the first version just takes the dataset and the method type, while the second also allows the following:
dataset
methodType
methodArgs
trainingType
trainingArgs
I am unable to get this working; the error "Layer can't have zero neurons, Unknown architecture element:" appears. Any help is appreciated, thank you.
Also, could someone give some insight on how to dump the weights in this approach? When the model is built by constructing the network directly (BasicNetwork), it is possible to dump the weights via network.flat. In this EncogModel-driven approach, how do we dump the weights, gradients, etc.? Thank you.
There are three examples for EncogModel; you can find them here:
If that does not help, let me know more specifically what you are trying to do, or provide some code that is not working, and I will update this with a more specific answer.
The weights can be accessed directly with BasicNetwork.dumpWeights or BasicNetwork.dumpWeightsVerbose(), or individually with BasicNetwork.getWeight.
I am working on post-processing of disparity maps.
My disparity image, even though it is WLS filtered, has too many 'holes'.
This is what I get for now. The images are rectified, though in a fisheye way, but the disparity map has many holes. The matching algorithm is SGBM. The WLS filter sigma is 2.1 and lambda is 30000. The black regions are holes.
I am referring to the official OpenCV documentation on disparity map post-filtering, which uses DisparityWLSFilter extensively. But I wonder how it works internally and would like to read the theoretical paper behind this implementation. I want to know what sigma and lambda do and how they will filter my image.
Also, is there any other good disparity filter I can use? The WLS filter cannot fill the holes effectively. Or is there an algorithm that is easy to use or implement, or a library that is not GPL?
Self-reply: I got an answer from OpenCV. The original question is HERE. The reply says:
References have been added here, documentation reference
cc #sbokov
Check out the comments here and the code here; that should answer some of your questions. To see how the code's author arrived at this method, you should perhaps contact him directly, as there is no reference for it in the code comments. A typical usage sketch follows.
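For reference, a typical pipeline with the opencv-contrib ximgproc module looks like the sketch below. Lambda controls the regularization strength (how aggressively the disparity is smoothed while snapping to edges of the guide image) and sigma controls the filter's sensitivity to color differences in that guide image. The file names and SGBM settings are placeholders:

    import cv2

    imgL = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)   # placeholder images
    imgR = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Left and right SGBM matchers; the right one is derived from the left.
    left_matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=5)
    right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)

    # WLS filter tied to the left matcher, with the values from the question.
    wls = cv2.ximgproc.createDisparityWLSFilter(left_matcher)
    wls.setLambda(30000)
    wls.setSigmaColor(2.1)

    dispL = left_matcher.compute(imgL, imgR)
    dispR = right_matcher.compute(imgR, imgL)

    # The left view serves as the guide image; both disparities are used
    # for the left-right consistency check.
    filtered = wls.filter(dispL, imgL, disparity_map_right=dispR)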
I am trying to determine whether food packaging has an error or not. For example, whether the McDonald's logo has misprints, such as a wrong label or wrong colors (I cannot post a picture).
What should I do? Please help me!
It's not a trivial task by any stretch of the imagination. Two images of the same identical object will always differ according to lighting conditions, perspective, shooting angle, etc.
Basically you need to:
1. Process the 2 images into "digested" data: dominant color, shapes, etc.
2. Design and run your own similarity algorithm between the 2 objects
You may want to look at feature detectors in OpenCV: SURF, SIFT, etc. (a minimal matching sketch follows).
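As an illustration of step 2, here is a minimal similarity check using ORB (a patent-free alternative to SURF/SIFT) with brute-force matching; the file names and the score threshold are placeholders to tune on real data:

    import cv2

    reference = cv2.imread("reference_logo.png", cv2.IMREAD_GRAYSCALE)   # known-good print
    candidate = cv2.imread("production_logo.png", cv2.IMREAD_GRAYSCALE)  # image to check

    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(reference, None)
    kp2, des2 = orb.detectAndCompute(candidate, None)

    # Hamming distance is the right metric for ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    # Crude similarity score: fraction of reference keypoints that found a match.
    score = len(matches) / max(len(kp1), 1)
    print("similarity score:", score)
    if score < 0.5:  # placeholder threshold
        print("possible misprint")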
I just found your question via a search result, so I may be too late.
If not, I think your problem can easily be solved; the solution has existed for years and is called Sikuli.
While it is intended for testing purposes, I have been using it in exactly the way you need: comparing a reference image and a production image. Based on OpenCV, it does this very well.
I'm working with OpenCV and I'm a newbie in this field. I'm researching CamShift and I want to extend the method by using multiple histograms. That is, when the tracked object has more than one appearance (e.g. a Rubik's cube with six faces), CamShift will most likely fail if we use only one histogram.
I know the calcHist function in OpenCV (http://docs.opencv.org/modules/imgproc/doc/histograms.html#calchist) has an "accumulate" parameter, but I don't know how or when to use it (applied to camshiftdemo.cpp in the OpenCV samples folder). Can this function help me solve this problem, or do I need a different solution?
I have an idea: create an array of histograms for the object, and for every appearance that strongly varies in color, pre-compute a histogram and store it in this array (a rough sketch of this idea follows below). But when should a new histogram be computed? In other words, what is the pre-condition for computing a new histogram?
And what happens if I have to track multiple objects with the same color?
Everybody please help me. Thank you so much!
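To make the multiple-histogram idea concrete, here is a rough sketch: pre-compute one hue histogram per appearance, then on each frame pick the histogram whose back-projection responds most strongly inside the current track window before running CamShift. The appearance patch files are hypothetical placeholders:

    import cv2

    def hue_histogram(bgr_roi):
        # Normalized hue histogram of a patch, as in camshiftdemo.cpp.
        hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0], None, [180], [0, 180])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        return hist

    # One histogram per appearance (e.g. the six faces of a Rubik's cube).
    appearance_rois = [cv2.imread("face_%d.png" % i) for i in range(6)]  # placeholders
    histograms = [hue_histogram(roi) for roi in appearance_rois]

    def best_backprojection(frame, window, histograms):
        # Pick the back-projection that is strongest inside the current window.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        x, y, w, h = window
        best, best_score = None, -1.0
        for hist in histograms:
            prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
            score = float(prob[y:y + h, x:x + w].sum())
            if score > best_score:
                best, best_score = prob, score
        return best

    # Per frame: choose the best histogram, then run CamShift as usual.
    # prob = best_backprojection(frame, track_window, histograms)
    # term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    # ret, track_window = cv2.CamShift(prob, track_window, term)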