I have to find multiple objects of one type in an image, and for this purpose I am using OpenCV (Python). I started with template matching, which doesn't work well once the orientation of the object changes.
So I used the method described here. Unfortunately, it is not working as expected, and every time I rerun my code the output (i.e. the bounding box around the detected object) changes.
Here is my output:
Output images from Runs 1 through 6 (a different bounding box is drawn on each run).
I have tweaked the values and I am still not getting the desired output. Why are only 2 of the 5 objects of the same type detected, and not the rest?
I have been thinking about ways to fix it for weeks now. I also tried bf.knnMatch instead of flann.knnMatch and SURF/SIFT instead of ORB, but still no luck.
Do you guys have any idea how I can fix this? Any suggestions will be appreciated.
Note: I have not made any changes to the code given in the link.
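For reference, the kind of pipeline being described (ORB keypoints, knnMatch with a ratio test, then a RANSAC homography to place the box) looks roughly like the sketch below. The file names and thresholds are placeholders, and this draws a single box; one common way to get multiple instances is to remove the inlier keypoints and repeat the matching.

import cv2
import numpy as np

# Placeholder file names -- substitute your own template and scene images.
template = cv2.imread("template.jpg", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp_t, des_t = orb.detectAndCompute(template, None)
kp_s, des_s = orb.detectAndCompute(scene, None)

# Brute-force Hamming matcher with Lowe's ratio test.
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
good = []
for pair in bf.knnMatch(des_t, des_s, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

MIN_MATCH_COUNT = 10  # arbitrary minimum number of good matches
if len(good) >= MIN_MATCH_COUNT:
    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Estimate a homography with RANSAC and project the template corners
    # into the scene to draw the bounding box.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is not None:
        h, w = template.shape
        corners = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
        box = cv2.perspectiveTransform(corners, H)
        cv2.polylines(scene, [np.int32(box)], True, 255, 3)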
Related
I am using OpenCV and OpenVINO. When a face is detected, I use cv2.rectangle and send its coordinates to move the motors, but I only want to do this for the first person bounded by a box: when it sees multiple people it sends multiple sets of coordinates, which makes the servo and stepper motors go crazy. Any help would be appreciated. Thank you.
Generally, code runs line by line. You'll need to create a proper function for each scenario so that the data can be handled and processed properly. In short, you'll need to implement error handling and data handling (probably more than that, depending on your software/hardware design). If you are trying to run multiple threads of execution at the same time, it is better to use multithreading.
Besides, you are using two types of motors. Simply taking in all the data is inefficient and prone to losing data. You'll need to be clear about what the servo motor and stepper motor tasks are, what the relations between the coordinates are, who triggers what, what to do (task X) if something fails or a sequence is missing, and so on.
For example, the sequence of Data A should produce Result A, but it is halted halfway because Data B went into the buffer, interfered with Result A, and at the same time broke Result B, which was expected to happen. (This is what is happening in your program.)
It's good to review and design your whole process by creating a coding flowchart (a diagram that represents an algorithm). It will give you a clear idea of what should happen for each sequence of code. Then, design a proper handler for each situation.
Can you share more insight into your (pseudo-)code, please?
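For illustration, here is a minimal sketch of that kind of handler split (the function names, the (x, y, w, h) detection format, and the "first face only" rule are assumptions for the example, not taken from your code):

import cv2

def handle_servo(cx, cy):
    # Hypothetical: turn the face centre into a single pan command for the servo.
    pass

def handle_stepper(cx, cy):
    # Hypothetical: turn the face centre into a single step command for the stepper.
    pass

def process_detections(faces, frame):
    """faces: list of (x, y, w, h) boxes from the detector, frame: BGR image."""
    if not faces:
        return  # nothing detected this frame, so send nothing to the motors
    x, y, w, h = faces[0]                      # act on the first face only
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cx, cy = x + w // 2, y + h // 2
    handle_servo(cx, cy)                       # one command per motor per frame
    handle_stepper(cx, cy)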
It sounds easy: you trigger a face-detection inference request and get back a list/vector with all detected faces (the region of interest for each detected face), including false positives, which require some consistency checks to filter out.
If you are interested in the first detected face only, it may be enough to just process the first result returned in that list/vector.
However, you will see that the order of results can change: when two faces A and B are detected, the next run could still return both faces, but with B first and then A.
You could add object-tracking on top of face-detection to make sure you always process the same face.
(But even that could fail sometimes)
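A minimal sketch of that idea, assuming each detection comes back as a box plus a confidence score (the exact format depends on the model): sort the detections by a stable criterion before taking "the first" one.

def pick_primary_face(detections):
    """detections: list of dicts like {"box": (x, y, w, h), "conf": float}."""
    if not detections:
        return None
    # Largest box first, ties broken by left-most position, so the same
    # physical face tends to stay "first" from frame to frame.
    return sorted(
        detections,
        key=lambda d: (-d["box"][2] * d["box"][3], d["box"][0]),
    )[0]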
I'm quite new to Meshlab, and unsurprisingly ran into a few obstacles. Anyhow, I have been looking for a solution online but didn't find one.
I have a very large .ply file containing a mesh that I made in CloudCompare from a 3D scan of a rock art cave, and I want to transform the vertex colour into a texture.
However, when I try to use the "Texture: Vertex Color to Texture" filter, type in the path where I want my texture to be saved, and hit "apply", it displays an error message saying
"path in texture file not allowed".
I imagine this refers to the path I tried to save my texture to. Neither the name nor the path contains a space. I've tried working around it by saving it elsewhere, but didn't manage to get it to work.
Also, I was going to reduce the vertex count afterwards, but I need to extract a rather high-resolution texture first.
Has anyone ever encountered this, or has an idea as to why this might happen?
Thanks in advance!
Type any word into that box and don't type in a path.
Example: my path in that box was F:\Personal\3D_scans\textures\tree.png
After literally two hours I got exhausted, deleted it, and wrote "ffffffffffffff".
It worked, and I realised the box is only asking for the name you want to give the texture.
Had the same problem today; I managed to solve the issue by checking "run in compatibility mode" with Windows 8 (I am on Windows 10). Maybe that will help!
I'm trying to get Autoware to send out a tracked-object list for the LGSVL Simulation. I turn on Yolo3, Euclidean cluster detection, then pixel_cloud_fusion. When I do, it constantly states that it's looking for TF and intrinsics data. Looking further, this seems to be a "camera_info" topic that is missing. So I made one up just to try to get it working (I'm not sure whether LGSVL has any kind of native support for this?). I used a bunch of 1s for the matrices and "plumb bob" for the distortion model, and matched the width/height to the published camera images. Once I send it, however, I get the error:
[pixel_cloud_fusion] "camera" passed to lookupTransform argument target_frame does not exist
I have no idea what this means and the text does not appear in the Autoware software. Am I doing something wrong? Is there another topic I'm lacking?
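For reference, a hand-rolled CameraInfo publisher of the sort described above could look like the sketch below. The topic name, frame name, resolution and intrinsic values are placeholders picked for illustration, not values from the LGSVL setup, and the frame_id should name a frame that actually exists in TF.

import rospy
from sensor_msgs.msg import CameraInfo

rospy.init_node("fake_camera_info")
pub = rospy.Publisher("/camera/camera_info", CameraInfo, queue_size=1)  # assumed topic name

info = CameraInfo()
info.header.frame_id = "camera"        # must be a frame that exists in TF
info.width, info.height = 1920, 1080   # match the published camera images
info.distortion_model = "plumb_bob"
info.D = [0.0, 0.0, 0.0, 0.0, 0.0]
fx = fy = 1000.0                       # placeholder focal length in pixels
cx, cy = info.width / 2.0, info.height / 2.0
info.K = [fx, 0.0, cx, 0.0, fy, cy, 0.0, 0.0, 1.0]
info.R = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
info.P = [fx, 0.0, cx, 0.0, 0.0, fy, cy, 0.0, 0.0, 0.0, 1.0, 0.0]

rate = rospy.Rate(10)
while not rospy.is_shutdown():
    info.header.stamp = rospy.Time.now()
    pub.publish(info)
    rate.sleep()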
P.S. Maybe someone with 1500 rep should create an Autoware tag.
It seems like this might be an issue with the TF tree being incomplete. For lookupTransform to work, it needs a well-defined TF tree connecting the target frame to whatever other fixed frame you use. To add your camera to the TF tree, you should be able to use the static transform publisher.
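As a minimal sketch of doing that from Python with tf2_ros (the parent frame "velodyne", the child frame "camera" and the zero offsets are assumptions; they have to match your real sensor layout and the frame named in the error):

import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node("camera_static_tf")
broadcaster = tf2_ros.StaticTransformBroadcaster()

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = "velodyne"   # assumed parent frame (e.g. the lidar)
t.child_frame_id = "camera"      # the frame pixel_cloud_fusion is looking up
t.transform.translation.x = 0.0  # fill in the real camera offset here
t.transform.translation.y = 0.0
t.transform.translation.z = 0.0
t.transform.rotation.w = 1.0     # identity rotation as a placeholder

broadcaster.sendTransform(t)
rospy.spin()

The same transform can also be published from a launch file with the static_transform_publisher node from tf2_ros.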
I am trying to determine whether a piece of food packaging has an error or not. For example:
does the "McDonald's" logo have misprints, such as a wrong label or wrong color? (I cannot post a picture.)
What should I do? Please help me!
It's not a trivial task by any stretch of the imagination. Two images of the same identical object will always differ according to lighting conditions, perspective, shooting angle, etc.
Basically you need to:
1. Process the 2 images into "digested" data - dominant color, shapes, etc.
2. Design and run your own similarity algorithm between the 2 objects
You may want to look at feature detectors in OpenCV: SURF, SIFT, etc.
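As a rough sketch of that two-step idea (using ORB here rather than SURF/SIFT, since ORB ships with a default OpenCV build; the file paths and the ratio threshold are placeholders):

import cv2

def similarity_score(reference_path, test_path, ratio=0.75):
    """Rough similarity between a reference logo and a photo of the printed one."""
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    test = cv2.imread(test_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp_ref, des_ref = orb.detectAndCompute(ref, None)
    kp_test, des_test = orb.detectAndCompute(test, None)
    if des_ref is None or des_test is None:
        return 0.0

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des_ref, des_test, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    # Fraction of reference keypoints with a convincing match; a low score
    # suggests the print differs from the reference. A color check would
    # need a separate comparison (e.g. of dominant colors).
    return len(good) / max(len(kp_ref), 1)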
I just found your question while searching for something else, so I may be too late.
If not, I think your problem can easily be solved: the tool has existed for years and is called Sikuli.
While it is meant for testing purposes, I have been using it in the same way you need: comparing a reference image and a production image. Based on OpenCV, it does this very well.
I have run the following model using lmerTest and using lme4:
model2 = lmer(log(RT)~Group*A*B*C+(1|item)+(1+A+B+C|subject),data=dt)
Using lmerTest I get the following error when typing the summary() command:
> summary(model1)
Error in `colnames<-`(`*tmp*`, value = c("Estimate", "Std. Error", "df", :
length of 'dimnames' [2] not equal to array extent
I saw this has already been an issue for other users and that one user was able to bypass the issue by running lsmeans().
When I tried lsmeans, I got the error:
Error in asMethod(object) : not a positive definite matrix.
I did not see any NAs when looking into the covariance matrix.
Note that I am able to run this model if I simply invert the contrasts in the Group factor.
I have difficulties understanding why this is the case.
When I run the same model using lme4 and not lmerTest, I am able to get all the outputs of summary() but no p-values (as expected). pvals.fnc is discontinued in lme4 and I have not found an alternative yet. Plus it would be nice to have the p-values estimated in the same way for model2 as for the other models for which I was successfully able to use lmerTest.
Does anyone know what I should do at this point? Any help would be much appreciated!
If A, B, or C are factors, then you might get errors - such models are not yet supported by the lmerTest package (we will put a warning message, together with the restrictions for such models, in the help page).