Is there any method in OpenCV with which, when we click on a particular location of an image, it gives the pixel location as well as the B, G, R values? Thank you!
There is a similar, answered post here.
Basically, you need to use setMouseCallback() and create your own callback function.
You can start from here to get the mouse location: http://www.wisegai.com/2012/10/29/using-mouse-callbacks-with-opencv-and-the-cvhighgui-module/
(I don't know which coordinate system the click will be in, i.e. that of the window, the image, or the screen.)
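For reference, a minimal C++ sketch of that approach (the image path and window name are placeholders); in practice the callback receives the click in image coordinates of the displayed Mat:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    // Print the pixel location and its B, G, R values on left click.
    static void onMouse(int event, int x, int y, int, void* userdata)
    {
        if (event != cv::EVENT_LBUTTONDOWN)
            return;

        const cv::Mat& img = *static_cast<cv::Mat*>(userdata);
        cv::Vec3b bgr = img.at<cv::Vec3b>(y, x);   // row = y, col = x
        std::cout << "(" << x << ", " << y << ")  "
                  << "B=" << (int)bgr[0] << " G=" << (int)bgr[1]
                  << " R=" << (int)bgr[2] << std::endl;
    }

    int main()
    {
        cv::Mat img = cv::imread("image.png");     // placeholder path
        if (img.empty()) return -1;

        cv::namedWindow("image");
        cv::setMouseCallback("image", onMouse, &img);
        cv::imshow("image", img);
        cv::waitKey(0);
        return 0;
    }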
And these might help for the second part.
How to read the screen pixels?
Screen Capture Specific Window
In terms of coordinates, the window frame might have an impact; you'll have to experiment or google a little further.
I am trying to detect an object in a video. I am using SURF as the feature detector and descriptor extractor, and BruteForce as the matcher. I tested my work with faces: I captured a picture of myself, and when I run the camera and point it toward me, my face gets detected and a rectangle is drawn around it. I tried another test: I captured an image of my mouse and resized it, but when I run the cam, it is not getting detected.
The problems I am facing are:
1 - Does the size of the query/object image matter in such cases? I am asking this because the image I captured of myself is bigger than the one of the mouse, and the face gets detected while the mouse does not.
2 - Regardless of which image I use as the query/object image, how do I display a camera preview of only the train/scene image, without the query/object image? I am asking this because what I am getting is something like what is shown in the images posted below, while what I want to do is something like what is shown here. I checked the code in that link; it is in C++, but I followed the same approach. The tutorial also uses the 'drawMatches' method, which has a Java counterpart, Features2d.drawMatches(), and both return a Mat object with the query/object image on the left side and the train/scene image on the right side, as also shown in the image I posted below.
What I want is to display the camera output without the query/object image; the area designated for the camera output should show only the train/scene image captured from the camera.
Please let me know how to solve these issues; I want to do something like what is shown in the tutorial I cited in the link.
1 - Size matters, but in your case I think the most crucial problem is "textureness". SURF detects interest points where the "texture gradient" is strong. In the case of your mouse, the gradient is mainly smooth, except around the logo (Fujitsu), the button and the border of the image. In the tutorial you point to, notice that it uses a very textured object to demonstrate the effect.
2 - To the best of my knowledge, there is no fully automatic method to do what you want, but it can be done in a few steps. Basically, you must determine the surrounding box of your object and then draw it. For the drawing, the easiest option is cv::rectangle, but you can be more precise with four (or more) cv::line calls. To determine the surrounding box, you can estimate the extreme points among the filtered matches.
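As a rough C++ sketch of those steps (the keypoints and filtered matches are assumed to come from your existing SURF/BruteForce stage): collect the matched keypoint locations in the scene image and draw their bounding box directly on the camera frame, instead of displaying the side-by-side drawMatches output:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Draw a bounding box around the matched keypoints on the scene frame only,
    // so the preview shows just the train/scene image from the camera.
    void drawObjectBox(cv::Mat& sceneFrame,
                       const std::vector<cv::KeyPoint>& sceneKeypts,
                       const std::vector<cv::DMatch>& goodMatches)
    {
        std::vector<cv::Point2f> scenePts;
        for (const cv::DMatch& m : goodMatches)
            scenePts.push_back(sceneKeypts[m.trainIdx].pt);  // matched point in the scene

        if (scenePts.empty())
            return;

        cv::Rect box = cv::boundingRect(scenePts);           // extreme points -> box
        cv::rectangle(sceneFrame, box, cv::Scalar(0, 255, 0), 2);
    }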
Good luck!
I am working on a small education demo which should measure the height and width of an object using the iOS camera.
EDIT:
I have a new theory to measure the width of an object.
In the above image, if I can get angle α and angle β, I can get the width of the unknown side by using trigonometry formulas. I already have the values of b1 and b2.
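For reference, and only as a guess at the geometry (the figure isn't reproduced here): if b1 and b2 are the two known sides and the angle enclosed between them can be derived from α and β, the law of cosines gives the unknown side. A tiny sketch:

    #include <cmath>

    // Law of cosines: length of the unknown side given the two known sides
    // b1 and b2 and the enclosed angle gamma (radians). How gamma relates to
    // the measured angles alpha and beta depends on the actual figure.
    double unknownSide(double b1, double b2, double gammaRad)
    {
        return std::sqrt(b1 * b1 + b2 * b2 - 2.0 * b1 * b2 * std::cos(gammaRad));
    }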
OLD:
Right now, I am focusing on measuring length only.
As far as I know, it should be a 3-step process:
User snaps one end of the object.
User snaps other end of the object.
User snaps the center of the object. (Please suggest a better way to do this.)
I get approximate measurements using the above process, but for the 3rd step, in which the user snaps the center of the object, I want to show a pointer location on screen (as a camera overlay) to help the user determine the center of the object.
This is how I am doing it right now.
How can I draw the pointer location for the 3rd step?
Note: Please suggest an alternative/better way to make this possible. I would love other suggestions. Thanks!
First of all, I must appreciate the work you have done so far. Another good thing is your way of explaining it, salute!
After reading your question, I feel that you don't need code; you can do it. I think you only need direction.
As per your explanation, you want to record the angle of rotation of the device.
If you want to measure the angle of rotation, you have to use compass readings. But compass readings will change if the user tilts the device, so you have to use the accelerometer to measure the tilt of the device.
In short, you have to combine the compass and accelerometer readings in one equation: use the compass to measure the angle, and use the accelerometer to measure the tilt of the device.
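As a rough illustration of that combination (not iOS API code, and the exact signs depend on the device's axis convention), the usual tilt-compensated compass idea is: derive pitch/roll from the accelerometer, then project the magnetometer reading back onto the horizontal plane before taking the heading:

    #include <cmath>

    struct Orientation { double pitch, roll, heading; };

    // ax/ay/az: accelerometer (gravity) components, mx/my/mz: magnetometer.
    // The formulas assume one common axis convention; adjust signs for yours.
    Orientation combineSensors(double ax, double ay, double az,
                               double mx, double my, double mz)
    {
        Orientation o;
        o.roll  = std::atan2(ay, az);
        o.pitch = std::atan2(-ax, std::sqrt(ay * ay + az * az));

        // Tilt-compensate the magnetic field, then take the heading.
        double xh = mx * std::cos(o.pitch)
                  + my * std::sin(o.roll) * std::sin(o.pitch)
                  + mz * std::cos(o.roll) * std::sin(o.pitch);
        double yh = my * std::cos(o.roll) - mz * std::sin(o.roll);

        o.heading = std::atan2(yh, xh);
        return o;
    }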
If you want further information to implement it, you can ask me.
Hope this will help you....
Ok, I'm not quite sure if this is something I can ask here, so no need to shoot me down. Just tell me and I'll delete the question :)
I had this idea of making my own clock using a touch screen and program it myself.
While thinking about this I thought of all these different styles to show the current time.
Of all the styles I came up with, there was one that I found the most fun: a clock displaying the time Rorschach style. And no, not just a random smudge where you guess what time it is, but more like Rorschach in Watchmen.
He has a mask with inkblots that constantly change shape (really cool if you ask me).
So what I had in mind is inkblots that change shape according to the digit it represents.
When the time changes from 12:49:58 -> 12:49:59, the 2nd seconds digit will transform from 8 -> 9.
So now back to the original problem:
Before attempting to get this type of clock running I want to try to give a blob a certain shape and make it transform into another shape.
I searched on Google but without any luck, so I was hoping someone here could point me in the right direction for making a random blob and transforming it into another shape in an animation.
For example:
Draw square -> animate to circle
Any tips and tricks are welcome :)
To get the simplest animation of a digit transformation, you could store all possible digits in one image (vertically) and then show only part of that image in your component. So when you want to transform one digit into another, you simply slide the image up or down.
If you are using FireMonkey, you could instead create a 3D viewport and, inside it, a cylindrical object onto which you render your texture with the digits. Then you only rotate the cylinder to show the correct digit.
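A tiny sketch of the arithmetic behind both variants (the digit height is an assumption, and wrap-around from 9 to 0 is not handled):

    // Sliding strip: digits 0..9 stacked vertically, each digitHeight pixels tall.
    // Returns the vertical offset into the strip while animating from one digit
    // to the next, with t going from 0 to 1.
    double stripOffset(int fromDigit, int toDigit, double t, double digitHeight)
    {
        return (fromDigit + (toDigit - fromDigit) * t) * digitHeight;
    }

    // Cylinder variant: 10 digits around the cylinder, so 36 degrees per digit.
    double cylinderAngle(int fromDigit, int toDigit, double t)
    {
        return (fromDigit + (toDigit - fromDigit) * t) * 36.0;
    }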
I'm showing an image using cv::imshow("binary1", binary1);. I want to put a marker on the image to check pixel locations. How can I put a marker on the image at a particular row and column value?
It's difficult to understand what it is that you want to do, but I wrote some code a while back that displays the RGB color of a pixel along with its coordinates in the title of the window. Move the mouse pointer over the image and you'll see it change.
It uses a Qt window, though. You can check cvImage.
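For the marker itself, a minimal C++ sketch (the window name matches your imshow call; everything else is a placeholder). Remember that cv::Point takes (x, y), i.e. (column, row):

    #include <opencv2/opencv.hpp>

    // Draw a small circle at a given row/column and show the result.
    // Assumes binary1 is single-channel; converting to BGR lets the marker be colored.
    void showWithMarker(const cv::Mat& binary1, int row, int col)
    {
        cv::Mat display;
        cv::cvtColor(binary1, display, cv::COLOR_GRAY2BGR);

        cv::circle(display, cv::Point(col, row), 5, cv::Scalar(0, 0, 255), 1);

        cv::imshow("binary1", display);
    }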
I want to make a screen where I show the map with all the received latitudes and longitudes, and I want to show this map at 200*200 resolution only.
Can I make the point bitmap focusable so it can be clicked on to show another screen?
How to show more than one location in Blackberry MapField?
Hope this link helps you in solving your problem. Do you have any idea how to display a small text box kind of thing when multiple points are selected?