draw line over my screen to indicate mouse movement - opencv

I'm working with opencv and I want to do the following procedure:
Every second I check the difference between the current position of the mouse and the position it was at 1 second ago.
Then I want to draw a line on my screen, from the position 1 second ago to the current position. This line should stay on the screen for 1 second, until the difference is evaluated again, and then it should be rendered at the new position.
I've already done all the logic, position calculations and everything. What I want to know is how to draw a line on my screen. I want to be able to use my PC normally while I can see those lines; that means clicking with the mouse, keyboard events and opening windows should not stop the lines from being rendered on my screen.
How can I do that?

That isn't trivial.
You need some GUI toolkit that can show a window with the following properties:
full-screen
no window decoration (title bar...)
always-on-top
"intangible", i.e. allowing mouse and keyboard events to pass through to whatever other windows are underneath it
a working alpha channel (per-pixel transparency)
That's the general idea. Pick a popular GUI toolkit, then google around. You'll likely find ways to achieve each of those points.
I don't think OpenCV does this. Its GUI facilities are a convenience, not flexible to this degree.
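For illustration, here is a minimal sketch of such an overlay using PyQt5 (this assumes a Python setup; whether the click-through flag is honoured depends on the OS and window manager, so treat it as a starting point rather than a guaranteed solution):

import sys
from PyQt5 import QtCore, QtGui, QtWidgets

class OverlayWindow(QtWidgets.QWidget):
    def __init__(self):
        super().__init__()
        # frameless, always on top, no taskbar entry, and input passes through
        self.setWindowFlags(QtCore.Qt.FramelessWindowHint
                            | QtCore.Qt.WindowStaysOnTopHint
                            | QtCore.Qt.Tool
                            | QtCore.Qt.WindowTransparentForInput)
        self.setAttribute(QtCore.Qt.WA_TranslucentBackground)   # per-pixel alpha
        self.setGeometry(QtWidgets.QApplication.primaryScreen().geometry())  # full screen
        self.start = QtCore.QPoint(100, 100)   # position 1 second ago (placeholder)
        self.end = QtCore.QPoint(400, 300)     # current position (placeholder)

    def paintEvent(self, event):
        painter = QtGui.QPainter(self)
        pen = QtGui.QPen(QtGui.QColor(255, 0, 0, 200))
        pen.setWidth(3)
        painter.setPen(pen)
        painter.drawLine(self.start, self.end)

app = QtWidgets.QApplication(sys.argv)
overlay = OverlayWindow()
overlay.show()
sys.exit(app.exec_())

In a real version you would overwrite start and end from a QTimer once a second and call self.update() to repaint.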

Related

Konva object snapping with transformer jitters

I'm trying to make an editor using Konva.js.
In the editor I show a smaller draw area which becomes the final image. For this I'm using a group with a clipFunc. This gives a better UX since the transform controls of the transformer can be used "outside" of the canvas (the part visible to the user) and allows the user to zoom and drag the draw area around (imagine a frame in Figma).
I want to implement object snapping based on this: https://konvajs.org/docs/sandbox/Objects_Snapping.html (just edges and center for now). However, I want it to work when multiple elements are selected in my Transformer.
The strategy I'm using is basically to calculate the snapping based on the .back element created by the transformer. Once I know how much to snap, I apply it to every node within the transformer.
However, when I do this, the items start jittering when the cursor moves close to the snapping lines.
My previous implementation had the draw area fill the entire Stage, and I did manage to get that working with the same strategy (no jitter).
I don't really know what the issue is and I hope some of you guys can point me in the right direction.
I created a codesandbox to illustrate my issue: https://codesandbox.io/s/konva-transformer-snapping-1vwjc2?file=/src/index.ts

handling finger detection on small objects

The application I am working on requires a 4px-high bar with a full-screen-size width. I need to be able to select this 4px bar and move it around. I also cannot change the size of this bar; it has to be 4px in height.
This wouldn't be that big of an issue if I wasn't using OpenGL to create the object. OpenGL obviously does not have its own selection features, so I need to program my own.
After some initial research I built a color selector to identify the object. It works like this: whatever x and y my finger touch returns from touchesBegan: is the pixel I grab from a screenshot of the OpenGL view. The issue with this is that the finger location is not precise at all. If I use the mouse it works perfectly...
I decided to maybe loop through a buffer zone around the selected x and y, but unfortunately a screenshot of the OpenGL view has antialiasing applied when it's stored in memory, and the buffer returns several shades of my object's color. I could possibly do a comparative color lookup to see if it's in the range of colors, but that seems overly complicated given how much I have already had to do. Plus, cycling through the buffer zone isn't quick.
I have also thought about just remembering the location of my line on the screen, and if my finger is close to that location, assuming that's the one I want to select and move around.
In the future this application could have up to 4 lines just like this, so I want something more robust than just knowing the location of where it is in memory.
What better way is there of handling the selection of small objects?
How about maintaining an array of frames for the four objects, but expanding the heights to something more manageable (8px or bigger)? Then a touch within the larger region could be compared against the array (using CGRectContainsPoint). If you get a hit, "snap to" the center point of the smaller (4px) rectangle before beginning the drag.
I do something like this by maintaining a list of "drop targets" for drag & drop, where it snaps to the drop target when it gets pretty close. Don't know if I'm conveying the idea very well, but it ought to work.
If the four 4px rectangles are going to be contiguous or very close together, you'll have to be able to make the selected one stand out or the user won't be able to tell which they're dragging -- but you could do that by making it bigger (maybe 6-8 px) then bringing it to the front so it overlays its adjacent neighbors.
More of an idea than an answer I guess.
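To make the idea concrete, here is the hit-test-then-snap logic sketched in plain Python rather than iOS code (all sizes and names are invented for illustration; CGRectContainsPoint would play the role of the contains check):

# each bar keeps its real 4px-high frame; only the hit test uses a padded frame
BAR_WIDTH = 320      # hypothetical screen width
HIT_PADDING = 10     # grow the touchable area well beyond 4px

bars = [{"x": 0, "y": y, "w": BAR_WIDTH, "h": 4} for y in (100, 200, 300, 400)]

def hit_rect(bar):
    # expanded rectangle used only for touch detection
    return (bar["x"], bar["y"] - HIT_PADDING, bar["w"], bar["h"] + 2 * HIT_PADDING)

def contains(rect, px, py):
    x, y, w, h = rect
    return x <= px <= x + w and y <= py <= y + h

def bar_at(px, py):
    # same role as testing CGRectContainsPoint against each stored frame
    for bar in bars:
        if contains(hit_rect(bar), px, py):
            return bar
    return None

def begin_drag(px, py):
    bar = bar_at(px, py)
    if bar is None:
        return None
    # "snap to" the centre of the real 4px rectangle before starting the drag
    return bar, bar["y"] + bar["h"] / 2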
John,
I would suggest a different approach. As you've discovered, touches in iOS are very imprecise. Apple usually suggests that the "hit box" for your controls be at least 40x40 points. I've gone as small as 30x30 points, but that starts to get hard.
What I would suggest you do is to factor your code so the app knows where the line is, and keeps track of it as a logical object. Then in your touch handler, interpret touches based on a large "buffer area" around the things you want the user to be able to move. If you just have a single horizontal bar, this should work great. Where you'll get into trouble is if you have multiple, thin horizontal bars that are close together. In that case you might need to rethink your app design and find another way to solve the problem.
As for the implementation details, you might add a pan gesture recognizer to your OpenGL view, and have it notify the OpenGL view of touch and drag actions. Then your OpenGL view can use knowledge of where your draggable objects are to decide how to interpret the touches.
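Sketching that suggestion in plain Python rather than UIKit code (the names and the tolerance value are made up; in the real app these three methods would be driven by the pan gesture recognizer's began/changed/ended states):

GRAB_TOLERANCE = 22   # roughly half of a 44-point hit box

class LineDragger:
    """Keeps the thin lines as logical objects and moves them with drags."""

    def __init__(self, line_ys):
        self.line_ys = list(line_ys)   # y positions of the 4px lines
        self.active = None             # index of the line currently being dragged

    def began(self, touch_y):
        # grab the nearest line only if the touch falls inside its buffer area
        distances = [abs(touch_y - y) for y in self.line_ys]
        nearest = min(range(len(distances)), key=distances.__getitem__)
        self.active = nearest if distances[nearest] <= GRAB_TOLERANCE else None

    def changed(self, touch_y):
        if self.active is not None:
            self.line_ys[self.active] = touch_y   # the renderer just reads these values

    def ended(self):
        self.active = None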

Make game panel

1) I am currently making a 2D tank game. I need to make a panel where I can draw information about tanks etc. What is the best way to do that? Do I need to make 2 viewports (1 for the game and 1 for the panel), or what?
You could make a fixed area to show the information within your GUI/HUD. Or you could pop up the text when you hover over a unit: locate where the mouse is, and if it hovers over a sprite, start drawing the text at the cursor location.
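For illustration, a minimal sketch of both ideas (a fixed HUD strip plus a hover tooltip) in Python/pygame; the tank data, sizes and colors are all invented:

import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))
font = pygame.font.SysFont(None, 20)

# hypothetical tank data: screen-space rects plus some info text
tanks = [
    {"rect": pygame.Rect(100, 120, 40, 30), "info": "Tank A - HP 80"},
    {"rect": pygame.Rect(400, 300, 40, 30), "info": "Tank B - HP 55"},
]

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    screen.fill((30, 30, 30))

    # fixed HUD panel along the bottom of the window
    pygame.draw.rect(screen, (60, 60, 60), pygame.Rect(0, 540, 800, 60))
    screen.blit(font.render("Score: 0   Lives: 3", True, (255, 255, 255)), (10, 560))

    # tooltip: if the mouse hovers over a tank, draw its info at the cursor
    mouse_pos = pygame.mouse.get_pos()
    for tank in tanks:
        pygame.draw.rect(screen, (0, 160, 0), tank["rect"])
        if tank["rect"].collidepoint(mouse_pos):
            screen.blit(font.render(tank["info"], True, (255, 255, 0)), mouse_pos)

    pygame.display.flip()

pygame.quit()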

Show popup window for visible spots on globe

I have a globe, similar to http://mrdoob.github.com/three.js/examples/webgl_trackballcamera_earth.html.
My globe has some spots at different locations (for example Paris, Rome or London). Whenever one of these spots comes into view, a popup window with additional information about that location should appear, and disappear again when the spot rotates out of view, quite similar to http://workshop.chromeexperiments.com/cloudglobe/.
You need the 3D coordinates of those points, and you need to transform (rotate or whatever) them together with the globe. Then use this code http://www.opengl.org/wiki/GluProject_and_gluUnProject_code to get the screen-space coordinates of those points. After that it is a simple question of HTML, CSS and some JavaScript: you know where they are on screen, so, for example, you can put absolutely positioned divs with text there. But you will need to check which side of the globe those spots are on - use the rotation phase of the sphere, or the z-coordinate of the point, or simply do color-based picking to see whether the spot is visible.
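For the projection step, here is the gluProject idea sketched with Python/NumPy (the matrix and viewport arguments are placeholders for whatever your globe code uses; in three.js specifically you could instead call Vector3.project(camera) and convert the result to pixels):

import numpy as np

def project(point_world, modelview, projection, viewport):
    """gluProject-style: world-space point -> window (pixel) coordinates."""
    p = np.append(np.asarray(point_world, dtype=float), 1.0)   # homogeneous coords
    clip = projection @ (modelview @ p)                        # world -> eye -> clip
    if clip[3] == 0.0:
        return None
    ndc = clip[:3] / clip[3]                                   # normalized device coords
    x, y, w, h = viewport
    win_x = x + (ndc[0] + 1.0) * 0.5 * w
    win_y = y + (1.0 - (ndc[1] + 1.0) * 0.5) * h               # flip y for CSS coordinates
    return win_x, win_y

def is_on_visible_side(spot_world, globe_center, camera_pos):
    # the spot faces the camera if its outward normal points towards the camera
    normal = np.asarray(spot_world) - np.asarray(globe_center)
    to_camera = np.asarray(camera_pos) - np.asarray(spot_world)
    return float(np.dot(normal, to_camera)) > 0.0

The visibility check above is the "which side of the globe" test; color-based picking would be the alternative if the spots can be occluded by other geometry.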

Mouse position issues in XNA 3.1

I am using XNA 3.1 for game development and I am having a little issue with the mouse position (I am also providing screenshots of the issue). In the code below I am trying to display the text "Start" exactly at the mouse position, but the text ends up around 200-250 pixels away from the cursor instead of at the same point where the cursor is in the game window.
void MenuMainMenuDraw()
{
    // Main Menu After Draw
    // Menu Option After Draw
    MouseState ms = Mouse.GetState();

    // "Start" should follow the cursor exactly
    spriteBatch.DrawString(fontMenu, "Start",
        new Vector2(ms.X, ms.Y),
        Color.Red);

    spriteBatch.DrawString(fontMenu, "Options",
        new Vector2((float)MENU_GLOBAL.MENU_POSITION_X, (float)MENU_GLOBAL.OPTION_POSITION_Y),
        Color.Red);

    spriteBatch.DrawString(fontMenu, "Leader's Board",
        new Vector2((float)MENU_GLOBAL.MENU_POSITION_X, (float)MENU_GLOBAL.HIGHSCORE_POSITION_Y),
        Color.Red);
}
Regards
MGD
There are a few possible causes that I can think of. I can't see anything directly wrong with the code snippet that you posted, so if none of these things resolve it, post more of your code.
Does the same problem occur when you create a new SpriteFont using the default font face? It may be a problem with the spacing in the font you're currently using.
In your spriteBatch.Begin(...) call, are you supplying a transformMatrix? If so, try just using spriteBatch.Begin() with no arguments. Are you doing anything unusual, like applying an Effect to your SpriteBatch drawing?
Are you drawing to a render target that is then redrawn to the screen?
I've had mouse position issues that seemed strange when my game window was actually larger than the resolution of the computer screen I was running the game on. It's a quick check, but it just might help. (Also remember that XNA draws images from the center point, but I believe by default it draws text from the text's upper-left corner.) Hope that helps.
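If the render-target or window-size idea turns out to be the culprit, the usual fix is to rescale the raw mouse coordinates from window space into the space you actually draw in. Roughly, sketched in Python with made-up sizes rather than C#:

# client-window size vs. the resolution you actually draw at (made-up numbers)
window_w, window_h = 1280, 720
target_w, target_h = 800, 600

def window_to_target(mouse_x, mouse_y):
    # scale the raw mouse position into render-target coordinates
    return (mouse_x * target_w / window_w, mouse_y * target_h / window_h)

print(window_to_target(640, 360))   # (400.0, 300.0): the centre still maps to the centre

The same kind of scaling would apply in XNA whenever the back buffer or render target ends up a different size from the window's client area.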
