Mouse position issues in XNA 3.1

I am using XNA 3.1 for game development and am having a small issue with the mouse position (screenshots of the issue are attached). In the code below I am trying to display the text "Start" exactly at the mouse position, but the text appears around 200-250 pixels away from the cursor instead of at the same point where the cursor is on the game window.
void MenuMainMenuDraw()
{
    // Draw the menu: "Start" follows the mouse, the other options are fixed.
    MouseState ms = Mouse.GetState();
    spriteBatch.DrawString(fontMenu, "Start",
        new Vector2(ms.X, ms.Y), Color.Red);
    spriteBatch.DrawString(fontMenu, "Options",
        new Vector2((float)MENU_GLOBAL.MENU_POSITION_X, (float)MENU_GLOBAL.OPTION_POSITION_Y), Color.Red);
    spriteBatch.DrawString(fontMenu, "Leader's Board",
        new Vector2((float)MENU_GLOBAL.MENU_POSITION_X, (float)MENU_GLOBAL.HIGHSCORE_POSITION_Y), Color.Red);
}

There are a few possible causes that I can think of. I can't see anything directly wrong with the code snippet that you posted, so if none of these things resolve it, post more of your code.
Does the same problem occur when you create a new SpriteFont using the default font face? It may be a problem with the spacing in the font you're currently using.
In your spriteBatch.Begin(...) call, are you supplying a transformMatrix? If so, try just using spriteBatch.Begin() with no arguments (see the sketch after these questions). Are you doing anything unusual like applying an Effect to your SpriteBatch drawing?
Are you drawing to a render target that is then redrawn to the screen?
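If a transform matrix does turn out to be involved, an alternative to dropping it is to map the mouse position back through the inverse of that matrix before drawing. A minimal sketch, assuming a hypothetical cameraTransform Matrix field that gets passed to Begin:

// "cameraTransform" is a hypothetical field for this sketch. Mouse.GetState()
// reports window (client) pixels, so text drawn inside a transformed batch
// needs those pixels mapped through the inverse transform first.
MouseState ms = Mouse.GetState();
Vector2 screenPos = new Vector2(ms.X, ms.Y);
Vector2 batchPos = Vector2.Transform(screenPos, Matrix.Invert(cameraTransform));

spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Deferred,
                  SaveStateMode.None, cameraTransform);
spriteBatch.DrawString(fontMenu, "Start", batchPos, Color.Red);
spriteBatch.End();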

I've had mouse position issues that seemed strange when my game window was actually larger than the resolution of the screen I was running the game on. It's a quick check, but it just might help. (Also remember that, by default, XNA draws both images and text from the upper-left corner unless you pass a different origin.) Hope that helps.
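To rule the window-size case out programmatically, one quick check is to compare the back buffer against the desktop display mode. A sketch, where "graphics" is the usual GraphicsDeviceManager field created in the Game constructor:

// Compare the requested window size with the desktop resolution.
DisplayMode desktop = GraphicsAdapter.DefaultAdapter.CurrentDisplayMode;
if (graphics.PreferredBackBufferWidth > desktop.Width ||
    graphics.PreferredBackBufferHeight > desktop.Height)
{
    // The OS may scale or clip a window bigger than the screen, which can
    // skew the mapping between Mouse.GetState() values and drawn positions.
    Console.WriteLine("Game window is larger than the screen resolution.");
}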

Related

draw line over my screen to indicate mouse movement

I'm working with opencv and I want to do the following procedure:
Every second I check the difference between the current position of the mouse and the position it was at 1 second ago.
Then I want to draw a line on my screen from the position 1 second ago to the current position. This line should stay on the screen for 1 second, until the difference is evaluated again, at which point the line should be rendered at the new position.
I've already done all the logic, the position calculations and everything. What I want to know is how to draw a line on my screen. I want to be able to use my PC normally while seeing those lines; that means clicking with the mouse, keyboard events and opening windows should not stop the lines from being rendered on my screen.
How can I do that?
That isn't trivial.
You need some GUI toolkit that can show a window with the following properties:
full-screen
no window decoration (title bar...)
always-on-top
"intangible", i.e. allowing mouse and keyboard events to pass through to whatever other windows are underneath it
a working alpha channel (per-pixel transparency)
That's the general idea. Pick a popular GUI toolkit, then google around. You'll likely find ways to do each and all of those points.
I don't think OpenCV does this. Its GUI facilities are a convenience, not flexible to this degree.
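As a concrete illustration of those properties, here is a minimal sketch using WinForms, one popular toolkit on Windows. It is not the only option, and the chroma-key TransparencyKey approach is a stand-in for a true per-pixel alpha channel:

using System;
using System.Drawing;
using System.Windows.Forms;

// A full-screen, borderless, always-on-top, click-through overlay window.
class OverlayForm : Form
{
    const int WS_EX_LAYERED = 0x80000;
    const int WS_EX_TRANSPARENT = 0x20;

    public OverlayForm()
    {
        FormBorderStyle = FormBorderStyle.None;   // no window decoration
        WindowState = FormWindowState.Maximized;  // full-screen
        TopMost = true;                           // always-on-top
        BackColor = Color.Magenta;                // key color...
        TransparencyKey = Color.Magenta;          // ...rendered as transparent
        ShowInTaskbar = false;
    }

    // WS_EX_TRANSPARENT makes the window invisible to mouse hit-testing,
    // so clicks pass through to whatever windows are underneath the overlay.
    protected override CreateParams CreateParams
    {
        get
        {
            CreateParams cp = base.CreateParams;
            cp.ExStyle |= WS_EX_LAYERED | WS_EX_TRANSPARENT;
            return cp;
        }
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        // Draw the line here; call Invalidate() once per second as the
        // endpoints are re-evaluated.
        using (var pen = new Pen(Color.Red, 3))
            e.Graphics.DrawLine(pen, 100, 100, 500, 400);
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new OverlayForm());
    }
}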

Scaling SVG past unknown threshold causes elements to disappear

I'm using openseadragon with the excellent svg overlay plugin.
On Chrome, the app behaves as expected: users can tap to zoom in until a table rendered in SVG is fully visible and the note on the table is legible.
Here's the link to the demo. Zoom out to see the SVG version of the table appear, overlaying the fuzzy raster version of the background.
On Safari on iOS or OS X, when zooming past a seemingly arbitrary threshold, the table and everything on it start to disappear. The point of disappearance seems to depend on other factors I don't understand, hence this question for insight. For example, an orange circle drawn with two.js will disappear when the scale transform is precisely 51201 (at 51200 the circle is there). For the more complex table SVG, elements on the table will disappear at different scale levels, between ~23000 and 50000. Sometimes they'll disappear and then reappear upon a slight zoom in. Sometimes they'll disappear on zoom and then reappear as I pan around, as the objects near the edge of the viewport.
IE 11 has a very similar issue.
Has anyone dealt with this before or solved it?
That's a really slick project!
In my experience, that kind of problem with SVG disappearing has to do with extreme amounts of zoom. The good news is you should be able to work around it by changing your viewport coordinates. By default the width of the image is a viewport value of 1, but you can set your image to be width 10,000 or some such. This will look exactly the same on the screen, but it means the SVG thinks it's zoomed out a lot at first, so when you zoom in you can go a lot further.
If you're using two.js, another possible fix would be to switch over to canvas rendering and use https://github.com/altert/OpenSeadragonCanvasOverlay.
Btw, I'd love to share your project when it's done... please file a ticket at https://github.com/openseadragon/site-build/issues when you're ready and we can add it to http://openseadragon.github.io/examples/in-the-wild/.

Cross-viewport occlusion culling

I'm currently working on an XNA project where I need to create a picture-in-picture overlay to display a 3D scene from multiple angles. Currently I'm trying to use two viewports to do this. The main one fills the whole screen and is working as desired. The second is placed in one of the corners of the first (overlapping that corner) and is less than a fifth of its size. Apart from the size and placement of the viewports, the only real difference between the two is the placement of their cameras within the scene.
As long as the second viewport is drawn second and there are no objects close to the first viewport's camera in the overlapping corner, this actually works great. However, if there is an object close to the camera and in the corner of the first viewport, objects in the second viewport seem to experience occlusion culling as a result of the first viewport's object. The occluding object of the first viewport is not shown in the second viewport's space, though.
My question is: how would one prevent this "cross-viewport culling" from happening? I've searched all over, and the closest threads I could find suggest drawing the second viewport to a RenderTarget2D and using a SpriteBatch to display the resulting texture. Though doing so does fix the occlusion issue, it wreaks havoc on the z-ordering, CCW culling, and my water effects, none of which I've ever had issues with using the default render target.
This issue, at least in my case, seems to have been resolved by simply clearing the depth buffer between drawing the two viewports. I did this by adding the following line between the function calls that draw the individual viewports:
GraphicsDevice.Clear(ClearOptions.DepthBuffer, Color.Black, 1, 0);
I ran into this solution while trying to use the stencil buffer to prevent the main viewport from drawing under the second, but nothing I did before this discovery had a noticeable effect. After I removed all the stencil code, it looks like I'm getting the desired effect. Sorry, but I can't really explain how or why this works without messing up the first viewport, as I have no clue myself.
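For context, here is a sketch of where that call sits in a draw loop; DrawMainView and DrawPipView are hypothetical stand-ins for however each viewport's scene is actually rendered:

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    GraphicsDevice.Viewport = mainViewport;   // full-screen view
    DrawMainView();

    // Reset the depth buffer so geometry from the main pass cannot
    // occlude geometry drawn in the picture-in-picture pass.
    GraphicsDevice.Clear(ClearOptions.DepthBuffer, Color.Black, 1, 0);

    GraphicsDevice.Viewport = pipViewport;    // small corner view
    DrawPipView();

    base.Draw(gameTime);
}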

Why is object.y not positioning the image in Corona SDK?

-- Load the image sized to the screen width, preserving its aspect ratio.
displaycontent = display.newImageRect(rawdata[currentpath][3], screenW * 1.1, (screenW * 1.1 / 1654) * rawdata[currentpath][6])
displaycontent.anchorY = 0          -- anchor the image by its top edge
displaycontent.y = screenH * 0.78   -- this assignment appears to have no effect
My program loads an image from a database to be displayed on the mobile phone's screen; everything works correctly apart from positioning it with the y coordinate.
The only thing that changes its position is the anchor point: 0 puts the top of the image in the centre of the screen, and values from 0.1 to 1 all position it higher. Changing the y position via object.y has no effect regardless of what I set it to.
(the size settings probably look a bit weird in the first line, but this is because the images are different sizes and need to show the correct proportions on different screen types).
Btw, I am using a tab bar widget for the UI (in case that is relevant).
Any help would be appreciated.
Edit: I am aware that displaycontent is a bad name for a variable because of its similarity to things like display.contentCenterY; this will be changed to prevent any confusion when I look over the code in future.
I went through my code and tried disabling sections to find the culprit, and found that a content mask was preventing me from setting the position of the loaded images within it.
I will look over my masking code and fix it (it should be straightforward now that I know where the problem started).
If anyone else has a similar problem (where an image or object won't position itself at the given coordinates), check your content mask, as that may be the issue!

handling finger detection on small objects

The application I am working on requires a bar 4px in height with a full-screen-size width. I need to be able to select this 4px bar and move it around. I also cannot change the size of this bar; it has to be 4px in height.
This wouldn't be that big of an issue if I weren't using OpenGL to create the object. OpenGL obviously does not have its own selection features, so I need to program my own.
Initially, after some research, I built a color selector to identify the object. It works by taking the x and y that my finger touch returns from touchesBegan: and grabbing that pixel from a screenshot of the OpenGL view. The issue with this is that finger location is not precise at all. If I use the mouse it works perfectly...
I considered looping through a buffer zone around the selected x and y, but unfortunately antialiasing is applied to the screenshot of the OpenGL view when it's stored in memory, so the buffer returns several shades of my object's color. I could possibly do a comparative color lookup to see whether a sample falls within a range of colors, but that seems overly complicated given how much I have already had to do. Plus, cycling through the buffer zone isn't quick.
I have also thought about just remembering the location of my line on the screen, and if my finger is close to that location, treating that as the one I want to select and move around.
A future version of this application could have up to 4 lines just like this, so I want something more robust than just knowing its location in memory.
What better way is there to handle selection of small objects?
How about maintaining an array of frames for the four objects, but expanding the heights to something more manageable (8px or bigger)? Then a touch within the larger region could be compared against the array (using CGRectContainsPoint). If you get a hit, "snap to" the center point of the smaller (4px) rectangle before beginning the drag.
I do something like this by maintaining a list of "drop targets" for drag & drop, where it snaps to the drop target when it gets pretty close. Don't know if I'm conveying the idea very well, but it ought to work.
If the four 4px rectangles are going to be contiguous or very close together, you'll have to make the selected one stand out, or the user won't be able to tell which one they're dragging -- but you could do that by making it bigger (maybe 6-8px) and bringing it to the front so it overlays its adjacent neighbors.
More of an idea than an answer I guess.
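To make the idea concrete, here is a small sketch in C# (for consistency with the other examples on this page; on iOS the same logic maps onto CGRect and CGRectContainsPoint). All names here are hypothetical:

using System.Drawing;

static class BarPicker
{
    const int HitSlop = 18; // extra touchable pixels above and below each bar

    // Returns the index of the bar whose enlarged hit box contains the
    // touch point, or -1 if nothing was hit.
    public static int Pick(Rectangle[] bars, Point touch)
    {
        for (int i = 0; i < bars.Length; i++)
        {
            Rectangle hit = bars[i];
            hit.Inflate(0, HitSlop);   // grow the 4px bar into a 40px target
            if (hit.Contains(touch))
                return i;              // then snap the drag to bars[i]
        }
        return -1;
    }
}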
John, I would suggest a different approach. As you've discovered, touches in iOS are very imprecise. Apple usually suggests that the "hit box" for your controls be at least 40x40 points. I've gone as small as 30x30 points, but anything smaller than that starts to get hard.
What I would suggest is factoring your code so the app knows where the line is and keeps track of it as a logical object. Then, in your touch handler, interpret touches based on a large "buffer area" around the things you want the user to be able to move. If you just have a single horizontal bar, this should work great. Where you'll get into trouble is if you have multiple thin horizontal bars that are close together. In that case you might need to rethink your app design and find another way to solve the problem.
As for the implementation details, you might add a pan gesture recognizer to your OpenGL view, and have it notify the OpenGL view of touch and drag actions. Then your OpenGL view can use knowledge of where your draggable objects are to decide how to interpret the touches.
