Change mouse position when I move my player - xna

I've set my camera to follow my player, but I have a problem.
When my player moves, the player's position vector changes, but my mouse position does not.
I have to calculate the angle and direction my bullets are fired in from the mouse position, so it gets very weird when my mouse position does not change but my player's vector does.
I really have no idea what to do, even after searching all over the internet.
Code for when I shoot:
v.X = (float)Math.Sqrt((5 * 5) / (1 + ((dy * dy) / (dx * dx))));
v.Y = (float)Math.Sqrt(5 * 5 - v.X * v.X);
if (Mouse.GetState().X > vector.X && Mouse.GetState().Y > vector.Y)
{
    bullets.Add(new Bullet(bulletTexture, vector.X, vector.Y, v.X, v.Y, 1));
}
if (Mouse.GetState().X < vector.X && Mouse.GetState().Y > vector.Y)
{
    bullets.Add(new Bullet(bulletTexture, vector.X, vector.Y, -v.X, v.Y, 1));
}
if (Mouse.GetState().X > vector.X && Mouse.GetState().Y < vector.Y)
{
    bullets.Add(new Bullet(bulletTexture, vector.X, vector.Y, v.X, -v.Y, 1));
}
if (Mouse.GetState().X < vector.X && Mouse.GetState().Y < vector.Y)
{
    bullets.Add(new Bullet(bulletTexture, vector.X, vector.Y, -v.X, -v.Y, 1));
}
Code for making the player face where the mouse shoots:
mosAngle = (float)Math.Atan((Mouse.GetState().Y - vector.Y) / (Mouse.GetState().X - vector.X));
if (Mouse.GetState().LeftButton == ButtonState.Pressed)
{
    if (Mouse.GetState().Y < vector.Y && Math.Abs(mosAngle) > Math.PI / 4)
    {
        texture = backAnim;
    }
    else if (Mouse.GetState().Y > vector.Y && Math.Abs(mosAngle) > Math.PI / 4)
    {
        texture = frontAnim;
    }
    else if (Mouse.GetState().X > vector.X)
    {
        texture = rightAnim;
    }
    else
    {
        texture = leftAnim;
    }
}
Camera class:
class Camera
{
    private Matrix transform;
    private Viewport view;
    private Vector2 centre, camPos;

    public Matrix Transform
    {
        get { return transform; }
    }

    public Camera(Viewport view)
    {
        this.view = view;
    }

    public void Update(GameTime gameTime)
    {
        KeyboardState keyboardState = Keyboard.GetState();
        if (keyboardState.IsKeyDown(Keys.D))
        {
            camPos.X += 2f;
        }
        if (keyboardState.IsKeyDown(Keys.A))
        {
            camPos.X -= 2f;
        }
        if (keyboardState.IsKeyDown(Keys.W))
        {
            camPos.Y -= 4f;
        }
        if (keyboardState.IsKeyDown(Keys.S))
        {
            camPos.Y += 4f;
        }
        centre = new Vector2(camPos.X, camPos.Y);
        transform = Matrix.CreateScale(new Vector3(1, 1, 0)) * Matrix.CreateTranslation(new Vector3(-centre.X, -centre.Y, 0));
    }
}
I'm not sure what I should change so that the shooting code and the code that makes the player face the shooting direction keep working once the camera moves.

When my player moves, the player's position vector changes, but my mouse position does not.
That is because the mouse position is always a position within the window, not - like your player - a position in the game world.
So you need to deal with two different positions: a mouseWindowPosition, which you get from your MouseState, and a mouseWorldPosition, which you get by combining the mouseWindowPosition with your camera position (i.e. by undoing the camera translation).
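As a rough illustration of that idea, here is a minimal C# sketch. It assumes the Camera class exposes its position through a public property (the camPos field above is private, so a Position accessor would have to be added) and that the camera only translates, as in the Update method shown; those names are assumptions, not code from the question:

// Mouse position in window (screen) coordinates.
MouseState mouseState = Mouse.GetState();
Vector2 mouseWindowPosition = new Vector2(mouseState.X, mouseState.Y);

// Because the camera matrix only translates by -camPos, the world-space mouse
// position is the window position shifted by the camera position.
Vector2 mouseWorldPosition = mouseWindowPosition + camera.Position; // camera.Position: hypothetical accessor for camPos

// Compare the mouse and the player in the same (world) space from here on,
// e.g. when computing dx, dy and mosAngle for the shooting and facing code above.
float dx = mouseWorldPosition.X - vector.X;
float dy = mouseWorldPosition.Y - vector.Y;

If the camera ever does more than translate (zoom or rotation), the more general route is to transform the window position by the inverse camera matrix, e.g. Vector2.Transform(mouseWindowPosition, Matrix.Invert(camera.Transform)); in that case the scale matrix in Update should use a Z of 1 rather than 0 so the matrix stays invertible.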

Related

iOS drag and drop within parent bounds

I'm trying to implement drag and drop for some UIImageViews within a UIView. It is currently working, with the exception that you can drag the image views out of the parent and off the screen. I know I have to check whether the views go outside the parent bounds, but I'm struggling to achieve it! Does anyone have any experience with this? For the Windows Phone client, all that is needed is the following line:
var dragBehavior = new MouseDragElementBehavior {ConstrainToParentBounds = true};
The current gesture code I have is here:
private UIPanGestureRecognizer CreateGesture(UIImageView imageView)
{
    float dx = 0;
    float dy = 0;
    var panGesture = new UIPanGestureRecognizer((pg) =>
    {
        if ((pg.State == UIGestureRecognizerState.Began || pg.State == UIGestureRecognizerState.Changed) && (pg.NumberOfTouches == 1))
        {
            var p0 = pg.LocationInView(View);
            if (dx == 0)
                dx = (float)p0.X - (float)imageView.Center.X;
            if (dy == 0)
                dy = (float)p0.Y - (float)imageView.Center.Y;
            float newX = (float)p0.X - dx;
            float newY = (float)p0.Y - dy;
            var p1 = new PointF(newX, newY);
            imageView.Center = p1;
        }
        else if (pg.State == UIGestureRecognizerState.Ended)
        {
            dx = 0;
            dy = 0;
        }
    });
    return panGesture;
}
I resolved this with the following code:
var panGesture = new UIPanGestureRecognizer((pg) =>
{
    if ((pg.State == UIGestureRecognizerState.Began || pg.State == UIGestureRecognizerState.Changed) && (pg.NumberOfTouches == 1))
    {
        var p0 = pg.LocationInView(View);
        if (dx == 0)
            dx = (float)p0.X - (float)imageView.Center.X;
        if (dy == 0)
            dy = (float)p0.Y - (float)imageView.Center.Y;
        float newX = (float)p0.X - dx;
        float newY = (float)p0.Y - dy;
        var p1 = new PointF(newX, newY);

        // If too far right...
        if (p1.X > imageView.Superview.Bounds.Size.Width - (imageView.Bounds.Size.Width / 2))
            p1.X = (float)imageView.Superview.Bounds.Size.Width - (float)imageView.Bounds.Size.Width / 2;
        else if (p1.X < 0 + (imageView.Bounds.Size.Width / 2)) // If too far left...
            p1.X = 0 + (float)(imageView.Bounds.Size.Width / 2);

        // If too far down...
        if (p1.Y > imageView.Superview.Bounds.Size.Height - (imageView.Bounds.Size.Height / 2))
            p1.Y = (float)imageView.Superview.Bounds.Size.Height - (float)imageView.Bounds.Size.Height / 2;
        else if (p1.Y < 0 + (imageView.Bounds.Size.Height / 2)) // If too far up...
            p1.Y = 0 + (float)(imageView.Bounds.Size.Height / 2);

        imageView.Center = p1;
    }
    else if (pg.State == UIGestureRecognizerState.Ended)
    {
        // reset offsets when dragging ends so that they will be recalculated for the next touch and drag that occurs
        dx = 0;
        dy = 0;
    }
});

Find the precise centroid (as an MKMapPoint) of an MKPolygon

This means excluding the area(s) of any interiorPolygons.
Once one has the centroid of the outer points polygon, how does one (i.e., in the form of an Objective-C example) adjust the centroid by the subtractive interiorPolygons? Or is there a more elegant way to compute the centroid in one go?
If you help get the code working, it will be open sourced (WIP here).
Might be helpful:
http://www.ecourses.ou.edu/cgi-bin/eBook.cgi?topic=st&chap_sec=07.2&page=case_sol
https://en.wikipedia.org/wiki/Centroid#Centroid_of_polygon
Thinking about it today, it makes qualitative sense that adding each interior centroid weighted by area to the exterior centroid would arrive at something sensible. (A square with an interior polygon (hole) on the left side would displace the centroid right, directly proportional to the area of the hole.)
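To make that intuition concrete, here is a small C# sketch of the standard composite-area rule (each hole contributes negative area), using made-up numbers for the square-with-a-hole example above; it only illustrates the formula and is not code from the WIP repository:

// Composite-area centroid: the hole is subtracted, weighted by its own area.
// Example: a 10x10 square (area 100, centroid (5, 5)) with a 2x2 hole centred at (2, 5) on its left side.
double outerArea = 100.0, outerCx = 5.0, outerCy = 5.0;
double holeArea = 4.0, holeCx = 2.0, holeCy = 5.0;

double netArea = outerArea - holeArea;                            // 96
double cx = (outerArea * outerCx - holeArea * holeCx) / netArea;  // 5.125 -> pushed right, away from the hole
double cy = (outerArea * outerCy - holeArea * holeCy) / netArea;  // 5.0   -> unchanged vertically

With several holes, the hole terms are simply summed before dividing by the net area.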
(Not-to-scale illustration omitted.)
- (MKMapPoint)calculateCentroid
{
    switch (self.pointCount) {
        case 0: return MKMapPointMake(0.0,
                                      0.0);
        case 1: return MKMapPointMake(self.points[0].x,
                                      self.points[0].y);
        case 2: return MKMapPointMake((self.points[0].x + self.points[1].x) / 2.0,
                                      (self.points[0].y + self.points[1].y) / 2.0);
    }
    // onward implies pointCount >= 3
    MKMapPoint centroid = MKMapPointMake(0.0, 0.0);
    MKMapPoint *previousPoint = &(self.points[self.pointCount - 1]); // for i=0, wrap around to the last point
    for (NSUInteger i = 0; i < self.pointCount; ++i) {
        MKMapPoint *point = &(self.points[i]);
        double delta = (previousPoint->x * point->y) - (point->x * previousPoint->y); // x[i-1]*y[i] - x[i]*y[i-1]
        centroid.x += (previousPoint->x + point->x) * delta; // (x[i-1] + x[i]) * delta
        centroid.y += (previousPoint->y + point->y) * delta; // (y[i-1] + y[i]) * delta
        previousPoint = point;
    }
    centroid.x /= 6.0 * self.area;
    centroid.y /= 6.0 * self.area;
    // interiorPolygons are holes (subtractive geometry model)
    for (MKPolygon *interiorPoly in self.interiorPolygons) {
        if (interiorPoly.area == 0.0) {
            continue; // avoid div-by-zero
        }
        centroid.x += interiorPoly.centroid.x / interiorPoly.area;
        centroid.y += interiorPoly.centroid.y / interiorPoly.area;
    }
    return centroid;
}
In Swift 5 (note that this computes the centre of the coordinates' bounding box rather than the area-weighted polygon centroid):
private func centroidForCoordinates(_ coords: [CLLocationCoordinate2D]) -> CLLocationCoordinate2D? {
    guard let firstCoordinate = coords.first else {
        return nil
    }
    guard coords.count > 1 else {
        return firstCoordinate
    }
    var minX = firstCoordinate.longitude
    var maxX = firstCoordinate.longitude
    var minY = firstCoordinate.latitude
    var maxY = firstCoordinate.latitude
    for i in 1..<coords.count {
        let current = coords[i]
        // Track longitude and latitude extremes independently so one point can update both axes.
        if minX > current.longitude {
            minX = current.longitude
        } else if maxX < current.longitude {
            maxX = current.longitude
        }
        if minY > current.latitude {
            minY = current.latitude
        } else if maxY < current.latitude {
            maxY = current.latitude
        }
    }
    let centerX = minX + ((maxX - minX) / 2)
    let centerY = minY + ((maxY - minY) / 2)
    return CLLocationCoordinate2D(latitude: centerY, longitude: centerX)
}

Keep moving images bouncing within frame width and height

I am trying to figure out how to keep an image within the frame width and height. Right now it just wraps around. I would prefer something that stays within the frame and bounces around inside.
-(void) moveButterfly {
    bfly.center = CGPointMake(bfly.center.x + bfly_vx, bfly.center.y + bfly_vy);
    if (bfly.center.x > frameWidth)
    {
        bfly.center = CGPointMake(0, bfly.center.y + bfly_vy);
    }
    else if (bfly.center.x < 0)
    {
        bfly.center = CGPointMake(frameWidth, bfly.center.y + bfly_vy);
    }
    if (bfly.center.y > frameHeight)
    {
        bfly.center = CGPointMake(bfly.center.x + bfly_vx, 0);
    }
    else if (bfly.center.y < 0)
    {
        bfly.center = CGPointMake(bfly.center.x + bfly_vx, frameHeight);
    }
}
-(void)moveButterfly {
    static int dx = 1;
    static int dy = 1;
    if (bfly.frame.origin.x >= self.view.bounds.size.width - bfly.bounds.size.width) {
        dx = -dx;
    }
    if (bfly.frame.origin.y >= self.view.bounds.size.height - bfly.bounds.size.height) {
        dy = -dy;
    }
    if (bfly.frame.origin.x <= 0) {
        dx = -dx;
    }
    if (bfly.frame.origin.y <= 0) {
        dy = -dy;
    }
    CGPoint point = bfly.center;
    point.x += dx;
    point.y += dy;
    bfly.center = point;
}
Keep calling this function using an NSTimer at the rate you want the position to update. Here dx and dy are the velocity at which the butterfly will move.
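If the project happens to be Xamarin.iOS (C#, like the drag-and-drop answer above), the repeating timer could look roughly like the sketch below; the 60 Hz interval and the MoveButterfly method name are assumptions, and in native Objective-C a scheduled NSTimer (or a CADisplayLink) plays the same role:

using System;
using Foundation; // NSTimer

// Minimal sketch: drive the bounce update at roughly 60 frames per second.
NSTimer moveTimer = NSTimer.CreateRepeatingScheduledTimer(
    TimeSpan.FromSeconds(1.0 / 60.0),
    timer => MoveButterfly()); // MoveButterfly() stands in for the -moveButterfly method above

// When the animation should stop:
// moveTimer.Invalidate();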

determine finger rotation direction

I'm trying to determine which way a user is rotating their finger around the screen in a circular motion. I'm currently using the cross product and checking its z component to determine which way the user is rotating. This produces results that work for the bottom half of the rotation but are reversed on the top half.
Can anyone shed some light on what I'm doing incorrectly?
if ((Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Moved && GameStateManager.CurrentGameState == GameState.Minigame))
{
    Vector3 touchPos = Vector3.zero;
    if (Camera.mainCamera != null)
    {
        touchPos = Camera.mainCamera.ScreenToWorldPoint(new Vector3(Input.GetTouch(0).position.x, Input.GetTouch(0).position.y, Camera.mainCamera.nearClipPlane));
    }
    if (prevTouchPos == Vector2.zero)
    {
        prevTouchPos = new Vector2(touchPos.x, touchPos.y);
        return;
    }
    // need angle between last finger position and this finger position
    Vector2 prevVec = new Vector2(prevTouchPos.x - transform.position.x, prevTouchPos.y - transform.position.y);
    Vector2 currVec = new Vector2(touchPos.x - transform.position.x, touchPos.y - transform.position.y);
    float ang = Vector2.Angle(prevVec, currVec);
    Vector3 cross = Vector3.Cross(new Vector3(prevVec.x, prevVec.y, 0), new Vector3(currVec.x, currVec.y, 0));
    Debug.Log(cross.normalized);
    if (cross.z < 0)
    {
        Debug.Log("Prev Vec: " + prevVec);
        Debug.Log("Curr Vec: " + currVec);
        Debug.Log("ROTATE RIGHT");
        transform.Rotate(0, 0, ang);
    }
    else
    {
        Debug.Log("Prev Vec: " + prevVec);
        Debug.Log("Curr Vec: " + currVec);
        Debug.Log("ROTATE LEFT");
        transform.Rotate(0, 0, -ang);
    }
    //Debug.Log(ang);
    //Debug.Log( "TouchPos: " + touchPos );
    prevTouchPos = new Vector2(touchPos.x, touchPos.y);
}
I've done something similar to this in the past, and it looks like you're pretty close. Here is the code I've used, which determines whether the user has done a full 360 in one direction or the other. Note this is in C++, but the idea should help.
thumbDir.Cross( thumbCur, thumbPrev );
float z = clamp( thumbDir.z, -1.0f, 1.0f );
float theta = asin( z );
if ( z > 0.0f )
{
    // clockwise
    if ( fDegrees > 0.0f )
        fDegrees = -theta; // Reset angle if direction changed
    else
        fDegrees -= theta;
}
else if ( z < 0.0f )
{
    // counter-clockwise
    if ( fDegrees < 0.0f )
        fDegrees = -theta; // Reset angle if direction changed
    else
        fDegrees -= theta;
}
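Translated back to the Unity C# from the question, the same accumulate-until-360 idea might look roughly like this; prevVec and currVec are the vectors from the rotation centre to the previous and current touch positions as in the question's code, while totalDegrees and OnFullCircle are hypothetical placeholders:

// Signed angle (degrees) between the previous and current touch vectors.
float ang = Vector2.Angle(prevVec, currVec);                 // unsigned magnitude
float sign = Mathf.Sign(Vector3.Cross(prevVec, currVec).z);  // +1 counter-clockwise, -1 clockwise
float signedAng = sign * ang;

// If the user switches direction, restart the count so a full 360 must be in one direction.
if (Mathf.Sign(signedAng) != Mathf.Sign(totalDegrees))
    totalDegrees = 0f;
totalDegrees += signedAng;

if (Mathf.Abs(totalDegrees) >= 360f)
{
    OnFullCircle(totalDegrees > 0f); // true = counter-clockwise
    totalDegrees = 0f;
}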

Scale of UIPinchGestureRecognizer in horizontal and vertical directions separately

When using UIPinchGestureRecognizer what is the best way to detect/read the pinch scale in horizontal and vertical directions individually? I saw this post
UIPinchGestureRecognizer Scale view in different x and y directions
but I noticed so much back and forth there for such a seemingly routine task that I am not sure it is the best answer/way.
If the answer is to not use UIPinchGestureRecognizer at all for this purpose, what's the best way to detect the pinch scale in two different directions?
Basically do this,
func _mode(_ sender: UIPinchGestureRecognizer) -> String {
    // very important:
    if sender.numberOfTouches < 2 {
        print("avoided an obscure crash!!")
        return ""
    }
    let A = sender.location(ofTouch: 0, in: self.view)
    let B = sender.location(ofTouch: 1, in: self.view)
    let xD = fabs(A.x - B.x)
    let yD = fabs(A.y - B.y)
    if (xD == 0) { return "V" }
    if (yD == 0) { return "H" }
    let ratio = xD / yD
    // print(ratio)
    if (ratio > 2) { return "H" }
    if (ratio < 0.5) { return "V" }
    return "D"
}
That function will return H, V, D for you .. horizontal, vertical, diagonal.
You would use it something like this ...
func yourSelector(_ sender: UIPinchGestureRecognizer) {
    // your usual code such as ..
    // if sender.state == .ended { return } .. etc
    let mode = _mode(sender)
    print("the mode is \(mode) !!!")
    // in this example, we only care about horizontal pinches...
    if mode != "H" { return }
    let vel = sender.velocity
    if vel < 0 {
        print("you're squeezing the screen!")
    }
}
In my C# I do the following
private double _firstDistance = 0;
private int _firstScaling = 0;

private void PinchHandler(UIPinchGestureRecognizer pinchRecognizer)
{
    nfloat x1, y1, x2, y2 = 0;
    var t1 = pinchRecognizer.LocationOfTouch(0, _previewView);
    x1 = t1.X;
    y1 = t1.Y;
    var t2 = pinchRecognizer.LocationOfTouch(1, _previewView);
    x2 = t2.X;
    y2 = t2.Y;
    if (pinchRecognizer.State == UIGestureRecognizerState.Began)
    {
        _firstDistance = Math.Sqrt(Math.Pow((x2 - x1), 2) + Math.Pow((y2 - y1), 2));
        _firstScaling = _task.TextTemplates[_selectedTextTemplate].FontScaling;
    }
    if (pinchRecognizer.State == UIGestureRecognizerState.Changed)
    {
        var distance = Math.Sqrt(Math.Pow((x2 - x1), 2) + Math.Pow((y2 - y1), 2));
        var fontScaling = Convert.ToInt32((distance - _firstDistance) / _previewView.Frame.Height * 100);
        fontScaling += _firstScaling;
        _task.TextTemplates[_selectedTextTemplate].FontScaling = fontScaling;
        UpdateBitmapPreview();
    }
}
I calculate the distance between the two touch points when the pinch begins ("Began") and hold that value in two private fields. Then I calculate a scaling (fontScaling) from the difference between that first distance and the current one (in "Changed").
I use my own view (_previewView) as the base (100%), but you could use View.Bounds.Height instead, or the width for that matter. In my case I always have a square view, so height == width in my app.
