I have four images, each 350 pixels wide, with the text North on the first, East on the second, South on the third, and West on the fourth. I put them side by side into a scrollView as a single strip and link that scrollView to the direction my iPad is pointing. As I turn around I want the scroll view to pan across the images, so if I am looking north and then turn east, North slides off the screen and East slides on.
I tried to do this using the compass, and it works reasonably well, but I can't work out how to handle crossing the boundary between 0 degrees and 360 degrees. I tried using yaw so that I am effectively dealing with two 180 degree half circles, but working out how to make that work has evaded me too. Googling leads me to quaternions and Euler angles, but nothing that makes enough sense to me.
I need some direction/help.
You don't really need a scroll view for this, since you are controlling the scroll offset by device heading rather than by letting the user swipe the screen. So you could just use a plain old UIView as the parent of the image views. You would just update view.bounds.origin instead of scrollView.contentOffset to “scroll”. (Under the hood, the scroll view's contentOffset is the same as its bounds.origin.)
Anyway, lay out your images like this:
+-------+-------+-------+-------+-------+
| North | East | South | West | North |
+-------+-------+-------+-------+-------+
Note that the North image is used twice.
When you get a new heading, update the offset of the scroll view like this:
let range = CGFloat(350 * 4)                     // total width of the four distinct images
let offset = range * CGFloat(direction) / 360    // direction is the heading in degrees (0–360)
scrollView.contentOffset = CGPoint(x: offset, y: 0)
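If you are driving this from Core Location, here is a minimal sketch of how a heading callback might feed that formula; the outlet name, the headingFilter value, and the use of magneticHeading are assumptions rather than part of the answer above.
import UIKit
import CoreLocation

class CompassScrollViewController: UIViewController, CLLocationManagerDelegate {
    @IBOutlet var scrollView: UIScrollView!   // assumed outlet; contains the five images laid out in a row
    let imageStripWidth: CGFloat = 350 * 4    // width of the four distinct images (the fifth North handles the wrap)
    let locationManager = CLLocationManager()

    override func viewDidLoad() {
        super.viewDidLoad()
        locationManager.delegate = self
        locationManager.headingFilter = 1     // only report changes of 1 degree or more
        locationManager.startUpdatingHeading()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
        // magneticHeading is already 0–360, so the duplicated North image
        // covers the wrap without any special casing.
        let offset = imageStripWidth * CGFloat(newHeading.magneticHeading) / 360
        scrollView.contentOffset = CGPoint(x: offset, y: 0)
    }
}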
I'm trying to make a volume knob like this:
https://s-media-cache-ak0.pinimg.com/736x/29/b5/85/29b58559e3d8a09dcd7e1a47c700ca76.jpg
But I only want to use one finger. UIRotationGestureRecognizer is great but only supports rotation with two fingers, and I haven't found a way to make UIPanGestureRecognizer rotate the way I want. Is there a simple way to simulate the second finger for the rotation gesture, or do I have to calculate the rotation from the pan myself?
If you ignore gesture recognizers and override the touches handlers on a view, you can use math to do this. With a little trig, take an arbitrary anchor point along an axis (say 9 o'clock on the knob), and your touches' locations on a plane with (0, 0) at the center of your knob. Say your anchor point is at (-3, 0) and your touch is at (-1, 1) for the sake of simplicity. Take the arctangent of the change in y over the change in x, in this case arctan of 1/2, which is roughly 0.464 radians, or roughly 26.57 degrees. You would rotate your knob about 26.6 degrees clockwise (in this case) from your anchor point to have the knob follow your touch. In Swift:
// Assume you have an anchor point "anchor" and a touch location "loc",
// both expressed relative to the center of the knob.
let radiansToRotate = atan2(loc.y - anchor.y, loc.x - anchor.x)

let rotateAnimation = CABasicAnimation(keyPath: "transform.rotation")
rotateAnimation.fromValue = 0.0
rotateAnimation.toValue = radiansToRotate
rotateAnimation.duration = 1.0 // the longer the duration, the "heavier" your knob feels
// "knobView" is whatever view shows the knob; attach the animation to its layer.
knobView.layer.add(rotateAnimation, forKey: "rotation")
That rotation animation will make the knob "follow" your finger, which is the desired effect. For completeness' sake: atan2 takes the two deltas separately, so it returns the correct angle in all four quadrants and avoids the undefined arctan case when the change in x is zero. You may still have to worry about the rotation direction if your values fluctuate wildly, so you might consider a keyframe animation for this (I'll leave that to you).
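If you would rather stay with gesture recognizers, the same atan2 math works from a UIPanGestureRecognizer. This is only a sketch under assumed names (knobView, and the way the transform is applied), not the answer's own approach:
import UIKit

class KnobViewController: UIViewController {
    @IBOutlet var knobView: UIImageView!   // assumed: the view showing the knob image
    private var startAngle: CGFloat = 0    // finger angle when the pan began
    private var startRotation: CGFloat = 0 // knob rotation when the pan began

    override func viewDidLoad() {
        super.viewDidLoad()
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        knobView.addGestureRecognizer(pan)
        knobView.isUserInteractionEnabled = true
    }

    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        // Angle of the touch relative to the knob's center, in the superview's space.
        let center = knobView.center
        let touch = gesture.location(in: knobView.superview)
        let angle = atan2(touch.y - center.y, touch.x - center.x)

        switch gesture.state {
        case .began:
            startAngle = angle
            startRotation = atan2(knobView.transform.b, knobView.transform.a)
        case .changed:
            // Rotate the knob by however far the finger has swept around the center.
            knobView.transform = CGAffineTransform(rotationAngle: startRotation + angle - startAngle)
        default:
            break
        }
    }
}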
I was playing with the iOS built-in Compass app and the UI made me curious.
Here is the interesting part: the color of the text (and even of the circle) can be partially and dynamically changed.
I have searched a lot, but the results are all about attributed strings. How can I implement an effect like this?
Edit:
I have tried adding two UILabels (whiteLabel and blackLabel) with the same frame, whiteLabel at the bottom and blackLabel on top, and then setting the circle as the mask of blackLabel.
The problem is that whiteLabel is totally covered by blackLabel, and if the circle does not intersect blackLabel, neither label is visible.
I imagine that there are two "14" labels in the same place. The bottom one is white and unmasked, and the top one is black and has a layer mask that contains two circles, so it's only visible where the circles are.
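A minimal sketch of that two-label idea, assuming hard-coded circle positions and a fixed frame (in the real app the circles would track the compass needles):
import UIKit

func makeMaskedLabel(text: String, frame: CGRect, circleCenters: [CGPoint], radius: CGFloat) -> UIView {
    let container = UIView(frame: frame)

    // Bottom label: white, always visible.
    let whiteLabel = UILabel(frame: container.bounds)
    whiteLabel.text = text
    whiteLabel.textColor = .white
    container.addSubview(whiteLabel)

    // Top label: black, only visible inside the circles.
    let blackLabel = UILabel(frame: container.bounds)
    blackLabel.text = text
    blackLabel.textColor = .black
    container.addSubview(blackLabel)

    // Build a mask containing the circles; the black label shows only where
    // the mask is opaque, so the white label shows everywhere else.
    let maskPath = UIBezierPath()
    for center in circleCenters {
        maskPath.append(UIBezierPath(arcCenter: center, radius: radius,
                                     startAngle: 0, endAngle: .pi * 2, clockwise: true))
    }
    let maskLayer = CAShapeLayer()
    maskLayer.path = maskPath.cgPath
    blackLabel.layer.mask = maskLayer

    return container
}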
Achieving this has most probably nothing to do with NSAttributedStrings, like Woodstock said.
I'd say it's the UILabel's layer that is recolored live, depending on what other layer it intersects with and on the overlapping area of that intersection.
Once you figure out those common points, you just apply a mask that inverts the colors from there.
Now it's a little bit more complicated than that, since there appear to be two circles (hence two layers to find intersections with), but in the end it's "just a list of coordinates" that the label's coordinates either intersect or don't.
That could be an interesting exercise; it would probably take me a decent number of tries to mimic that behaviour, but I'm pretty confident my reasoning is on point. (get it? :o)
I want to set up a custom color picker. I have set up a circular image with a picker bug (indicator), but I want the bug to move only over the image (there are no colours outside the image), or, put another way, only within the circle of the image.
How can I limit the bug's position?
I understand that you want to keep your picker inside the circle image.
To do that, simply grab the center point (p1) of your circle image and the center point (p2) of the current position of the picker, then calculate the distance between them:
float distance = sqrt((p2.x-p1.x)*(p2.x-p1.x)+(p2.y-p1.y)*(p2.y-p1.y));
And if the distance is more than the circle radius, stop moving your picker:
if(distance <= radius)
return YES;// Let the picker move, it's inside the circle image
else
return NO; // Picker outside the circle image
Hope this is what you're after.
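As a variation on that idea (not part of the answer above), you could clamp the picker to the circle's edge instead of simply refusing the move; a rough Swift sketch, where center and radius describe the circular image:
import UIKit

// Clamp a proposed picker position so it never leaves the circle.
// `center` and `radius` describe the circular image; `point` is where
// the user is trying to drag the picker bug.
func clampedPosition(for point: CGPoint, center: CGPoint, radius: CGFloat) -> CGPoint {
    let dx = point.x - center.x
    let dy = point.y - center.y
    let distance = sqrt(dx * dx + dy * dy)
    guard distance > radius else {
        return point // already inside the circle
    }
    // Project the point back onto the circle's edge along the same direction.
    let scale = radius / distance
    return CGPoint(x: center.x + dx * scale, y: center.y + dy * scale)
}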
Would someone be able to give me an example of how to change the texture drawn when the user presses a key?
Basically I have 5 sprite images: standing still, up, down, left, right.
My first attempt was to add boolean conditions in the update method, such as: if keys.left is pressed, the rectangle moves left and the draw method draws the character moving left. My problem is that the standing-still texture doesn't disappear, and the textures overlap in the same way when moving in different directions.
I've tried else statements etc., but I'm stuck on basic movement.
This is very much pseudo-code, but it shows a basic approach.
First we create a class which contains all the information about the player. You can add Health, Score, etc. as well.
class Player
{
    public Texture2D Sprite;    // texture currently shown for the player
    public Vector2 Position;    // top left of the sprite
    public Vector2 Velocity;    // change in position per unit of time
    public const float PlayerSpeed = 5;
}
Important here are the Position (top left of the sprite), the Velocity (the amount of change per unit of time), and the Sprite, which is just the texture we want to use. Of course it would be better to use only one player texture and modify the source rect accordingly.
Now our input handling.
void OnKeyboard(GameTime aGameTime, KeyArg aKey)
{
    if(aKey == Keys.Left)
    {
        mPlayer.Velocity = new Vector2(-Player.PlayerSpeed, 0);
        mPlayer.Sprite = TextureManager.GetTexture("left_player");
    }
    else if(aKey == Keys.Right)
    {
        mPlayer.Velocity = new Vector2(Player.PlayerSpeed, 0);
        mPlayer.Sprite = TextureManager.GetTexture("right_player");
    }

    // Scale the velocity by the time that passed this frame
    // (in real XNA you would read this from aGameTime.ElapsedGameTime).
    mPlayer.Position += aGameTime.ElapsedMilliseconds * mPlayer.Velocity;
}
Here we just check which key was pressed, modify the velocity, and change the current sprite for our player. The final line is the most important one: it updates the player's position using the velocity scaled by the elapsed time of the frame. That way you get steady movement despite any frame rate inconsistencies.
void Render()
{
    Sprite.Draw(mPlayer.Sprite, mPlayer.Position);
}
Finally the rendering: how to render a sprite should be clear; here you just use the currently set sprite and the position.
There is a lot of room for improvement, for example minimizing texture switches, handling sprites with alpha, and, most importantly, proper handling of the keyboard. You need to adjust the position steadily, but depending on how you implement it, the movement might be bound to the key repeat rate, which may not be desired.
I am starting development of an iPad application to help surveyors perform building/site surveys. The basic scenario is that the application will present a plan of a selected building or site (which can be manipulated in the usual iPad way: zoom, rotate, pan). The surveyor (user) will then be able to drop pins on the plan, each with a pop-out for entering survey details against that pin. Think of the iOS Maps app, but with a plan/diagram instead of a map and (I guess) using the image's co-ordinates rather than geo info.
I have the plan loading into a UIImageView and the various gestures are all working smoothly, and I'm now at the big issue of how to keep the pins 'attached' or locked to the plan as it gets zoomed/rotated/panned (and saved/restored). I have played around with a few ideas, such as adding pins as UIImageView objects to the base plan image with addSubview. This works in that the pins are locked into position on the base image, but they are then also scaled and rotated along with the base image when gestures are used. The questions I have with this approach are: How do I track the pin locations in relation to the base image (plan/diagram) while the base image is being zoomed and rotated? How do I keep the pins from scaling (always the same size)? How do I retain the pins' orientation? I am also concerned that I may go down a track that ends up with performance or display issues when dealing with many pins (hundreds).
Again, with the iOS Maps application (or Google Maps) in mind, I am seeking some guidance on the best approach at a high level, but obviously the more detail and specifics the better!
Many thanks in advance.
Michael.
This problem was solved in another question I subsequently asked which can be found here... iOS - Math help - base image zooms with pinch gesture need overlaid images adjust X/Y coords relative
Basically, I used a scrollView instead and used the following calculations for positioning x, y...
newX = (pin.baseX * scrollView.zoomScale) +
(((pin.frame.size.width * scrollView.zoomScale) -
pin.frame.size.width) / 2);
newY = (pin.baseY * scrollView.zoomScale) +
((pin.frame.size.height * scrollView.zoomScale) -
pin.frame.size.height);
Note: 'pin' is a custom object inheriting from UIImageView, with properties 'baseX' and 'baseY' where I store the original x/y coordinates of the pin at zoomScale 1.0. See the link above for a sample of my full implementation. Thanks.
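For illustration, here is the same calculation as a small Swift helper; the Pin class shown here and the way the new origin is applied are assumptions reconstructed from the formulas above, not the linked implementation:
import UIKit

// Hypothetical pin type: a UIImageView that remembers where it sits
// on the plan at zoomScale 1.0.
class Pin: UIImageView {
    var baseX: CGFloat = 0
    var baseY: CGFloat = 0
}

// Recompute a pin's origin for the scroll view's current zoom scale,
// mirroring the newX/newY formulas above, so the pin keeps pointing at the
// same spot on the plan while staying its original size.
func updatePosition(of pin: Pin, in scrollView: UIScrollView) {
    let scale = scrollView.zoomScale
    let newX = (pin.baseX * scale) + (((pin.frame.size.width * scale) - pin.frame.size.width) / 2)
    let newY = (pin.baseY * scale) + ((pin.frame.size.height * scale) - pin.frame.size.height)
    pin.frame.origin = CGPoint(x: newX, y: newY)
}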
You have to work out the coordinates for the pins mathematically rather than in fixed pixels. For example, if your image was 100 x 100 and you wanted to put the pin at the centre point of the map, rather than setting the x and y coordinates to (50, 50) you set them to (image.size.width/2, image.size.height/2), because if you were then to scale the view down, the pin would remain at the centre point.
Obviously this is a fairly basic example but the same logic can be applied wherever you put the pin on the image.
Hope this helps.
OR
You could work out your new coordinates from how much the image is being scaled. If the image was again 100 x 100 and you scaled it down to 50 x 50 and you wanted your pin to remain in the same place, you would do (x / (100/50), y / (100/50)); for example, a pin at (50, 50) would end up at (25, 25).
The formula would be:
new x = x / (original width / new width)
new y = y / (original height / new height)
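A tiny Swift sketch of that scaling rule, with illustrative names only:
import CoreGraphics

// Convert a pin position recorded against the original image size into
// a position on the resized image, keeping its relative location.
func scaledPinPosition(_ original: CGPoint, from originalSize: CGSize, to newSize: CGSize) -> CGPoint {
    return CGPoint(x: original.x / (originalSize.width / newSize.width),
                   y: original.y / (originalSize.height / newSize.height))
}

// Example: a pin at (50, 50) on a 100 x 100 image maps to (25, 25) on a 50 x 50 image.
let scaled = scaledPinPosition(CGPoint(x: 50, y: 50),
                               from: CGSize(width: 100, height: 100),
                               to: CGSize(width: 50, height: 50))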