Move image view by touch, but a little more specific - iOS

I would really like to know how to make an image view move when I touch the screen, with specific screen regions mapped to specific directions.
Imagine breaking the screen up into nine identical squares: the middle one holds my image view, and each of the others represents the direction the image view should move when touched.
If you see this, please don't flag it down; tell me why you want to flag it and I'll fix it. I've spent over six hours trying to work this out. Thanks for your time and consideration.
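A minimal sketch of one way to do this in Swift/UIKit, assuming a plain UIImageView moved in touchesBegan: (the image name and `step` distance are placeholders):

```swift
import UIKit

class DirectionalPadViewController: UIViewController {

    let imageView = UIImageView(image: UIImage(named: "player")) // hypothetical asset
    let step: CGFloat = 20 // distance to move per touch

    override func viewDidLoad() {
        super.viewDidLoad()
        imageView.center = view.center
        view.addSubview(imageView)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: view) else { return }

        // Column and row of the touched square in the 3x3 grid (0, 1, or 2).
        let col = min(Int(point.x / (view.bounds.width / 3)), 2)
        let row = min(Int(point.y / (view.bounds.height / 3)), 2)

        // Map the square to a direction; the middle square (1,1) moves nothing.
        let dx = CGFloat(col - 1) * step
        let dy = CGFloat(row - 1) * step

        UIView.animate(withDuration: 0.15) {
            self.imageView.center.x += dx
            self.imageView.center.y += dy
        }
    }
}
```

Because the direction is derived from `(col - 1, row - 1)`, corner squares move the image diagonally and edge squares move it straight, which matches the nine-square layout described above.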

Related

CoreMotion to Move Slider

I have a slider at the bottom of my screen. It has a fixed Y position but can move either way along the X axis. I want to use CoreMotion to move the slider back and forth when the phone is tilted. I have the sense that I might need a gravity behavior and DeviceMotion, but I'm not quite sure how to put that together. Can somebody help? To give a picture of what things look like, I have provided an image below. Ignore the ball; it currently has no relevance.
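A minimal sketch of the DeviceMotion side, assuming a `slider` outlet; the update interval and step factor are arbitrary choices, and no UIDynamics gravity behavior is needed if you drive the slider's value directly:

```swift
import UIKit
import CoreMotion

class TiltSliderViewController: UIViewController {

    @IBOutlet var slider: UISlider! // hypothetical outlet
    let motionManager = CMMotionManager()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        guard motionManager.isDeviceMotionAvailable else { return }

        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let self = self, let motion = motion else { return }
            // gravity.x runs roughly -1...1 as the phone tilts left/right.
            let tilt = Float(motion.gravity.x)
            let range = self.slider.maximumValue - self.slider.minimumValue
            // UISlider clamps to its min/max, so no bounds check is needed.
            self.slider.value += tilt * range * 0.02
        }
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        motionManager.stopDeviceMotionUpdates()
    }
}
```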

Strategy for scrolling a small area of content in SpriteKit

I'm creating an adventure game in Swift that lets the player view their inventory. The set of items you can acquire is very small (only about 25), and I'd like to display them about 5-6 at a time in a rectangle. My thought is that the player can scroll through the whole list by swiping horizontally, only ever seeing 5-6 across at a time. The entire area is roughly a quarter of the screen.
I was looking at something like this: https://github.com/crashoverride777/Swift-SpriteKit-UIScrollView-Helper but when I tried it, it seemed suited to a giant area (the entire screen), and the items scroll off the screen as you scroll. I played with the content size, thinking of it as a "viewport", but didn't have any luck.
In my case, I want the items to scroll only within the confines of a rectangle of roughly 300 x 150, so an item never goes beyond the width of the box containing it.
I couldn't figure out a reliable way of doing this and wanted to ask whether anyone has done something similar and how they achieved it. What's a good strategy for this? Perhaps a camera plus pan using SKCameraNode?
Thanks so much!
Update: I think I can do it using a cropping mask; an initial test worked. I'll post something once I've figured it out, but wanted to let anyone know in case they were wondering.
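For anyone landing here, a minimal sketch of the cropping-mask idea with SKCropNode; item sprites, sizes, and spacing are placeholders:

```swift
import UIKit
import SpriteKit

class InventoryNode: SKCropNode {

    private let row = SKNode()
    private let windowSize = CGSize(width: 300, height: 150)

    override init() {
        super.init()
        // The mask defines the visible "viewport"; anything outside is clipped.
        maskNode = SKSpriteNode(color: .white, size: windowSize)

        // Lay the 25 items out as one long horizontal strip inside the crop node.
        for i in 0..<25 {
            let item = SKSpriteNode(color: .blue, size: CGSize(width: 40, height: 40))
            item.position = CGPoint(x: CGFloat(i) * 50 - windowSize.width / 2 + 25, y: 0)
            row.addChild(item)
        }
        addChild(row)
        isUserInteractionEnabled = true
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        let dx = touch.location(in: self).x - touch.previousLocation(in: self).x
        // Pan the strip, clamping so it never scrolls past either end.
        let minX = windowSize.width / 2 - 25 * 50 // leftmost allowed offset
        row.position.x = max(min(row.position.x + dx, 0), minX)
    }
}
```

Usage is just `scene.addChild(InventoryNode())` positioned wherever the inventory box belongs; only the strip moves, and the crop node keeps everything inside the 300 x 150 window.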

How do I make an image show up on screen when it's touched?

I am making a game with Corona SDK, and I want a wooden plank (an image) to be drawn while someone touches and slides across the screen, extending to the point they slid to (when they let go of the screen).
Please help.
Take a look at this question: In Corona SDK, how do I limit the number of lines drawn to one?
It is almost what you want, but instead of a line you need to draw the image of the wood.
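The linked answer is Corona/Lua; as a rough illustration of the same idea (remember the touch-down point, then stretch and rotate an image to the current finger position), here is a sketch in Swift/UIKit rather than Lua, with "plank" as a hypothetical image asset:

```swift
import UIKit

class PlankViewController: UIViewController {

    let plank = UIImageView(image: UIImage(named: "plank")) // hypothetical asset
    var startPoint = CGPoint.zero

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: view) else { return }
        startPoint = point
        view.addSubview(plank)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: view) else { return }
        // Stretch and rotate the plank so it spans start -> current point.
        let dx = point.x - startPoint.x
        let dy = point.y - startPoint.y
        let length = sqrt(dx * dx + dy * dy)
        plank.bounds = CGRect(x: 0, y: 0, width: length, height: 12)
        plank.center = CGPoint(x: (startPoint.x + point.x) / 2,
                               y: (startPoint.y + point.y) / 2)
        plank.transform = CGAffineTransform(rotationAngle: atan2(dy, dx))
    }
}
```

The Corona version in the linked question does the same bookkeeping with display objects in the touch listener's "began", "moved", and "ended" phases.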

Drawing World Map - Performance & Interaction - iOS

I'd like to use a Shapefile to generate an interactive world map. I was able to import the data and use CGPaths to draw the map into one large view.
The map needs to support panning, zooming and touch interaction. For that, I've created a UIScrollView and placed the MapView (large view with all of the countries drawn) into it.
I need to improve two aspects of it:
Performance / rendering
I have drawn the map much larger than the screen size so it looks reasonable when I zoom in. There are a few problems with this. First, when I'm zoomed out, I need the border strokes/lines to be wider so they remain visible; when I zoom in, I'd like the strokes to be thinner. Also, when I zoom in, I can still see that the map is blurry, and I don't want to increase the view size much further.
How can I make the map look crisp when I'm zoomed in? I attempted to redraw the map on zoom, but it takes far too long. Can I somehow re-render only what's on screen?
Touch Interaction
I need to be able to have a touch event for every different country.
Possible approach?
I was thinking of trying to separate every country onto its own view. That should make touches easy to handle. Then I'm thinking I could redraw just the views that are on screen / zoomed to.
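One alternative to a view per country: keep each country's CGPath and hit-test the tap point against all of them, so a single view handles every touch. A minimal sketch, assuming a `countryPaths` dictionary (in this view's coordinates) built from the Shapefile:

```swift
import UIKit

class MapView: UIView {

    var countryPaths: [String: CGPath] = [:] // built from the Shapefile data

    override init(frame: CGRect) {
        super.init(frame: frame)
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        addGestureRecognizer(tap)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    @objc func handleTap(_ recognizer: UITapGestureRecognizer) {
        let point = recognizer.location(in: self)
        // CGPath.contains does point-in-polygon testing for free.
        for (country, path) in countryPaths where path.contains(point) {
            print("Tapped \(country)")
            break
        }
    }
}
```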
I've played with an app that does something similar ("World Maps"), and I can see that when you pan or zoom, the map is blurry for a second but then becomes clear. What is going on there?
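That blur-then-sharpen behavior is the signature of tile-based rendering: tiles for the current zoom level are drawn on background threads, and a stale, scaled-up image shows until fresh tiles arrive. A minimal sketch of a CATiledLayer-backed view, with illustrative tile and level-of-detail settings; it also shows one way to keep strokes a constant on-screen width:

```swift
import UIKit

class TiledMapView: UIView {

    // Back this view with a CATiledLayer instead of a plain CALayer.
    override class var layerClass: AnyClass { CATiledLayer.self }

    override init(frame: CGRect) {
        super.init(frame: frame)
        let tiled = layer as! CATiledLayer
        tiled.tileSize = CGSize(width: 256, height: 256)
        // Re-render tiles at up to 4 extra zoom-in levels so lines stay crisp.
        tiled.levelsOfDetailBias = 4
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        // The CTM already carries the current zoom; dividing the line width
        // by it keeps borders ~1pt on screen at every zoom level.
        let scale = ctx.ctm.a / UIScreen.main.scale
        ctx.setLineWidth(1.0 / scale)
        // drawCountryPaths(in: ctx, clippedTo: rect) // your existing CGPath drawing
    }
}
```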
Use MapKit and provide custom tiles. Don't reinvent the wheel and try to write yet another map framework.
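A minimal sketch of that suggestion using MKTileOverlay, assuming you can pre-render the map into standard z/x/y tiles (the URL template is hypothetical):

```swift
import MapKit

class MapTilesViewController: UIViewController, MKMapViewDelegate {

    let mapView = MKMapView()

    override func viewDidLoad() {
        super.viewDidLoad()
        mapView.frame = view.bounds
        mapView.delegate = self
        view.addSubview(mapView)

        let overlay = MKTileOverlay(urlTemplate: "https://example.com/tiles/{z}/{x}/{y}.png")
        overlay.canReplaceMapContent = true // hide Apple's base map under your tiles
        mapView.addOverlay(overlay, level: .aboveLabels)
    }

    func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
        if let tiles = overlay as? MKTileOverlay {
            return MKTileOverlayRenderer(tileOverlay: tiles)
        }
        return MKOverlayRenderer(overlay: overlay)
    }
}
```

This gets panning, zooming, and level-of-detail handling for free; country hit-testing would still be done separately, for example with the CGPath approach above.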
I think you have to create differently scaled images of each area from the big map image. Think of how Google Maps works: it provides images at many different zoom factors for every area of the world, then merges and displays the ones needed on screen as the user views them. Implementing the whole map effect in your own drawing code is beyond the ability of current iOS devices.

Handling finger detection on small objects

The application I am working on requires a bar 4px in height spanning the full width of the screen. I need to be able to select this 4px bar and move it around. I also cannot change the size of this bar; it has to be 4px in height.
This wouldn't be that big of an issue if I weren't using OpenGL to create the object. OpenGL obviously does not have its own selection features, so I need to program my own.
Initially, after some research, I built a color-based selector to identify the object: whatever x and y touchesBegan: returns for my finger is the pixel I grab from a screenshot of the OpenGL view. The issue with this is that the finger location is not precise at all. If I use the mouse it works perfectly...
I considered looping through a buffer zone around the touched x and y, but unfortunately the screenshot of the OpenGL view is antialiased when it's stored in memory, so the buffer returns several shades of my object's color. I could do a comparative color lookup to see whether a pixel falls within a range of colors, but that seems overly complicated given how much I've already had to do. Plus, cycling through the buffer zone isn't quick.
I've also thought about simply remembering where my line is on the screen, and if my finger is close to that location, assuming that's the one the user wants to select and moving it around.
In the future this application may have up to four lines like this, so I want something more robust than just knowing each line's location in memory.
What better ways are out there for handling selection of small objects?
How about maintaining an array of frames for the four objects, but expanding the heights to something more manageable (8px or bigger)? Then a touch within the larger region can be compared against the array (using CGRectContainsPoint). If you get a hit, "snap to" the center point of the smaller (4px) rectangle before beginning the drag.
I do something like this by maintaining a list of "drop targets" for drag & drop, where the dragged item snaps to a drop target when it gets close. I don't know if I'm conveying the idea very well, but it ought to work.
If the four 4px rectangles are going to be contiguous or very close together, you'll have to make the selected one stand out, or the user won't be able to tell which one they're dragging. You could do that by making it bigger (maybe 6-8px) and bringing it to the front so it overlays its adjacent neighbors.
More of an idea than an answer I guess.
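A minimal sketch of that expanded-hit-box idea, with hypothetical frames and a 40pt touchable height:

```swift
import UIKit

// The true on-screen rectangles: 4px tall, full width.
let barFrames: [CGRect] = [
    CGRect(x: 0, y: 100, width: 320, height: 4),
    CGRect(x: 0, y: 200, width: 320, height: 4),
]

func barIndex(at point: CGPoint) -> Int? {
    for (i, frame) in barFrames.enumerated() {
        // Grow each 4px frame to 40px of touchable height (18px each side).
        let hitBox = frame.insetBy(dx: 0, dy: -18)
        if hitBox.contains(point) {
            return i // then "snap to" barFrames[i].midY before starting the drag
        }
    }
    return nil
}
```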
John,
I would suggest a different approach. As you've discovered, touches in iOS are very imprecise. Apple generally suggests that the hit box for your controls be at least 40x40 points; I've gone as small as 30x30 points, but below that it starts to get hard.
What I would suggest is to factor your code so the app knows where the line is and keeps track of it as a logical object. Then, in your touch handler, interpret touches using a large buffer area around the things you want the user to be able to move. If you just have a single horizontal bar, this should work great. Where you'll get into trouble is if you have multiple thin horizontal bars close together; in that case you might need to rethink your app design and find another way to solve the problem.
As for the implementation details, you might add a pan gesture recognizer to your OpenGL view, and have it notify the OpenGL view of touch and drag actions. Then your OpenGL view can use knowledge of where your draggable objects are to decide how to interpret the touches.
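A minimal sketch of that suggestion, assuming the bar's position is tracked as a simple `barY` value on the view (the grab distance is arbitrary):

```swift
import UIKit

class BarOverlayView: UIView { // stands in for your OpenGL-backed view

    var barY: CGFloat = 100        // logical position of the 4px bar
    let grabDistance: CGFloat = 20 // generous buffer around it
    private var dragging = false

    override init(frame: CGRect) {
        super.init(frame: frame)
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        addGestureRecognizer(pan)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    @objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
        let point = recognizer.location(in: self)
        switch recognizer.state {
        case .began:
            // Grab the bar only if the touch starts within the buffer zone.
            dragging = abs(point.y - barY) <= grabDistance
        case .changed where dragging:
            barY = point.y // then tell the GL renderer to redraw the bar here
        default:
            dragging = false
        }
    }
}
```

This avoids the screenshot/color-picking path entirely: the bar's logical position is the single source of truth, and the OpenGL drawing is just a rendering of it.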
