Detect touch coordinates on a single UIImageView - iOS

I have a single image view with a static country map divided into regions of that country. What I'd like to do is detect the touch location on the image and provide content about the corresponding region. Since region borders are not straight lines, how can I store each region's area? Do I need several image views (or even custom UIButtons with those images), one per region, or is it possible the way I'd like?
This is the first thing that came to my mind, so maybe there is a simpler and better way, which I'd love to hear about. I wasn't sure how to search for this, so apologies if there is a duplicate. And of course I'd appreciate the help.
Thanks

One simple & exact way would be to use Ole Begemann's OBShapedButtons - one for each state.
This will allow you to detect exactly which state was selected. Simply put image of each state in separate buttons (with transparent surroundings) and align buttons next to eachother so that state-borders allign one to another.
Buttons will detect the location of the press and if one button was pressed on its transparent region the touch will be passed along until the correct button gets it.
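For reference, this is roughly the trick OBShapedButton uses: override pointInside:withEvent: and sample the alpha of the pixel under the touch, so touches on transparent areas are rejected and handed to the button behind. A minimal sketch of the idea (the AlphaHitButton name and the alpha threshold are my own choices, not the library's):

#import <UIKit/UIKit.h>

// A button that only responds to touches on non-transparent pixels.
// Returning NO here makes UIKit keep hit-testing the sibling buttons
// behind this one, which is what lets the states tile together.
@interface AlphaHitButton : UIButton
@end

@implementation AlphaHitButton

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    if (!CGRectContainsPoint(self.bounds, point)) return NO;
    // Render just the touched pixel into a 1x1 bitmap and read its alpha.
    unsigned char pixel[4] = {0};
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixel, 1, 1, 8, 4, space,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(space);
    CGContextTranslateCTM(ctx, -point.x, -point.y);
    [self.layer renderInContext:ctx];
    CGContextRelease(ctx);
    return pixel[3] > 25; // roughly 10% opaque; threshold is arbitrary
}

@end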

The simplest way to do something like what you want would be to just put some UIButtons over the top of your UIImageView, making their type custom (so they are transparent). Roughly fill in the area of each country with UIButtons. If you test your app, you will probably notice that people touch the center of each country, so I wouldn't worry about getting 100% coverage. Depending on the shape of a country, one or two square UIButtons would probably be enough.
If you did want to go the 100% coverage route, you could embed each country in a separate UIImageView subclass, and when you detect a touch anywhere, go through each country image and check whether the touched point is non-transparent. When you hit a non-transparent pixel, you have found the country. See this post for relevant code: How to get the color of a pixel in an UIView?
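A sketch of that 100%-coverage loop, assuming the country image views are stacked in a common superview and exposed through a hypothetical countryViews array; the alpha sampling is the same 1x1-bitmap trick as in the linked post:

// In the superview holding one UIImageView per country. UIImageView has
// userInteractionEnabled == NO by default, so touches reach this view.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint point = [[touches anyObject] locationInView:self];
    // Walk the country views front to back; the first opaque hit wins.
    for (UIImageView *country in [self.countryViews reverseObjectEnumerator]) {
        CGPoint local = [self convertPoint:point toView:country];
        if (!CGRectContainsPoint(country.bounds, local)) continue;
        if ([self alphaOfView:country atPoint:local] > 0.1) {
            NSLog(@"Touched the country tagged %ld", (long)country.tag);
            break;
        }
    }
}

// Alpha of the pixel under `point`, via a 1x1 bitmap render of the view.
- (CGFloat)alphaOfView:(UIView *)view atPoint:(CGPoint)point {
    unsigned char pixel[4] = {0};
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixel, 1, 1, 8, 4, space,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(space);
    CGContextTranslateCTM(ctx, -point.x, -point.y);
    [view.layer renderInContext:ctx];
    CGContextRelease(ctx);
    return pixel[3] / 255.0;
}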

Related

Determining the "Region" Touched in a UIImageView

I'm trying to determine the region or area of an image that a user touches.
For example, I would like to have a CGRect to define the location of each letter so I can determine when the user touches the "O" in one of the following images…
My first idea was to use locationInView: to give me the absolute coordinates and adjust them to the relative size of the image view. However, since I'm using AspectFit for the content mode, the relative location of a given region changes with every screen size and orientation because of the image padding on the top/bottom and sides.
In a perfect solution, I would also like to embed this image in a ScrollView so it can be zoomed. But, I can forgo that if necessary.
I don't work with gestures and images very often, so I may be missing something obvious. Any help or ideas you provide will be greatly appreciated.
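For what it's worth, the AspectFit padding can be computed rather than guessed at. A sketch, using AVMakeRectWithAspectRatioInsideRect from AVFoundation to find the rectangle the image actually occupies, then mapping the touch into image coordinates:

#import <AVFoundation/AVFoundation.h> // AVMakeRectWithAspectRatioInsideRect

// Convert a touch location inside an aspect-fit UIImageView into the
// coordinate space of the image itself.
static CGPoint imagePointForViewPoint(CGPoint viewPoint, UIImageView *imageView) {
    // The letterboxed rect the image occupies under AspectFit.
    CGRect fitted = AVMakeRectWithAspectRatioInsideRect(imageView.image.size,
                                                        imageView.bounds);
    CGFloat scale = imageView.image.size.width / fitted.size.width;
    // Strip the top/side padding, then undo the fitting scale.
    return CGPointMake((viewPoint.x - fitted.origin.x) * scale,
                       (viewPoint.y - fitted.origin.y) * scale);
}

Once the point is expressed in image coordinates, per-letter CGRects stay valid at every screen size and orientation, and the same conversion holds inside a zooming scroll view as long as locationInView: is asked for the point in the image view's own coordinate space.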
There is the easy way and the hard way. The easy way is to break this image down into smaller images, one for each letter. The problem with this approach is making them line up as the single image did. If you take this approach, you can use CGRectContainsPoint to see which image you tapped. Or you can make each of them a button, or give each image a gesture recognizer, to identify what was pressed.
If your design requires you to use a single image, then you'll have to build a hit map describing each location. Each letter will require its own CGMutablePathRef, with a CGPathMoveToPoint and a few CGPathAddLineToPoint calls to describe its shape and location. The function CGPathContainsPoint will tell you which one you hit.
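A sketch of such a hit map for one letter; the coordinates are placeholders, and a real path would trace the letter's outline in image coordinates:

// Build the hit map once: one path per letter (placeholder coordinates).
CGMutablePathRef oPath = CGPathCreateMutable();
CGPathMoveToPoint(oPath, NULL, 10, 10);
CGPathAddLineToPoint(oPath, NULL, 60, 10);
CGPathAddLineToPoint(oPath, NULL, 60, 80);
CGPathAddLineToPoint(oPath, NULL, 10, 80);
CGPathCloseSubpath(oPath);

// Test a touch point (already converted to image coordinates).
if (CGPathContainsPoint(oPath, NULL, imagePoint, false)) {
    NSLog(@"Touched the O");
}
CGPathRelease(oPath); // release once the hit map is no longer needed

The second argument of CGPathContainsPoint takes an optional transform, which helps if the paths were traced at a different scale than the touch coordinates.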

Custom shaped buttons objective-c

I'm building an app for kids with lots of interactive image objects (click on a pic & get a result).
For example: I have a cat on my screen, and on clicking it I need to play a sound.
After having read lots of info, I got the impression that it's almost impossible to turn certain parts of an image into clickable objects.
Yet there are lots of games on the App Store that contain lots of custom-shaped clickable objects. How did they manage to do this?
What do I need to read to get the answer? Thank you in advance.
UPDATE: I can have my clipart images as vector graphics, e.g. .svg files. Will that make the situation simpler?
Implement each image as a UIButton and set the background of each button to the desired image (setBackgroundImage:forState:).
Links:
https://developer.apple.com/library/ios/documentation/UIKit/Reference/UIButton_Class/
Using images is one option, but if you want taps within a certain non-rectangular shape, check out OBShapedButton
If you really want super accurate touches, you could construct a UIBezierPath in the shape of the image and then use
[UIBezierPath containsPoint:]
https://developer.apple.com/library/ios/documentation/uikit/reference/UIBezierPath_class/Reference/Reference.html#//apple_ref/occ/instm/UIBezierPath/containsPoint%3a
You could also possibly get the pixel colour underneath the touch, but that wouldn't give much room for error if the user tapped near the edge:
https://stackoverflow.com/a/7101544/78496
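To make the bezier-path idea concrete, here is a sketch with a hypothetical CatView; the path points are placeholders, and with SVG clipart (per the update) the path data could be generated from the vector outlines instead of traced by hand:

@implementation CatView // hypothetical UIView subclass

// The tappable outline, in this view's coordinates (placeholder points).
- (UIBezierPath *)catPath {
    UIBezierPath *path = [UIBezierPath bezierPath];
    [path moveToPoint:CGPointMake(40, 120)];
    [path addLineToPoint:CGPointMake(90, 30)];
    [path addCurveToPoint:CGPointMake(180, 110)
            controlPoint1:CGPointMake(130, 20)
            controlPoint2:CGPointMake(170, 60)];
    [path closePath];
    return path;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint point = [[touches anyObject] locationInView:self];
    if ([[self catPath] containsPoint:point]) {
        NSLog(@"cat tapped"); // play the sound here
    }
}

@end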

iOS: Segmented touchmap over image

I've been trying to find a way to solve this problem, and haven't been able to find anything useful, so forgive me if this is a duplicate of something I couldn't find.
I have, essentially, a large complicated image in the style of a stained glass window in a scroll view so that I can pan and zoom around it. Each of the individual segments of the window has some information associated with it. What I need to be able to do is tap on any of the segments and determine which segment was tapped so that I can display the information. What I'm not sure of is how to do the mapping between touch points and segments. Most of the segments aren't even regular polygon shapes let alone orthogonal squares, so I can't think of a straightforward way to determine which segment I've tapped.
If anybody has any ideas as to how I might go about implementing this, it would be most appreciated!
Cheers
Put each individual segment in a different layer. Then you can hit-test to find which layer was tapped. Design the test so that if a layer is hit on a transparent area (i.e. outside its segment), the test falls through to the next layer behind it. The test succeeds when you find a layer whose non-transparent region is under the tap. Since there is one segment per layer, that layer identifies the segment.
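A sketch of that fall-through test, assuming a hypothetical segmentLayers array ordered back to front (like sublayers) inside the view controller hosting the window image; the alpha check is the usual 1x1-bitmap sample:

// Return the segment layer under `point` (in the host view's coordinates),
// skipping layers that are transparent at that spot.
- (CALayer *)segmentLayerAtPoint:(CGPoint)point {
    for (CALayer *layer in [self.segmentLayers reverseObjectEnumerator]) {
        CGPoint local = [layer convertPoint:point fromLayer:self.view.layer];
        if (!CGRectContainsPoint(layer.bounds, local)) continue;
        unsigned char pixel[4] = {0};
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(pixel, 1, 1, 8, 4, space,
            kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGColorSpaceRelease(space);
        CGContextTranslateCTM(ctx, -local.x, -local.y);
        [layer renderInContext:ctx];
        CGContextRelease(ctx);
        if (pixel[3] > 25) return layer; // opaque here: this is the segment
        // Transparent here: fall through to the layer behind.
    }
    return nil;
}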

Drawing World Map - Performance & Interaction - iOS

I'd like to use a Shapefile to generate an interactive world map. I was able to import the data and use CGPath to draw the map into one large view.
The map needs to support panning, zooming and touch interaction. For that, I've created a UIScrollView and placed the MapView (large view with all of the countries drawn) into it.
I need to improve two aspects of it:
Performance / rendering
I have drawn the map much larger than the screen size in order to make it look reasonable when I zoom in. There are a few problems with this. First, when I'm zoomed out, I need the border strokes to be wider so they are visible. When I zoom in, I'd like the strokes to be thinner. Also, when I zoom in, I can still see that the map is blurry. I don't want to increase the view size too much.
How can I make the map look crisp when I'm zoomed in? I attempted to redraw the map on zoom, but it takes far too long. Can I somehow re-render only what is on screen?
Touch Interaction
I need to be able to have a touch event for every different country.
Possible approach?
I was thinking of trying to separate every country into its own view. That should make touches easy to handle. Then I'm thinking I can redraw only the appropriate views that are on screen or zoomed to.
I've played with an app that does something similar ("World Maps"), and I can see that when you pan or zoom, the map is blurry for a second but then becomes clear. What is going on there?
Use MapKit and provide custom tiles. Don't reinvent the wheel and try to write yet another map framework.
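Concretely, the custom-tiles route looks roughly like this on iOS 7+; the URL template is a placeholder for wherever the pre-rendered tiles live, and mapView is an assumed outlet:

#import <MapKit/MapKit.h>

// Replace Apple's base map with your own pre-rendered tile set.
- (void)setUpTileOverlay {
    MKTileOverlay *overlay = [[MKTileOverlay alloc]
        initWithURLTemplate:@"https://example.com/tiles/{z}/{x}/{y}.png"];
    overlay.canReplaceMapContent = YES; // hide Apple's map entirely
    [self.mapView addOverlay:overlay level:MKOverlayLevelAboveLabels];
}

// MKMapViewDelegate
- (MKOverlayRenderer *)mapView:(MKMapView *)mapView
            rendererForOverlay:(id<MKOverlay>)overlay {
    return [[MKTileOverlayRenderer alloc]
        initWithTileOverlay:(MKTileOverlay *)overlay];
}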
I think you have to create images of each area pre-scaled from the big map image. Think of how Google Maps works: it provides images at many different zoom factors for every area of the world, then merges and displays the right ones on screen as the user needs them. Implementing the whole map effect in drawing code alone is beyond the ability of a current iOS device.
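For what it's worth, the blurry-then-sharp behavior described in the question is the signature of CATiledLayer, which redraws only the visible tiles, at the level of detail the current zoom requires. A sketch of backing the map view with one (TiledMapView is a hypothetical name; the path stroking itself is elided):

#import <QuartzCore/QuartzCore.h>

@implementation TiledMapView // hypothetical UIView that draws the CGPaths

// Back the view with a CATiledLayer so only on-screen tiles get redrawn.
+ (Class)layerClass {
    return [CATiledLayer class];
}

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *tiled = (CATiledLayer *)self.layer;
        tiled.levelsOfDetail = 4;     // crisp levels when zoomed out
        tiled.levelsOfDetailBias = 4; // extra detail when zoomed in
    }
    return self;
}

// Called once per tile, possibly on background threads, so keep it
// self-contained and thread-safe.
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // The CTM scale is the zoom of the tile being drawn; dividing the
    // stroke width by it keeps borders visually constant at any zoom,
    // which also addresses the stroke-width problem above.
    CGFloat scale = CGContextGetCTM(ctx).a;
    CGContextSetLineWidth(ctx, 1.0 / scale);
    // ... stroke the country CGPaths here ...
}

@end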

handling finger detection on small objects

The application I am working on requires a bar 4 px in height with the full screen width. I need to be able to select this 4 px bar and move it around. I also cannot change the size of this bar; it has to be 4 px in height.
This wouldn't be that big of an issue if I weren't using OpenGL to draw the object. OpenGL obviously does not have its own selection features, so I need to program my own.
Initially, after some research, I built a color picker to identify the object: whatever x and y my finger touch returns from touchesBegan: is the pixel I grab from a screenshot of the OpenGL view. The issue with this is that finger location is not precise at all. If I use the mouse, it works perfectly...
I considered looping through a buffer zone around the selected x and y, but unfortunately the screenshot of the OpenGL view is antialiased when it's stored in memory, so the buffer returns several shades of my object's color. I could do a comparative color lookup to see whether a pixel falls within a range of colors, but that seems overly complicated on top of everything I have already had to do. Plus, cycling through the buffer zone isn't quick.
I have also thought about just remembering the location of my line on the screen: if my finger is close to that location, I know that's the line I want to select and can move it around.
A future version of this application could have up to four lines just like this one, so I want something more robust than just remembering where each one is.
What better way is there of handling the selection of small objects?
How about maintaining an array of frames for the four objects, but expanding the heights to something more manageable (8 px or bigger)? Then a touch within the larger region could be compared against the array (using CGRectContainsPoint). If you get a hit, "snap to" the center point of the smaller (4 px) rectangle before beginning the drag. There's a sketch of this below.
I do something like this by maintaining a list of "drop targets" for drag & drop, where the dragged item snaps to a target when it gets pretty close. I don't know if I'm conveying the idea very well, but it ought to work.
If the four 4 px rectangles are going to be contiguous or very close together, you'll have to make the selected one stand out or the user won't be able to tell which one they're dragging. You could do that by making it bigger (maybe 6-8 px) and then bringing it to the front so it overlays its adjacent neighbors.
More of an idea than an answer I guess.
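A sketch of the expanded-hit-rect test suggested above, assuming a hypothetical barFrames array of NSValue-wrapped CGRects for the real 4 px bars:

// Find which bar (if any) a touch should grab, using a 44 pt tall
// grab zone centered on each 4 px frame.
- (NSInteger)barIndexAtPoint:(CGPoint)point {
    for (NSInteger i = 0; i < (NSInteger)self.barFrames.count; i++) {
        CGRect bar = [self.barFrames[i] CGRectValue];
        CGRect grabZone = CGRectInset(bar, 0, -20); // 4 px -> 44 pt tall
        if (CGRectContainsPoint(grabZone, point)) {
            return i; // caller can snap the drag to the bar's center line
        }
    }
    return NSNotFound;
}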
John,
I would suggest a different approach. As you've discovered, touches in iOS are very imprecise. Apple usually suggests that the "hit box" for your controls be at least 40x40 points. I've gone as small as 30x30 points, but that starts to get hard.
What I would suggest you do is to factor your code so the app knows where the line is, and keeps track of it as a logical object. Then in your touch handler, interpret touches based on a large "buffer area" around the things you want the user to be able to move. If you just have a single horizontal bar, this should work great. Where you'll get into trouble is if you have multiple, thin horizontal bars that are close together. In that case you might need to rethink your app design and find another way to solve the problem.
As for the implementation details, you might add a pan gesture recognizer to your OpenGL view, and have it notify the OpenGL view of touch and drag actions. Then your OpenGL view can use knowledge of where your draggable objects are to decide how to interpret the touches.
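A sketch of that gesture-based approach, with hypothetical glView and barYs properties standing in for the OpenGL view and the bars' logical y positions (barYs as an NSMutableArray of NSNumbers; the 22 pt grab distance is arbitrary):

// In the view controller that owns the OpenGL view.
- (void)viewDidLoad {
    [super viewDidLoad];
    UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc]
        initWithTarget:self action:@selector(handlePan:)];
    [self.glView addGestureRecognizer:pan];
}

- (void)handlePan:(UIPanGestureRecognizer *)pan {
    CGPoint point = [pan locationInView:self.glView];
    if (pan.state == UIGestureRecognizerStateBegan) {
        // Grab the bar whose logical y is nearest the touch, if within
        // a forgiving distance; _draggedBar is hypothetical state.
        _draggedBar = NSNotFound;
        for (NSUInteger i = 0; i < self.barYs.count; i++) {
            if (fabs([self.barYs[i] doubleValue] - point.y) < 22.0) {
                _draggedBar = (NSInteger)i;
                break;
            }
        }
    } else if (pan.state == UIGestureRecognizerStateChanged &&
               _draggedBar != NSNotFound) {
        // Move the logical bar, then redraw the GL scene from the model.
        self.barYs[_draggedBar] = @(point.y);
        [self.glView setNeedsDisplay];
    }
}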
