Determining the "Region" Touched in a UIImageView - ios

I'm trying to determine the region or area of an image that a user touches.
For example, I would like to have a CGRect to define the location of each letter so I can determine when the user touches the "O" in one of the following images…
My first idea was to use locationInView: to get the absolute coordinates and adjust them to the relative size of the image view. However, since I'm using AspectFit for the content mode, the relative location of a given region changes with every screen size and orientation because of the image padding on the top/bottom and sides.
In a perfect solution, I would also like to embed this image in a ScrollView so it can be zoomed. But, I can forgo that if necessary.
I don't work with gestures and images very often, so I may be missing something obvious. Any help or ideas you provide will be greatly appreciated.

There is an easy way and a hard way. The easy way is to break this image down into smaller images, one for each letter. The problem with this approach is making them line up as the single image did. If you take this approach you can use CGRectContainsPoint to see which image you tapped. Or you can make each of them a button, or give each image a gesture recognizer, to identify what was pressed. If your design requires you to use a single image, then you'll have to make a hit map describing each location. Each letter will require its own CGMutablePathRef, with a CGPathMoveToPoint and a few CGPathAddLineToPoint calls to describe its shape and location. The function CGPathContainsPoint will tell you which one you hit.
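A minimal sketch of the hit-map branch. The letterPaths property is hypothetical, holding one UIBezierPath per letter traced over the image in the image's own coordinate system; AVMakeRectWithAspectRatioInsideRect (AVFoundation) also takes care of the AspectFit padding problem from the question by converting the touch into image coordinates first:

    #import <AVFoundation/AVFoundation.h> // AVMakeRectWithAspectRatioInsideRect

    // Convert a touch in the image view to the image's own coordinate space,
    // compensating for the letterboxing added by UIViewContentModeScaleAspectFit.
    - (CGPoint)imagePointForTouch:(CGPoint)touchPoint inImageView:(UIImageView *)imageView
    {
        CGRect fitRect = AVMakeRectWithAspectRatioInsideRect(imageView.image.size,
                                                             imageView.bounds);
        CGFloat scale = imageView.image.size.width / fitRect.size.width;
        return CGPointMake((touchPoint.x - CGRectGetMinX(fitRect)) * scale,
                           (touchPoint.y - CGRectGetMinY(fitRect)) * scale);
    }

    // letterPaths: hypothetical NSArray of UIBezierPaths, one per letter;
    // containsPoint: wraps CGPathContainsPoint.
    - (NSInteger)letterIndexForImagePoint:(CGPoint)point
    {
        for (NSUInteger i = 0; i < self.letterPaths.count; i++) {
            if ([self.letterPaths[i] containsPoint:point]) {
                return (NSInteger)i;
            }
        }
        return NSNotFound; // touched the padding or the background
    }

Because the paths live in image coordinates, they survive rotation, any screen size, and (if you also divide out the zoom scale) a surrounding scroll view.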

Related

Interact with complex figure in iOS

I need to be able to interact with a representation of a cylinder that has many different parts in it. When the user taps on one of the small rectangles, I need to display a popover related to that specific piece (form).
The next image demonstrates a realistic 3D approach. But, I repeat, I need to solve the problem; the 3D is NOT required (it would be really cool though). A representation that satisfies the functional needs will suffice.
The info about the parts needed to make the drawing (size, position, etc.) comes from an API.
I don't need it to be realistic, really. The simplest approximation would be to show the cylinder in a 2D representation, like a rectangle made out of interactable small rectangles.
So, as I mentioned, there are (as I see it) two opposite approaches: realistic or simplified.
Is there a way to achieve a nice solution in the middle? What libraries, components, frameworks that I should look into?
My research has led me to SceneKit, but I still don't know if I will be able to interact with it. Interaction is a very important part, as I need to display a popover when the user taps any small rectangle on the cylinder.
Thanks
You don't need any special frameworks to achieve an interaction like this. This effect can be achieved with standard UIKit, UIView, and a little trigonometry. You can actually draw exactly your example image using 2D math and drawing. My answer is not an exact formula, but involves thinking about how the shapes are defined and breaking the problem down into manageable steps.
A cylinder can be defined by two offset circles representing the end pieces, connected at their radii. I will use an orthographic projection, meaning the cylinder doesn't appear smaller as the depth extends into the background (but you could adapt this to a perspective projection if needed). You could draw this with CoreGraphics in a UIView's drawRect:.
A square slice represents an angular piece of the circle, offset by an amount smaller than the length of the cylinder, but in the same direction, as in the following diagram (sorry for the imprecise drawing).
This square slice you are interested in is the area outlined in solid red, outside the radius of the first circle, and inside the radius of the imaginary second circle (which is just offset from the first circle by whatever length you want the slice).
To draw this area you simply need to draw a path of the outline of each arc and connect the endpoints.
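A sketch of that path with CoreGraphics, to be called from drawRect:. The center point, offset, radius, and edge angles are all assumed names; the second circle is just the first one offset along the cylinder's axis:

    // Outline one slice: an arc along the front circle, a straight edge across
    // to the offset circle, an arc back, and a closing edge.
    - (void)strokeSliceFrom:(CGFloat)angleA to:(CGFloat)angleB
                     center:(CGPoint)c offset:(CGSize)offset radius:(CGFloat)r
                  inContext:(CGContextRef)ctx
    {
        CGPoint c2 = CGPointMake(c.x + offset.width, c.y + offset.height);

        CGMutablePathRef path = CGPathCreateMutable();
        CGPathAddArc(path, NULL, c.x, c.y, r, angleA, angleB, false);
        CGPathAddLineToPoint(path, NULL,
                             c2.x + r * cos(angleB), c2.y + r * sin(angleB));
        CGPathAddArc(path, NULL, c2.x, c2.y, r, angleB, angleA, true);
        CGPathCloseSubpath(path);

        CGContextAddPath(ctx, path);
        CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);
        CGContextStrokePath(ctx);
        CGPathRelease(path);
    }

If you keep each slice's CGPathRef around after drawing, CGPathContainsPoint gives you hit-testing for free, as in the first answer above.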
To check if a touch is inside one of these square slices:
Check if the angle of the touch point, measured from the circle's center, lies between the slice's two edge angles.
Check if the touch point is outside the radius of the inside circle.
Check if the touch point is inside the radius of the outside circle. (Note what this means if the circles are more than a radius apart.)
To find a point at which to display the popover, you could average the end points of the slice, or find the middle angle between the two edges and offset by half the distance, as in the sketch below.
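A sketch of those three checks plus the popover anchor, with the same assumed names (c1 is the front circle's center, c2 the offset circle's center, r the shared radius, and angleA < angleB without wrap-around):

    - (BOOL)point:(CGPoint)p isInSliceFrom:(CGFloat)angleA to:(CGFloat)angleB
           center:(CGPoint)c1 offsetCenter:(CGPoint)c2 radius:(CGFloat)r
    {
        // 1. The touch's angle must lie between the two edge angles.
        CGFloat angle = atan2(p.y - c1.y, p.x - c1.x);
        if (angle < angleA || angle > angleB) return NO;

        // 2. Outside the radius of the inside (front) circle...
        if (hypot(p.x - c1.x, p.y - c1.y) < r) return NO;

        // 3. ...and inside the radius of the outside (offset) circle.
        return hypot(p.x - c2.x, p.y - c2.y) <= r;
    }

    // Popover anchor: the middle angle, halfway between the two circles.
    - (CGPoint)popoverAnchorFrom:(CGFloat)angleA to:(CGFloat)angleB
                          center:(CGPoint)c1 offsetCenter:(CGPoint)c2 radius:(CGFloat)r
    {
        CGFloat mid = (angleA + angleB) / 2;
        return CGPointMake((c1.x + c2.x) / 2 + r * cos(mid),
                           (c1.y + c2.y) / 2 + r * sin(mid));
    }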
Theoretically, doing this in SceneKit with either SpriteKit or UIKit popovers is ideal.
However, SceneKit (and SpriteKit) seem to be in a state of flux, wherein nobody from Apple is communicating with users about the raft of issues folks are currently having with both. Reports of going from a relatively stable and performant SpriteKit in iOS 8.4 to a lot of lost performance in iOS 9 seem common. SceneKit simply doesn't seem finished, and the documentation and community are both nearly non-existent as a result.
That being said... the theory is this:
Material IDs are what's used in traditional 3D apps to define areas of an object that have different materials. Somehow these Material IDs are called "elements" in SceneKit. I haven't been able to find much more about this.
It should be possible to detect the "element" that's underneath a touch on an object and respond accordingly. You should even be able to change the state/nature of the material on that element to indicate that it's the current selection.
If you want a smooth, well-rounded cylinder as per your example, start with a cylinder made of only enough segments to define the material IDs you need for your "rectangular" touchable sections.
Later you can add a smoothing operation to the cylinder to make it round, and all the extra smoothing geometry in each quadrant of unique material ID should be responsive, regardless of how you add this extra detail to smooth the presentation of the cylinder.
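A sketch of the touch side of this theory: SCNHitTestResult's geometryIndex reports which geometry element the ray hit. This assumes the cylinder was built with one material per element; given the flux described above, treat it as a starting point rather than settled behavior:

    - (void)handleTap:(UITapGestureRecognizer *)gesture
    {
        SCNView *scnView = (SCNView *)gesture.view;
        CGPoint point = [gesture locationInView:scnView];

        NSArray *hits = [scnView hitTest:point options:nil];
        SCNHitTestResult *hit = hits.firstObject;
        if (hit != nil) {
            // geometryIndex identifies the element (material ID) under the touch.
            NSInteger element = hit.geometryIndex;
            // Assuming one material per element, mark it as selected.
            SCNMaterial *material = hit.node.geometry.materials[element];
            material.diffuse.contents = [UIColor redColor];
        }
    }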
Idea for the "Simplified" version:
If this representation is okay, you can use a UICollectionView.
Each cell can have a defined size thanks to collectionView:layout:sizeForItemAtIndexPath:, so each cell of the collection can be a small rectangle representing a touchable part of the cylinder.
Then use collectionView:didSelectItemAtIndexPath: to get the touch.
This will help you to display the popover at the right place:
CGRect rect = [collectionView layoutAttributesForItemAtIndexPath:indexPath].frame;
Finally, you can choose the appropriate popover (if the app has to work on iPhone) here:
https://www.cocoacontrols.com/search?q=popover
Not perfect, but I think this is efficient!
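Putting those pieces together, a minimal sketch; Part, self.parts, and presentPopoverForPart:fromRect: are hypothetical names standing in for your API-driven model and whichever popover control you pick:

    // One cell per touchable part, sized from the API data.
    - (CGSize)collectionView:(UICollectionView *)collectionView
                      layout:(UICollectionViewLayout *)collectionViewLayout
      sizeForItemAtIndexPath:(NSIndexPath *)indexPath
    {
        Part *part = self.parts[indexPath.item]; // hypothetical model object
        return CGSizeMake(part.width, part.height);
    }

    - (void)collectionView:(UICollectionView *)collectionView
    didSelectItemAtIndexPath:(NSIndexPath *)indexPath
    {
        // The cell's frame anchors the popover at the right place.
        CGRect rect = [collectionView layoutAttributesForItemAtIndexPath:indexPath].frame;
        [self presentPopoverForPart:self.parts[indexPath.item] fromRect:rect]; // hypothetical
    }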
Yes, SceneKit.
When the user performs a touch event, you know the 2D coordinate on screen, so your only decision is whether or not to show a popover; a 3D model doesn't even have to exist.
First, we can logically split the requirement into two pieces: determining the touched segment, and showing the right "color" on each segment.
I think the purpose of the 3D model is to determine which piece of data to show, if I understand you correctly. In that case, SCNView's hit-test method will do most of the work for you. Perform a hit test, take the hit node and the hit's local 3D coordinate on that node, and you can then calculate which segment was touched and make the decision.
Now, how to draw the surface of the cylinder is the only question left, right? There are various ways to do it: for example, paint each image you need programmatically and attach it to the cylinder's material, or keep your image files on disk and use them as the cylinder's material...
I think the problem would be basically solved.
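A sketch of the hit-to-segment calculation, assuming the cylinder's axis is the node's local y-axis (as with SCNCylinder, which is centered on its node) and hypothetical kRings/kSlices segment counts:

    static const NSInteger kRings  = 4;  // hypothetical: segments along the length
    static const NSInteger kSlices = 12; // hypothetical: segments around the circumference

    - (NSInteger)segmentForHit:(SCNHitTestResult *)hit cylinderHeight:(CGFloat)height
    {
        SCNVector3 p = hit.localCoordinates;

        // Which ring along the length (local y runs -height/2 .. +height/2).
        NSInteger ring = (NSInteger)(((p.y + height / 2.0) / height) * kRings);
        ring = MIN(ring, kRings - 1);

        // Which slice around the circumference, from the angle in the x/z plane.
        double angle = atan2(p.z, p.x) + M_PI; // 0 .. 2π
        NSInteger slice = (NSInteger)((angle / (2.0 * M_PI)) * kSlices);
        slice = MIN(slice, kSlices - 1);

        return ring * kSlices + slice;
    }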

iOS: Segmented touchmap over image

I've been trying to find a way to solve this problem, and haven't been able to find anything useful, so forgive me if this is a duplicate of something I couldn't find.
I have, essentially, a large complicated image in the style of a stained glass window in a scroll view so that I can pan and zoom around it. Each of the individual segments of the window has some information associated with it. What I need to be able to do is tap on any of the segments and determine which segment was tapped so that I can display the information. What I'm not sure of is how to do the mapping between touch points and segments. Most of the segments aren't even regular polygon shapes let alone orthogonal squares, so I can't think of a straightforward way to determine which segment I've tapped.
If anybody has any ideas as to how I might go about implementing this, it would be most appreciated!
Cheers
Put each individual segment in a different layer. Now you can do hit-testing on what layer was tapped. Your test must be designed so that if a layer was tapped but on a transparent area (i.e. outside its segment), your test will fall through to the next layer behind it. Thus the test will succeed if and when you discover a layer's non-transparent region under the tap. Since it is one segment per layer, the segment is the one corresponding to that layer.
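A sketch of that walk, assuming each segment went into a CAShapeLayer whose path outlines the segment (all inside a hypothetical segmentContainer layer, with the touch point already converted into its coordinate space). For bitmap layers, you'd substitute a pixel-alpha test like the one sketched in the last answer below:

    - (CAShapeLayer *)segmentLayerForPoint:(CGPoint)point
    {
        // Walk front-to-back so the topmost segment under the touch wins;
        // taps on a layer's transparent area fall through to the next one.
        for (CALayer *layer in [self.segmentContainer.sublayers reverseObjectEnumerator]) {
            if (![layer isKindOfClass:[CAShapeLayer class]]) continue;
            CAShapeLayer *shape = (CAShapeLayer *)layer;
            CGPoint p = [self.segmentContainer convertPoint:point toLayer:shape];
            if (CGPathContainsPoint(shape.path, NULL, p, false)) {
                return shape; // maps one-to-one to a segment's information
            }
        }
        return nil; // tapped the leading between segments
    }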

Drawing World Map - Performance & Interaction - iOS

I’d like to use a Shapefile to generate an interactive world map. I was able to import the data and use CG Paths to draw the map into one large view.
The map needs to support panning, zooming and touch interaction. For that, I've created a UIScrollView and placed the MapView (large view with all of the countries drawn) into it.
I need to improve two aspects of it:
Performance / rendering
I have drawn the map much larger than the screen size in order to make it look reasonable when I zoom in. There are a few problems with this. First, when I'm zoomed out, I need the border strokes/lines to be wider so they are visible. When I zoom in, I'd like the strokes to be thinner. Also, when I zoom in, I can still see that the map is blurry. I don't want to increase the view size too much.
How can I make the map look crisp when I'm zoomed in? I attempted to redraw the map on zoom-in, but it takes far too long. Can I somehow re-render only the onscreen content?
Touch Interaction
I need to be able to have a touch event for every different country.
Possible approach?
I was thinking of trying to separate every country onto its own view. That should make touches easy to handle. Then I'm thinking I can redraw just the appropriate views that are on screen/zoomed to.
I've played with an app that does something similar ("World Maps"), and I can see that when you pan or zoom, the map is blurry for a second but then becomes clear. What is going on there?
Use MapKit and provide custom tiles. Don't reinvent the wheel and try to write yet another map framework.
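A sketch of that approach with MKTileOverlay (iOS 7+), assuming you pre-render the Shapefile into standard z/x/y tile images; the URL template is a placeholder:

    #import <MapKit/MapKit.h>

    - (void)viewDidLoad
    {
        [super viewDidLoad];
        MKTileOverlay *overlay = [[MKTileOverlay alloc]
            initWithURLTemplate:@"https://example.com/tiles/{z}/{x}/{y}.png"];
        overlay.canReplaceMapContent = YES; // hide Apple's base map entirely
        [self.mapView addOverlay:overlay level:MKOverlayLevelAboveLabels];
    }

    // MKMapViewDelegate
    - (MKOverlayRenderer *)mapView:(MKMapView *)mapView
                rendererForOverlay:(id<MKOverlay>)overlay
    {
        return [[MKTileOverlayRenderer alloc] initWithTileOverlay:(MKTileOverlay *)overlay];
    }

MapKit then handles tile loading, caching, and the blur-then-sharpen behavior you noticed in "World Maps", and a tap recognizer on the map view can give you the per-country interaction.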
I think you have to create differently scaled images of each area from the big map image. That is how Google Maps works: it provides images at many different zoom factors for every area of the world, then merges and displays the right ones on screen as the user needs them.
Implementing the whole map effect in code, rendering on the fly, is impossible on a current iPhone; it's beyond the ability of iOS devices.

Detect touch coordinates on a single UIImageView

I have a single imageview with a static country map divided into regions of that country. What I'd like to do is detect the touch location on the image and provide content about the corresponding region. Since region borders are not linear, how can I save each region's area? Do I need several imageviews (or even custom UIButtons with those images) each belonging to one region or is it possible the way I'd like?
This is the first thing that came to my mind, so maybe there is a simpler and better way, which I'd love to hear about. I couldn't figure out how to search for this, so apologies if there is a duplicate. And of course I'd appreciate the help.
Thanks
One simple & exact way would be to use Ole Begemann's OBShapedButton - one for each state.
This will allow you to detect exactly which state was selected. Simply put the image of each state in a separate button (with transparent surroundings) and align the buttons next to each other so that the state borders align with one another.
The buttons will detect the location of the press, and if one button was pressed on its transparent region, the touch will be passed along until the correct button gets it.
The simplest way to do something like what you want would be to just put some UIButtons over the top of your UIImageView, making their type custom (so they are transparent). Generally fill in the area of each country with UIButtons. If you test your app, you will probably notice that people will touch the center of each country, so I wouldn't worry about getting 100% coverage. Depending on the shape of the countries, one or two square UIButtons would probably be enough.
If you did want to go the 100% coverage route, you could embed each country in a separate UIImageView subclass, and when you detect a touch anywhere, go through each country image and see if the point touched isn't transparent. When you hit that (a non-transparent pixel) you have found the country. See this post for relevant code: How to get the color of a pixel in an UIView?
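That test condenses to a few lines: draw just the touched pixel into a 1x1 alpha-only bitmap and inspect it. A sketch, assuming the point has already been converted into the image's coordinate space (and ignoring Retina scale):

    - (BOOL)image:(UIImage *)image isOpaqueAtPoint:(CGPoint)point
    {
        unsigned char alpha = 0;
        CGContextRef ctx = CGBitmapContextCreate(&alpha, 1, 1, 8, 1, NULL,
                                                 (CGBitmapInfo)kCGImageAlphaOnly);
        if (ctx == NULL) return NO;

        // Core Graphics uses a bottom-left origin, so flip the y coordinate,
        // then shift the context so the touched pixel lands at (0, 0).
        CGFloat height = image.size.height;
        CGContextTranslateCTM(ctx, -point.x, point.y - height);
        CGContextDrawImage(ctx, CGRectMake(0, 0, image.size.width, height),
                           image.CGImage);
        CGContextRelease(ctx);

        return alpha > 0;
    }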

Possible to ignore pan gestures on transparent parts of UIImageViews?

I'm working on an app that lets the user stack graphics on top of each other.
The graphics are instantiated as UIImageViews, and are transparent outside of the actual graphic. I'm also using pan gestures to let the user drag them around the screen.
So when you have a bunch of graphics of different sizes and shapes on top of one another, you may have the illusion that you are touching a sub-indexed view, but you're actually touching the top one, because some transparent part of it is hovering over your touch point.
I was wondering if anyone had ideas on how we could accomplish listening to the pan gesture ONLY on the solid part of the image view. Or something that would tighten up the user experience so that whatever the user touched was what they selected. Thanks
Create your own subclass of UIImageView. In your subclass, override the pointInside:withEvent: method to return NO if the point is in a transparent part of the image.
Of course, you need to determine if a point is in a transparent part. :)
If you happen to have a CGPath or UIBezierPath that outlines the opaque parts of your image, you can do it easily using CGPathContainsPoint or -[UIBezierPath containsPoint:].
If you don't have a handy path, you will have to examine the image's pixel data. There are many answers on stackoverflow.com already that explain how to do that. Search for get pixel CGImage or get pixel UIImage.
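For the path case, the subclass is only a few lines. A sketch, with opaquePath standing in for whatever outline you have of the image's solid region:

    @interface ShapedImageView : UIImageView
    @property (nonatomic, strong) UIBezierPath *opaquePath; // outline of the solid part
    @end

    @implementation ShapedImageView
    // Touches outside the path fall through to the views (and their pan gesture
    // recognizers) underneath, because hit-testing consults pointInside:withEvent:.
    - (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
    {
        if (![super pointInside:point withEvent:event]) return NO;
        return [self.opaquePath containsPoint:point];
    }
    @end

For the pixel-data case, swap the containsPoint: line for an alpha test like the one sketched in the previous answer.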
