I'm working on an app that lets the user stack graphics on top of each other.
The graphics are instantiated as UIImageViews, and each one is transparent outside of the actual graphic. I'm also using pan gestures to let the user drag them around the screen.
So when you have a bunch of graphics of different sizes and shapes on top of one another, you may have the illusion that you are touching a view lower in the stack, but you're actually touching the top one, because some transparent part of it is hovering over your touch point.
I was wondering if anyone has ideas on how to listen for the pan gesture ONLY on the solid part of the image view, or anything else that would tighten up the user experience so that whatever the user touches is what gets selected. Thanks
Create your own subclass of UIImageView. In your subclass, override the pointInside:withEvent: method to return NO if the point is in a transparent part of the image.
Of course, you need to determine if a point is in a transparent part. :)
If you happen to have a CGPath or UIBezierPath that outlines the opaque parts of your image, you can do it easily using CGPathContainsPoint or -[UIBezierPath containsPoint:].
If you don't have a handy path, you will have to examine the image's pixel data. There are many answers on stackoverflow.com already that explain how to do that. Search for get pixel CGImage or get pixel UIImage.
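For illustration, here is a minimal sketch of such a subclass, assuming you already have a UIBezierPath (exposed here as a hypothetical opaquePath property) that outlines the opaque region of the image in the view's own coordinate space:

    #import <UIKit/UIKit.h>

    @interface OpaqueHitImageView : UIImageView
    // Outline of the non-transparent part of the image, in view coordinates.
    @property (nonatomic, strong) UIBezierPath *opaquePath;
    @end

    @implementation OpaqueHitImageView

    - (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
    {
        // Accept touches (and therefore the pan gesture) only inside the opaque outline.
        if (self.opaquePath) {
            return [self.opaquePath containsPoint:point];
        }
        // No path set: fall back to the default hit-testing behavior.
        return [super pointInside:point withEvent:event];
    }

    @end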
I'm working on a drawing app where the user should be able to both fill locked areas and simply draw lines as their finger moves.
Locked areas are provided as SVGs (paths), so I'm using the SVGKit library to render them on screen (as CAShapeLayers within a view). Then I basically set fillColor on the proper layer to fill it on touch.
However, for drawing lines Core Graphics comes into play (CGContextStrokePath), and those lines are always drawn below everything contained in the CALayer hierarchy, so basically below the filled areas.
What I'm trying to achieve is a system where the last applied drawing is always on top, so that applying a fill covers any lines in that area, and drawing a line afterwards shows it above the filled zone.
It seems that a CGLayer's z-index is lower than a CALayer's, so I need some other approach for my goal...
CAShapeLayer is designed to hold CGPath instances and render them, so I'd add a CAShapeLayer at the point in your layer hierarchy where you want it to appear, and set its path property to the path you want drawn.
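A minimal sketch of that idea, assuming canvasView is the view holding the SVGKit shape layers and strokePath is a UIBezierPath built from the user's touch points (both names are placeholders):

    CAShapeLayer *strokeLayer = [CAShapeLayer layer];
    strokeLayer.frame = canvasView.bounds;
    strokeLayer.path = strokePath.CGPath;
    strokeLayer.strokeColor = [UIColor blackColor].CGColor;
    strokeLayer.fillColor = nil;            // stroke only, no fill
    strokeLayer.lineWidth = 3.0;
    strokeLayer.lineCap = kCALineCapRound;

    // Adding the layer last puts it above the existing fill layers,
    // so the most recent drawing stays on top.
    [canvasView.layer addSublayer:strokeLayer];

A later fill then just needs its own layer added after this one (or moved above it with insertSublayer:above:) to keep the "last drawing wins" ordering.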
I'm trying to determine the region or area of an image that a user touches.
For example, I would like to have a CGRect to define the location of each letter so I can determine when the user touches the "O" in one of the following images…
My first idea was to use locationInView: to give me the absolute coordinates and adjust them to the relative size of the image view. However, since I'm using AspectFit for the content mode, the relative location of a given region changes with every screen size and orientation because of the image padding on the top/bottom and sides.
In a perfect solution, I would also like to embed this image in a scroll view so it can be zoomed, but I can forgo that if necessary.
I don't work with gestures and images very often, so I may be missing something obvious. Any help or ideas you provide will be greatly appreciated.
There is an easy way and a hard way. The easy way is to break this image down into smaller images, one for each letter. The problem with this approach is making them line up the way the single image did. If you take this approach, you can use CGRectContainsPoint to see which image you tapped, or you can make each of them a button or give each image its own gesture recognizer to identify what was pressed.
If your design requires you to use a single image, then you'll have to make a hit map describing each location. Each letter will require its own CGMutablePathRef, built with a CGPathMoveToPoint and a few CGPathAddLineToPoint calls to describe its shape and location. The function CGPathContainsPoint will tell you which one you hit.
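A minimal sketch of the hit-map approach; the coordinates here are made up, and imagePoint is assumed to be the touch location already converted from view coordinates into image coordinates (i.e. with the aspect-fit padding accounted for):

    // Hypothetical outline for the "O", in the image's own coordinate space.
    CGMutablePathRef oPath = CGPathCreateMutable();
    CGPathMoveToPoint(oPath, NULL, 120, 40);
    CGPathAddLineToPoint(oPath, NULL, 180, 40);
    CGPathAddLineToPoint(oPath, NULL, 180, 110);
    CGPathAddLineToPoint(oPath, NULL, 120, 110);
    CGPathCloseSubpath(oPath);

    if (CGPathContainsPoint(oPath, NULL, imagePoint, false)) {
        NSLog(@"Touched the O");
    }
    CGPathRelease(oPath);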
I've been trying to find a way to solve this problem, and haven't been able to find anything useful, so forgive me if this is a duplicate of something I couldn't find.
I have, essentially, a large, complicated image in the style of a stained-glass window, in a scroll view so that I can pan and zoom around it. Each of the individual segments of the window has some information associated with it. What I need to be able to do is tap on any of the segments and determine which segment was tapped so that I can display the information. What I'm not sure of is how to do the mapping between touch points and segments. Most of the segments aren't even regular polygon shapes, let alone orthogonal squares, so I can't think of a straightforward way to determine which segment I've tapped.
If anybody has any ideas as to how I might go about implementing this, it would be most appreciated!
Cheers
Put each individual segment in a different layer. Now you can hit-test to find out which layer was tapped. Your test must be designed so that if a layer was tapped but on a transparent area (i.e. outside its segment), the test falls through to the next layer behind it. The test succeeds when you discover a layer whose non-transparent region is under the tap. Since there is one segment per layer, that segment is the one corresponding to that layer.
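A minimal sketch of that fall-through test, assuming each segment lives in its own sublayer of a hypothetical container layer and the touch point is already in the container's coordinate space. Transparency is checked by rendering the single touched pixel into an alpha-only bitmap:

    // Returns the topmost segment layer whose non-transparent content lies
    // under `point` (in containerLayer's coordinates), or nil if the tap
    // only hit transparent areas.
    - (CALayer *)segmentLayerIn:(CALayer *)containerLayer atPoint:(CGPoint)point
    {
        // Walk the sublayers from topmost to bottommost.
        for (CALayer *layer in [containerLayer.sublayers reverseObjectEnumerator]) {
            CGPoint layerPoint = [containerLayer convertPoint:point toLayer:layer];
            if (![layer containsPoint:layerPoint]) {
                continue;   // not even inside this layer's bounds
            }

            // Render just the touched pixel into a 1x1 alpha-only bitmap.
            unsigned char alpha = 0;
            CGContextRef ctx = CGBitmapContextCreate(&alpha, 1, 1, 8, 1, NULL,
                                                     (CGBitmapInfo)kCGImageAlphaOnly);
            CGContextTranslateCTM(ctx, -layerPoint.x, -layerPoint.y);
            [layer renderInContext:ctx];
            CGContextRelease(ctx);

            if (alpha > 0) {
                return layer;   // hit the segment drawn in this layer
            }
            // Transparent here, so fall through to the layer behind it.
        }
        return nil;
    }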
On my table view, I want to display small circles of certain colors that will provide context for the information. The circles should be in the location that the image would usually be in (the left-hand side). Is there an easy way to do this? I was thinking that I could create a new image view and simply draw on it using some drawing routines. The problem is I don't know any of these drawing routines, or at least I don't know how to use them outside of a drawRect: method.
Well, the easiest way would be to include the different images in your bundle and conditionally set them to the cell's imageView's image property in cellForRowAtIndexPath:.
However, if you're looking for alternatives, you could subclass UITableViewCell, use CAShapeLayer to draw the circles programmatically, and add them to the cell's layer in whatever position you want.
Here's an example of how to use CAShapeLayer to draw a circle:
iPhone Core Animation - Drawing a Circle
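A minimal sketch of the subclass approach, with made-up names and a dot position that roughly matches where the default imageView sits:

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    @interface StatusDotCell : UITableViewCell
    @property (nonatomic, strong) CAShapeLayer *dotLayer;
    @end

    @implementation StatusDotCell

    - (instancetype)initWithStyle:(UITableViewCellStyle)style reuseIdentifier:(NSString *)reuseIdentifier
    {
        self = [super initWithStyle:style reuseIdentifier:reuseIdentifier];
        if (self) {
            _dotLayer = [CAShapeLayer layer];
            // A 10-point circle on the left-hand side of the cell.
            _dotLayer.path = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(15, 17, 10, 10)].CGPath;
            [self.contentView.layer addSublayer:_dotLayer];
        }
        return self;
    }

    @end

In tableView:cellForRowAtIndexPath: you would then set cell.dotLayer.fillColor to whatever CGColor conveys that row's status.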
I'm working on yet another drawing app with a canvas that is many times bigger than the screen.
I need some advice/direction on how to do that.
Basically what I want is to scroll around this big canvas, drawing only in the visible region.
I was thinking of two approaches:
Have 64x64 (or whatever) "tiles" to draw on, and then on scroll just load new tiles.
Record all user strokes (points), and on scroll calculate which ones are in the specified region and draw them, using only a screen-size canvas.
If it matters, I'm using cocos2d for the prototype.
Forget the 2000x2000 limitation; I have an open-source project that draws 18000 x 18000 NASA images.
I suggest you break this task into two parts. First, scrolling. As was suggested by CodaFi, when you scroll you will provide CATiledLayers. Each of those will be a CGImageRef that you create - a sub image of your really huge canvas. You can then easily support zooming in and out.
The second part is interacting with the user to draw or otherwise affect the canvas. When the user stops scrolling, you create an opaque UIView subclass, which you add as a subview of your main view, overlaying the view hosting the CATiledLayers. At the moment you need to show this view, you populate it with the proper information so it can draw that portion of your larger canvas properly (say, a circle at this point of such-and-such a color, etc.).
You would do your drawing in the drawRect: method of this overlay view. As the user takes actions that change the view, you call setNeedsDisplayInRect: as needed to force iOS to call your drawRect:.
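A minimal sketch of such an overlay view, assuming a hypothetical canvasModel object that knows how to draw any rectangle of the large canvas into a context:

    #import <UIKit/UIKit.h>

    @protocol CanvasModelDrawing
    - (void)drawRegion:(CGRect)rect inContext:(CGContextRef)ctx;
    @end

    @interface CanvasOverlayView : UIView
    // Hypothetical model object that can draw any portion of the big canvas.
    @property (nonatomic, strong) id<CanvasModelDrawing> canvasModel;
    @end

    @implementation CanvasOverlayView

    - (void)drawRect:(CGRect)rect
    {
        // Draw only the portion of the large canvas that iOS asked for.
        [self.canvasModel drawRegion:rect inContext:UIGraphicsGetCurrentContext()];
    }

    // Call this whenever the user changes something, passing the bounding
    // box of the change, so only that area is redrawn.
    - (void)userDidChangeRect:(CGRect)changedRect
    {
        [self setNeedsDisplayInRect:changedRect];
    }

    @end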
When the user decides to scroll, you need to update your large canvas model with whatever changes the user has made, then remove the opaque overlay and let the CATiledLayers draw the proper portions of the large image. This transition is probably the trickiest part of the process if you want to avoid visual glitches.
Suppose you have a large array of object definitions for your canvas. When you need to create a CGImageRef for a tile, you scan through it looking for overlap between each object's frame and the tile's frame, and draw only those items that are required for that tile.
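A minimal sketch of building one tile image from such an array; CanvasObject is a placeholder for whatever your model items are, each exposing a frame in canvas coordinates and able to draw itself:

    #import <UIKit/UIKit.h>

    @interface CanvasObject : NSObject            // placeholder model item
    @property (nonatomic, assign) CGRect frame;   // position on the big canvas
    - (void)drawInContext:(CGContextRef)ctx;      // draws itself in canvas coordinates
    @end

    // Builds the image for one tile; the caller owns the returned CGImageRef.
    - (CGImageRef)newImageForTileRect:(CGRect)tileRect
                              objects:(NSArray<CanvasObject *> *)objects
    {
        UIGraphicsBeginImageContextWithOptions(tileRect.size, YES, 0);
        CGContextRef ctx = UIGraphicsGetCurrentContext();

        // Shift the context so canvas coordinates map onto this tile.
        CGContextTranslateCTM(ctx, -tileRect.origin.x, -tileRect.origin.y);

        for (CanvasObject *object in objects) {
            // Skip anything that doesn't overlap this tile at all.
            if (CGRectIntersectsRect(object.frame, tileRect)) {
                [object drawInContext:ctx];
            }
        }

        UIImage *tileImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        return CGImageRetain(tileImage.CGImage);
    }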
Many mobile devices don't support textures over 2048x2048. So I would recommend:
make your big surface out of large 2048x2048 tiles
draw only the visible part of the currently visible tile to the screen
you will need to draw up to four tiles per frame when the user has scrolled to a corner where four tiles meet, but make sure you don't draw anything extra when only one tile is visible.
This is probably the most efficient way. 64x64 tiles are really too small, and will be inefficient since there will be a large repeated overhead for the "draw tile" calls.
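A minimal sketch of working out which 2048x2048 tiles overlap the visible area; visibleRect is assumed to be the currently visible region in canvas coordinates, and drawTileAtRow:column: is a placeholder for however you render one tile:

    - (void)drawVisibleTilesForRect:(CGRect)visibleRect
    {
        static const CGFloat kTileSize = 2048.0;

        // Indices of the tiles that the visible rectangle overlaps.
        NSInteger firstCol = (NSInteger)floor(CGRectGetMinX(visibleRect) / kTileSize);
        NSInteger lastCol  = (NSInteger)floor((CGRectGetMaxX(visibleRect) - 1) / kTileSize);
        NSInteger firstRow = (NSInteger)floor(CGRectGetMinY(visibleRect) / kTileSize);
        NSInteger lastRow  = (NSInteger)floor((CGRectGetMaxY(visibleRect) - 1) / kTileSize);

        // One iteration when the view fits inside a single tile,
        // four when the user is at a corner where four tiles meet.
        for (NSInteger row = firstRow; row <= lastRow; row++) {
            for (NSInteger col = firstCol; col <= lastCol; col++) {
                [self drawTileAtRow:row column:col];   // placeholder tile-drawing method
            }
        }
    }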
There is a tiling example in Apple's ScrollViewSuite. It doesn't have anything to do with the drawing part, but it might give you some ideas about how to manage the tiling side of things.
You can use CATiledLayer.
See WWDC2010 session 104
It might not work with cocos2d, though.