I created an app where your goal is to dodge rocks, but when the character hits the frame of the UIImage of one of the rock obstacles (it is an irregularly shaped rock), it gets destroyed. This creates the problem of dying when you hit the rectangular frame of the image and not the actual rock. Is there a way to make the UIImage not have a square-shaped frame? Or do you know of any other way to solve this problem?
Related
I'm trying to make a ViewController that presents info from a webpage like this:
However, I'm confused about one thing. How did they get the imageView to display an image that's cut off at the corner, i.e. not rectangular? Do you think they created that player card in Photoshop and used it as the background for the imageView's image, or did they create it programmatically?
I wonder because the image is behind the picture of the bear, so if they created the background in Photoshop, how would they get the image behind the bear head? They can't have just created the card with the player's picture as part of it and loaded the whole thing as one image, because I'm sure they pull the player info and picture from the web so they can have a card for every player, even if they trade or acquire a new player mid-season, without having to update the app (and add the finished image to images.xcassets).
This can be composed from two CALayers at runtime. Put the picture on the bottom layer; the picture can come from anywhere (the web, the bundle, etc.), so the image source can be dynamic.
Put another CALayer on top, with the frame rendered in opaque colors and a transparent cut-out for the picture in the middle.
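A rough sketch of that two-layer composition might look like this (playerPhoto, cardFrameImage and containerView are placeholder names for your own image sources and host view, not anything from the original post):

CALayer *photoLayer = [CALayer layer];
photoLayer.frame = CGRectMake(0, 0, 200, 280);
photoLayer.contents = (__bridge id)playerPhoto.CGImage;    // image can come from the web or the bundle

CALayer *frameLayer = [CALayer layer];
frameLayer.frame = photoLayer.frame;
frameLayer.contents = (__bridge id)cardFrameImage.CGImage; // opaque frame with a transparent cut-out

[containerView.layer addSublayer:photoLayer];
[containerView.layer addSublayer:frameLayer];              // the frame sits on top of the photo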
There are a bunch of ways to do this. A simple and flexible way is to create a CAShapeLayer that's the same size as the image view, with its origin at 0,0, and add it as the UIImageView's layer's mask.
You'd create a filled UIBezierPath that maps out the part of the image you want to show, and install the bezier path's CGPath into the mask layer's path property.
The result would be that the image view is cropped so that only the part inside the shape is drawn.
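For example, something along these lines (imageView and the cut-off corner inset are placeholders; adjust the path to the shape you actually want to show):

UIBezierPath *clipPath = [UIBezierPath bezierPath];
[clipPath moveToPoint:CGPointZero];
[clipPath addLineToPoint:CGPointMake(CGRectGetWidth(imageView.bounds), 0)];
[clipPath addLineToPoint:CGPointMake(CGRectGetWidth(imageView.bounds),
                                     CGRectGetHeight(imageView.bounds) - 40)]; // cut-off corner
[clipPath addLineToPoint:CGPointMake(0, CGRectGetHeight(imageView.bounds))];
[clipPath closePath];

CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = imageView.bounds;   // same size as the image view, origin at 0,0
maskLayer.path = clipPath.CGPath;     // the filled path defines what stays visible
imageView.layer.mask = maskLayer;     // everything outside the shape is clipped away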
I am drawing, or should I say "stamping", an image using the CGContextDrawImage method in Objective-C. The image gets drawn at points that are determined by touch movements. Basically I'm stamping an image to create a "brush" effect. Looks something like this:
I am happy with the results; however, when the touch movement slows down, the image gets drawn on top of itself and ruins the alpha value I want. Is there a blend technique in which the opacity of the image would not stack on top of itself? Or should I just look at changing my points so that they are not so close together when the movement slows down?
Thanks in advance.
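For what it's worth, a minimal sketch of the second option mentioned above, spacing the stamps out so they never land too close together, could look like this (lastStampPoint, stampSize and stampImage are assumed names, not from the original code):

// Only stamp when the touch has moved far enough since the last stamp,
// so stamps don't pile up when the finger slows down.
static const CGFloat kMinStampDistance = 10.0; // tune to taste

CGFloat dx = currentPoint.x - lastStampPoint.x;
CGFloat dy = currentPoint.y - lastStampPoint.y;
if (hypot(dx, dy) >= kMinStampDistance) {
    CGRect stampRect = CGRectMake(currentPoint.x - stampSize / 2.0,
                                  currentPoint.y - stampSize / 2.0,
                                  stampSize, stampSize);
    CGContextDrawImage(context, stampRect, stampImage.CGImage);
    lastStampPoint = currentPoint;
}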
I am trying to develop a drawing app on iOS; the drawing features are drawing a line, straight lines, and shapes (squares & ellipses). I can draw them using CGContext calls such as CGContextMoveToPoint, CGContextAddEllipseInRect and CGContextAddRect, but the main catch is that I need to make these drawings movable. My approach was to draw the lines/shapes one by one, with each drawing accumulated into a main UIImage so they are rendered together, just like in this tutorial: http://www.raywenderlich.com/18840/how-to-make-a-simple-drawing-app-with-uikit.
How can I make the lines/shapes that the user draws on the UIImage movable? I am thinking that I can make a UIImageView for each drawing and use the UIImageViews to be movable. Will this be a good approach or can anybody suggest a better approach to achieving this?
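If you do go the one-view-per-drawing route, a minimal sketch of making such a view draggable could look like this (shapeView is a placeholder for whatever view renders a single line or shape):

- (void)makeShapeMovable:(UIView *)shapeView {
    UIPanGestureRecognizer *pan =
        [[UIPanGestureRecognizer alloc] initWithTarget:self
                                                action:@selector(dragShape:)];
    [shapeView addGestureRecognizer:pan];
}

- (void)dragShape:(UIPanGestureRecognizer *)gesture {
    // Move the shape's own view; the rest of the canvas is untouched.
    CGPoint translation = [gesture translationInView:gesture.view.superview];
    gesture.view.center = CGPointMake(gesture.view.center.x + translation.x,
                                      gesture.view.center.y + translation.y);
    [gesture setTranslation:CGPointZero inView:gesture.view.superview];
}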
I'm working on an educational app involving complex scripts, in which I paint parts of different 'letters' different colours. UILabel is out of the question, so I've drilled down into Core Text and am having a surprisingly successful go of painting glyphs in CALayers.
What I haven't managed to do is animate the size of my custom drawn text. Basically I have text on 'tiles' (CALayers) that move around the screen. The moving around is okay, but now I want to zoom in on the ones that users press.
My idea is to cache a 'full resolution' tile and then draw it to scale while animating the image's bounds. So far I've tried to draw, cache, and then redraw such a tile in the following way:
// Render the tile into an offscreen image context and cache the result.
UIGraphicsBeginImageContext(CGSizeMake(50, 50));
CGContextRef context = UIGraphicsGetCurrentContext();
//do some drawing...
myTextImage = UIGraphicsGetImageFromCurrentImageContext(); // cache the rendered tile
UIGraphicsEndImageContext();
Then in [CALayer drawInContext:(CGContextRef)context],
I call [myTextImage drawAtPoint:CGPointZero].
When I run the app, the console shows <Error>: CGContextDrawImage: invalid context 0x0. Meanwhile, I can perfectly well continue to draw text into the context in the same method, even after that error is logged.
So I have two questions: (1) Why isn't this working? Should I be using a CGBitmapContext instead?
And more important: (2) Is there a smarter way of solving the overall problem? Maybe storing my text as paths and then somehow getting CAAnimation to draw it at different scales as the bounds of the enclosing CALayer change?
Okay, this is much easier than I thought. Simply draw the text in the drawInContext: of a CALayer inside of a UIView. Then animate the view using the transform property, and the text will shrink or expand as you like.
Just pay attention to scaling so that the text doesn't get blocky. The easiest way to do that is to make sure the transform scale factors do not go above 1. In other words, make the 'default' 1:1 size of your UIView the largest size you ever want to display it.
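A rough sketch of that setup, assuming a TileLayer class and a tileView hosting it (both names made up here). Note that UIKit drawing inside drawInContext: needs the context pushed with UIGraphicsPushContext, otherwise calls like drawAtPoint: see no current context, which is also where the 'invalid context 0x0' message tends to come from:

@interface TileLayer : CALayer
@end

@implementation TileLayer
- (void)drawInContext:(CGContextRef)ctx {
    UIGraphicsPushContext(ctx);   // make ctx the current UIKit context so drawAtPoint: etc. work
    // ...draw the glyphs / attributed text here at the full 1:1 size...
    UIGraphicsPopContext();
}
@end

// Tiles are laid out at a reduced scale (e.g. 0.5); tapping one animates back to 1:1,
// so the scale factor never goes above 1 and the text stays sharp.
[UIView animateWithDuration:0.3 animations:^{
    tileView.transform = CGAffineTransformIdentity;
}];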
I have this app with which one can draw basic shapes like rectangles, ellipses, circles, text etc.
I also allow free-form drawing, which is stored as a set of points, on the canvas.
Also a user can resize and move around these objects by operating on the selection handles that appear when an object is selected.
In addition the user should be able to zoom and pan the canvas.
I need some inputs on how to efficiently implement this drawing functionality.
I have the following things in mind:
Use UIView's setNeedsDisplayInRect: and drawRect:
Have a UIView for the main canvas and, for each inserted object, invalidate the corresponding rect and redraw all the objects that intersect that rect in the UIView's drawRect method.
Have a UIView and use CALayer?
Everyone keeps mentioning CALayer, but I don't have much of an idea about it, so before I venture into this I wanted quick input on whether this route is worth taking,
for example: https://developer.apple.com/library/ios/#qa/qa1708/_index.html
Have a UIImageView as the canvas and, when drawing each object, do the following:
Draw the object into an offscreen CGContext: basically, create a new context with UIGraphicsBeginImageContext, draw the shape, extract the image out of this context, and use it as the source of the UIImageView's image property. But here, how do I invalidate only a part of the UIImageView so that only that area gets refreshed?
Could you please suggest what is the best approach?
Is there any other efficient way to get this done?
Thanks.
Using a UIImage is more efficient for rendering multiple objects, but using a CALayer is more efficient when moving and modifying a single object because you don't have to touch the other objects. So I think the best approach is to use a UIImage for general drawing and a CALayer for the shape that is being modified. In other words:
use a CALayer to draw the shape being added or modified, but don't draw it on the UIImage
use a UIImage to draw the other shapes
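As a rough sketch of that split, assuming a canvasImageView for the flattened drawing and an activeShapeLayer property for the shape being edited (both names are made up here):

- (void)beginEditingShapeWithPath:(UIBezierPath *)path {
    // The shape being added/modified lives in its own CAShapeLayer on top of the canvas.
    self.activeShapeLayer = [CAShapeLayer layer];
    self.activeShapeLayer.frame = self.canvasImageView.bounds;
    self.activeShapeLayer.path = path.CGPath;
    self.activeShapeLayer.strokeColor = [UIColor blackColor].CGColor;
    self.activeShapeLayer.fillColor = nil;
    [self.canvasImageView.layer addSublayer:self.activeShapeLayer];
}

- (void)commitActiveShape {
    // Flatten the finished shape into the canvas image along with everything drawn before it.
    UIGraphicsBeginImageContextWithOptions(self.canvasImageView.bounds.size, NO, 0);
    [self.canvasImageView.image drawAtPoint:CGPointZero];
    [self.activeShapeLayer renderInContext:UIGraphicsGetCurrentContext()];
    self.canvasImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self.activeShapeLayer removeFromSuperlayer];
    self.activeShapeLayer = nil;
}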
OpenGL is still the most efficient solution of all, but don't bother with it if you don't have too many objects to draw.
If you want to draw polygons, you'll have to use the Quartz framework and base your drawing methods on CALayer. It doesn't really matter which view you put your CALayers in, UIImageView or UIView. I'd say UIView, since you won't be needing UIImageView's properties or methods for drawing.
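For a single polygon, the CALayer route could be as simple as this sketch (hostView is a placeholder for whichever view ends up hosting the layers):

UIBezierPath *triangle = [UIBezierPath bezierPath];
[triangle moveToPoint:CGPointMake(50, 0)];
[triangle addLineToPoint:CGPointMake(100, 90)];
[triangle addLineToPoint:CGPointMake(0, 90)];
[triangle closePath];

CAShapeLayer *polygonLayer = [CAShapeLayer layer];
polygonLayer.path = triangle.CGPath;
polygonLayer.fillColor = [UIColor redColor].CGColor;
[hostView.layer addSublayer:polygonLayer];

// Moving the polygon later only touches this one layer:
polygonLayer.position = CGPointMake(200, 150);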