I am trying to draw a chainsaw (just the blade) with iOS Core Graphics, but I'm getting stuck at one point. So far I've drawn something like this:
It looks ugly but I was just trying to see if I can draw one, and then do proper finishing later. The issue is that I can draw the teeth on the top and bottom flat sides but I have no idea how to draw the teeth around the curved corners. I've drawn the ellipse myself so I know where the coordinates are for the flat surfaces, but I don't know how to calculate the round corners. My questions are:
Is there an easy way to draw the teeth on the rounded corners?
Is there a totally different and much better way to draw something like this in Core Graphics?
The last question is regarding animating the chainsaw. I was hoping that once I can finish the drawing, I can use a timer and redraw the teeth with an offset, then alternate between the two drawings to give a moving effect. Would that be the right way to go, or is such animation not worth doing in Core Graphics, and something like an animated GIF would be a better way?
I am new to Core Graphics, so I don't know many of the details. I can imagine that there are multiple ways to achieve what I'm doing, but what I mean when I say "is it the right way to do this" is: is it one of the right ways to do this, or am I going down a completely wrong path? Thanks!
(1) Drawing teeth around the rounded corner is a matter of identifying the points of the teeth. Consider that the inner-facing points (the bases) of three teeth will fall along the rounded corner at the angles π/8, 3π/8, 5π/8 and 7π/8.
The outer-facing points (the tips) of those same three teeth will fall on a circle concentric with the rounded corner, with a larger radius (larger by the height of a tooth). Those will fall at π/4, π/2 and 3π/4. The same idea can be mirrored to the rounded corner on the left side, where x < 0 (or maybe not; maybe that side will be the chainsaw's rectangular motor).
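Those angles can be computed generically. Here's a minimal Python sketch of just the geometry (the function name, parameters and three-teeth layout are illustrative; in the app you would feed these points into a CGPath):

```python
import math

def corner_tooth_points(cx, cy, r, tooth_height, n_teeth=3):
    """Zigzag of (x, y) points for teeth wrapped around a rounded corner
    centered at (cx, cy): bases on the radius-r corner circle, tips on a
    concentric circle of radius r + tooth_height."""
    spacing = math.pi / (n_teeth + 1)        # angular gap between tips
    tip_r = r + tooth_height
    points = []
    for k in range(n_teeth):
        base = spacing / 2 + k * spacing     # pi/8, 3pi/8, 5pi/8 for n=3
        tip = spacing * (k + 1)              # pi/4, pi/2, 3pi/4 for n=3
        points.append((cx + r * math.cos(base), cy + r * math.sin(base)))
        points.append((cx + tip_r * math.cos(tip), cy + tip_r * math.sin(tip)))
    closing = spacing / 2 + n_teeth * spacing    # final base at 7pi/8
    points.append((cx + r * math.cos(closing), cy + r * math.sin(closing)))
    return points

# 3 teeth on a corner of radius 10, with teeth 2 units high:
pts = corner_tooth_points(0.0, 0.0, 10.0, 2.0)
# 4 base points interleaved with 3 tips -> a 7-point zigzag
```

Connecting the zigzag with straight lines gives the toothed arc; the same function with a different center handles the left corner.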
(2) I can't think of a totally different, much better way to do the drawing, except to point out that it could be done more realistically with an image (at least the static part).
(3) I probably wouldn't use a timer explicitly. I think the right way to go would be to place the chain on its own CAShapeLayer. Have two (or more) chain paths, offset by some small phase shift in the placement of the teeth points. Add a repeating CABasicAnimation to the layer which alternates between the two paths.
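To sketch what "offset by some small phase shift" means, here is a pure-Python illustration (names invented; the real code would build two CGPaths from these positions and animate the layer's path between them):

```python
def tooth_tip_xs(edge_length, tooth_width, phase):
    """x positions of tooth tips along a flat edge, shifted by `phase`
    (a fraction of one tooth width)."""
    xs = []
    x = phase * tooth_width
    while x < edge_length:
        xs.append(x)
        x += tooth_width
    return xs

# Two paths half a tooth out of phase; alternating between them
# repeatedly reads as the chain advancing along the blade.
path_a = tooth_tip_xs(100, 10, 0.0)   # tips at 0, 10, ..., 90
path_b = tooth_tip_xs(100, 10, 0.5)   # tips at 5, 15, ..., 95
```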
I'm OK with iOS drawing. I've had no problem drawing circles, lines, etc. onto a view. In my latest project I would like to restrict my drawing to an irregular area on my view. Basically I have a paper-doll outline (JPG) of a person. I want to be able to draw within that outline but have drawing stop when I reach the border. I'm honestly not really sure what my approach could be to accomplish this. Do I have to do hit testing to see if I'm within this irregular region? I don't think that is realistic if I start with a JPG. Do I need to use a "special" color outside my region and test for that color under my brush? I'm worried that won't be accurate, as I'm using a big fat fuzzy brush to draw.
Is it possible to restrict drawing within an irregular boundary?
Of course it's possible!
If you are drawing with CoreGraphics (Quartz), you could use a clipping path, or a bitmap mask.
If you are using CoreAnimation, then try a mask layer.
(It sounds like a bitmap mask is what you want, since you're talking about using an arbitrary JPEG image.)
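To show what a bitmap mask does per pixel, here is a rough Python sketch (in the real code this would be a single CGContextClipToMask call, with the mask derived from your JPEG; all names here are made up):

```python
def stamp_brush(canvas, mask, cx, cy, radius, value):
    """Stamp a square brush of side 2*radius + 1 onto `canvas`, but only
    where `mask` is True; pixels outside the mask are left untouched,
    which is the per-pixel effect of clipping to a bitmap mask."""
    h, w = len(canvas), len(canvas[0])
    for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
            if mask[y][x]:
                canvas[y][x] = value

# 4x4 canvas whose mask (the "doll outline") covers only the left half:
mask = [[x < 2 for x in range(4)] for _ in range(4)]
canvas = [[0] * 4 for _ in range(4)]
stamp_brush(canvas, mask, 1, 1, 1, 9)   # the brush overlaps the border
```

Even a fat, fuzzy brush is handled correctly, because the clip is applied per destination pixel rather than per brush stroke.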
As a learning project I'm attempting to re-create the procedurally generated hills from Tiny Wings using the HTML5 canvas. My goal is to generate textures like the hill in this picture:
Thus far, I have a seamless repeating texture that I've generated. It looks a little like this:
As you can see, this is part way there, however in Tiny Wings, the sinusoid patterns are often rotated on an angle. My question is this: Is it possible to take a seamlessly repeating pattern, rotate it, then clip it to a rectangle and still have a seamlessly repeating pattern?
I originally thought this was trivial: that any rotated repeating pattern clipped to its original dimensions would still repeat. However, my investigations led me to believe this is not the case.
If what I'm describing isn't possible, how would I use a rotated version of the image I have generated as the pattern / fill for a shape? So far the only solution I can think of is to use a canvas clip region. Are there any other ways to accomplish this?
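A quick numeric check, in pure Python with a stand-in periodic pattern, backs up that investigation: shifting the rotated pattern by one tile width only lines up again for quarter-turn rotations (where the rotated period vectors still land on the tile grid), so a generic rotation breaks the repeat.

```python
import math

def pattern(x, y):
    """Unit-periodic stand-in for the generated texture."""
    return math.sin(2 * math.pi * x) * math.cos(2 * math.pi * y)

def rotated(x, y, theta):
    """Sample the pattern rotated by theta about the origin."""
    c, s = math.cos(theta), math.sin(theta)
    return pattern(c * x + s * y, -s * x + c * y)

def tiles_seamlessly(theta, eps=1e-9):
    """True if shifting the rotated pattern by one tile width leaves
    every probe sample unchanged, i.e. the clipped tile still repeats."""
    probes = [(0.13, 0.42), (0.71, 0.29), (0.55, 0.91)]
    return all(abs(rotated(x + 1.0, y, theta) - rotated(x, y, theta)) < eps
               for x, y in probes)

tiles_seamlessly(math.pi / 2)   # quarter turn: still seamless
tiles_seamlessly(math.pi / 6)   # 30 degrees: the repeat breaks
```

So for an arbitrary angle you do need the clip-region approach rather than a rotated fill pattern.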
Related Questions:
html5 canvas shapes fill
HTML5 Canvas - Fill circle with image
To achieve what is in the image from Tiny Wings using the shape (texture) you supplied:
draw your texture shape vertically to the screen (it looks like it has been skewed, not rotated)
apply a few semi-transparent hill-shaped lines with a wide stroke width to create the Phong-style shading effect
clip the texture shape with the shape of the hill
apply a semi-transparent grunge texture to the whole canvas
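The clipping step boils down to testing each texture pixel against the hill profile. A tiny Python sketch of the idea (all parameters invented; y is measured upward here for clarity, whereas the canvas y axis points down):

```python
import math

def hill_height(x, amp=20.0, base=50.0, wavelength=100.0):
    """The clip shape: a sinusoidal hill profile (illustrative numbers)."""
    return base + amp * math.sin(2 * math.pi * x / wavelength)

def keep_texture_pixel(x, y):
    """Keep texture pixels at or below the hill surface only."""
    return y <= hill_height(x)

keep_texture_pixel(25.0, 60.0)   # under the crest: kept
keep_texture_pixel(75.0, 40.0)   # above the trough: clipped away
```

On the canvas itself you'd express the same thing as a clip path traced along the hill curve.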
I have an application which requires that a solid black outline be drawn around a partly-transparent UIImage. Not around the frame of the image, but rather around all the opaque parts of the image itself. I.e., think of a transparent PNG with an opaque white "X" on it -- I need to outline the "X" in black.
To make matters trickier, AFTER the outline is drawn, the opacity of the original image will be adjusted, but the outline must remain opaque -- so the outline I generate has to include only the outline, and not the original image.
My current technique is this:
Create a new UIView which has the dimensions of the original image.
Duplicate the UIImage 4 times and add the duplicates as subviews of the UIView, with each UIImage offset diagonally from the original location by a couple pixels.
Turn that UIView into an image (via the typical UIGraphicsGetImageFromCurrentImageContext method).
Using CGImageMaskCreate and CGImageCreateWithMask, subtract the original image from this new image, so only the outline remains.
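The four steps above can be sketched on a boolean alpha grid, as a rough Python model of what the view/mask pipeline computes (the real work happens in UIKit/Core Graphics, of course):

```python
def outline_mask(alpha, dx=1, dy=1):
    """Union of the shape shifted to the four diagonals, minus the
    original shape: only the outline ring remains."""
    h, w = len(alpha), len(alpha[0])
    out = [[False] * w for _ in range(h)]
    for ox, oy in ((dx, dy), (dx, -dy), (-dx, dy), (-dx, -dy)):
        for y in range(h):
            for x in range(w):
                sy, sx = y - oy, x - ox
                if 0 <= sy < h and 0 <= sx < w and alpha[sy][sx]:
                    out[y][x] = True      # steps 1-3: stack the offsets
    for y in range(h):
        for x in range(w):
            if alpha[y][x]:
                out[y][x] = False         # step 4: subtract the original
    return out

# A single opaque pixel in the middle of a 3x3 image:
alpha = [[False] * 3 for _ in range(3)]
alpha[1][1] = True
ring = outline_mask(alpha)
# its four diagonal neighbours form the outline; the centre is removed
```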
It works. Even with only the 4 offset images, the result looks quite good. However, it's horribly inefficient, and causes a good solid 4-second delay on an iPhone 4.
So what I need is a nice, speedy, efficient way to achieve the same thing, which is fully supported by iOS 4.0.
Any great ideas? :)
I would like to point out that whilst a few people have suggested edge detection, this is not an appropriate solution. Edge detection is for finding edges within image data where there is no obvious exact edge representation in the data.
For you, edges are better defined: you are looking for a well-defined outline. An edge in your case is any fully transparent pixel which is next to a pixel that is not fully transparent, simple as that! Iterate through every pixel in the image and set it to black if it fulfils these conditions.
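A minimal Python sketch of that per-pixel rule, on a toy alpha grid (the real implementation would walk the CGImage's raw bytes):

```python
def alpha_outline(alpha, threshold=0):
    """Mark every fully transparent pixel that touches a not-fully-
    transparent pixel; those are the pixels to paint black."""
    h, w = len(alpha), len(alpha[0])
    outline = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if alpha[y][x] > threshold:
                continue                  # not transparent: skip
            neighbours = ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
            if any(0 <= ny < h and 0 <= nx < w and alpha[ny][nx] > threshold
                   for ny, nx in neighbours):
                outline[y][x] = True
    return outline

# A lone opaque pixel (alpha 255) surrounded by transparency:
alpha = [[0, 0, 0],
         [0, 255, 0],
         [0, 0, 0]]
edges = alpha_outline(alpha)
```

One pass over the pixels, so it should be far cheaper than compositing and masking whole views.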
Alternatively, for an anti-aliased result, get a boolean representation of the image, and pass over it a small anti-aliased circle kernel. I know you said custom filters are not supported, but if you have direct access to image data this wouldn't be too difficult to implement by hand...
Cheers, hope this helps.
For the sake of contributing new ideas:
A variant on your current implementation would use CALayer's support for shadows, which it calculates from the actual pixel contents of the layer rather than merely its bounding rectangle, and for which it uses the GPU. You can try amping up the shadowOpacity to some massive value to try to eliminate the feathering; failing that, you could render to a suitable CGContext, take out the alpha channel only and manually process it to apply a threshold test on alpha values, pushing them either to fully opaque or fully transparent.
You can achieve that final processing step on the GPU even under ES 1 in a variety of ways. You'd use the alpha test to apply the actual threshold. You could then, say, prime the depth buffer to 1.0, disable colour output and the depth test, draw the version with the shadow at a depth of 0.5, draw the version without the shadow at a depth of 1.0, then enable colour output and depth tests and draw a solid black full-screen quad at a depth of 0.75. So it's like using the depth buffer to emulate a stencil (since the GPUs Apple used before the ES 2 capable devices didn't support a stencil buffer).
That, of course, assumes that CALayer shadows appear outside of the compositor, which I haven't checked.
Alternatively, if you're willing to limit your support to ES 2 devices (everything 3GS+) then you could upload your image as a texture and do the entire process over on the GPU. But that would technically leave some iOS 4 capable devices unsupported so I assume isn't an option.
You just need to implement an edge detection algorithm, but instead of using brightness or color to determine where the edges are, use opacity. There are a number of different ways to go about that. For example, you can look at each pixel and its neighbors to identify areas where the opacity crosses whatever threshold you've set. Whenever you need to look at every pixel of an image in Mac OS X or iOS, think Core Image. There's a helpful series of blog posts, starting with this one, that looks at implementing a custom Core Image filter; I'd start there to build an edge-detection filter.
Instead of using a UIView, I suggest just pushing a bitmap context, like the following:
UIGraphicsBeginImageContextWithOptions(image.size, NO, 0.0);
// Draw your image 4 times and mask it however you like; you can just
// copy & paste your current drawing code here.
....
UIImage *outlinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This will be much faster than going through a UIView.
I want to make an image appear distorted as if raindrops are on the screen. Image of water droplet effect over check pattern http://db.tt/fQkx9bzh
Any idea how I could do this using OpenGL or CoreImage?
I am able to get an image with the depth of the raindrop shapes, if that helps. Otherwise I'm really not sure how to do this, especially as the drops are not perfectly circular, and I have almost no experience with OpenGL or Core Image (although I can set up the buffers and stuff and do some simple drawing).
I'd use the elevation of the drop (that is, distance from the surface) as control texture for a Bulge effect. Use the barycenter of the drop as the centerpoint for the effect.
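A rough Python sketch of such a bulge warp (the falloff formula and names are illustrative, not any particular built-in filter's math; the per-drop elevation would modulate `strength`):

```python
import math

def bulge_sample(x, y, cx, cy, radius, strength):
    """Where to sample the source image for output pixel (x, y): inside
    the drop, pull the sample point toward the barycenter (cx, cy), so
    the image under the drop is magnified like a lens."""
    dx, dy = x - cx, y - cy
    d = math.hypot(dx, dy)
    if d >= radius or d == 0.0:
        return x, y                        # outside the drop: unchanged
    scale = 1.0 - strength * (1.0 - d / radius)
    return cx + dx * scale, cy + dy * scale

bulge_sample(12.0, 10.0, 10.0, 10.0, 5.0, 0.5)   # pulled toward centre
bulge_sample(30.0, 10.0, 10.0, 10.0, 5.0, 0.5)   # outside: unchanged
```

Since the drops aren't circular, you'd run this per drop with its own barycenter and use the elevation map (rather than a hard radius) to decide how strongly each pixel is displaced.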