Tim Omernick from ngmoco recently gave a talk at Stanford and demonstrated an interesting fireworks app for the iPhone; he posted the code for it here:
gamemakers.ngmoco.com/post/111712416/stanford-university-and-apple-were-kind-enough-to
I can get the app to run when I specify the EAGLView's parent class as UIView in its header file. However, I want to display the fireworks over an image, and when I specify the parent class as UIImageView instead, the background picture I set seems to hide the fireworks animation.
Basically, I want to be able to display a UIImage and an EAGLView at the same time. Is this possible? Thanks.
I suggest you take the time to learn some OpenGL. It's pretty basic once you understand how it works.
Here's an overview of what you'll need to do (a rough code sketch follows the list).
Create a texture (must be power-of-two size) to hold your image
Upload image pixel data to texture
On each frame:
Bind the texture
Take a quad made out of two triangles and corresponding texture coords and render it
Render the fireworks as before
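Roughly, in Swift, those steps could look like this. The names makeBackgroundTexture, drawBackground and the 512-pixel potSize are just placeholders, and the sketch assumes the OpenGL ES 1.1 fixed-function pipeline the fireworks sample uses, with a projection where the visible screen maps to ±1:

```swift
import UIKit
import OpenGLES

// Hypothetical helper: draw a UIImage into a power-of-two RGBA buffer and
// upload it as a GL ES texture.
func makeBackgroundTexture(from image: UIImage, potSize: Int = 512) -> GLuint {
    var pixels = [UInt8](repeating: 0, count: potSize * potSize * 4)
    pixels.withUnsafeMutableBytes { buffer in
        let ctx = CGContext(data: buffer.baseAddress,
                            width: potSize, height: potSize,
                            bitsPerComponent: 8, bytesPerRow: potSize * 4,
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
        ctx.draw(image.cgImage!, in: CGRect(x: 0, y: 0, width: potSize, height: potSize))
    }

    var texture: GLuint = 0
    glGenTextures(1, &texture)
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA,
                 GLsizei(potSize), GLsizei(potSize), 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), pixels)
    return texture
}

// Hypothetical per-frame call: bind the texture, draw a full-screen quad
// (two triangles as a strip), then render the fireworks on top as before.
func drawBackground(_ texture: GLuint) {
    let vertices: [GLfloat]  = [-1, -1,  1, -1,  -1, 1,  1, 1]
    let texCoords: [GLfloat] = [ 0,  1,  1,  1,   0, 0,  1, 0]

    glEnable(GLenum(GL_TEXTURE_2D))
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glEnableClientState(GLenum(GL_VERTEX_ARRAY))
    glEnableClientState(GLenum(GL_TEXTURE_COORD_ARRAY))
    vertices.withUnsafeBytes { v in
        texCoords.withUnsafeBytes { t in
            glVertexPointer(2, GLenum(GL_FLOAT), 0, v.baseAddress)
            glTexCoordPointer(2, GLenum(GL_FLOAT), 0, t.baseAddress)
            glDrawArrays(GLenum(GL_TRIANGLE_STRIP), 0, 4)
        }
    }
    glDisable(GLenum(GL_TEXTURE_2D))
}
```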
I'm trying to create an iOS recoloring app (this is my reference), and I need to know how to recolor some portion of the image when the user taps on a given area. All the loaded pictures will be black and white initially.
Is there any prebuilt library for this? If not, which graphics framework should I use?
Any help will be appreciated.
If what you are looking for is adding/replacing the colour within a certain shape, and edges are really important (as in the example), then you should be looking into vectorised drawing.
What this means is that every shape in your image would have an actual object representation in your code, and you could easily interact with that object to do whatever you want (e.g. tap gestures to change colour, zoom, etc.).
This, however, means that you can't simply use .jpeg images; you need images in a vector format, such as .svg or CorelDRAW's .cdr.
As a reference, check out SVGKit, which is an excellent library for working with SVG images.
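To get a feel for the vector approach without committing to a library yet, here is a minimal sketch (ColoringView and addShape are made-up names) in which every region is a CAShapeLayer and a tap recolours whichever shape contains the touch point:

```swift
import UIKit

class ColoringView: UIView {
    private var shapes: [(path: UIBezierPath, layer: CAShapeLayer)] = []

    // Add one vector region; initially drawn black and white (white fill, black stroke).
    func addShape(_ path: UIBezierPath) {
        let shapeLayer = CAShapeLayer()
        shapeLayer.path = path.cgPath
        shapeLayer.fillColor = UIColor.white.cgColor
        shapeLayer.strokeColor = UIColor.black.cgColor
        shapeLayer.lineWidth = 2
        layer.addSublayer(shapeLayer)
        shapes.append((path, shapeLayer))
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        // Recolour the first shape whose path contains the tapped point.
        if let hit = shapes.first(where: { $0.path.contains(point) }) {
            hit.layer.fillColor = UIColor.red.cgColor   // or the currently selected colour
        }
    }
}
```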
I need to create a button like this
and then change the background programmatically like this
and like this
I cannot use images for the different states of the button because the text on it is different each time.
Where should I start? I tried to understand Core Graphics and Core Animation, but there are so few examples and tutorials that my attempts didn't get me anywhere.
You can, and should, use an image for this. UIImage has a method, resizableImageWithCapInsets, that creates resizable images. You feed it a minimum-sized image and the system stretches it to fit the desired size. It looks like your button is fixed in height, which is good, since you can't do smooth gradients with this technique.
UIButtons are composed of a background image and title text, so you can use an image for the background shapes (setBackgroundImage(_:forState:)), and then change the text using setTitle(_:forState:).
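For example (Swift; "button_bg" is a hypothetical asset with the fixed height you need):

```swift
import UIKit

let button = UIButton(type: .custom)
// Stretchable background: the 10pt end caps stay fixed, the middle stretches to fit.
let background = UIImage(named: "button_bg")?   // hypothetical asset, fixed height
    .resizableImage(withCapInsets: UIEdgeInsets(top: 0, left: 10, bottom: 0, right: 10))
button.setBackgroundImage(background, for: .normal)
button.setTitle("Any text you like", for: .normal)
```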
However, you can still use Core Graphics for this, and there are benefits to doing so, such as the fact that it reduces the number of rendered assets in your app bundle. For this, probably the best approach is to create a CAShapeLayer with a path constructed from a UIBezierPath, and then render it into a graphics context. From this context, you can pull out a UIImage instance and treat it just the same as an image loaded from a JPEG or PNG asset (that is, set it as the button's background image using setBackgroundImage(_:forState:)).
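A sketch of that Core Graphics route, using a plain rounded rectangle (the helper name makeButtonBackground is made up); the returned UIImage can be handed to setBackgroundImage just like an asset from the bundle:

```swift
import UIKit

// Hypothetical helper: draw a rounded-rect button background instead of shipping a PNG.
func makeButtonBackground(size: CGSize, fill: UIColor) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(size, false, 0)   // 0 = current screen scale
    defer { UIGraphicsEndImageContext() }

    let shape = CAShapeLayer()
    shape.frame = CGRect(origin: .zero, size: size)
    shape.path = UIBezierPath(roundedRect: shape.frame, cornerRadius: 8).cgPath
    shape.fillColor = fill.cgColor

    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    shape.render(in: context)
    return UIGraphicsGetImageFromCurrentImageContext()
}

// Regenerate and reassign whenever the colour should change:
// button.setBackgroundImage(makeButtonBackground(size: button.bounds.size, fill: .blue), for: .normal)
```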
I have a view controller with a UIImageView. The image view is to be loaded with a different random picture from a given array when the view controller is displayed. Above the UIImageView I would like to apply a filter similar to one I found in Photoshop, but with my own custom modification: a clear window to the image below.

Basically, what I am looking to do is display a random image behind a blur filter, but I would like part of the blur filter to have a custom-shaped window to the image below it, where the image can be seen clearly. The rest of the image would still be blurred out.

I have read Apple's documentation for applying filters to images, but none of the filters suit my needs. I'm pretty new to development and haven't written any code for this feature yet. I'm mostly looking to see if it can be done and, if so, where I should research to find the answers I'm looking for. Cheers!
I would recommend that you take the input image, pass it through a CIGaussianBlur, and then draw the image applying an image mask (using CIBlendWithMask or a CGPath).
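Roughly, that pipeline could look like this (the blurredImage(with:from:) helper and the oval window are just illustrative; any CGPath/UIBezierPath shape works for the mask, and point/pixel coordinate handling is glossed over):

```swift
import UIKit
import CoreImage

func blurredImage(with clearWindow: CGRect, from source: UIImage) -> UIImage? {
    guard let input = CIImage(image: source) else { return nil }

    // 1. Blur the whole image.
    let blur = CIFilter(name: "CIGaussianBlur")!
    blur.setValue(input, forKey: kCIInputImageKey)
    blur.setValue(12.0, forKey: kCIInputRadiusKey)
    guard let blurred = blur.outputImage else { return nil }

    // 2. Build a mask: white = show the sharp image, black = show the blurred one.
    UIGraphicsBeginImageContextWithOptions(source.size, true, source.scale)
    UIColor.black.setFill()
    UIBezierPath(rect: CGRect(origin: .zero, size: source.size)).fill()
    UIColor.white.setFill()
    UIBezierPath(ovalIn: clearWindow).fill()   // any custom window shape works here
    let maskImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    guard let mask = maskImage.flatMap(CIImage.init(image:)) else { return nil }

    // 3. Blend: sharp image where the mask is white, blurred image elsewhere.
    let blend = CIFilter(name: "CIBlendWithMask")!
    blend.setValue(input, forKey: kCIInputImageKey)
    blend.setValue(blurred, forKey: kCIInputBackgroundImageKey)
    blend.setValue(mask, forKey: kCIInputMaskImageKey)
    guard let output = blend.outputImage else { return nil }

    // 4. Render back to a UIImage.
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: input.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```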
I'm considering building an app that would make heavy use of a flood fill / paint bucket feature. The images I'd be coloring are like coloring book pages: white background, black borders. I'm debating which is better: using a UIImage and manipulating pixel data, or drawing the images with Core Graphics and changing the fill color on touch.
With UIImage, I'm unable to account for retina images properly; the image gets destroyed when I write the context into a new UIImage, but I can probably figure that out. I'm open to tips, though...
With Core Graphics, I have no idea how to determine which shape to fill when a user touches an area, or how to actually fill that area. I've searched but haven't turned up anything useful.
Overall, I believe the better solution is Core Graphics, since it'll be lighter weight and I won't have to keep several copies of the same image for different sizes.
Thoughts? Go easy on me! It's my first app and first SO question ;)
I'd suggest using Core Graphics.
Instead of images, define the shapes using CGPath or UIBezierPath, and use Core Graphics to stroke and/or fill them. Filling a shape is then as easy as switching the drawing mode from just stroking to stroking and filling.
Creating even more complex shapes is made much easier with the "PaintCode" app (which lets you draw and creates the path code for you).
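To give an idea of how the touch side can work, here's a minimal sketch (PageView, regions and the sample shapes are made up): every region is a UIBezierPath that is always stroked, and tapping inside a region simply adds a fill.

```swift
import UIKit

class PageView: UIView {
    // One entry per "coloring book" region; fill == nil means not coloured yet.
    private var regions: [(path: UIBezierPath, fill: UIColor?)] = [
        (UIBezierPath(ovalIn: CGRect(x: 40, y: 40, width: 120, height: 120)), nil),
        (UIBezierPath(rect: CGRect(x: 40, y: 200, width: 160, height: 100)), nil),
    ]
    var currentColor: UIColor = .red

    override func draw(_ rect: CGRect) {
        for region in regions {
            if let fill = region.fill {
                fill.setFill()
                region.path.fill()          // fill first...
            }
            UIColor.black.setStroke()
            region.path.lineWidth = 3
            region.path.stroke()            // ...then stroke the black border on top
        }
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        // "Flood fill" becomes: find the region whose path contains the touch and fill it.
        if let index = regions.firstIndex(where: { $0.path.contains(point) }) {
            regions[index].fill = currentColor
            setNeedsDisplay()
        }
    }
}
```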
As your first app, I would suggest something with a little less custom graphics fiddling, though.
I have already tried this solution: CGImage (or UIImage) from a CALayer.
However, I do not get anything.
As the question says, I am trying to get a UIImage from the preview layer of the camera. I know I can either capture a still image or use the output sample buffer, but my session preset is set to photo quality, so both of these approaches are slow and give me a big image.
So what I thought could work is to get the image directly from the preview layer, since it has exactly the size I need and the operations have already been applied to it. I just don't know how to get this layer to draw into my context so that I can get it as a UIImage.
Perhaps another solution would be to use OpenGL to get this layer directly as a texture?
Any help will be appreciated, thanks.
Quoting Apple from this Technical Q&A:
A: Starting from iOS 7, the UIView class provides a method -drawViewHierarchyInRect:afterScreenUpdates:, which lets you render a snapshot of the complete view hierarchy as visible onscreen into a bitmap context. On iOS 6 and earlier, how to capture a view's drawing contents depends on the underlying drawing technique. This new method -drawViewHierarchyInRect:afterScreenUpdates: enables you to capture the contents of the receiver view and its subviews to an image regardless of the drawing techniques (for example UIKit, Quartz, OpenGL ES, SpriteKit, AV Foundation, etc) in which the views are rendered.
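In Swift, the generic usage of that API looks roughly like this (the snapshot(of:) helper is just a sketch):

```swift
import UIKit

// Snapshot a view (e.g. the one hosting the preview layer) into a UIImage.
func snapshot(of view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    _ = view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
    return UIGraphicsGetImageFromCurrentImageContext()
}
```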
In my experience with AVFoundation it is not like that: if you use that method on a view that hosts a preview layer, you will only obtain the content of the view, without the image from the preview layer. Using -snapshotViewAfterScreenUpdates: will return a UIView that hosts a special layer, and if you try to make an image from that view you won't see anything either.
The only solutions I know are AVCaptureVideoDataOutput and AVCaptureStillImageOutput. Each has its own limits: the first can't work simultaneously with an AVCaptureMovieFileOutput acquisition, and the latter makes the shutter noise.
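For reference, the AVCaptureVideoDataOutput route boils down to something like this (FrameGrabber is a made-up delegate class; session and output setup are omitted):

```swift
import AVFoundation
import UIKit

class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    var latestImage: UIImage?
    private let ciContext = CIContext()

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Convert the camera frame's pixel buffer into a UIImage.
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
        latestImage = UIImage(cgImage: cgImage)
    }
}
```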