I use GPUImage (https://github.com/BradLarson/GPUImage) in my iOS project and really like it.
Currently I use it to process an image with a filter (color changes only, no scaling or transforms), and then use the layer from the GPUImageView output for something else, so my chain looks like:
GPUImagePicture -> (Color Filter) -> GPUImageView.
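For context, my setup currently looks roughly like this (GPUImageSepiaFilter and the property name are just stand-ins for my actual color filter and view):

// Sketch of the current chain: GPUImagePicture -> color filter -> GPUImageView.
// In practice the picture and filter are kept as properties so they aren't deallocated.
GPUImagePicture *sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageSepiaFilter *colorFilter = [[GPUImageSepiaFilter alloc] init];

[sourcePicture addTarget:colorFilter];
[colorFilter addTarget:self.gpuImageView]; // a GPUImageView placed in the view hierarchy

[sourcePicture processImage];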
Now I want to change the output to be tiled images, where the rendered result will be used as a pattern. I have considered a few ways to do it:
Use Quartz 2D to generate a tiled image, feed it to GPUImagePicture, and then process it, so the result will also be tiled (a rough sketch of this follows below). But since GPUImagePicture will redraw it using Quartz 2D again, this could be less efficient. Am I right?
Modify or subclass GPUImageView to generate the tiled result using OpenGL. This could be hard, and I can't figure out a good way to implement it.
Which approach would be better, and is there any other way to do it?
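For the first option, this is roughly what I have in mind (just a sketch; tileImage and targetSize are placeholders):

// Option 1 sketch: tile the source image with Quartz 2D, then hand the result to GPUImage.
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();

// Note: CGImage drawing into a UIKit context comes out vertically flipped,
// which may or may not matter for a repeating pattern.
CGContextDrawTiledImage(context,
                        CGRectMake(0, 0, tileImage.size.width, tileImage.size.height),
                        tileImage.CGImage);

UIImage *tiledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

GPUImagePicture *tiledPicture = [[GPUImagePicture alloc] initWithImage:tiledImage];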
I need to create a button like this,
and then change the background programmatically like this,
and like this.
I can't use images for the different states of the button, because the text on it is different each time.
Where should I start? I tried to understand Core Graphics and Core Animation, but there are so few examples and tutorials that my attempts didn't get me anywhere.
You can, and should, use an image for this. UIImage has a method, resizableImageWithCapInsets:, that creates resizable images. You feed it a minimum-sized image and the system stretches it to fit the desired size. It looks like your image is fixed in height, which is good, since you can't do smooth gradients with this technique.
UIButtons are composed of a background image and title text, so you can use an image for the background shapes (setBackgroundImage(_:forState:)), and then change the text using setTitle(_:forState:).
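As a rough sketch, assuming a small stretchable asset named "button_background" (a hypothetical name) and a plain UIButton:

// Stretch a minimal background image to whatever size the button needs,
// and set the text separately so the same image works for any title.
UIImage *raw = [UIImage imageNamed:@"button_background"];
UIImage *stretchable = [raw resizableImageWithCapInsets:UIEdgeInsetsMake(0, 10, 0, 10)];

[button setBackgroundImage:stretchable forState:UIControlStateNormal];
[button setTitle:@"First title" forState:UIControlStateNormal];

// Later, only the text changes; the background image is reused as-is.
[button setTitle:@"Different text" forState:UIControlStateNormal];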
However, you can still use Core Graphics for this, and there are benefits to doing so, such as reducing the number of rendered assets in your app bundle. For this, probably the best approach is to create a CAShapeLayer with a path constructed from a UIBezierPath, and then render it into a graphics context. From that context you can pull out a UIImage instance and treat it just the same as an image loaded from a JPEG or PNG asset (that is, set it as the button's background image using setBackgroundImage(_:forState:)).
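A sketch of that approach, with placeholder size, colors, and corner radius (CAShapeLayer requires QuartzCore):

// Build a CAShapeLayer from a UIBezierPath, render it into an image context,
// and use the resulting UIImage as the button's background.
CGRect bounds = CGRectMake(0, 0, 44.0, 44.0); // placeholder size
UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:CGRectInset(bounds, 1, 1)
                                                 cornerRadius:8.0];

CAShapeLayer *shapeLayer = [CAShapeLayer layer];
shapeLayer.frame = bounds;
shapeLayer.path = path.CGPath;
shapeLayer.fillColor = [UIColor darkGrayColor].CGColor;
shapeLayer.strokeColor = [UIColor blackColor].CGColor;
shapeLayer.lineWidth = 2.0;

UIGraphicsBeginImageContextWithOptions(bounds.size, NO, 0.0);
[shapeLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *backgroundImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

[button setBackgroundImage:backgroundImage forState:UIControlStateNormal];
[button setTitle:@"Any text" forState:UIControlStateNormal];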
I'm using GPUImage to apply filters, and to chain filters, on images. I'm using a UISlider to change a filter's value and applying the filter to the image continuously as the slider's value changes, so that the user can see the output as they move the slider.
This causes very slow processing, and sometimes the UI hangs, or the app even crashes after receiving a low-memory warning.
How can I achieve fast filtering with GPUImage? I have seen some apps that apply filters on the fly, and their UI doesn't hang for even a second.
Thanks,
Here's the sample code that runs as the slider's value changes:
- (IBAction)foregroundSliderValueChanged:(id)sender
{
    float value = ([(UISlider *)sender maximumValue] - [(UISlider *)sender value]) + [(UISlider *)sender minimumValue];
    [(GPUImageVignetteFilter *)self.filter setVignetteEnd:value];

    GPUImagePicture *filteredImage = [[GPUImagePicture alloc] initWithImage:_image];
    [filteredImage addTarget:self.filter];
    [filteredImage processImage];

    self.imageView.image = [self.filter imageFromCurrentlyProcessedOutputWithOrientation:_image.imageOrientation];
}
You haven't specified how you set up your filter chain, which filters you use, or how you're doing your updates, so it's hard to provide anything but the most generic advice. Still, here goes:
If processing an image for display to the screen, never use a UIImageView. Converting to and from a UIImage is an extremely slow process, and one that should never be used for live updates of anything. Instead, go GPUImagePicture -> filters -> GPUImageView. This keeps the image on the GPU and is far more efficient, processing- and memory-wise.
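Concretely, your vignette example might look roughly like this (sourcePicture, filter, and gpuImageView are placeholder property names; the GPUImageView takes the place of your old UIImageView):

// One-time setup: keep the whole chain on the GPU.
// sourcePicture and filter are strong properties so the chain stays alive.
- (void)viewDidLoad
{
    [super viewDidLoad];

    self.sourcePicture = [[GPUImagePicture alloc] initWithImage:_image];
    self.filter = [[GPUImageVignetteFilter alloc] init];

    [self.sourcePicture addTarget:self.filter];
    [self.filter addTarget:self.gpuImageView];
    [self.sourcePicture processImage];
}

// Slider callback: only change the filter parameter and reprocess; no UIImage round trips.
- (IBAction)foregroundSliderValueChanged:(UISlider *)slider
{
    float value = (slider.maximumValue - slider.value) + slider.minimumValue;
    [(GPUImageVignetteFilter *)self.filter setVignetteEnd:value];
    [self.sourcePicture processImage];
}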
Only process as many pixels as you actually will be displaying. Use -forceProcessingAtSize: or -forceProcessingAtSizeRespectingAspectRatio: on the first filter in your chain to reduce its resolution to the output resolution of your GPUImageView. This will cause your filters to operate on image frames that are usually many times smaller than your full-resolution source image. There's no reason to process pixels you'll never see. You can then pass in a 0 size to these same methods when you need to finally capture the full-resolution image to disk.
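For example (the exact numbers are illustrative, and the property names follow the sketch above):

// Downsample early in the chain so later filters only touch as many pixels as the view shows.
CGFloat scale = [[UIScreen mainScreen] scale];
CGSize displaySize = CGSizeMake(self.gpuImageView.bounds.size.width * scale,
                                self.gpuImageView.bounds.size.height * scale);
[self.filter forceProcessingAtSizeRespectingAspectRatio:displaySize];

// Later, when saving, drop the limit to get the full-resolution result back.
[self.filter forceProcessingAtSize:CGSizeZero];
[self.sourcePicture processImage];
UIImage *fullResolution = [self.filter imageFromCurrentlyProcessedOutputWithOrientation:_image.imageOrientation];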
Find more efficient ways of setting up your filter chain. If you have a common set of simple operations that you apply over and over to your images, think about creating a custom shader that combines these operations, as appropriate. Expensive operations also sometimes have a cheaper substitute, like how I use a downsampling-then-upsampling pass for GPUImageiOSBlur to use a much smaller blur radius than I would with a stock GPUImageGaussianBlur.
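As an illustration only (this particular shader is not part of the framework), a brightness and a contrast adjustment could be collapsed into one custom filter along these lines:

// Hypothetical single-pass shader combining a brightness and a contrast adjustment,
// instead of chaining GPUImageBrightnessFilter and GPUImageContrastFilter.
NSString *const kCombinedAdjustmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;
 uniform lowp float brightness;
 uniform lowp float contrast;

 void main()
 {
     lowp vec4 color = texture2D(inputImageTexture, textureCoordinate);
     lowp vec3 adjusted = ((color.rgb + vec3(brightness)) - vec3(0.5)) * contrast + vec3(0.5);
     gl_FragColor = vec4(adjusted, color.a);
 }
);

GPUImageFilter *combinedFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromString:kCombinedAdjustmentShaderString];
[combinedFilter setFloat:0.1 forUniformName:@"brightness"];
[combinedFilter setFloat:1.2 forUniformName:@"contrast"];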
I'm considering building an app that would make heavy use of a flood-fill / paint-bucket feature. The images I'd be coloring are like coloring-book pages: white background, black borders. I'm debating which is better: using UIImage (and manipulating the pixel data) or drawing the images with Core Graphics and changing the fill color on touch.
With UIImage, I'm unable to handle retina images properly; the image gets destroyed when I write the context back into a new UIImage, but I can probably figure that out. I'm open to tips, though...
With Core Graphics, I have no idea how to work out which shape to fill when a user touches an area, or how to actually fill that area. I've searched but haven't turned up anything useful.
Overall, I believe the better solution is Core Graphics, since it'll be lighter overall and I won't have to keep several copies of the same image for different sizes.
Thoughts? Go easy on me! It's my first app and first SO question ;)
I'd suggest using Core Graphics.
Instead of images, define the shapes using CGPath or UIBezierPath, and use Core Graphics to stroke and/or fill them. Filling a shape is then as easy as switching the drawing mode from stroking only to stroking and filling.
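A minimal sketch of that inside a custom view's -drawRect:, where shapePath, fillColor, and filled are hypothetical properties you'd keep per shape:

// Stroke the outline, and fill it too once the user has tapped inside the shape.
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextAddPath(context, self.shapePath); // a CGPathRef describing one coloring-book region
    CGContextSetLineWidth(context, 3.0);
    CGContextSetStrokeColorWithColor(context, [UIColor blackColor].CGColor);
    CGContextSetFillColorWithColor(context, self.fillColor.CGColor);

    // Switching from kCGPathStroke to kCGPathFillStroke is all it takes to fill the shape.
    CGContextDrawPath(context, self.filled ? kCGPathFillStroke : kCGPathStroke);
}

For working out which shape a touch landed in, CGPathContainsPoint can test the touch location against each shape's path.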
Creating even more complex shapes is made much easier by the PaintCode app, which lets you draw shapes and generates the path code for you.
As your first app, I would suggest something with a little less custom graphics fiddling, though.
I am building a 3D image viewer that uses Three.js plane geometries as placeholders, with the images as their textures.
Now I want to add a black border around each image. The only way I have found so far is to add a new black plane geometry behind the image being displayed, but that would require wholesale changes to my framework, which I want to avoid.
WebGL's texture-loading function gl.texImage2D has a border parameter, but I couldn't find it exposed anywhere through Three.js, and I doubt it even works the way I think it does.
Is there an easier way to add borders around textures?
You can use a temporary regular 2D canvas to render your image and apply any kind of editing or effects there, such as painting borders. Then use that canvas as the texture. It might be a bit of work, but you'll gain a lot of flexibility for styling your borders and other things.
I'm not near my dev machine and won't be for a couple of days, so I can't look up an example of my own. This issue contains some code to get you started: https://github.com/mrdoob/three.js/issues/868
How can I change the hue of a UIImage programmatically in only a few parts of the image? I have followed this link:
How to programmatically change the hue of UIImage?
and used the same code in my application. It works fine, but the hue of the whole image gets changed. For my requirement, I only want to change the color of the tree in the snapshot above. How can I do that?
This is a specific case of the more general problem of masking. I assume you have some way of knowing which pixels are in the "tree" part and which are not. (If not, that's a whole other question/problem.)
If so, first draw the original to the result context, then create a mask (see here: http://mobiledevelopertips.com/cocoa/how-to-mask-an-image.html), and draw the changed-hue version with the mask representing the tree active.
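A rough sketch of that, assuming originalImage, hueShiftedImage (the full-image output of the code from the linked question), and treeMaskImage (a grayscale image with no alpha, where the tree is black and everything else is white; black areas of a Quartz image mask are the ones that show through) are available:

UIGraphicsBeginImageContextWithOptions(originalImage.size, NO, originalImage.scale);

// 1. Draw the untouched original into the result context.
[originalImage drawInRect:CGRectMake(0, 0, originalImage.size.width, originalImage.size.height)];

// 2. Build a Quartz image mask from the tree mask image...
CGImageRef maskSource = treeMaskImage.CGImage;
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskSource),
                                    CGImageGetHeight(maskSource),
                                    CGImageGetBitsPerComponent(maskSource),
                                    CGImageGetBitsPerPixel(maskSource),
                                    CGImageGetBytesPerRow(maskSource),
                                    CGImageGetDataProvider(maskSource),
                                    NULL, false);

// 3. ...mask the hue-shifted copy so only the tree pixels survive...
CGImageRef maskedHue = CGImageCreateWithMask(hueShiftedImage.CGImage, mask);

// 4. ...and draw that masked version on top of the original.
[[UIImage imageWithCGImage:maskedHue] drawInRect:CGRectMake(0, 0, originalImage.size.width, originalImage.size.height)];

UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

CGImageRelease(mask);
CGImageRelease(maskedHue);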
I recommend you take a look at the Core Image API, and the CIColorCube or CIColorMap filter in particular. How to define the color cube or color map is where the real magic lies. You'll need to transform the tree tones (browns, etc.), though this will obviously transform all browns, not just your tree.