Convert PIX object to UIImage - iOS

I am trying to apply a threshold using the Leptonica image library with the function
l_int32 pixOtsuAdaptiveThreshold()
I believe I have successfully used this function, but as you can see, it returns an int. I am not sure where to go from here: how do I convert the result into a UIImage, or convert the PIX object I passed in into a UIImage? Basically I just want a UIImage back after applying the threshold.
The API for this function can be found here: http://tpgit.github.io/Leptonica/binarize_8c.html#aaef1d6ed54b87144b98c72f675ad7a4c
Does anyone know what I must do to get a UIImage back?
Thanks!

Here is another user converting binary data to an image:
Reading binary image data from a web service into UIImage
But if you can, I would look into Brad Larson's awesome GPUImage filters, which may be better suited for you: https://github.com/BradLarson/GPUImage - very easy to use.
Added an answer for adding the GPUImage framework: sorry, I can't help with the first approach, but if you would like to continue with the second and simply need a threshold effect, you can use GPUImage as a framework. After setup, for adaptive threshold I simply used a switch case for different effects (or call init however you want), and I used a slider for effect control or to select a predetermined value. The code ends up as easy as this:
case GPUIMAGE_ADAPTIVETHRESHOLD:
{
    self.title = @"Adaptive Threshold";
    self.filterSettingsSlider.hidden = NO;
    [self.filterSettingsSlider setMinimumValue:1.0];
    [self.filterSettingsSlider setMaximumValue:20.0];
    [self.filterSettingsSlider setValue:1.0];
    UIImage *newFilteredImage = [[[GPUImageAdaptiveThresholdFilter alloc] init] imageByFilteringImage:[self.originalImageView image]];
    self.myEditedImageView = newFilteredImage;
}; break;


GPUImage filter chaining multiple arbitrary GPUImageFilters

We are building a few photo adjustment tools; most are structured this way:
original image
GPUImageLookupFilter creates a new image
GPUImageAlphaBlendFilter then blends the image generated by GPUImageLookupFilter with the original image
Each adjustment has code like this:
let blendFilter = GPUImageAlphaBlendFilter()
let mix = getValue() // assume this exists
blendFilter.mix = mix
let styledImageSource = getImageSource() // assume this returns a GPUImagePicture
styledImageSource.addTarget(blendFilter, atTextureLocation: 1)
blendFilter.useNextFrameForImageCapture()
styledImageSource.processImage()
return blendFilter
At the end, if there's only one blend, we run this code:
var originalImage = getOriginalImage() // assume gets an UIImage that we want to filter
filter = getBlendFilter() // assume this calls the above and gets a blendFilter
var finalImage = filter.imageByFilteringImage(originalImage)
The above finalImage is great and blended.
But now we have a bunch of different adjustment tools that do various things. Let's assume some are blending, but each needs a final imageByFilteringImage. We want to chain these up so that we don't create intermediate images between each step.
Here's a few concepts we've tried:
Loop through each one and save an intermediate UIImage between each. That works, but it is slow and kills memory after a few adjustments.
Try chaining many of the blend filters together. This only applies the last filter to the final image; we were unable to figure out how to "save" an intermediate image in between each.
I'm thinking there's another way, maybe Filter Groups?
How would we best go about this?
Thanks!

How to layer a distinct alpha channel animation on top of video?

I'm developing an iPhone app and am having issues with the AVFoundation API. I'm used to lots of image manipulation and just assumed I would have access to an image buffer, but the video API is quite different.
I want to take a 30 frames/sec animation, generated as PNGs with a transparency channel, and overlay it onto an arbitrary number of video clips composited inside an AVMutableComposition.
I figured that the AVMutableVideoComposition would be a good way to go about it; but as it turns out, the animation tool, AVVideoCompositionCoreAnimationTool, requires a special kind of CALayer animation. It supports an animation with basic stuff like a spatial transform, scaling, fading, etc -- but my animation is already complete as a series of PNGS.
Is this possible with AVFoundation? If so, what is the recommended process?
I think you should work with UIImageView and animationImages:
UIImageView *anImageView = [[UIImageView alloc] initWithFrame:frame];
NSMutableArray *animationImages = [NSMutableArray array];
for (int i = 0; i < 500; i++) {
    [animationImages addObject:[UIImage imageNamed:[NSString stringWithFormat:@"image%d", i]]];
}
anImageView.animationImages = animationImages;
anImageView.animationDuration = 500.0 / 30.0; // floating-point division; 500/30 would truncate to 16 seconds
I would use the AVVideoCompositing protocol along with AVAsynchronousVideoCompositionRequest. Use [AVAsynchronousVideoCompositionRequest sourceFrameByTrackID:] to get the CVPixelBufferRef for the video frame. Then create a CIImage with the appropriate PNG based on the timing you want. Then render the video frame onto a GL_TEXTURE, render the CIImage to a GL_TEXTURE, and draw these all into your destination CVPixelBufferRef; you should get the effect you are looking for.
Something Like:
CVPixelBufferRef foregroundPixelBuffer;
CIImage *appropriatePNG = [CIImage imageWithContentsOfURL:pngURL];
[someCIContext render:appropriatePNG toCVPixelBuffer:foregroundPixelBuffer];
CVPixelBufferRef backgroundPixelBuffer = [asynchronousVideoRequest sourceFrameByTrackID:theTrackID];
// ... GL code for rendering using CVOpenGLESTextureCacheRef
You will need to composite each set of input images (background and foreground) down to a single pixel buffer and then encode those buffers into H.264 video a frame at a time. Just be aware that this will not be super fast, since there is a lot of memory writing, and H.264 encoding takes time. You can have a look at AVRender to see a working example of the approach described here. If you would like to roll your own implementation, take a look at this tutorial, which includes source code that can help you get started.

GPUImage performance

I'm using GPUImage to apply filters and chain filters on images. I'm using a UISlider to change the value of the filters, applying them continuously to the image as the slider's value changes, so that the user can see the output as he changes the value.
This is causing very slow processing, and sometimes the UI hangs or the app even crashes on receiving a low-memory warning.
How can I achieve fast filter implementation using GPUImage? I have seen some apps which apply filters on the fly, and their UI doesn't even hang for a second.
Thanks,
Here's the sample code which I'm using as slider's value changes.
- (IBAction)foregroundSliderValueChanged:(id)sender {
    float value = ([(UISlider *)sender maximumValue] - [(UISlider *)sender value]) + [(UISlider *)sender minimumValue];
    [(GPUImageVignetteFilter *)self.filter setVignetteEnd:value];
    GPUImagePicture *filteredImage = [[GPUImagePicture alloc] initWithImage:_image];
    [filteredImage addTarget:self.filter];
    [filteredImage processImage];
    self.imageView.image = [self.filter imageFromCurrentlyProcessedOutputWithOrientation:_image.imageOrientation];
}
You haven't specified how you set up your filter chain, which filters you use, or how you're doing your updates, so it's hard to provide anything but the most generic advice. Still, here goes:
If processing an image for display to the screen, never use a UIImageView. Converting to and from a UIImage is an extremely slow process, and one that should never be used for live updates of anything. Instead, go GPUImagePicture -> filters -> GPUImageView. This keeps the image on the GPU and is far more efficient, processing- and memory-wise.
Only process as many pixels as you actually will be displaying. Use -forceProcessingAtSize: or -forceProcessingAtSizeRespectingAspectRatio: on the first filter in your chain to reduce its resolution to the output resolution of your GPUImageView. This will cause your filters to operate on image frames that are usually many times smaller than your full-resolution source image. There's no reason to process pixels you'll never see. You can then pass in a 0 size to these same methods when you need to finally capture the full-resolution image to disk.
Find more efficient ways of setting up your filter chain. If you have a common set of simple operations that you apply over and over to your images, think about creating a custom shader that combines these operations, as appropriate. Expensive operations also sometimes have a cheaper substitute, like how I use a downsampling-then-upsampling pass for GPUImageiOSBlur to use a much smaller blur radius than I would with a stock GPUImageGaussianBlur.

How can I emboss a UIImage?

I have a UITableView with cells that each have an image. I'm trying to make the image look like a UITabBarItem image when it is selected. I was going to follow this little tutorial to clip the images to a gradient: http://mobiledevelopertips.com/cocoa/how-to-mask-an-image.html
I wanted to emboss the clipped image to give it more life, but haven't been able to find a simple explanation of how to do so with a UIImage.
I found this but I had a hard time understanding the process of embossing. http://javieralog.blogspot.com/2012/01/nice-emboss-effect-using-core-graphics.html
If I can get any help or a lead, it would be greatly appreciated.
In addition to the Core Graphics implementations and NYXImagesKit, I have an emboss filter in my open source GPUImage framework. To emboss a UIImage, you can simply use the following code:
GPUImageEmbossFilter *embossFilter = [[GPUImageEmbossFilter alloc] init];
embossFilter.intensity = 2.0;
UIImage *embossedImage = [embossFilter imageByFilteringImage:inputImage];
I wrote up a method titled Cocoa Touch - Adding texture with overlay view. You might find that helpful. It does require that you have an overlay view in gray-scale that would generate the "emboss" look. If you know Photoshop or other image editing programs, you may be able to create an appropriate overlay to meet your needs.
After a long time digging through Google searches, I believe I found what I need.
This is a set of categories that allows you to manipulate UIImages easily.
http://www.cocoaintheshell.com/2012/01/nyximagesutilities-nyximageskit/

How to implement fast image filters on iOS platform

I am working on an iOS application where the user can apply a certain set of photo filters. Each filter is basically a set of Photoshop actions with specific parameters. These actions are:
Levels adjustment
Brightness / Contrast
Hue / Saturation
Single and multiple overlay
I've reimplemented all these actions in my code using arithmetic expressions looping through all the pixels in the image. But when I run my app on an iPhone 4, each filter takes about 3-4 seconds to apply, which is quite a long time for the user to wait. The image size is 640 x 640 px, which is @2x of my view size, because it's displayed on a Retina display. I've found that my main problem is the levels modification, which calls the pow() C function each time I need to adjust the gamma. I am using floats, not doubles, of course, because ARMv6 and ARMv7 are slow with doubles. I tried to enable and disable Thumb and got the same result.
Here is an example of the simplest filter in my app, which runs pretty fast anyway (2 secs). The other filters include more expressions and pow() calls, thus making them slow.
https://gist.github.com/1156760
I've seen some solutions which use the Accelerate framework's vDSP matrix transformations for fast image modifications. I've also seen OpenGL ES solutions. I am not sure that they can meet my needs. But probably it's just a matter of translating my set of changes into some good convolution matrix?
Any advice would be helpful.
Thanks,
Andrey.
For the filter in your example code, you could use a lookup table to make it much faster. I assume your input image is 8 bits per color and you are converting it to float before passing it to this function. For each color, this only gives 256 possible values and therefore only 256 possible output values. You could precompute these and store them in an array. This would avoid the pow() calculation and the bounds checking since you could factor them into the precomputation.
It would look something like this:
unsigned char table[256];
for (int i = 0; i < 256; i++) {
    float tmp = pow((float)i / 255.0f, 1.3f) * 255.0f;
    table[i] = tmp > 255 ? 255 : (unsigned char)tmp;
}
for (int i = 0; i < length; ++i)
    m_OriginalPixelBuf[i] = table[m_OriginalPixelBuf[i]];
In this case, you only have to perform pow() 256 times instead of 3*640*640 times. You would also avoid the branching caused by the bounds checking in your main image loop which can be costly. You would not have to convert to float either.
An even faster way would be to precompute the table outside the program and just put the 256 coefficients in the code.
None of the operations you have listed there should require a convolution or even a matrix multiply. They are all pixel-wise operations, meaning that each output pixel only depends on the single corresponding input pixel. You would need to consider convolution for operations like blurring or sharpening where multiple input pixels affect a single output pixel.
If you're looking for the absolute fastest way to do this, you're going to want to use the GPU to handle the processing. It's built to do massively parallel operations, like color adjustments on single pixels.
As I've mentioned in other answers, I measured a 14X - 28X improvement in performance when running an image processing operation using OpenGL ES instead of on the CPU. You can use the Accelerate framework to do faster on-CPU image manipulation (I believe Apple claims around a ~4-5X boost is possible here), but it won't be as fast as OpenGL ES. It can be easier to implement, however, which is why I've sometimes used Accelerate for this over OpenGL ES.
iOS 5.0 also brings over Core Image from the desktop, which gives you a nice wrapper around these kind of on-GPU image adjustments. However, there are some limitations to the iOS Core Image implementation that you don't have when working with OpenGL ES 2.0 shaders directly.
I present an example of an OpenGL ES 2.0 shader image filter in my article here. The hardest part about doing this kind of processing is getting the OpenGL ES scaffolding set up. Using my sample application there, you should be able to extract that setup code and apply your own filters using it. To make this easier, I've created an open source framework called GPUImage that handles all of the OpenGL ES interaction for you. It has almost every filter you list above, and most run in under 2.5 ms for a 640x480 frame of video on an iPhone 4, so they're far faster than anything processed on the CPU.
As I said in a comment, you should post this question on the official Apple Developer Forums as well.
That aside, one real quick check: are you calling pow( ) or powf( )? Even if your data is float, calling pow( ) will get you the double-precision math library function, which is significantly slower than the single-precision variant powf( ) (and you'll have to pay for the extra conversions between float and double as well).
And a second check: have you profiled your filters in Instruments? Do you actually know where the execution time is being spent, or are you guessing?
I actually wanted to do all this myself, but then I found Silverberg's Image Filters. You can apply various Instagram-type image filters to your images. This is so much better than other image filters out there, such as GLImageProcessing or Cimg.
Also check out Instagram Image Filters on iPhone.
Hope this helps...
From iOS 5 upwards, you can use the Core Image filters to adjust a good range of image parameters.
To adjust contrast for example, this code works like a charm:
- (void)setImageContrast:(float)contrast forImageView:(UIImageView *)imageView {
    if (contrast > MIN_CONTRAST && contrast < MAX_CONTRAST) {
        CIImage *inputImage = [[CIImage alloc] initWithImage:imageView.image];
        CIFilter *exposureAdjustmentFilter = [CIFilter filterWithName:@"CIColorControls"];
        [exposureAdjustmentFilter setDefaults];
        [exposureAdjustmentFilter setValue:inputImage forKey:@"inputImage"];
        [exposureAdjustmentFilter setValue:[NSNumber numberWithFloat:contrast] forKey:@"inputContrast"]; // default = 1.00
        // [exposureAdjustmentFilter setValue:[NSNumber numberWithFloat:1.0f] forKey:@"inputSaturation"]; // default = 1.00
        // [exposureAdjustmentFilter setValue:[NSNumber numberWithFloat:0.0f] forKey:@"inputBrightness"];
        CIImage *outputImage = [exposureAdjustmentFilter valueForKey:@"outputImage"];
        CIContext *context = [CIContext contextWithOptions:nil];
        CGImageRef cgImage = [context createCGImage:outputImage fromRect:outputImage.extent];
        imageView.image = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage); // createCGImage follows the Create rule, so release it to avoid a leak
    }
}
N.B. Default value for contrast is 1.0 (maximum value suggested is 4.0).
Also, the contrast is calculated here on the imageView's image, so calling this method repeatedly will compound the contrast. Meaning, if you call this method first with contrast value 2.0 and then again with contrast value 3.0, you will get the original image with its contrast increased by 6.0 (2.0 * 3.0) - not 5.0.
Check the Apple documentation for more filters and parameters.
To list all available filters and parameters in code, just run this loop:
NSArray *filters = [CIFilter filterNamesInCategories:nil];
for (NSString *filterName in filters)
{
    NSLog(@"Filter: %@", filterName);
    NSLog(@"Parameters: %@", [[CIFilter filterWithName:filterName] attributes]);
}
This is an old thread, but I got to it from another link on SO, so people still read it.
With iOS 5, Apple added support for Core Image and a decent number of Core Image filters. I'm pretty sure all the ones the OP mentioned are available.
Core Image uses OpenGL shaders under the covers, so it's really fast. It's much easier to use than OpenGL, however. If you aren't already working in OpenGL and just want to apply filters to CGImage or UIImage objects, Core Image filters are the way to go.
