GPUImage filter chaining multiple arbitrary GPUImageFilters - iOS

We are building a few photo adjustment tools; most are structured this way:
1. Start from the original image.
2. A GPUImageLookupFilter creates a new, styled image.
3. A GPUImageAlphaBlendFilter then blends the image generated by the GPUImageLookupFilter with the original image.
Each adjustment has code like this:
let blendFilter = GPUImageAlphaBlendFilter()
let mix = getValue() // assume this exists
blendFilter.mix = mix
let styledImageSource = getImageSource() // assume this returns a GPUImagePicture
styledImageSource.addTarget(blendFilter, atTextureLocation: 1)
blendFilter.useNextFrameForImageCapture()
styledImageSource.processImage()
return blendFilter
At the end, if there's only one blend, we run this code:
let originalImage = getOriginalImage() // assume this returns the UIImage we want to filter
let filter = getBlendFilter() // assume this calls the above and returns a blendFilter
let finalImage = filter.imageByFilteringImage(originalImage)
The finalImage above comes out blended exactly as we want.
But now we have a bunch of different adjustment tools that do various things. Assume some of them blend, but each needs a final imageByFilteringImage call. We want to chain these up so that we don't create an intermediate image between each one.
Here are a few approaches we've tried:
Loop through each adjustment and save an intermediate UIImage between each one. That works, but it's slow and exhausts memory after a few adjustments.
Chain a lot of the blend filters together. That only applies the last filter to the final image; we're unable to figure out how to "save" an intermediate result between each.
I'm thinking there's another way, maybe Filter Groups?
How would we best go about this?
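For concreteness, here's the direction we're imagining with a GPUImageFilterGroup. This is an untested sketch: adjustments and getBlendFilter(_:) are stand-ins for the per-tool setup above, and we haven't verified how each overlay picture's processImage call interacts with the group.
// Untested sketch: wrap each adjustment's blend filter in one
// GPUImageFilterGroup so a UIImage is only created at the very end.
let blendFilters = adjustments.map { getBlendFilter($0) } // hypothetical helpers
let group = GPUImageFilterGroup()
for filter in blendFilters {
    group.addFilter(filter)
}
// Feed each blend filter's output into the next one's first input;
// each filter's styled overlay is already attached at texture location 1.
for (current, next) in zip(blendFilters, blendFilters.dropFirst()) {
    current.addTarget(next)
}
group.initialFilters = [blendFilters.first!] // assumes at least one adjustment
group.terminalFilter = blendFilters.last!
let finalImage = group.imageByFilteringImage(getOriginalImage())
Is something like this the right direction, or is there a better pattern?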
Thanks!

Related

What is the proper way to update a Geometry surface in iOS SceneKit?

I am aware that you can set the surface of a sphere or a plane as an image by doing something like this when you first add it to the scene.
UIImage image = UIImage.FromFile("firstimage.jpg");
Sphere.FirstMaterial.Diffuse.Contents = image;
But what about when you want to update it on user action... like say on button click. Right now I do something like this.
UIImage newImage = UIImage.FromFile("secondimage.jpg");
// digs through the SCNView and the scene, into its child nodes, finds the geometry and replaces the image
mySCNView.Scene.RootNode.ChildNodes[0].Geometry.FirstMaterial.Diffuse.Contents = newImage;
But this seems kind of messy and actually takes about 2 or 3 seconds to complete. I'm not particularly worried about the array index hardcode.
Is there a more efficient way to go about this kind of update?
P.S. I apologize for some funny looking lines of code, this is actually being written in Xamarin.iOS, but feel free to answer in Swift or C and I'll do the translation.
It sounds like your bottleneck is the transfer of the image onto the GPU. So how about preloading it? Instantiate the material before you need it. Then swap it out upon user action.
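For example, something along these lines (a rough Swift sketch, since you mentioned translating; scnView and sphereNode are placeholders for your own references):
// Build the material ahead of time so the texture upload happens
// before the user taps; swapping then becomes nearly free.
let preloadedMaterial = SCNMaterial()
preloadedMaterial.diffuse.contents = UIImage(named: "secondimage.jpg")
// Optionally ask SceneKit to upload it to the GPU up front:
scnView.prepareObject(preloadedMaterial, shouldAbortBlock: nil)
// Later, on button tap, just swap the material:
sphereNode.geometry?.firstMaterial = preloadedMaterial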

How to merge two views into one single view in iOS (Objective-C), as shown in the image?

I want to make a play button like the following image.
I managed to make something similar to this.
Questions:
In the first image, the circular play button and the rectangular background of the play label are a single image or view. I managed to make a similar one using two views, as shown below, using the corner radius property of the view layers. The issue is that when we apply alpha values to both views, one view appears to overlap the other and that area appears dark, so they end up looking like two different views.
How can I resolve this issue?
You have to merge your two images into one. Assuming that you have your two images already created (let's assume .png-s somewhere in your project), I would try something along these lines:
// "Load" original images
let image1 = UIImage(named: "img1.png")
let image2 = UIImage(named: "img2.png")
// Create a size for the resulting image and create the context
let size = CGSizeMake(100, 100)
UIGraphicsBeginImageContext(size)
// Bottom image - draw in context
image1?.drawInRect(CGRectMake(0, 0, size.width, size.height))
// Top image - draw in context
image2?.drawInRect(CGRectMake(0, 0, size.width, size.height))
// Create the new image from the context
let newImage = UIGraphicsGetImageFromCurrentImageContext()
// End context
UIGraphicsEndImageContext()
After that, adjust the alpha of the resulting image. If the original images already have an alpha below 1 this might not work, but it's worth checking.
You will therefore have to rearrange your code, as I don't think that views (or CALayers, for that matter) support blend modes in iOS.
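For the "adjust the alpha" step, one way is a second pass that redraws the merged image at a uniform alpha (the 0.5 here is illustrative). Because the merge above happened at full opacity, the overlap region stays uniform:
// Sketch: make a uniformly semi-transparent copy of the merged image.
let size = CGSizeMake(100, 100)
UIGraphicsBeginImageContextWithOptions(size, false, 0)
newImage.drawInRect(CGRectMake(0, 0, size.width, size.height), blendMode: CGBlendMode.Normal, alpha: 0.5)
let translucentImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()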

GPUImage performance

I'm using GPUImage to apply filters, and to chain filters, on images. I'm using a UISlider to change the filters' values, applying them continuously as the slider's value changes so that the user can see the output update live.
This causes very slow processing, and sometimes the UI hangs, or the app even crashes after receiving a low memory warning.
How can I achieve a fast filter implementation with GPUImage? I have seen apps that apply filters on the fly, and their UI doesn't hang for even a second.
Thanks,
Here's the sample code that runs whenever the slider's value changes.
- (IBAction)foregroundSliderValueChanged:(id)sender {
    float value = ([(UISlider *)sender maximumValue] - [(UISlider *)sender value]) + [(UISlider *)sender minimumValue];
    [(GPUImageVignetteFilter *)self.filter setVignetteEnd:value];

    GPUImagePicture *filteredImage = [[GPUImagePicture alloc] initWithImage:_image];
    [filteredImage addTarget:self.filter];
    [filteredImage processImage];

    self.imageView.image = [self.filter imageFromCurrentlyProcessedOutputWithOrientation:_image.imageOrientation];
}
You haven't specified how you set up your filter chain, what filters you use, or how you're doing your updates, so it's hard to provide anything but the most generic advice. Still, here goes:
If processing an image for display to the screen, never use a UIImageView. Converting to and from a UIImage is an extremely slow process, and one that should never be used for live updates of anything. Instead, go GPUImagePicture -> filters -> GPUImageView. This keeps the image on the GPU and is far more efficient, processing- and memory-wise.
Only process as many pixels as you actually will be displaying. Use -forceProcessingAtSize: or -forceProcessingAtSizeRespectingAspectRatio: on the first filter in your chain to reduce its resolution to the output resolution of your GPUImageView. This will cause your filters to operate on image frames that are usually many times smaller than your full-resolution source image. There's no reason to process pixels you'll never see. You can then pass in a 0 size to these same methods when you need to finally capture the full-resolution image to disk.
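For illustration, here's a rough Swift sketch of those two points; image is your source UIImage and filterView is assumed to be a GPUImageView placed where the UIImageView used to be:
// One-time setup: keep the whole pipeline on the GPU, sized to the view.
let picture = GPUImagePicture(image: image)
let vignette = GPUImageVignetteFilter()
// Only process as many pixels as the GPUImageView will actually display:
vignette.forceProcessingAtSizeRespectingAspectRatio(filterView.sizeInPixels)
picture.addTarget(vignette)
vignette.addTarget(filterView)
picture.processImage()
// On each slider change, just update the filter and reprocess:
func sliderChanged(slider: UISlider) {
    vignette.vignetteEnd = CGFloat(slider.value)
    picture.processImage()
}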
Find more efficient ways of setting up your filter chain. If you have a common set of simple operations that you apply over and over to your images, think about creating a custom shader that combines these operations, as appropriate. Expensive operations also sometimes have a cheaper substitute, like how I use a downsampling-then-upsampling pass for GPUImageiOSBlur to use a much smaller blur radius than I would with a stock GPUImageGaussianBlur.

Convert PIX object to UIImage

I am trying to apply a threshold using the leptonica image library with the function
l_int32 pixOtsuAdaptiveThreshold()
I believe I have successfully used this function, but as you can see, it returns an int. I am not sure where to go from here: how do I convert the result into a UIImage, or convert the PIX object I passed in into a UIImage? Basically I just want a UIImage back after applying the threshold.
The API for this function can be found here: http://tpgit.github.io/Leptonica/binarize_8c.html#aaef1d6ed54b87144b98c72f675ad7a4c
Does anyone know what I must do to get a UIImage back?
Thanks!
Here is another user converting data to an image:
Reading binary image data from a web service into UIImage
But if you can, I would look into Brad Larson's awesome GPUImage filters; they may be better suited for you: https://github.com/BradLarson/GPUImage. It's very easy to use.
Added an answer for using the GPUImage framework: sorry, I can't help with the first part, but if you'd like to continue with the second, and you simply need a threshold effect, you can use GPUImage as a framework. After setting it up for adaptive thresholding, I simply used a switch case for different effects (or call init however you want), with a slider for effect control or a predetermined value. The code ends up as easy as this:
case GPUIMAGE_ADAPTIVETHRESHOLD:
{
    self.title = @"Adaptive Threshold";
    self.filterSettingsSlider.hidden = NO;
    [self.filterSettingsSlider setMinimumValue:1.0];
    [self.filterSettingsSlider setMaximumValue:20.0];
    [self.filterSettingsSlider setValue:1.0];

    // One-shot filtering of the source image
    UIImage *newFilteredImage = [[[GPUImageAdaptiveThresholdFilter alloc] init] imageByFilteringImage:[self.originalImageView image]];
    self.myEditedImageView.image = newFilteredImage;
}; break;
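If it helps, here's the same idea in Swift, wiring the slider to the filter's main tunable, blurRadiusInPixels; the outlets mirror the hypothetical ones in the snippet above:
// Sketch: the slider drives blurRadiusInPixels, which controls the size
// of the local area the adaptive threshold averages over.
if let source = originalImageView.image {
    let thresholdFilter = GPUImageAdaptiveThresholdFilter()
    thresholdFilter.blurRadiusInPixels = CGFloat(filterSettingsSlider.value)
    myEditedImageView.image = thresholdFilter.imageByFilteringImage(source)
}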

Manipulating a subsection of an image in MATLAB

I have a task where I need to track a series of objects across several frames, and compose the background from the image. The issue arises because one of the objects does not move until near the end, so I'm forced to take a shoddy average of the image. However, if I can blur out the objects, I think I'll be able to improve the background average.
I can identify a subsection of the image where the object is, an m by m array. I just need the ability to blur out this section with a filter. However, imfilter takes the full-sized array (image) as its input, so I cannot simply move along this array pixel by pixel in a for loop. And if I extract the subsection into its own image, I cannot put it back in without using another for loop, which would be computationally expensive.
Is there a method of mapping a blur to a subsection of an image using MATLAB? Can this be done without using two for loops?
Try this...
% Read out the subsection, filter it, and write it back in place
sub_image = original_image(ii:jj, mm:nn);
blurred_sub_image = imfilter(sub_image, h);  % h is your blur kernel, e.g. fspecial('gaussian', ...)
original_image(ii:jj, mm:nn) = blurred_sub_image;
In short, you don't need to use a for loop to address a subsection of an image. You can do it directly, both for reading and writing.
