GPUImageView flashes between filtered and unfiltered view when adding a GPUImageFilter - ios

I'm trying to display a GPUImageView with a live camera feed (using GPUImageStillCamera or GPUImageVideoCamera) and display a series of filters below it. When a filter is tapped, I want it applied to the live feed so that the GPUImageView shows a live, filtered feed of the camera input. I have all of it set up, but for some reason when I tap on pretty much any included GPUImageOutput filter (Vignette, Smooth Toon, Emboss, etc.), the video feed flashes like crazy. It seems like it's alternating between the filtered view and the unfiltered view. When I switch to a different filter, I can tell that the filter is working properly for a tiny fraction of a second before the flashing starts again.
The grayscale and sepia filters don't flash, but instead only show at half strength. I've tried setting the intensity to 1.0 (and a bunch of other values) for the sepia filter, but the grayscale one doesn't have any settings to change, and the feed looks like some things are gray while there's still color. I tried to take a screenshot of the grayscale view, but when I look at the screenshots, the image is either properly grayscaled or not grayscaled at all, even though that's not what I see on my actual device. My theory is that it's switching between the filtered view and the non-filtered view really fast, creating the illusion of a grayscale filter at 50% strength.
I have no idea why this would be happening, because the standard GPUImage example projects work just fine, and I'm not doing much differently in my project.
If anyone could help me out or at least point me in the right direction, it would be very much appreciated. I have been trying to debug this issue for 3 days straight and I simply cannot figure it out.
EDIT: When I call capturePhotoAsImageProcessedUpToFilter: on my GPUImageStillCamera, it returns nil for both the UIImage and the NSError in the completion block (even though the GPUImageStillCamera itself is not nil). Not sure if this is related, but I figured it was worth mentioning.
EDIT 2: I just realized it was returning a nil image because no filters were set. But if that's the case, how do you take a photo without having any filters active? And does that possibly have anything to do with my original issue? I set a grayscale filter (and I'm still seeing the half-strength version of it), and the image returned in the completion block is the actual proper grayscale image, despite the fact that the live feed looks different.

You probably have two inputs targeting your view.
Can't tell without seeing your code, but I found that drawing a graph of all my inputs and outputs really helped when debugging filters.
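For example, here is a minimal sketch (not the asker's actual code; stillCamera, currentFilter and filteredView are assumed properties) of switching filters so that only one chain ever targets the view:

// Detach the old chain first, otherwise both the raw camera feed and the
// filtered feed can end up rendering into the same GPUImageView.
- (void)switchToFilter:(GPUImageOutput<GPUImageInput> *)newFilter
{
    [self.stillCamera removeAllTargets];
    [self.currentFilter removeAllTargets];

    // Rebuild a single chain: camera -> filter -> view.
    self.currentFilter = newFilter;
    [self.stillCamera addTarget:newFilter];
    [newFilter addTarget:self.filteredView];
}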

Related

GPUImage Different Preview to Output

For the first time while using a different GPUImage filter, I am seeing strange behaviour where GPUImage shows a fairly big difference between the live preview and the outputted photo.
I am currently experiencing this with GPUImageSobelEdgeDetectionFilter, as follows:
On the left-hand side I have a screenshot of the device screen, and on the right, the outputted photo. It seems to significantly reduce the thickness and sharpness of the detected lines, producing a very different picture.
I have tried having SmoothlyScaleOutput on and off, but as I am not currently scaling the image, this should not be affecting it.
The filter is set up like so:
filterforphoto = [[GPUImageSobelEdgeDetectionFilter alloc] init];
[(GPUImageSobelEdgeDetectionFilter *)filterforphoto setShouldSmoothlyScaleOutput:NO];
[stillCamera addTarget:filterforphoto];
[filterforphoto addTarget:primaryView];
[stillCamera startCameraCapture];
[(GPUImageSobelEdgeDetectionFilter *)filterforphoto setEdgeStrength:1.0];
And the photo is taken like so:
[stillCamera capturePhotoAsImageProcessedUpToFilter:filterforphoto withCompletionHandler:^(UIImage *processedImage, NSError *error){
    // use processedImage here
}];
Does anyone know why GPUImage is interpreting the live camera so differently from the outputted photo? Is it simply because the preview is of a much lower quality than the final image, and therefore the filter looks different at full resolution?
Thanks,
(P.S. Please ignore the slightly different sizing of the left and right image, I didn't quite line them up as well as I could have.)
The reason is indeed because of the different resolution between the live preview and the photo.
The way that the edge detection filters (and others like them) work is that they sample the pixels immediately on either side of the pixel currently being processed. When you provide a much higher resolution input in the form of a photo, this means that the edge detection occurs over a much smaller relative area of the image. This is also why Gaussian blurs of a certain pixel radius appear much weaker when applied to still photos vs. a live preview.
To lock the edge detection at a certain relative size, you can manually set the texelWidth and texelHeight properties on the filter. These values are 1/width and 1/height of the target image, respectively. If you set those values based on the size of the live preview, you should see a consistent edge size in the final photo. Some details may be slightly different, due to the higher resolution, but it should mostly be the same.
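For example, a sketch of that suggestion, assuming a 640x480 live preview (adjust for your actual preview size):

GPUImageSobelEdgeDetectionFilter *filterforphoto = [[GPUImageSobelEdgeDetectionFilter alloc] init];
filterforphoto.edgeStrength = 1.0;
// Lock the sampling distance to the preview resolution so the edge thickness
// in the full-resolution photo matches what the preview shows.
filterforphoto.texelWidth  = 1.0 / 640.0;   // 1 / preview width in pixels
filterforphoto.texelHeight = 1.0 / 480.0;   // 1 / preview height in pixels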

Compare drawing with original

For the last week I've been attempting to write some code in which I'd be able to trace an image and compare it with the original by pixel color count. So far the only two things definitely working are the drawing and the pixel count (which obviously returns 0, since the rest isn't working).
To turn the UIImageView containing the drawing into a PNG for comparison, I take a screenshot. However, I'm trying to load the saved drawing (the screenshot) immediately after saving it for a pixel color count and comparison, and this whole process is not working whatsoever.
The pixel count code asks for a PNG at the moment, so I'm trying to work around that.
Is there maybe an easier way to do this whole process instead of first taking a screenshot, then calling it and then getting the pixel color count? I'm trying to make the code as simple as possible by comparing only two colors.
Any help would be appreciated. I would post code, but I'd rather get a fresh take on this since I've tried many different ways already. Thanks in advance.
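Not from the thread, but one way to skip the screenshot-and-save round trip is to render the drawing view into an image in memory and read its pixels directly. A rough sketch under those assumptions (ARC, both images the same size; the names here are hypothetical):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Render a view (e.g. the UIImageView holding the drawing) into a UIImage,
// without taking an on-screen screenshot or writing a PNG to disk.
static UIImage *ImageFromView(UIView *view)
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 1.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

// Count how many pixels of two same-sized images match, by drawing both into
// RGBA bitmap contexts and comparing the raw bytes.
static NSUInteger MatchingPixelCount(UIImage *a, UIImage *b)
{
    size_t width = CGImageGetWidth(a.CGImage);
    size_t height = CGImageGetHeight(a.CGImage);
    size_t bytesPerRow = width * 4;
    NSMutableData *bufA = [NSMutableData dataWithLength:bytesPerRow * height];
    NSMutableData *bufB = [NSMutableData dataWithLength:bytesPerRow * height];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctxA = CGBitmapContextCreate(bufA.mutableBytes, width, height, 8, bytesPerRow,
                                              colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGContextRef ctxB = CGBitmapContextCreate(bufB.mutableBytes, width, height, 8, bytesPerRow,
                                              colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctxA, CGRectMake(0, 0, width, height), a.CGImage);
    CGContextDrawImage(ctxB, CGRectMake(0, 0, width, height), b.CGImage);

    const uint8_t *pa = bufA.bytes, *pb = bufB.bytes;
    NSUInteger matches = 0;
    for (size_t i = 0; i < bytesPerRow * height; i += 4) {
        if (pa[i] == pb[i] && pa[i + 1] == pb[i + 1] && pa[i + 2] == pb[i + 2]) {
            matches++;   // compare RGB only, ignore alpha
        }
    }
    CGContextRelease(ctxA);
    CGContextRelease(ctxB);
    CGColorSpaceRelease(colorSpace);
    return matches;
}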

Replace particular color of image in iOS

I want to replace a particular color in an image with another, user-selected color. While replacing the color, I want to maintain the gradient effect of the original color; for example, see the attached images.
I have tried to do so with CoreGraphics and succeeded in replacing the color, but the replacement color does not maintain the gradient effect of the original color in the image.
Can someone help me with this? Is CoreGraphics the right way to do this?
Thanks in advance.
After struggling with almost the same problem (but with NSImage), I made a category for replacing colors in an NSImage which uses the ColorCube CIFilter:
https://github.com/braginets/NSImage-replace-color
inspired by this code for UIImage (also uses CIColorCube):
https://github.com/vhbit/ColorCubeSample
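As a rough illustration of the CIColorCube approach (not the code from either repository), the sketch below builds a 3D lookup table that leaves most colors untouched but remaps "mostly red" voxels by swapping their red and blue channels; the per-pixel intensities are kept, so the original shading/gradient is preserved. A real implementation would do a proper HSV hue comparison instead of this crude red test.

#import <CoreImage/CoreImage.h>
#include <stdlib.h>

static CIImage *ReplaceReds(CIImage *inputImage)
{
    const unsigned int dimension = 32;   // 32 x 32 x 32 lookup cube
    size_t cubeSize = dimension * dimension * dimension * 4 * sizeof(float);
    float *cube = malloc(cubeSize);
    size_t offset = 0;
    // CIColorCube expects blue as the outer loop and red as the inner loop.
    for (unsigned int b = 0; b < dimension; b++) {
        for (unsigned int g = 0; g < dimension; g++) {
            for (unsigned int r = 0; r < dimension; r++) {
                float rf = r / (float)(dimension - 1);
                float gf = g / (float)(dimension - 1);
                float bf = b / (float)(dimension - 1);
                BOOL mostlyRed = (rf > gf + 0.2f && rf > bf + 0.2f);
                cube[offset++] = mostlyRed ? bf : rf;   // red out
                cube[offset++] = gf;                    // green out
                cube[offset++] = mostlyRed ? rf : bf;   // blue out
                cube[offset++] = 1.0f;                  // alpha
            }
        }
    }
    NSData *cubeData = [NSData dataWithBytesNoCopy:cube length:cubeSize freeWhenDone:YES];

    CIFilter *filter = [CIFilter filterWithName:@"CIColorCube"];
    [filter setValue:@(dimension) forKey:@"inputCubeDimension"];
    [filter setValue:cubeData forKey:@"inputCubeData"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    return filter.outputImage;
}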
I do a lot of color transfer/blend/replacement/swapping between images in my projects and have found the following publications very useful, both by Erik Reinhard:
Color Transfer Between Images
Real-Time Color Blending of Rendered and Captured Video
Unfortunately I can't post any source code (or images) right now because the results are being submitted to an upcoming conference, but I have implemented variations of the above algorithms with very pleasing results. I'm sure with some tweaks (and a bit of patience) you might be able to get what you're after!
EDIT:
Furthermore, the real challenge will lie in separating the different picture elements (e.g. isolating the wall). This is not unlike Photoshop's magic wand tool which obviously requires a lot of processing power and complex algorithms (and is still not perfect).
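For reference (and definitely not the code mentioned above, which couldn't be shared), the core of Reinhard's "Color Transfer Between Images" is matching the mean and standard deviation of each channel. The paper works in the decorrelated l-alpha-beta space; this minimal sketch applies the same statistics transfer to one raw float channel:

#include <math.h>
#include <stddef.h>

// Shift and scale the source channel so its mean and standard deviation match
// the target channel. Run once per channel, ideally after converting both
// images to l-alpha-beta space as in the paper.
static void TransferChannelStats(float *source, const float *target, size_t count)
{
    double srcMean = 0, tgtMean = 0;
    for (size_t i = 0; i < count; i++) { srcMean += source[i]; tgtMean += target[i]; }
    srcMean /= count;
    tgtMean /= count;

    double srcVar = 0, tgtVar = 0;
    for (size_t i = 0; i < count; i++) {
        srcVar += (source[i] - srcMean) * (source[i] - srcMean);
        tgtVar += (target[i] - tgtMean) * (target[i] - tgtMean);
    }
    double scale = sqrt(tgtVar / srcVar);   // ratio of standard deviations

    for (size_t i = 0; i < count; i++) {
        source[i] = (float)((source[i] - srcMean) * scale + tgtMean);
    }
}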

OpenCV Issue of Image Subtraction?

I am trying to subtract two images using the function cvAbsDiff(img1, img2, dest);
It works, but sometimes when I bring my hand in front of my head or body, the hand is not clear and the background comes into the picture... the background image (head) overlays my foreground (hand).
It works correctly on plain surfaces, i.e. when the background is even, like a wall.
Please check out my image so that you can better understand my problem:
http://www.2shared.com/photo/hJghiq4b/bg_overlays_foreground.html
If you have any solution or hint, please help me.
There's nothing wrong with your code. Background subtraction is not a preferred way for motion detection or silhouette detection because it's not very robust. The problem arises because the background and the foreground are similar in colour in many regions, which on subtraction pushes the foreground to the back. You might try using:
- optical flow for motion detection
- if your task is just detecting a silhouette or a hand, training a HOG classifier for it
In case you do not want to try a new approach, you could play around with the threshold value (in your case 30). When you subtract regions of similar colour, the difference is less than 30, so thresholding at 30 just blacks them out. You may also try HSV or some other colour space; see the sketch below.
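A small sketch of that threshold/colour-space suggestion (plain C with OpenCV's legacy API, as in the question; the function name and the preallocated single-channel dest image are assumptions):

#include <opencv/cv.h>

void subtract_frames(IplImage *img1, IplImage *img2, IplImage *dest, double threshold)
{
    IplImage *gray1 = cvCreateImage(cvGetSize(img1), IPL_DEPTH_8U, 1);
    IplImage *gray2 = cvCreateImage(cvGetSize(img2), IPL_DEPTH_8U, 1);
    cvCvtColor(img1, gray1, CV_BGR2GRAY);
    cvCvtColor(img2, gray2, CV_BGR2GRAY);

    IplImage *diff = cvCreateImage(cvGetSize(img1), IPL_DEPTH_8U, 1);
    cvAbsDiff(gray1, gray2, diff);

    /* If similar colours keep getting blacked out, try lowering the threshold
       below 30, or repeat these steps on the H and S planes of an HSV image
       (cvCvtColor with CV_BGR2HSV, then cvSplit). */
    cvThreshold(diff, dest, threshold, 255, CV_THRESH_BINARY);

    cvReleaseImage(&gray1);
    cvReleaseImage(&gray2);
    cvReleaseImage(&diff);
}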
Putting in the relevant code would help. Also knowing what you're actually trying to achieve.
Which two images are you subtracting? I've subtracted subsequent images (that is, images taken with a delay of a fraction of a second), and background subtraction generally results in the edges of moving objects, for example the edges of a hand, and not the entire silhouette of a hand. I'm guessing you're taking the difference of the current frame and a static startup frame. It's possible that parts aren't different enough (skin+skin).
I've got some computer problems tonight; I'll test it out tomorrow (please put up at least the steps you actually carry through, though) and let you know.
I'm still not sure what your ultimate goal is, although I'm guessing you want to do some gesture-recognition (since you have a vector called "fingers").
As Manpreet said, your biggest problem is robustness, and that is from the subjects having similar color.
I reproduced your image by having my face in the static comparison image, then moving it. If I started with only the background, it was already much more robust and in any case didn't display any "overlaying".
A quick fix is to make sure you have a clean, subject-free static image.
Otherwise, you'll want a dynamic comparison image; the simplest would be comparing frame_n with frame_n-1. This will generally give you just the moving edges though, so if you want the entire silhouette you can either:
1) Use a different segmentation algorithm (this is what I recommend; background subtraction is fast, and you can use it to determine a much smaller ROI in which to search, then use a different algorithm for more robust segmentation).
2) Try to make a compromise between the static and dynamic comparison image, for example an average of the past 10 frames or something like that. I don't know how well this works, but it would be quite simple to implement and worth a try :).
Also, try CV_THRESH_OTSU instead of 30 for your threshold value and see if you like that better.
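A sketch of option 2 above combined with the Otsu suggestion (again the legacy C API; everything here, including the 0.05 blending factor, is illustrative): compare each frame against a running average of recent frames instead of a single static image.

#include <opencv/cv.h>
#include <opencv/highgui.h>

int main(void)
{
    CvCapture *capture = cvCreateCameraCapture(0);
    IplImage *frame = cvQueryFrame(capture);

    IplImage *gray  = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
    IplImage *avg32 = cvCreateImage(cvGetSize(frame), IPL_DEPTH_32F, 1);
    IplImage *avg8  = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
    IplImage *diff  = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);

    cvCvtColor(frame, gray, CV_BGR2GRAY);
    cvConvert(gray, avg32);                    /* seed the average with frame 0 */

    while ((frame = cvQueryFrame(capture)) != NULL) {
        cvCvtColor(frame, gray, CV_BGR2GRAY);

        cvRunningAvg(gray, avg32, 0.05, NULL); /* blend new frames into the background */
        cvConvertScale(avg32, avg8, 1.0, 0);

        cvAbsDiff(gray, avg8, diff);
        /* Let Otsu pick the threshold instead of hard-coding 30. */
        cvThreshold(diff, diff, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);

        cvShowImage("foreground", diff);
        if (cvWaitKey(10) == 27) break;        /* Esc to quit */
    }

    cvReleaseCapture(&capture);
    return 0;
}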
Also, I noticed that the output often flares (regions which haven't changed switch from black to white). Checking with the live stream, I'm quite certain it's because of the webcam autofocusing/adjusting white balance etc. If you're getting that too, turning off the autofocus etc. should help (which, by the way, isn't done through OpenCV but depends on the camera. Possibly check this: How to programatically disable the auto-focus of a webcam?)

Best way to get Photoshop to optimise 35 related pictures for fast transmission

I have 35 pictures taken from a stationary camera aimed at a lightbox in which an object is placed, rotated at 10 degrees in each picture. If I cycle through the pictures quickly, the image looks like it is rotating.
If I wished to 'rotate' the object in a browser but wanted to transmit as little data as possible for this, I thought it might be a good idea to split the set into 36 pictures, where 1 picture is any background the images have in common, and the other 35 are the pictures minus the background, showing just the things that have changed.
Do you think this approach will work? Is there a better route? How would I achieve this in photoshop?
Hmm, you'd probably have to take a separate picture of just the background, then, in the remaining pictures, use Photoshop to remove the background and keep only the object. I guess this could work if the remaining pictures have transparency where the background was.
How are you planning to "rotate" this? Flash? JavaScript? CSS+HTML? Is this supposed to be interactive or just a repeating movie? Do you have a sample of how this has already been done? Sounds kinda cool.
If you create a multiple-frame animated GIF in Photoshop, you can control the quality of the final output, including optimization that automatically converts the whole sequence to indexed color. The result is that your background, though varied, will share most of the same color space, and should be optimized such that it won't matter if it differs slightly in each frame. (Unless your backgrounds are highly varied between photos, though given your use of a light box, they shouldn't be.) Photoshop will let you control the overall output resolution and color remapping, which will affect the final size.
Update: Adobe discontinued ImageReady in Photoshop CS3+; I am still using CS2, so I wasn't aware of this until someone pointed it out.
Unless the background is much bigger than the GIF in the foreground, I doubt that you would benefit greatly from using separate transparent images. Even if they are smaller in size, would the difference be large enough to improve the speed, taking into consideration the average speed with which pages are loaded?
