GPUImage Different Preview to Output - iOS

For the first time when using a different GPUImage filter, I am seeing strange behaviour where GPUImage shows a fairly big difference between the live preview and the outputted photo.
I am currently experiencing this with GPUImageSobelEdgeDetectionFilter as follows;
On the left-hand side I have a screenshot of the device screen, and on the right, the outputted photo. The output seems to significantly reduce the thickness and sharpness of the detected lines, producing a very different picture.
I have tried having SmoothlyScaleOutput on and off, but as I am not currently scaling the image this should not be affecting it.
The filter is set up like so:
filterforphoto = [[GPUImageSobelEdgeDetectionFilter alloc] init];
[(GPUImageSobelEdgeDetectionFilter *)filterforphoto setShouldSmoothlyScaleOutput:NO];
[stillCamera addTarget:filterforphoto];
[filterforphoto addTarget:primaryView];
[stillCamera startCameraCapture];
[(GPUImageSobelEdgeDetectionFilter *)filterforphoto setEdgeStrength:1.0];
And the photo is taken like so:
[stillCamera capturePhotoAsImageProcessedUpToFilter:filterforphoto withCompletionHandler:^(UIImage *processedImage, NSError *error){
    // save or display processedImage here
}];
Does anyone know why GPUImage is rendering the live camera feed so differently from the outputted photo? Is it simply because the preview is of a much lower quality than the final image and therefore looks different at full resolution?
Thanks,
(P.S. Please ignore the slightly different sizing of the left and right images, I didn't quite line them up as well as I could have.)

The reason is indeed because of the different resolution between the live preview and the photo.
The way that the edge detection filters (and others like them) work is that they sample the pixels immediately on either side of the pixel currently being processed. When you provide a much higher resolution input in the form of a photo, this means that the edge detection occurs over a much smaller relative area of the image. This is also why Gaussian blurs of a certain pixel radius appear much weaker when applied to still photos vs. a live preview.
To lock the edge detection at a certain relative size, you can manually set the texelWidth and texelHeight properties on the filter. These values are 1/width and 1/height of the target image, respectively. If you set those values based on the size of the live preview, you should see a consistent edge size in the final photo. Some details may be slightly different, due to the higher resolution, but it should mostly be the same.
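For example, with the setup above, something like the following would pin the edge thickness to the live preview's scale. This is a minimal sketch; the 640x480 preview size is an assumption and should be replaced with your actual preview resolution.
// Hedged sketch: lock the Sobel filter's sampling distance to the live
// preview's resolution so the photo output uses the same relative edge width.
CGFloat previewWidth = 640.0;   // assumed preview size, adjust to your session preset
CGFloat previewHeight = 480.0;
[(GPUImageSobelEdgeDetectionFilter *)filterforphoto setTexelWidth:(1.0 / previewWidth)];
[(GPUImageSobelEdgeDetectionFilter *)filterforphoto setTexelHeight:(1.0 / previewHeight)];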

Related

How to manipulate an image to have the depth effect on iOS 16

What kind of images (other than photos taken on iPhone) can get the depth effect on iOS 16?
What should be modified or added to a photo like this:
So it could have the depth effect that photos taken on iPhone have, like this.
Strangely, even though the image of the fish is not appropriate for the depth effect, the subject can be easily extracted with a long press, as described here. The following subject is received:
As far as I can tell, if you can extract the subject, you can use it for the depth effect on the lock screen. I've noticed that it will only let the subject cover a small portion of the time display (e.g. just the bottom of the numbers in 10:41) before disabling the depth effect for a photo. Try zooming and panning the fish so that it just barely covers the time. I don't think it will let the subject cover the complete height of the time.

ARKit cannot detect reference image

I am trying to detect an image with my app. I added this one to my ARResources assets:
It is a JPG with a white background.
But Xcode is complaining when I try to scan it, with this error:
Error Domain=com.apple.arkit.error Code=300 "Invalid reference image." UserInfo={NSLocalizedFailureReason=One or more reference images have insufficient texture: Group (3), NSLocalizedRecoverySuggestion=One or more images lack sufficient texture and contrast for accurate detection. Image detection works best when an image contains multiple high-contrast regions distributed across its extent., ARErrorItems=(
"Group (3)"
), NSLocalizedDescription=Invalid reference image.}
I don't quite get it. Why can it not detect my image? What do I have to change? I already set the width and height in the inspector and made sure the image has a high resolution (4096x2731).
ARKit's image detection effectively sees images in grayscale, so the main requirements are a rich amount of visual detail (a massive white background area is a bad idea, as is a repetitive texture pattern), high contrast, a well-lit surrounding environment, and a calibrated screen with no bluish or yellowish tint if the target is shown on a display. Also, there's no need to use high-resolution pictures for image detection – even a 400-pixel-wide picture is enough.
Apple's recommendations can give you some additional info on this topic.
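Once the image itself has enough texture and contrast, the detection setup is minimal. A sketch, assuming an asset catalog group named "AR Resources" and a view controller with a sceneView property (both are assumptions, adjust to your project):
// Hedged sketch: load the reference images from the asset catalog group
// ("AR Resources" is an assumed group name) and enable image detection.
NSSet<ARReferenceImage *> *referenceImages =
    [ARReferenceImage referenceImagesInGroupNamed:@"AR Resources" bundle:nil];

ARWorldTrackingConfiguration *configuration = [ARWorldTrackingConfiguration new];
configuration.detectionImages = referenceImages;
[self.sceneView.session runWithConfiguration:configuration];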

AVCaptureVideoPreviewLayer issues with Video Gravity and Face Detection Accuracy

I want to use AVFoundation to set up my own camera feed and process the live feed to detect smiles.
A lot of what I need has been done here: https://developer.apple.com/library/ios/samplecode/SquareCam/Introduction/Intro.html
This code was written a long time back, so I needed to make some modifications to use it the way I want to in terms of appearance.
The changes I made are as follows:
I enabled Auto Layout and size classes, as I wanted to support different screen sizes. I also changed the dimensions of the preview layer to fill the full screen.
The session preset is set to AVCaptureSessionPresetPhoto for iPhone and iPad.
Finally, I set the video gravity to AVLayerVideoGravityResizeAspectFill (this seems to be the key point).
Now when I run the application, the faces get detected, but there seems to be an error in the coordinates where the rectangles are drawn.
When I change the video gravity to AVLayerVideoGravityResizeAspect, everything seems to work fine again.
The only problem then is that the camera preview is not the desired size, which is the full screen.
So now I am wondering why this happens. I notice a method in SquareCam, videoPreviewBoxForGravity, which processes the gravity type and seems to make adjustments:
- (CGRect)videoPreviewBoxForGravity:(NSString *)gravity frameSize:(CGSize)frameSize apertureSize:(CGSize)apertureSize
One thing I noticed here: the frame size stays the same regardless of the gravity type.
Finally, I read somewhere else that when setting the gravity to AspectFill, part of the feed gets cropped, which is understandable, similar to UIImageView's scaleAspectFill.
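The geometry involved for AspectFill is roughly the following. This is a simplified sketch of the kind of adjustment such a method makes (not the sample's actual code, and it ignores mirroring and video rotation):
// Simplified sketch: with AspectFill the video is scaled by the larger of the
// two width/height ratios and the overflow is cropped, so a face rect reported
// in aperture coordinates needs the same scale and offset applied before it is
// drawn on the preview layer.
static CGRect previewRectForVideoRect(CGRect videoRect, CGSize apertureSize, CGSize layerSize)
{
    CGFloat scale = MAX(layerSize.width / apertureSize.width,
                        layerSize.height / apertureSize.height);
    CGFloat xOffset = (layerSize.width - apertureSize.width * scale) / 2.0;
    CGFloat yOffset = (layerSize.height - apertureSize.height * scale) / 2.0;

    return CGRectMake(videoRect.origin.x * scale + xOffset,
                      videoRect.origin.y * scale + yOffset,
                      videoRect.size.width * scale,
                      videoRect.size.height * scale);
}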
My question is: how can I make the right adjustments so that this works for any video gravity type and any size of preview layer?
I have had a look at some related questions; for example, "CIDetector give wrong position on facial features" seems to have a similar issue, but it does not help.
Thanks in advance.

GPUImageView flashes between filtered and unfiltered view when adding a GPUImageFilter

I'm trying to display a GPUImageView with a live camera feed (using GPUImageStillCamera or GPUImageVideoCamera) and display a series of filters below it. When a filter is tapped, I want it to apply that filter to the live feed so that the GPUImageView shows a live, filtered feed of the camera input. I have all of it set up, but for some reason when I tap on pretty much any included GPUImageOutput filter (Vignette, Smooth Toon, Emboss, etc.), the video feed flashes like crazy. It seems like it's alternating between the filtered view and the unfiltered view. When I switch to a different filter, I can tell that the filter is working properly for a tiny fraction of a second before it switches to a different filter.
The grayscale and sepia filters don't flash but instead only show at half strength. I've tried setting the intensity to 1.0 (and a bunch of other values) for the sepia filter, but the grayscale one doesn't have any settings to change, and it seems like some things are gray but there's still color. I tried to take a screenshot of the grayscale view, but when I look at the screenshots, the image is either properly grayscaled or not grayscaled at all, even though that's not what I see on my actual device. My theory is that it's switching between the filtered view and the non-filtered view really fast, creating the illusion of a grayscale filter at 50% strength.
I have no idea why this would be happening, because the standard GPUImage example projects work just fine, and I'm not doing much differently in my project.
If anyone could help me out or at least point me in the right direction, it would be very much appreciated. I have been trying to debug this issue for 3 days straight and I simply cannot figure it out.
EDIT: when I call capturePhotoAsImageProcessedUpToFilter on my GPUImageStillCamera, it returns nil for both the UIImage and the NSError in the completion block (even though the GPUImageStillCamera is not nil). Not sure if this is related, but I figured it was worth mentioning.
EDIT 2: I just realized it was returning a nil image because no filters were set. But if that's the case, how do you take a photo without having any filters active? And does that possibly have anything to do with my original issue? I set a grayscale filter (and I'm still seeing the half-strength version of it), and the image returned in the completion block is the actual proper grayscale image, despite the fact that the live feed looks different.
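(Regarding EDIT 2: a common pattern, sketched here as an assumption rather than something from this thread, is to keep a plain GPUImageFilter in the chain as a pass-through so there is always a filter to capture from.)
// Hedged sketch: a vanilla GPUImageFilter acts as a no-op pass-through, giving
// capturePhotoAsImageProcessedUpToFilter: something to capture even when no
// "real" filter is selected. The variable names here are assumptions.
GPUImageFilter *passthroughFilter = [[GPUImageFilter alloc] init];
[stillCamera addTarget:passthroughFilter];
[passthroughFilter addTarget:filterView];   // filterView: the GPUImageView

[stillCamera capturePhotoAsImageProcessedUpToFilter:passthroughFilter withCompletionHandler:^(UIImage *processedImage, NSError *error){
    // processedImage is the unfiltered photo
}];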
You probably have two inputs targeting your view.
Can't tell without seeing your code, but I found that drawing a graph of all my inputs and outputs really helped when debugging filters.
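If two inputs are the problem, a minimal sketch of the usual fix is to tear down the old chain before building the new one when a filter is tapped (the names videoCamera, newFilter, and filterView are assumptions, not the asker's code):
// Hedged sketch: make sure only one chain feeds the GPUImageView when
// switching filters; two inputs rendering into the same view causes flicker.
[videoCamera removeAllTargets];   // detach the previous filter and any direct link to the view
[newFilter removeAllTargets];

[videoCamera addTarget:newFilter];
[newFilter addTarget:filterView]; // filterView is the GPUImageView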

Best way to get Photoshop to optimise 35 related pictures for fast transmission

I have 35 pictures taken from a stationary camera aimed at a lightbox in which an object is placed, rotated at 10 degrees in each picture. If I cycle through the pictures quickly, the image looks like it is rotating.
If I wished to 'rotate' the object in a browser but wanted to transmit as little data as possible, I thought it might be a good idea to split the set into 36 pictures, where 1 picture is any background the images have in common, and 35 pictures have the background removed, showing just the things that have changed.
Do you think this approach will work? Is there a better route? How would I achieve this in photoshop?
Hmm, you'd probably have to take a separate picture of just the background, then in the remaining pictures use Photoshop to remove the background and keep only the object. I guess if those object pictures have transparency where the background was, this could work.
How are you planning to "rotate" this? Flash? JavaScript? CSS+HTML? Is this supposed to be interactive or just a repeating movie? Do you have a sample of how this has already been done? Sounds kinda cool.
If you create a multiple-frame animated GIF in Photoshop, you can control the quality of the final output, including optimization that automatically converts the whole sequence to indexed color. The result is that your background, though varied, will share most of the same color space, and should be optimized such that it won't matter if it differs slightly in each frame. (Unless your backgrounds are highly varied between photos, though given your use of a light box, they shouldn't be.) Photoshop will let you control the overall output resolution and color remapping, which will affect the final size.
Update: Adobe discontinued ImageReady in Photoshop CS3+; I am still using CS2, so I wasn't aware of this until someone pointed it out.
Unless the background is much bigger than the GIF in the foreground, I doubt that you would benefit greatly from using separate transparent images. Even if they are smaller in size, would the difference be large enough to improve the speed, taking into consideration the average speed at which pages load?