How to manipulate an image to have the depth effect on iOS 16

What kinds of images (other than photos taken on an iPhone) can get the depth effect on iOS 16?
What should be modified or added to a photo like this:
So that it could have the depth effect that photos taken on an iPhone have, like this.
Strangely, even though the image of the fish does not seem suited to the depth effect, the subject can easily be extracted with a long press, as described here. The following subject is received:

As far as I can tell, if you can extract the subject, you can use it for depth on the lock screen. I've noticed that iOS only lets the subject cover a small portion of the clock (e.g. just the bottom of the numbers in 10:41) before it disables the depth effect for a photo. Try zooming and panning the fish so that it just barely overlaps the time; I don't think it will let the subject cover the full height of the time.

Related

AVCaptureVideoPreviewLayer issues with Video Gravity and Face Detection Accuracy

I want to use AVFoundation to set up my own camera feed and process the live feed to detect smiles.
A lot of what I need has been done here: https://developer.apple.com/library/ios/samplecode/SquareCam/Introduction/Intro.html
This code was written a long time ago, so I needed to make some modifications to use it the way I want in terms of appearance.
The changes I made are as follows:
I enabled Auto Layout and size classes because I wanted to support different screen sizes. I also changed the dimensions of the preview layer to be the full screen.
The session preset is set to AVCaptureSessionPresetPhoto for both iPhone and iPad.
Finally, I set the video gravity to AVLayerVideoGravityResizeAspectFill (this seems to be the key point).
Now when I run the application, the faces get detected, but there seems to be an error in the coordinates where the rectangles are drawn.
When I change the video gravity to AVLayerVideoGravityResizeAspect, everything seems to work fine again.
The only problem then is that the camera preview is not the desired size, which is the full screen.
So now I am wondering why this happens. I noticed a function in SquareCam, videoPreviewBoxForGravity, which processes the gravity type and seems to make adjustments.
- (CGRect)videoPreviewBoxForGravity:(NSString *)gravity frameSize:(CGSize)frameSize apertureSize:(CGSize)apertureSize
One thing I noticed here is that the frame size stays the same regardless of the gravity type.
Finally, I read elsewhere that when setting the gravity to AspectFill, part of the feed gets cropped, which is understandable and similar to a UIImageView's scaleToFill.
My question is, how can I make the right adjustments so that this app works for any video gravity type and any size of preview layer?
I have had a look at some related questions, for example CIDetector give wrong position on facial features, which seems to have a similar issue, but it does not help.
Thanks in advance.
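For reference, here is a minimal sketch of the geometry involved. This is my own illustration rather than the SquareCam code (the helper name is a placeholder), and it assumes, as in that sample, that apertureSize is reported in the sensor's landscape orientation while frameSize is the portrait bounds of the preview layer. The point is that for ResizeAspectFill the video box is larger than the layer and its origin goes negative, so face rectangles scaled into the box must also be shifted by that origin.

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

// Hypothetical helper: the rectangle the video occupies inside the preview
// layer for a given gravity. frameSize is the preview layer's bounds.size,
// apertureSize is the clean aperture of the video frames (landscape).
static CGRect videoBoxForGravity(NSString *gravity, CGSize frameSize, CGSize apertureSize)
{
    CGFloat apertureRatio = apertureSize.height / apertureSize.width;
    CGFloat viewRatio = frameSize.width / frameSize.height;

    CGSize size = CGSizeZero;
    if ([gravity isEqualToString:AVLayerVideoGravityResizeAspectFill]) {
        // Fill: scale the video up until it covers the layer; one dimension overflows.
        if (viewRatio > apertureRatio) {
            size.width  = frameSize.width;
            size.height = apertureSize.width * (frameSize.width / apertureSize.height);
        } else {
            size.width  = apertureSize.height * (frameSize.height / apertureSize.width);
            size.height = frameSize.height;
        }
    } else if ([gravity isEqualToString:AVLayerVideoGravityResizeAspect]) {
        // Aspect: letterbox/pillarbox the video inside the layer.
        if (viewRatio > apertureRatio) {
            size.width  = apertureSize.height * (frameSize.height / apertureSize.width);
            size.height = frameSize.height;
        } else {
            size.width  = frameSize.width;
            size.height = apertureSize.width * (frameSize.width / apertureSize.height);
        }
    } else { // AVLayerVideoGravityResize stretches to fit exactly.
        size = frameSize;
    }

    // Centre the box in the layer. For AspectFill the origin becomes negative,
    // and every face rectangle drawn in layer coordinates must be offset by it.
    return CGRectMake((frameSize.width  - size.width)  / 2.0,
                      (frameSize.height - size.height) / 2.0,
                      size.width,
                      size.height);
}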

GPUImage Different Preview to Output

For the first time while using a GPUImage filter, I am seeing strange behaviour where GPUImage shows a fairly big difference between the live preview and the outputted photo.
I am currently experiencing this with GPUImageSobelEdgeDetectionFilter, as follows:
On the left-hand side I have a screenshot of the device screen, and on the right the outputted photo. It seems to significantly reduce the thickness and sharpness of the detected lines, outputting a very different picture.
I have tried having SmoothlyScaleOutput on and off, but as I am not currently scaling the image, this should not be affecting it.
The filter is set up like so:
filterforphoto = [[GPUImageSobelEdgeDetectionFilter alloc] init];
[(GPUImageSobelEdgeDetectionFilter *)filterforphoto setShouldSmoothlyScaleOutput:NO];
[stillCamera addTarget:filterforphoto];
[filterforphoto addTarget:primaryView];
[stillCamera startCameraCapture];
[(GPUImageSobelEdgeDetectionFilter *)filterforphoto setEdgeStrength:1.0];
And the photo is taken like so:
[stillCamera capturePhotoAsImageProcessedUpToFilter:filterforphoto withCompletionHandler:^(UIImage *processedImage, NSError *error){
Does anyone know why GPUImage is interpreting the live camera feed so differently from the outputted photo? Is it simply because the preview is of a much lower quality than the final image and therefore looks different at full resolution?
Thanks,
(p.s. Please ignore the slightly different sizing of the left and right images; I didn't quite line them up as well as I could have.)
The reason is indeed because of the different resolution between the live preview and the photo.
The way that the edge detection filters (and others like them) work is that they sample the pixels immediately on either side of the pixel currently being processed. When you provide a much higher resolution input in the form of a photo, this means that the edge detection occurs over a much smaller relative area of the image. This is also why Gaussian blurs of a certain pixel radius appear much weaker when applied to still photos vs. a live preview.
To lock the edge detection at a certain relative size, you can manually set the texelWidth and texelHeight properties on the filter. These values are 1/width and 1/height of the target image, respectively. If you set those values based on the size of the live preview, you should see a consistent edge size in the final photo. Some details may be slightly different, due to the higher resolution, but it should mostly be the same.
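As a minimal sketch of that suggestion (untested; the preview resolution below is an assumed value, so substitute whatever frame size your camera actually delivers for the live preview, which is the image size the filter sees, not the on-screen view size):

CGSize previewFrameSize = CGSizeMake(640.0, 852.0);   // assumed live-preview frame size

GPUImageSobelEdgeDetectionFilter *sobel = (GPUImageSobelEdgeDetectionFilter *)filterforphoto;
sobel.texelWidth  = 1.0 / previewFrameSize.width;     // lock the sample spacing to the preview...
sobel.texelHeight = 1.0 / previewFrameSize.height;    // ...so the photo uses the same relative kernel size

[stillCamera capturePhotoAsImageProcessedUpToFilter:filterforphoto
                              withCompletionHandler:^(UIImage *processedImage, NSError *error) {
    // processedImage should now show roughly the same edge thickness as the preview.
}];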

iOS Heavy image switching

I'm developing an app that will showcase products. One of the features of this app is that you will be able to "rotate" the product using your finger (a pan gesture).
I was thinking of implementing this by taking photos of the product from different angles, so when you "drag" the image, all I would have to do is switch the image accordingly. If you drag a little, I switch only one image; if you drag a lot, I switch them in cadence, making it look like a movie. But I have a concern and a probable solution:
Will this be performant? Since it's an art/museum product showcase, the photos will be quite large in size/definition, and loading/switching when "dragged a lot" might be a problem because it would cause "flickering". My solution would be: instead of loading pic by pic, I would put them all inside one massive sheet and work through them as if they were a sprite.
Is that a good idea? Or should I stick with the pic-by-pic rotation?
Edit 1: There's a complication: the user will be able to zoom in/out and to rotate the product on any axis (X, Y and Z).
My personal opinion: I don't think this will work the way you hope, or the performance and/or aesthetics will not be what you want.
1) Taking individual shots that you then try to step through as keyframes based on touch events won't work well, because you will have inevitable inconsistencies in 'framing' the shots, so the playback won't be smooth.
2) The best way to do this, I suspect, will be to shoot it as video, using some sort of rig that allows you to keep the camera fixed while rotating the object.
3) I'm pretty sure this is how most 'professional' grade product-carousel presentations work.
4) Even then you will have more image frames than you need -- not sure whether you plan to embed the image files in the app or download them on demand -- but that is also a consideration in terms of how much downsampling you'll need to do to reduce frames/file size.
Suggestion
Look at shooting these as video (somewhat as described above), then downsampling and removing excess frames with a video editor. You could then use AVFoundation for playback and use your gestures to 'scrub' through the video frames. I worked on something like this for HTML playback at a large company, and I can assure you it was done with video.
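A minimal sketch of that scrubbing idea (self.player and self.startFraction are assumed properties of the view controller; an illustration, not production code): map horizontal pan distance to a time in the turntable clip and seek there with tight tolerances so the product appears to rotate under the finger.

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#include <math.h>

- (void)handlePan:(UIPanGestureRecognizer *)pan
{
    CGFloat width = self.view.bounds.size.width;
    CGFloat translation = [pan translationInView:self.view].x;

    CMTime duration = self.player.currentItem.duration;   // AVPlayer showing the turntable clip
    if (CMTIME_IS_INDEFINITE(duration)) return;

    // One full screen-width drag corresponds to the whole clip (one full rotation).
    double fraction = self.startFraction + translation / width;
    fraction -= floor(fraction);   // wrap into [0, 1) so dragging keeps looping

    CMTime target = CMTimeMultiplyByFloat64(duration, fraction);
    [self.player seekToTime:target
            toleranceBefore:kCMTimeZero
             toleranceAfter:kCMTimeZero];

    if (pan.state == UIGestureRecognizerStateEnded) {
        self.startFraction = fraction;   // remember where this drag left off
    }
}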
Alternatively, if video won't work for you, your sprite-sheet solution might work (consider using SpriteKit). But keep in mind what I said about trying to keyframe one-off camera shots together -- it just won't work well. Maybe a compromise would be to shoot static images, but do so by fixing the camera and rotating the object in very specific increments. That could work as well, I suppose, but you will need to be very careful about lighting and other atmospherics; it doesn't take much variation at all to be detectable to the human eye, making the whole presentation seem strange. Good luck.
A coder at my company did something like this before using 360 images of an object, and it worked just great, but it didn't have zoom. You could add zoom with a pinch gesture by placing the image view inside a scroll view and zooming in on the static image, as in the sketch below.
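A minimal sketch of that zoom idea (scrollView and imageView are assumed properties of the view controller; UIScrollView supplies the pinch handling itself once a delegate tells it which view to zoom):

- (void)viewDidLoad
{
    [super viewDidLoad];
    self.scrollView.minimumZoomScale = 1.0;
    self.scrollView.maximumZoomScale = 4.0;   // arbitrary upper limit for this sketch
    self.scrollView.delegate = self;          // the controller adopts UIScrollViewDelegate
}

- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    return self.imageView;   // the image view showing the current rotation frame
}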
This scenario sounds like what you really need is a simple 3D model loader library, or to write it in OpenGL yourself. Pan and zoom behaviour is really basic once you make the jump to 3D, so it should be easy to find lots of examples.
It all depends on your situation and time constraints :)

Is it possible to match an image with its appearance in a video?

I have a short video of 10 minutes. This video is actually an online lecture. When you watch it, you only see a slide show (some slides are annotated).
I have the original slides (PDF or image or PPT or whatever). Is it possible to match each slide with the specific time in the video at which it appears?
My idea is to take every slide image and compare it with every video frame, trying to match the slide image in the video.
What do you think of my idea? Is it possible and doable with some algorithm? Can I just subtract the slide image from the video frame (calculate the difference) to see which difference is close to zero? Thanks
If the images are perfectly aligned, then you can use any of simple differencing, sum of squared differences or normalised cross-correlation. However, if they are not aligned, you will need to register the two images first, followed by any of the three mentioned matching methods. Do a google search for image registration. Affine registration might be sufficient for your problem.
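For illustration, a minimal sketch of the sum-of-squared-differences comparison mentioned above, assuming both the slide and the video frame have already been rendered to 8-bit grayscale buffers of identical dimensions (registration/alignment is a separate step). Lower scores mean a closer match.

#include <stdint.h>
#include <stddef.h>

static double sumOfSquaredDifferences(const uint8_t *slide,
                                      const uint8_t *frame,
                                      size_t width, size_t height)
{
    double total = 0.0;
    for (size_t i = 0; i < width * height; i++) {
        double d = (double)slide[i] - (double)frame[i];
        total += d * d;
    }
    // Normalise by pixel count so scores are comparable across resolutions.
    return total / (double)(width * height);
}

One way to use it: sample the video at, say, one frame per second, score every slide against each sampled frame, and record for each slide the first timestamp whose score falls below a chosen threshold.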

Best way to get Photoshop to optimise 35 related pictures for fast transmission

I have 35 pictures taken from a stationary camera aimed at a lightbox in which an object is placed, rotated by 10 degrees in each picture. If I cycle through the pictures quickly, the object looks like it is rotating.
If I wished to 'rotate' the object in a browser but wanted to transmit as little data as possible, I thought it might be a good idea to split the set into 36 pictures, where one picture is whatever background the images have in common, and the other 35 are the pictures minus the background, showing only the things that have changed.
Do you think this approach will work? Is there a better route? How would I achieve this in photoshop?
Hmm, you'd probably have to take a separate picture of just the background, then, in the remaining pictures, use Photoshop to remove the background and keep only the object. I guess if those pictures have transparency in the place where the background was, this could work.
How are you planning to "rotate" this? Flash? JavaScript? CSS+HTML? Is this supposed to be interactive or just a repeating movie? Do you have a sample of how this has already been done? Sounds kinda cool.
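Outside of Photoshop, the same kind of delta frame could be produced programmatically. A minimal sketch (my own illustration, not a Photoshop feature), assuming non-premultiplied RGBA8888 buffers of equal size: pixels that match the shared background within a tolerance become fully transparent, and everything else is kept.

#include <stdint.h>
#include <stdlib.h>

static void makeDeltaFrame(const uint8_t *background,
                           uint8_t *frame,          // modified in place
                           size_t pixelCount,
                           uint8_t tolerance)
{
    for (size_t i = 0; i < pixelCount; i++) {
        const uint8_t *b = background + i * 4;
        uint8_t *f = frame + i * 4;
        int dr = abs((int)f[0] - (int)b[0]);
        int dg = abs((int)f[1] - (int)b[1]);
        int db = abs((int)f[2] - (int)b[2]);
        if (dr <= tolerance && dg <= tolerance && db <= tolerance) {
            f[0] = f[1] = f[2] = 0;
            f[3] = 0;   // this pixel is "background": make it transparent
        }
    }
}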
If you create a multiple-frame animated GIF in Photoshop, you can control the quality of the final output, including optimization that automatically converts the whole sequence to indexed color. The result is that your backgrounds, though varied, will share most of the same color space and should be optimized such that it won't matter if they differ slightly from frame to frame. (Unless your backgrounds are highly varied between photos, though given your use of a light box, they shouldn't be.) Photoshop will let you control the overall output resolution and color remapping, which will affect the final size.
Update: Adobe discontinued ImageReady as of Photoshop CS3; I am still using CS2, so I wasn't aware of this until someone pointed it out.
Unless the background is much bigger than the GIF in the foreground, I doubt that you would benefit greatly from using separate transparent images. Even if they are smaller in size, would the difference be large enough to improve the speed, taking into consideration the average speed at which pages load?
