Will running an AVCaptureDevice at a 1920x1080 AVCaptureDeviceFormat consume more battery than a format at say 640x480?
If you're on, say, an iPhone Plus, where a capture format exists whose resolution exactly matches the screen resolution, you're at least saving a resizing operation on the GPU (I'm using Metal).
Is upsizing from 640x480 to 1920x1080 a more expensive operation than say, downsizing from 3840x2160 to 1920x1080?
I'm running an app with a live camera background that also uses the camera to capture media, so I'm trying to make some intelligent battery-life decisions when choosing formats.
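For what it's worth, here is the rough approach I'm considering for picking a format: a minimal sketch that just selects the smallest format covering a target size, on the assumption that pushing fewer pixels through the pipeline draws less power (the helper name is mine, not an API):

```swift
import AVFoundation

// Hypothetical helper: pick the smallest format whose dimensions still cover the target,
// assuming fewer pixels means less power drawn by the capture pipeline.
func selectSmallestFormat(for device: AVCaptureDevice,
                          coveringWidth targetWidth: Int32,
                          height targetHeight: Int32) throws {
    let candidates = device.formats.filter { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        return dims.width >= targetWidth && dims.height >= targetHeight
    }
    // Prefer the candidate with the fewest pixels.
    guard let best = candidates.min(by: { a, b in
        let da = CMVideoFormatDescriptionGetDimensions(a.formatDescription)
        let db = CMVideoFormatDescriptionGetDimensions(b.formatDescription)
        return Int(da.width) * Int(da.height) < Int(db.width) * Int(db.height)
    }) else { return }

    try device.lockForConfiguration()
    device.activeFormat = best
    device.unlockForConfiguration()
}
```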
Videos are being recorded at a 16:9 ratio, uploaded to S3, and then downloaded to multiple devices (desktop, tablet, and phone). Playback on iOS should be at a 9:16 ratio.
My goal is to crop the video playback in real time to 9:16, cutting off the outer edges, but also enlarging it if need be. What is the fastest and most efficient way of accomplishing this with Swift?
My concern is the CPU overhead of doing this on the phone.
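The simplest thing I've found so far is to let AVPlayerLayer do the crop at display time. A minimal sketch, assuming the crop only needs to happen during playback (the helper below is just for illustration):

```swift
import AVFoundation
import UIKit

// Sketch: size an AVPlayerLayer to a 9:16 box and let aspect-fill crop the outer edges.
// The scaling/cropping happens in the render pipeline, not per frame on the CPU.
func addPortraitPlayerLayer(for url: URL, in view: UIView) -> AVPlayerLayer {
    let player = AVPlayer(url: url)
    let playerLayer = AVPlayerLayer(player: player)

    // 9:16 box centered horizontally in the host view.
    let height = view.bounds.height
    let width = height * 9.0 / 16.0
    playerLayer.frame = CGRect(x: (view.bounds.width - width) / 2,
                               y: 0, width: width, height: height)
    playerLayer.videoGravity = .resizeAspectFill

    view.layer.addSublayer(playerLayer)
    player.play()
    return playerLayer
}
```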
In the process of capturing a light-trail photo, I noticed that for fast-moving objects there is slightly more discontinuity between successive frames if I use the sample buffers from AVCaptureVideoDataOutput than if I record a movie, extract the frames, and run the same algorithm.
Is there a refresh-rate/frame-rate difference between the two modes?
A colleague with experience in professional photography claims that there is a visible lag even in Apple's default camera app when comparing the preview in Photo mode and Video mode, but it is not obvious to me.
Furthermore, I am actually capturing video at a low frame rate (with the exposure duration close to its maximum).
To conclude these experiments, I need to know whether there is any definitive information to confirm or refute this.
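For reference, this is how I'm pinning the frame rate in my tests, so that any difference can't come from the device silently choosing different rates in the two modes. A minimal sketch (30 fps is just an example value):

```swift
import AVFoundation

// Sketch: lock the capture device to a fixed frame duration so that any discontinuity
// between AVCaptureVideoDataOutput and movie recording can't be caused by the device
// picking different frame rates in the two modes.
func lockFrameRate(of device: AVCaptureDevice, to fps: Int32) throws {
    try device.lockForConfiguration()
    let frameDuration = CMTime(value: 1, timescale: fps)
    device.activeVideoMinFrameDuration = frameDuration
    device.activeVideoMaxFrameDuration = frameDuration
    device.unlockForConfiguration()
}
```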
We are building an application that works with a lot of images. We are interested in Core Image, GPUImage, and UIImage, and in how each of them decompresses images. We are already familiar with the fact that decompressing images on a background thread helps remove stutter or jitter in our UI while scrolling. However, we are not so familiar with where this decompression work actually happens. We also do some cropping of images using UIImage. So here go the questions:
Background: We are supporting devices all the way back to the iPhone 4, but may soon drop the iPhone 4 in favor of the iPhone 4S as our oldest supported device.
1) Is decompression of an image done on the GPU? ... Core Image? GPUImage? UIImage?
2) Can cropping of an image be done on the GPU? ... Core Image? GPUImage? UIImage?
3) Is there a difference in GPU support based on our device profile?
Basically we want to offload as much as we can to the GPU to free up the CPUs on the device. Also, we want to do any operation on the GPU that would be faster to do there instead of on the CPU.
To answer your question about decompression: Core Image, GPUImage, and UIImage all use pretty much the same means of loading an image from disk. For Core Image, you start with a UIImage, and for GPUImage you can see in the GPUImagePicture source that it currently relies on a CGImageRef usually obtained via a UIImage.
UIImage does image decompression on the CPU side, and other libraries I've looked at for improving image loading performance for GPUImage do the same. The biggest bottleneck in GPUImage for image loading is having to load the image into a UIImage, then take a trip through Core Graphics to upload it into a texture. I'm looking into more direct ways to obtain pixel data, but all of the decompression routines I've tried to date end up being slower than native UIImage loading.
Cropping of an image can be done on the GPU, and both Core Image and GPUImage let you do this. With image loading overhead, this may or may not be faster than cropping via Core Graphics, so you'd need to benchmark that yourself for the image sizes you care about. More complex image processing operations, like adjustment of color, etc. generally end up being overall wins on the GPU for most image sizes on most devices. If this image loading overhead could be reduced, GPU-side processing would win in more cases.
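As a rough illustration of a GPU-side crop, here's a sketch using Core Image's standard crop (not GPUImage-specific code), which you'd want to benchmark against a Core Graphics crop for your image sizes:

```swift
import CoreImage
import UIKit

// Sketch of a GPU-side crop via Core Image. Whether this beats a Core Graphics crop
// depends on image size and on the UIImage -> texture loading overhead, so benchmark it.
let ciContext = CIContext() // GPU-backed by default

func gpuCrop(_ image: UIImage, to rect: CGRect) -> UIImage? {
    // Note: CIImage coordinates have a bottom-left origin, unlike UIKit.
    guard let input = CIImage(image: image) else { return nil }
    let cropped = input.cropped(to: rect)
    guard let cgImage = ciContext.createCGImage(cropped, from: cropped.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```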
As far as GPU capabilities with device classes, there are significant performance differences between different iOS devices, but the capabilities tend to be mostly the same. Fragment shader processing performance can be orders of magnitude different between iPhone 4, 4S, 5, and 5S devices, where for some operations the 5S is 1000x faster than the 4. The A6 and A7 devices have a handful of extensions that the older devices lack, but those only come into play in very specific situations.
The biggest difference tends to be the maximum texture size supported by GPU hardware, with iPhone 4 and earlier devices limited to 2048x2048 textures and iPhone 4S and higher supporting 4096x4096 textures. This can limit the size of images that can be processed on the GPU using something like GPUImage.
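If you'd rather check that limit at runtime than hard-code it per device, a quick sketch (it assumes creating a throwaway OpenGL ES context just for the query is acceptable):

```swift
import OpenGLES

// Sketch: query the GPU's texture size limit at runtime
// (2048 on iPhone 4 and earlier, 4096 on the 4S and later).
func maxTextureSize() -> GLint {
    let context = EAGLContext(api: .openGLES2)
    EAGLContext.setCurrent(context)
    var size: GLint = 0
    glGetIntegerv(GLenum(GL_MAX_TEXTURE_SIZE), &size)
    EAGLContext.setCurrent(nil)
    return size
}
```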
The iPhone and iPad have various AVCaptureDevice methods to lock the white balance and exposure settings or set them to auto.
I am trying to understand the mechanism of the "backside illumination sensor" on the back camera (and now the front camera as well): whether it adjusts the white balance and exposure settings, and if so, whether locking the white balance and exposure modes would interfere with the backside illumination sensor's job.
Or is the "backside illumination sensor" simply boosting the RGB values of the pixels in low light?
Thanks
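For concreteness, these are the kinds of lock/auto calls I'm referring to. A minimal sketch, not my exact code:

```swift
import AVFoundation

// Sketch of locking exposure and white balance; both modes are set on the capture device itself.
func lockExposureAndWhiteBalance(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    if device.isExposureModeSupported(.locked) {
        device.exposureMode = .locked          // or .continuousAutoExposure to hand control back
    }
    if device.isWhiteBalanceModeSupported(.locked) {
        device.whiteBalanceMode = .locked      // or .continuousAutoWhiteBalance
    }
    device.unlockForConfiguration()
}
```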
I think you're a little confused about what the term "backside illuminated sensor" means here. This is just a type of CMOS sensor used in the newer iPhones (and other mobile phones). It claims to have better low-light performance than older CMOS imagers, but it is simply what captures the photos and videos, not a separate sensor for detecting light levels. There is a light sensor on the front face of the device, but that's just for adjusting the brightness of the screen in response to lighting conditions.
In my experience, all automatic exposure and gain correction done by the iPhone is based on the average luminance of the scene captured by the camera. When I've done whole-image luminance averaging, I've found that the iPhone camera almost always maintains an average luminance of around 50%. This seems to indicate that it uses the image captured by the sensor to determine exposure and gain settings for the camera (and probably white balance leveling, too).
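If you want to reproduce that kind of whole-image averaging yourself, one option is Core Image's CIAreaAverage filter. This is just a sketch of the idea, not the code used for those measurements:

```swift
import CoreImage

// Sketch of whole-image luminance averaging with CIAreaAverage.
let averagingContext = CIContext()

func averageLuminance(of image: CIImage) -> Float? {
    guard let filter = CIFilter(name: "CIAreaAverage") else { return nil }
    filter.setValue(image, forKey: kCIInputImageKey)
    filter.setValue(CIVector(cgRect: image.extent), forKey: kCIInputExtentKey)
    guard let output = filter.outputImage else { return nil }

    // Render the 1x1 average pixel and convert its RGB value to luma.
    var pixel = [UInt8](repeating: 0, count: 4)
    averagingContext.render(output, toBitmap: &pixel, rowBytes: 4,
                            bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                            format: CIFormat.RGBA8, colorSpace: nil)
    let r = Float(pixel[0]) / 255, g = Float(pixel[1]) / 255, b = Float(pixel[2]) / 255
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
}
```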
In my application I need to play video in an unusual way.
Something like an interactive player for special purposes.
The main requirements are:
video resolution can range from 200x200 px up to 1024x1024 px
I need the ability to change the speed from -60 FPS to 60 FPS (the video plays slower or faster depending on the selected speed; a negative value means the video plays in reverse)
I need to draw lines and objects over the video and scale them with the image
I need the ability to zoom and pan the image if its content is larger than the screen
I need the ability to change the brightness and contrast and to invert the colors of the video
What I'm currently doing:
I split my video into JPG frames
I created a timer that fires N times per second (playback speed control)
on each timer tick I draw a new texture (the next JPG frame) with OpenGL
for zoom and pan I use OpenGL ES transformations (translate, scale)
Everything looks fine at 320x240 px, but at 512x512 px my playback rate drops. Maybe it's a timer behaviour problem, maybe OpenGL. Sometimes, if I try to load big textures at a high playback rate (more than 10-15 FPS), the application just crashes with memory warnings.
What is the best practice for solving this issue? In which direction should I dig? Would Cocos2d or another game engine help me? Maybe JPG is not the best format for textures and I should use PNG, PVR, or something else?
Keep the video data as a video and use AVAssetReader to get the raw frames. Use kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange as the colorspace, and do YUV->RGB colorspace conversion in GLES. It will mean keeping less data in memory, and make much of your image processing somewhat simpler (since you'll be working with luma and chroma data rather than RGB values).
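A minimal sketch of that reader setup, assuming a local AVAsset (the function name is just for illustration):

```swift
import AVFoundation

// Sketch of the suggested reader setup: each sample buffer then carries a CVPixelBuffer
// with separate luma and chroma planes, ready to upload as two GL textures.
func makeFrameReader(for asset: AVAsset) throws -> (AVAssetReader, AVAssetReaderTrackOutput)? {
    guard let videoTrack = asset.tracks(withMediaType: .video).first else { return nil }

    let outputSettings: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String:
            kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
    ]
    let output = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: outputSettings)

    let reader = try AVAssetReader(asset: asset)
    reader.add(output)
    guard reader.startReading() else { return nil }
    return (reader, output)
}

// Per displayed frame: call output.copyNextSampleBuffer(), pull out the CVPixelBuffer,
// and upload the two planes to the reused luma/chroma textures.
```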
You don't need to bother with Cocos 2d or any game engine for this. I strongly recommend doing a little bit of experimenting with OpenGL ES 2.0 and shaders. Using OpenGL for video is very simple and straightforward, adding a game engine to the mix is unnecessary overhead and abstraction.
When you upload image data to the textures, do not create a new texture every frame. Instead, create two textures: one for luma, and one for chroma data, and simply reuse those textures every frame. I suspect your memory issues are arising from using many images and new textures every frame and probably not deleting old textures.
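Roughly what I mean, as a sketch (it assumes a current EAGLContext and an 8-bit luma plane; the chroma texture is set up the same way with half-size dimensions):

```swift
import OpenGLES

// Allocate one reusable texture per plane, once, instead of creating a new texture per frame.
func makeReusableTexture(width: GLsizei, height: GLsizei) -> GLuint {
    var texture: GLuint = 0
    glGenTextures(1, &texture)
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
    // Allocate storage once, with no initial pixel data.
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_LUMINANCE, width, height, 0,
                 GLenum(GL_LUMINANCE), GLenum(GL_UNSIGNED_BYTE), nil)
    return texture
}

// Each frame, refill the existing texture instead of creating (and leaking) new ones.
func updateTexture(_ texture: GLuint, width: GLsizei, height: GLsizei, pixels: UnsafeRawPointer) {
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glTexSubImage2D(GLenum(GL_TEXTURE_2D), 0, 0, 0, width, height,
                    GLenum(GL_LUMINANCE), GLenum(GL_UNSIGNED_BYTE), pixels)
}
```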
JPEG frames will be incredibly expensive to uncompress. First step: use PNG.
But wait! There's more.
Cocos2D could help you mostly through its great support for sprite sheets.
The biggest help, however, may come from packed textures a la TexturePacker. Using PVR.CCZ compression can speed things up by insane amounts, enough for you to get better frame rates at bigger video sizes.
Vlad, the short answer is that you will likely never be able to get all of the features you have listed working at the same time. Playing 1024x1024 video at 60 FPS is really going to be a stretch; I highly doubt that iOS hardware will be able to keep up with that kind of data transfer rate at 60 FPS. Even the h.264 hardware on the device can only do 30 FPS at 1080p. It might be possible, but layering graphics rendering over the video while also editing the brightness/contrast at the same time is just too many things at once.
You should focus on what is actually possible instead of attempting to implement every feature. If you want to see an example Xcode app that pushes iPad hardware right to the limits, please have a look at my Fireworks example project. This code displays multiple already-decoded h.264 videos on screen at the same time. The implementation is built around Core Graphics APIs, but the key thing is that Apple's implementation of texture uploading to OpenGL is very fast because of a zero-copy optimization. With this approach, a lot of video can be streamed to the device.