I'm writing a Swift app that sends iPhone camera video frames over the network so I can later display them in a macOS app.
Currently, I'm grabbing video frames from an AVCaptureSession and getting a CVPixelBuffer in the captureOutput delegate method.
Since each raw frame is huge, I'm converting the CVPixelBuffer to a CGImage with VTCreateCGImageFromCVPixelBuffer, then to a UIImage with JPEG compression (50%). I then send that JPEG over the network and display it in the macOS app.
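Roughly, the per-frame conversion looks like this (a simplified sketch rather than my exact code; the FrameSender name and the send method are just placeholders):

import AVFoundation
import VideoToolbox
import UIKit

// Simplified sketch of the current per-frame conversion.
final class FrameSender: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // CVPixelBuffer -> CGImage via VideoToolbox.
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)
        guard let image = cgImage else { return }

        // CGImage -> UIImage -> JPEG at 50% quality.
        guard let jpegData = UIImage(cgImage: image).jpegData(compressionQuality: 0.5) else { return }

        send(jpegData) // placeholder for the actual networking code
    }

    private func send(_ data: Data) {
        // Networking details omitted.
    }
}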
As you can see, this is far from ideal, and it runs at ~25 FPS on an iPhone 11. After some research, I came across GPUImage2. It seems I could get the data from the camera and apply something like this (so that the transformation is done on the GPU):
camera = try Camera(sessionPreset:AVCaptureSessionPreset640x480)
let pictureOutput = PictureOutput()
pictureOutput.encodedImageFormat = .JPEG
pictureOutput.imageAvailableCallback = { image in
    // Send the picture through the network here
}
camera --> pictureOutput
Then I should be able to transmit that UIImage and display it on the macOS app. Is there a better way to implement this whole process? Maybe I could use the iPhone's H.264 hardware encoder instead of converting images to JPEG, but it doesn't seem that straightforward (and from what I read, GPUImage does something along those lines anyway).
Any help is appreciated, thanks in advance!
I understand that you want to do this in a local (non-internet) environment.
What are your project constraints?
Minimum FPS?
Minimum video resolution?
Should sound be transmitted?
What is your network environment?
Minimum iOS and macOS versions?
Apart from these, GPUImage is not a suitable solution for you. If you are going to stream video, you have to encode every frame as H.264 or H.265 (HEVC); that is how you get acceptable transmission performance.
The pipeline you are using now, CMSampleBuffer -> CVPixelBuffer -> JPEG -> Data, seriously burdens the processor. It also increases the risk of memory leaks.
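For example, hardware H.264 encoding on iOS goes through VideoToolbox's VTCompressionSession. A minimal sketch (the H264Encoder class name and the bitrate value are just illustrative, not something from your project):

import Foundation
import CoreMedia
import VideoToolbox

// Minimal sketch of hardware H.264 encoding with VideoToolbox.
final class H264Encoder {
    private var session: VTCompressionSession?

    init?(width: Int32, height: Int32) {
        let status = VTCompressionSessionCreate(
            allocator: kCFAllocatorDefault,
            width: width,
            height: height,
            codecType: kCMVideoCodecType_H264,
            encoderSpecification: nil,
            imageBufferAttributes: nil,
            compressedDataAllocator: nil,
            outputCallback: nil,          // using the block-based encode call below
            refcon: nil,
            compressionSessionOut: &session)
        guard status == noErr, let session = session else { return nil }

        // Real-time encoding at a modest bitrate; tune these for your network.
        VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
        VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AverageBitRate, value: NSNumber(value: 1_000_000))
        VTCompressionSessionPrepareToEncodeFrames(session)
    }

    // Feed each CVPixelBuffer from captureOutput into this.
    func encode(_ pixelBuffer: CVPixelBuffer, presentationTime: CMTime) {
        guard let session = session else { return }
        VTCompressionSessionEncodeFrame(
            session,
            imageBuffer: pixelBuffer,
            presentationTimeStamp: presentationTime,
            duration: .invalid,
            frameProperties: nil,
            infoFlagsOut: nil) { status, _, sampleBuffer in
                guard status == noErr, let sampleBuffer = sampleBuffer else { return }
                // sampleBuffer now holds compressed H.264 data;
                // packetize it and send it over the network instead of a JPEG.
        }
    }
}

You would feed every CVPixelBuffer from captureOutput into encode and transmit the compressed sample buffers, which is far cheaper than a JPEG per frame.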
If you can tell me a little more, I would like to help; I have experience with video processing.
Sorry for my English.
Related
We want to share the screen (as screenshots) from an iPad to a browser.
At the moment we take screenshots and send them over a WebRTC DataChannel, but that requires too much bandwidth.
Sending 5 frames per second, fully compressed and scaled down, still requires about 1.5-2 mb/s of upload speed.
We need to utilize some form of video encoding, so we can lower the bandwidth requirements and let WebRTC handle the flow control, depending on connection speed.
AVAssetWriter takes images and converts them to a .mov file, but it doesn't let us get a stream out of it.
Any ideas for us? We're pretty stuck at the moment; all ideas appreciated.
Thanks for suggesting that this is a duplicate, but that doesn't help me very much. I already have a working solution; it's just not good enough.
Edit:
UIGraphicsBeginImageContextWithOptions(view.frame.size, NO, 0.7); // Scaling is slow, but that's not the problem; the network is
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:NO];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *data = UIImageJPEGRepresentation(image, 0.0); // Compress a lot: 0.0 is maximum compression (lowest quality), 1.0 is least
NSString *base64Content = [data base64EncodedStringWithOptions:NSDataBase64EncodingEndLineWithLineFeed];
And then I send that base64 data over the WebRTC DataChannel in 16 KB chunks, as suggested by the docs.
dc.send(...)
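For what it's worth, the chunking itself is simple; a Swift sketch (sendChunk just stands in for whatever the data channel's send call is on your side):

import Foundation

// Split a payload into 16 KB chunks before handing each one to the data channel.
func sendInChunks(_ payload: Data, chunkSize: Int = 16 * 1024, sendChunk: (Data) -> Void) {
    var offset = 0
    while offset < payload.count {
        let end = min(offset + chunkSize, payload.count)
        sendChunk(payload.subdata(in: offset..<end))
        offset = end
    }
}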
I would compress the screenshots using a JavaScript encoder, e.g. an MPEG encoder, and then transcode that stream server-side to VP8 for WebRTC.
However, this may not work properly on old iOS devices (e.g. iPads from 2010-2011) due to their low CPU resources, so even if you encode the stream, playback may be choppy rather than smooth.
I'm working on a video processing app for the iPhone using OpenCV.
For performance reasons, I want to process live video at a relatively low resolution. I'm doing object detection on each frame of the video. When the objects are found in a low-resolution video frame, I need to acquire that exact same frame at a much higher resolution.
I've been able to semi-accomplish this using a video data output and a still image output from AVFoundation, but the still image is not the exact frame that I need.
Are there any good implementations of this or ideas on how to implement it myself?
With AVCaptureSessionPresetPhoto, the session delivers a small video preview (about 1000x700 on an iPhone 6) alongside high-resolution photos (about 3000x2000).
So I use a modified 'CvPhotoCamera' class to process the small preview and capture the full-size photo. I posted that code here: https://stackoverflow.com/a/31478505/1994445
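If you would rather stay in plain AVFoundation instead of CvPhotoCamera, a rough sketch of the same idea (small frames for detection plus a full-resolution photo on request; the DualResolutionCapture name is just illustrative):

import AVFoundation

// Low-resolution frames for per-frame processing, plus a full-size photo on demand.
final class DualResolutionCapture: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let videoOutput = AVCaptureVideoDataOutput()
    private let photoOutput = AVCapturePhotoOutput()

    func configure() throws {
        session.sessionPreset = .photo   // small preview frames + full-size photos

        guard let device = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: device)
        if session.canAddInput(input) { session.addInput(input) }

        videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frames"))
        if session.canAddOutput(videoOutput) { session.addOutput(videoOutput) }
        if session.canAddOutput(photoOutput) { session.addOutput(photoOutput) }

        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Run object detection on the small preview frame here,
        // then call takeFullResolutionPhoto() when something is found.
    }

    func takeFullResolutionPhoto() {
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }
}

extension DualResolutionCapture: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil, let data = photo.fileDataRepresentation() else { return }
        // `data` is the full-resolution image.
    }
}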
When capturing video with PhoneGap on iOS, the file size for even a 1-minute capture is ridiculously large; far too large to upload reliably over a 3G connection. I read that there is a native AVCaptureSession object that allows the bitrate to be altered in order to reduce the file size. Has anyone implemented this in the PhoneGap video capture, or can anyone give me any pointers?
Found the details I needed here:
How can I reduce the file size of a video created with UIImagePickerController?
The important line for PhoneGap users is in CDVCapture.m:
pickerController.videoQuality = UIImagePickerControllerQualityTypeHigh;
There are several presets that can be used, e.g.
UIImagePickerControllerQualityTypeLow
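The underlying native setting is UIImagePickerController's videoQuality; for reference, in Swift it would look something like this (CDVCapture.m sets the same property in Objective-C):

import UIKit
import MobileCoreServices

// Lower quality types produce much smaller video files.
let picker = UIImagePickerController()
picker.sourceType = .camera
picker.mediaTypes = [kUTTypeMovie as String]
picker.videoQuality = .typeMedium   // or .typeLow / .type640x480 for even smaller files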
PhoneGap provides a quality argument when you navigate to the camera; set:
quality: 50
This is the recommended value for iOS. If you need to reduce the file size further, reduce the quality; it takes an integer value (0-100).
I'm currently using AVCaptureSessionPresetPhoto to take my pictures, and I'm adding filters to them. The problem is that the resolution is so big that I get memory warnings ringing all over the place. The picture is simply way too large to process, and it crashes my app every single time. Is there any way I can specify the resolution to shoot at?
EDIT:
Photography apps like Instagram or the Facebook Camera app, for example, can do this without any problems: they take pictures at high resolutions, scale them down, and process them without any delay. I did a comparison check, and the native iOS camera maintains a much higher resolution than pictures taken by other applications. That extreme level of quality isn't really required on a mobile platform, so it seems these images are being taken at a lower resolution to allow for faster processing and quicker uploads. Thus there must be a way to shoot at a lower resolution. If anyone has a solution to this problem, it would be greatly appreciated!
You need to resize the image after capturing it with AVCaptureSession and store the resized version (see the sketch after the links below).
You can find lots of similar questions on Stack Overflow; I'm putting some links below that should help you.
One more thing: I suggest using SDWebImage to display images asynchronously so the app stays smooth. There are also other ways to do asynchronous work in iOS, for example Grand Central Dispatch (GCD) or NSOperationQueue.
Resize image:
How to resize an image in iOS?
UIImage resizing not working properly
How to ReSize Image with Good Quality in iPhone
How to resize the image programmatically in objective-c in iphone
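Along the lines of those links, a minimal resize helper might look like this (a sketch using UIGraphicsImageRenderer; the function name is just an example):

import UIKit

// Scale a UIImage down so its longest side is at most `maxDimension` points.
func resized(_ image: UIImage, maxDimension: CGFloat) -> UIImage {
    let longestSide = max(image.size.width, image.size.height)
    guard longestSide > maxDimension else { return image }

    let scale = maxDimension / longestSide
    let newSize = CGSize(width: image.size.width * scale,
                         height: image.size.height * scale)

    let renderer = UIGraphicsImageRenderer(size: newSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: newSize))
    }
}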
I have a simple question for anyone who knows the answer to this... I am making a social photo sharing app and I want to save a large enough image in the app so that it can be used in a full screen website app moving forward. Think...Facebook.
I've been playing around with JPEG compression in iOS and also testing sizes and quality with Photoshop CS5, and I get really different results from the two. In Photoshop, even at high compression, the image is quite clear and retains lots of detail. In iOS, once the compression quality dips below about 0.5, it looks horrible and blocky. It almost seems like there's a point where the image quality just drops past a certain magic compression number.
With Photoshop, I use the "Save for Web" option, and with iOS I am using UIImageJPEGRepresentation(image, 0.6). Is there a huge difference between these two? Don't all JPEGs use the same kind of compression?
I am really not that informed in this world of image processing. Can anyone advise me on a good way to compress images to a level that preserves quality and stays bandwidth friendly? I want my images to stay about 1280px on the longest side.
Any advice on this, or smarter ways to move JPEGs over the network, is welcome. Thank you.
If your app is producing images on an iOS device, you should continue to use UIImageJPEGRepresentation. I don't think it's productive to compare UIKit's JPEG compression to Photoshop's.
I would find a JPEG compression level you're happy with using the available UIKit APIs and go with that. When you're serving up 30+ million images a second it might be worth looking at optimisations, but until then leave it to UIKit.
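For example, a quick way to pick a level is to dump the sizes a few settings produce and eyeball the results; a small sketch (jpegData(compressionQuality:) is the Swift counterpart of UIImageJPEGRepresentation, and the quality values here are arbitrary):

import UIKit

// Print the JPEG size an image produces at a few compression levels,
// to help pick a setting that balances size and appearance.
func logJPEGSizes(for image: UIImage) {
    for quality in [0.4, 0.5, 0.6, 0.7, 0.8] {
        if let data = image.jpegData(compressionQuality: CGFloat(quality)) {
            print("quality \(quality): \(data.count / 1024) KB")
        }
    }
}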