Streaming screenshots over WebRTC as a video stream from iOS

We want to share the screen (screenshots) from an iPad to a browser.
At the moment we take screenshots and send them over a WebRTC DataChannel, but that requires too much bandwidth.
Sending 5 frames per second, fully compressed and scaled down, still requires about 1.5-2 mb/s of upload speed.
We need to use some form of video encoding, so we can lower the bandwidth requirements and let WebRTC handle the flow control depending on connection speed.
AVAssetWriter takes images and converts them into a .mov file, but it doesn't let us get a stream out of it.
Any ideas for us? Pretty stuck at the moment, all ideas appreciated.
Thanks for suggesting that this is a duplicate, but that doesn't help me very much. I already have a working solution; it's just not good enough.
Edit:
UIGraphicsBeginImageContextWithOptions(view.frame.size, NO, 0.7); // Scaling is slow, but that's not the problem; the network is
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:NO];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *data = UIImageJPEGRepresentation(image, 0.0); // Compress heavily; 0.0 is maximum compression, 1.0 is least
NSString *base64Content = [data base64EncodedStringWithOptions:NSDataBase64EncodingEndLineWithLineFeed];
I then send that base64 data over a WebRTC DataChannel in 16 KB blocks, as suggested by the docs.
dc.send(...)
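(For reference, the 16 KB chunking itself is simple. Below is a minimal Swift sketch; sendChunk stands in for whatever send call your WebRTC wrapper exposes and is a hypothetical closure, not a real API.)

import Foundation

// Splits an encoded frame into chunks of at most 16 KB and hands each chunk
// to the data channel. `sendChunk` is a placeholder for your actual send call.
func sendInChunks(_ payload: Data, chunkSize: Int = 16 * 1024, sendChunk: (Data) -> Void) {
    var offset = 0
    while offset < payload.count {
        let end = min(offset + chunkSize, payload.count)
        sendChunk(payload.subdata(in: offset..<end))
        offset = end
    }
}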

I would compress the screenshots using a JavaScript encoder, e.g. an MPEG encoder, and then transcode that stream server-side to VP8 for WebRTC.
However, this may not work properly on old iOS devices (e.g. the 2010-2011 iPads) due to their low CPU resources, so even if you encode the stream, playback may be choppy rather than smooth.

Related

Swift send compressed video frames using GPUImage

I'm writing a Swift app that sends an iPhone camera video input (frames) through the network, so I can later display them on a macOS app.
Currently I'm grabbing video frames from an AVCaptureSession and getting a CVPixelBuffer in the captureOutput delegate method.
Since each frame is huge (raw pixels), I'm converting the CVPixelBuffer to a CGImage with VTCreateCGImageFromCVPixelBuffer and then to a UIImage with JPEG compression (50%). I then send that JPEG over the network and display it in the macOS app.
As you can see, this is far from ideal and runs at ~25 FPS on an iPhone 11. After some research, I came up with GPUImage 2. It seems that I could get the data from the camera and apply something like this (so that the transformation is done on the GPU):
camera = try Camera(sessionPreset: AVCaptureSessionPreset640x480)
let pictureOutput = PictureOutput()
pictureOutput.encodedImageFormat = .JPEG
pictureOutput.imageAvailableCallback = { image in
    // Send the picture through the network here
}
camera --> pictureOutput
And I should be able to transmit that UIImage and display it on the macOS app. Is there a better way to implement this whole process? Maybe I could use the iPhone's H264 hardware encoding instead of converting images to JPEG, but it seems that it's not that straightforward (and it seems that GPUImage does something like that from what I read).
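(For what it's worth, here is a minimal VideoToolbox sketch of that hardware H.264 route. The class name, dimensions and bitrate below are placeholder assumptions, not an official recipe.)

import Foundation
import CoreMedia
import CoreVideo
import VideoToolbox

// A minimal hardware H.264 encoder wrapper; all numeric values are placeholders.
final class H264Encoder {
    private var session: VTCompressionSession?

    // Called with each encoded frame as a CMSampleBuffer (AVCC-formatted H.264).
    var onEncodedFrame: ((CMSampleBuffer) -> Void)?

    func start(width: Int32, height: Int32) -> Bool {
        // C-style callback: recover `self` from the refcon and forward the encoded frame.
        let callback: VTCompressionOutputCallback = { refcon, _, status, _, sampleBuffer in
            guard status == noErr, let sampleBuffer = sampleBuffer, let refcon = refcon else { return }
            let encoder = Unmanaged<H264Encoder>.fromOpaque(refcon).takeUnretainedValue()
            encoder.onEncodedFrame?(sampleBuffer)
        }

        var newSession: VTCompressionSession?
        let status = VTCompressionSessionCreate(
            allocator: kCFAllocatorDefault,
            width: width,
            height: height,
            codecType: kCMVideoCodecType_H264,
            encoderSpecification: nil,
            imageBufferAttributes: nil,
            compressedDataAllocator: nil,
            outputCallback: callback,
            refcon: Unmanaged.passUnretained(self).toOpaque(),
            compressionSessionOut: &newSession)
        guard status == noErr, let created = newSession else { return false }

        // Tune for low-latency streaming; the 1 Mbit/s average bitrate is an assumption.
        VTSessionSetProperty(created, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
        VTSessionSetProperty(created, key: kVTCompressionPropertyKey_AllowFrameReordering, value: kCFBooleanFalse)
        VTSessionSetProperty(created, key: kVTCompressionPropertyKey_AverageBitRate, value: NSNumber(value: 1_000_000))
        VTSessionSetProperty(created, key: kVTCompressionPropertyKey_MaxKeyFrameInterval, value: NSNumber(value: 60))

        session = created
        return true
    }

    func encode(_ pixelBuffer: CVPixelBuffer, presentationTime: CMTime) {
        guard let session = session else { return }
        VTCompressionSessionEncodeFrame(session,
                                        imageBuffer: pixelBuffer,
                                        presentationTimeStamp: presentationTime,
                                        duration: .invalid,
                                        frameProperties: nil,
                                        sourceFrameRefcon: nil,
                                        infoFlagsOut: nil)
    }
}

The encoded CMSampleBuffers would still need to be packaged (for example as Annex B NAL units) before being sent over the network.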
Any help is appreciated, thanks in advance!
I understand that you want to do this operation in a non-internet environment.
What are your project constraints:
Minimum fps?
Minimum video resolution?
Should sound be transmitted?
What is your network environment?
Minimum iOS and OSX version?
Apart from these, GPUImage is not a suitable solution for you. If you are going to transfer video, you have to encode every frame as H264 or H265 (HEVC); that way you can transmit video efficiently.
The CMSampleBuffer -> CVPixelBuffer -> JPEG -> Data conversion you are doing now seriously burdens the processor, and it also increases the risk of memory leaks.
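(One hedged way to avoid that chain on iOS, not necessarily what the answerer had in mind, is to hand each captured frame straight to a hardware encoder. FrameForwarder and encodeFrame below are made-up names; encodeFrame would wrap something like VTCompressionSessionEncodeFrame.)

import AVFoundation
import CoreMedia

// Capture delegate that forwards raw frames directly to an encoder,
// skipping the CVPixelBuffer -> CGImage -> UIImage -> JPEG conversion entirely.
final class FrameForwarder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    // Injected encode call, e.g. a wrapper around VTCompressionSessionEncodeFrame (assumed).
    private let encodeFrame: (CVPixelBuffer, CMTime) -> Void

    init(encodeFrame: @escaping (CVPixelBuffer, CMTime) -> Void) {
        self.encodeFrame = encodeFrame
        super.init()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        encodeFrame(pixelBuffer, CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
    }
}

You would set an instance of this as the sample buffer delegate of an AVCaptureVideoDataOutput, delivered on a background queue.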
If you can tell me a little more, I would like to help; I have experience with video processing.
Sorry for my English.

Screenshot is too large when taken on high-resolution iOS devices

When I take a screenshot of a full-screen view on a high-resolution iOS device, the resulting image data is very large. For example, the iPhone X has a resolution of 812*375 points at a screen scale of 3, so an ARGB image of a full screenshot takes about 812*3 * 375*3 * 4 bytes, i.e. roughly 10.4 MB. So when I use these screenshot images in my app, memory usage jumps to a high level and may trigger a memory warning.
Here is my code:
if (CGRectIsEmpty(self.bounds)) {
    return nil;
}
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, [[UIScreen mainScreen] scale]);
[self drawViewHierarchyInRect:self.bounds afterScreenUpdates:NO];
UIImage *renderImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Even if I compress the screenshot image, there are still spikes in memory usage.
So my question is: Is there any good way to take a high resolution screenshot and avoid memory pressure?
I faced the same problems while working with images: memory usage can be extreme, causing memory warnings and crashes, especially when using the UIImageJPEGRepresentation method on older devices (iPhone 4). I tried to avoid that method by saving pictures to the Gallery and fetching them afterwards, but this does not help much; the memory jumps persist anyway.
I suppose the "pulses" are caused by copying the whole image data into memory during conversion. A possible solution would be to implement a custom disk-caching and decoding mechanism so the data could be processed in chunks, but I still don't know whether that is worth doing. For me this problem still persists.
Also refer to this question.
Another solution is to free view controller resources in the didReceiveMemoryWarning method if possible.

Image size optimization for less data usage

I'm working on an Instagram-like app for iOS, and I'm wondering how to optimize the file size of each picture so users consume as little data as possible when fetching all those pictures from my backend. Downloading the files as they are will drain data, since high-resolution pictures are around 1.5 MB each. Is there a way to shrink the pictures while maintaining their quality as much as possible?
You can compress the image by saving it as JSON binary data, or simply as raw binary data, or, more elegantly, as binary data in Swift Realm.
Do you really want to handle image processing on your own? There are lots of libraries already available for doing that, which are more effective and powerful.
For example:
AFNetworking
It is wonderful: it can not only compress images to the UIImageView's currently available size according to the device resolution, but also gives you image-caching flexibility.
Here is the link to the pod file and GitHub.
Just try it, you will love it.
You can compress a UIImage by converting it into NSData:
UIImage *rainyImage = [UIImage imageNamed:@"rainy.jpg"];
NSData *imgData = UIImageJPEGRepresentation(rainyImage, 0.1); // The second parameter is the compression quality
Now use this NSData object to save the image, or convert it back into a UIImage.
To convert it back into a UIImage:
UIImage *image = [UIImage imageWithData:imgData];
Hope this will resolve your issue.
Image compression will lose some image quality.
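(For completeness, a hedged Swift sketch of the usual approach: downscale to a maximum dimension first, then JPEG-compress. The 1080-point cap and 0.7 quality are arbitrary placeholder choices, not recommendations.)

import UIKit

// Resizes the image so its longest side is at most `maxDimension`, then JPEG-compresses it.
func shrunkJPEGData(from image: UIImage, maxDimension: CGFloat = 1080, quality: CGFloat = 0.7) -> Data? {
    let longestSide = max(image.size.width, image.size.height)
    guard longestSide > 0 else { return nil }
    let scaleFactor = min(1, maxDimension / longestSide)
    let targetSize = CGSize(width: image.size.width * scaleFactor,
                            height: image.size.height * scaleFactor)

    let format = UIGraphicsImageRendererFormat()
    format.scale = 1 // keep the bitmap at targetSize pixels instead of targetSize * screenScale
    let renderer = UIGraphicsImageRenderer(size: targetSize, format: format)
    let resized = renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }
    return resized.jpegData(compressionQuality: quality)
}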

How to transmit small UIImages over network with minimal loss in quality?

I need to send a UIImage over the Internet, but I've been having some issues so far.
I currently take the UIImage, shrink it to a 16th of its original size, convert it to NSData, and then create an NSString using NSData+base64. I then transmit this as part of a JSON request to a server.
Most of the time, this works perfectly.
However, the file sizes are large. On a cellular connection, an image can take ~10 seconds to load.
How do I transmit a UIImage, over the network, so that it's small and retains quality? Even when I make it a 16th of the previous number of pixels, a photograph can be 1.5MB. I'd like to get it around 300KB.
Compressing it with zip or gzip only seems to result in negligible (<0.1 MB) savings.
Thanks,
Robert
I'd suggest storing it as a JPEG or a PNG with some compression. Use a method like
UIImageJPEGRepresentation(UIImage *image, CGFloat compressionQuality)
Then post the NSData object you get from that. You should be able to find a good size/quality tradeoff by playing with the compressionQuality variable.
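(A small Swift sketch of that idea: step the quality down until the data fits a byte budget. The 300 KB budget matches the figure in the question; the step size is arbitrary.)

import UIKit

// Lowers compressionQuality until the JPEG fits under a byte budget.
func jpegData(for image: UIImage, underBytes maxBytes: Int = 300 * 1024) -> Data? {
    var quality: CGFloat = 0.9
    var data = image.jpegData(compressionQuality: quality)
    while let current = data, current.count > maxBytes, quality > 0.1 {
        quality -= 0.1
        data = image.jpegData(compressionQuality: quality)
    }
    return data
}

If quality alone can't get a photo under the budget, downscaling the image before compressing usually helps more than pushing the quality toward zero.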
You didn't say how you are converting your images to data. You mention “photograph”. If your images are photographs, you should use UIImageJPEGRepresentation, not UIImagePNGRepresentation. You will have to play with the compressionQuality parameter to find a compression level that balances data size and image quality.

iOS: faster way to update UIView with a series of JPEGs, i.e. MJPEG (Instruments shows 50% CPU)

I'm receiving a series of JPEGs over the network from a camera (MJPEG). I display the images as I receive them in a UIView. What I'm seeing is that my App is spending 50% of CPU (device and simulator tested) in what appears to me to be the UIView update.
Is there is a less CPU intensive way to do this screen update? Should I process the JPEG in some way before handing it over to UIView?
Receive method:
UIImage *image = [UIImage imageWithData:data];
dispatch_async(dispatch_get_main_queue(), ^{
    [cameraView updateVideoImage:image];
});
Update method:
- (void)updateVideoImage:(UIImage *)image {
    myUIView.image = image;
...
update: added better screen capture
update2: Is OpenGL going to provide a quicker surface to render to for JPEG? It's not clear to me from Instruments where the time is being spent, render or decode. I'm going to put together a test case as suggested and work from there.
iOS is optimized for PNG images. While JPEG greatly reduces the size of images for transmission, it is a much more complex format, so it does not surprise me that this rendering is taking a lot of time. People have said there is JPEG hardware assist on the device, but I do not know for sure, and even if it is there it may be tuned for certain image types.
So, some suggestions. Devise a test where you take one JPEG you have now, render it to a context, and baseline that time. Then take the same image, open it in Preview, save it with a slightly different quality value to another file, and try that (Preview will strip unnecessary "junk" from the image; you could even convert it to a PNG and back to a JPEG first). The idea is to use an image output from Preview, which is going to be as clean an image as you are going to get. Does that image render any faster?
You can also try libjpeg-turbo and see if it can decode your images faster. You can see that library in action in a GitHub project, PhotoScrollerNetwork. You may find that project useful, as it decodes the JPEGs (using that library) in real time as they are received and then supports zoomable viewing using CATiledLayer.
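(One way to take the decode cost off the main thread, a sketch of the general force-decompression idea rather than anything from PhotoScrollerNetwork, is to decode and draw each JPEG on a background queue so the image arrives at UIKit already decompressed. This uses the modern UIGraphicsImageRenderer API.)

import UIKit

// Decodes a JPEG frame on a background queue and forces decompression there,
// so setting it on the image view doesn't pay the decode cost on the main thread.
func decodeJPEGOffMainThread(_ data: Data, completion: @escaping (UIImage?) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        guard let image = UIImage(data: data) else {
            DispatchQueue.main.async { completion(nil) }
            return
        }
        // Drawing into a renderer forces the JPEG to be fully decoded now,
        // instead of lazily the first time UIKit displays it.
        let renderer = UIGraphicsImageRenderer(size: image.size)
        let decoded = renderer.image { _ in
            image.draw(at: .zero)
        }
        DispatchQueue.main.async { completion(decoded) }
    }
}

If the decode was the bottleneck, Instruments should then show that work on the background queue instead of inside the view update.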
