iOS: faster way to update a UIView with a series of JPEGs, i.e. MJPEG (Instruments shows 50% CPU)

I'm receiving a series of JPEGs over the network from a camera (MJPEG) and displaying each image in a UIView as it arrives. What I'm seeing is that my app spends 50% of its CPU time (tested on device and simulator) in what appears to be the UIView update.
Is there a less CPU-intensive way to do this screen update? Should I process the JPEG in some way before handing it over to the UIView?
Receive method:
UIImage *image = [UIImage imageWithData:data];
dispatch_async(dispatch_get_main_queue(), ^{
    [cameraView updateVideoImage:image];
});
Update method:
- (void)updateVideoImage:(UIImage *)image {
    myUIView.image = image;
    ...
Update: added a better screen capture.
Update 2: Would OpenGL provide a quicker surface to render the JPEGs to? It's not clear to me from Instruments whether the time is being spent in rendering or decoding. I'm going to put together a test case as suggested and work from there.

iOS is optimized for PNG images. While JPEG greatly reduces the size of images for transmission, it is a much more complex format, so it does not surprise me that this rendering is taking a lot of time. People have said there is JPEG hardware assist on the device, but I do not know that for sure, and even if it's there it may be tuned for certain image types.
So, some suggestions. Devise a test where you take one JPEG you have now, render it to a context, and baseline that time (a rough sketch is below). Then take the same image, open it in Preview, save it with a slightly different quality value to another file, and try that. (Preview will strip unnecessary "junk" from the image; you could even convert it to a PNG first and then back to a JPEG. The idea is to use an image output from Preview, which is about as clean an image as you are going to get.) Is that image any better, i.e. does it render any faster?
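A minimal sketch of that baseline test, under my own assumptions (the file path is hypothetical, and the offscreen context is just one way to force a full decode and draw):
// Hedged sketch: load one captured frame (hypothetical path) and time a full
// decode + draw into an offscreen bitmap context.
NSData *jpegData = [NSData dataWithContentsOfFile:@"/tmp/frame.jpg"];
UIImage *image = [UIImage imageWithData:jpegData];

CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
UIGraphicsBeginImageContextWithOptions(image.size, YES, 1.0);
[image drawAtPoint:CGPointZero];   // forces the actual JPEG decode and render
UIGraphicsEndImageContext();
NSLog(@"decode + render took %.1f ms", (CFAbsoluteTimeGetCurrent() - start) * 1000.0);
Comparing this number for the camera's original frame against the Preview-resaved copy should tell you whether the decode or the view update is the expensive part.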
You can also try libjpeg-turbo and see if it can decode your images faster. You can see that library in action in a GitHub project, PhotoScrollerNetwork. You may find that project useful, as it decodes the JPEGs (using that library) in real time as they are received and then supports zoomable viewing using CATiledLayer.

Related

iOS image quality improvement

Icons Are Pretty Right?
I'm working on a UI update in an iOS app and trying to make things look a bit better with some new icons, but I can't seem to figure out how to save an image correctly so that it looks good in the interface!
As you can see from this image, if I include a white background with the image it looks great. If I take those same images and use an alpha background, they look terrible! It appears that either the images aren't using the @2x correctly, or something else is going horribly wrong.
These images are either saved with GIMP as a PNG with alpha or exported from Inkscape; the originals are vector graphics. We get the same results from both avenues. I am using both a base imageName.png and imageName@2x.png for scaling.
Somehow, magically, I changed a single image to greyscale in GIMP and changed the base size to 25 px, and it showed up with alpha correctly blended. Stock images from Apple are also functioning correctly, so it absolutely seems to be something that I'm doing incorrectly when I'm saving the images.
The Setup in Xcode
Basic Questions
Is there a certain bit depth, ARGB vs. RGBA format, or some other quirk that I need to know about to get these images to show up correctly? Is there any way to verify that the program is loading the correct imageName@2x vs. imageName? Is there some document that talks about integrating graphics? (The iconography documentation isn't very helpful on technical details.)
Actual Images
With Background:
Without Background:
I think you will find success if you just save the image at 4x the size you actually want and specify the size manually.
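A minimal sketch of what "specify the size manually" could look like, assuming a plain UIImageView and a hypothetical asset name:
// Hedged sketch: give the image view an explicit point size instead of letting
// it adopt the oversized image's dimensions. "toolbarIcon" is a hypothetical asset.
UIImageView *iconView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"toolbarIcon"]];
iconView.frame = CGRectMake(0, 0, 25, 25);          // desired on-screen size in points
iconView.contentMode = UIViewContentModeScaleAspectFit;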

iOS app crashes because images use too much ram

I know this is a stupid problem, but this is my first real app that I have to make, I have no one to ask, and I looked this problem up and found no similar questions.
My app crashes on real devices with no exception. I saw in the simulator that it uses too much RAM, and after a while I came to the conclusion that the pictures I am using are to blame.
The app is structured this way: it has 8 view controllers for different things. For example, it starts with one that lets the user select the avatar with which he/she will play, and here I have two pictures; next is a view controller that shows the stats for that avatar, and here there is another picture, and so on. The problem is that each picture uses 40 MB of RAM to be displayed, and things add up, so the app uses more than 300 MB of RAM by the time the user gets to the gameViewController where the game is. Because of this it crashes on devices like the iPad 2 or iPhone 4, but not on the iPhone 5.
I tried to set the images both from "images.xcassets" and from a ".atlas" folder, but the result is exactly the same. The pictures are no larger than 1500x1999 px and are in PNG format.
Also, I saw that if the app started directly in the gameViewController it would use 180 MB, so the other view controllers remain in memory or something like that. Should I "clear" them or something similar?
//-------update-------
This is what I got from Instruments:
Memory is a big deal on mobile devices. There is no single clear answer to your question, but I can give you some advice:
If your images are plain colors or have symmetric axes, use resizable images. You can stretch a single line of pixels across the width or height to cover the entire screen with a tiny amount of memory.
Image compression has no effect once the image is decompressed. If you have a 600 KB PNG and think that converting it to a 300 KB one will lower memory usage, that's only true for disk space: when an image is decompressed in memory, its size is width × height × number of channels × bytes per channel. For example, a 1500x1999 image with four 8-bit channels takes 1500 × 1999 × 4 bytes, which is about 11 MB, no matter how small the file is on disk.
Resize images: if you are loading a 2000 px square image into memory and showing it inside an 800 px square image view, resize it before adding it (see the sketch after this list). You will have a memory peak while resizing, but afterwards it will use much less memory.
If you need to use big images, use tiling techniques such as CATiledLayer.
If you don't need an image anymore, get rid of it. It's fine to keep an array of paths to images, but not an array of full, uncompressed images.
Avoid -imageNamed:. It caches images, and even though Apple says this cache is released under memory pressure, you don't have much control over it and the release could come too late to avoid a crash.
These are general suggestions; it's up to you whether they fit your requirements.
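As a rough illustration of the resize advice above (a sketch only; the helper name and the use of ImageIO thumbnails are my own choice, not something from the answer), you could decode a downsampled version instead of the full-resolution bitmap:
#import <ImageIO/ImageIO.h>
#import <UIKit/UIKit.h>

// Hypothetical helper: decode an image file at roughly the pixel size it will be
// displayed at, instead of decompressing the full-resolution bitmap into memory.
static UIImage *DownsampledImage(NSURL *fileURL, CGFloat maxPixelSize)
{
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)fileURL, NULL);
    if (!source) return nil;

    NSDictionary *options = @{
        (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (id)kCGImageSourceCreateThumbnailWithTransform   : @YES,
        (id)kCGImageSourceThumbnailMaxPixelSize          : @(maxPixelSize)
    };
    CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
    CFRelease(source);
    if (!thumbnail) return nil;

    UIImage *image = [UIImage imageWithCGImage:thumbnail];
    CGImageRelease(thumbnail);
    return image;
}
An 800 px version of a 1500x1999 image decodes to a few MB instead of roughly 11 MB, so the savings add up quickly across 8 view controllers.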
You should definitely follow Andrea's advice.
Additionally, you should consider setting the image size to exactly what you need. You say you've tried setting the images from xcassets, so you have full control over the images you're loading, which is great (compared to downloading an image that you cannot modify).
I highly suggest you read some documentation on asset catalog files. They allow you to provide high-resolution images for bigger screens, which also have more memory, and smaller ones for older devices, which is what you want here.
Also, note that 1500x1999px is still a very big size for most mobile devices.
More links about screen-size:
https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/IconMatrix.html
http://www.paintcodeapp.com/news/iphone-6-screens-demystified

jpg or png for user profile pictures?

My app requires that each user has a profile picture of around 140x140 px. Right now I am using JPGs, and I am wondering if it would be better performance-wise to use PNGs. I have read that PNGs are good for small UI elements and images, and JPG for large images with detail such as photos. Obviously my profile pics are photos, but they are small. Would it make much difference switching to PNG? Thanks.
JPEG is best for small file sizes of photos, even for low resolutions.
PNG makes sense when there are many pixels of the exact same color next to each other. This is not the case with photos.
These should be helpful for you.
When to use PNG or JPG in iPhone development?
PNG vs. GIF vs. JPEG vs. SVG - When best to use?
Apple optimizes PNG images that are included in your iPhone app bundle. In fact, the iPhone uses a special encoding in which the color bytes are optimized for the hardware, and Xcode handles this special encoding for you when you build your project. So you do see additional benefits to using PNGs on an iPhone beyond the size consideration. For this reason it is definitely recommended to use PNGs for any images that appear as part of the interface (in a table view, labels, etc.).
As for displaying a full-screen image such as a photograph, you may still reap benefits with PNGs since they are lossless, so the visual quality should be better than a JPG, not to mention the resource usage of decoding the image. You may need to decrease the quality of your JPGs in order to see a real benefit in file size, but then you are displaying non-optimal images.
File size is certainly a factor but there are other considerations at play as well when choosing an image format.
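If you want to settle it for your own avatars rather than in the abstract, a quick sketch like this (the asset name is hypothetical) prints the encoded size of the same image in both formats:
// Hedged sketch: compare the on-disk cost of the same avatar as JPEG and PNG.
UIImage *avatar = [UIImage imageNamed:@"sampleAvatar"];   // hypothetical 140x140 test image
NSData *jpegData = UIImageJPEGRepresentation(avatar, 0.8);
NSData *pngData  = UIImagePNGRepresentation(avatar);
NSLog(@"JPEG: %lu bytes, PNG: %lu bytes",
      (unsigned long)jpegData.length, (unsigned long)pngData.length);
For photographic content at this size, the JPEG will usually come out noticeably smaller, which is the trade-off described above.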

how to deal with big picture in UIScrollView

I have a big picture to show in a UIScrollView. It's 28.1 MB, and it often crashes the app. Are there some methods to deal with it so that the app won't crash?
Here you go buddy,
Convert Large Image to tiles
Displaying tiled images in UIScrollview
Tiled Images in UIScrollView
If it's a static PNG image, run it through ImageAlpha and ImageOptim (Google these). You might be able to get it down by well over 50%. Bundle the resulting image with your app instead.
Note that ImageOptim is lossless whereas ImageAlpha is lossy. You can use them in conjunction.
Split up the image and load the pieces only when necessary.
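A rough sketch of the "split it up" idea, under my own assumptions (the function name and the in-memory result are illustrative; in practice you would write each tile to disk and load only the visible ones):
// Hedged sketch: cut a large image into tiles with Core Graphics. A real app
// would save each tile to disk and let the scroll view load tiles lazily.
static NSArray *TilesFromImage(UIImage *bigImage, CGFloat tileSize)
{
    NSMutableArray *tiles = [NSMutableArray array];
    CGImageRef cgImage = bigImage.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    for (size_t y = 0; y < height; y += (size_t)tileSize) {
        for (size_t x = 0; x < width; x += (size_t)tileSize) {
            CGRect tileRect = CGRectMake(x, y,
                                         MIN(tileSize, width - x),
                                         MIN(tileSize, height - y));
            CGImageRef tileRef = CGImageCreateWithImageInRect(cgImage, tileRect);
            if (tileRef) {
                [tiles addObject:[UIImage imageWithCGImage:tileRef]];
                CGImageRelease(tileRef);
            }
        }
    }
    return tiles;
}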

iOS vs Photoshop JPEG Compression

I have a simple question for anyone who knows the answer to this... I am making a social photo sharing app and I want to save a large enough image in the app so that it can be used in a full screen website app moving forward. Think...Facebook.
I've been playing around with JPEG compression in iOS and also testing sizes and quality with Photoshop CS5, and I get really different results from the two. In Photoshop, even at high compression, the image is quite clear and retains lots of detail. In iOS, once the quality dips below about 0.5 it looks horrible and blocky. It almost seems like there's a point where the image quality just drops off after a certain magic compression number.
With Photoshop I use the "Save for Web" option, and with iOS I am using UIImageJPEGRepresentation(image, 0.6). Is there a huge difference between these two? Don't all JPEGs use the same kind of compression?
I am really not that informed in this world of image processing. Can anyone advise me on a good way to compress images to a level that preserves quality and stays bandwidth-friendly? I want my images to stay at about 1280 px on the long side.
Any advice on this, or smarter ways to move JPEGs over the network, is welcome. Thank you.
If your app is producing images on an iOS device, you should continue to use UIImageJPEGRepresentation. I don't think it's productive to compare the UIKit JPEG compression to Photoshop's.
I would find a JPEG compression level you're happy with using the available UIKit APIs and go with that. When you're serving up 30+ million images a second it might be worth looking at optimisations, but until then leave it to UIKit.
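In that spirit, a small sketch (the asset name is hypothetical) for finding a compression level you're happy with by dumping the encoded size at a few quality settings:
// Hedged sketch: print the encoded size at several quality settings so you can
// pick the lowest value that still looks acceptable on your own photos.
UIImage *photo = [UIImage imageNamed:@"samplePhoto"];   // hypothetical test asset
for (NSNumber *quality in @[@0.9, @0.7, @0.5, @0.3]) {
    NSData *data = UIImageJPEGRepresentation(photo, quality.floatValue);
    NSLog(@"quality %.1f -> %lu bytes", quality.floatValue, (unsigned long)data.length);
}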
