I am developing an iPad paint application. The app presents a sketch book in which the user can draw on any page; the drawing itself is implemented by tracking the user's touches. I also detect up/down swipe gestures to turn the pages one by one with a page-curl transition animation, and that part works correctly.
Now the problems are:
1. There is a conflict between drawing and page turning, because both interpret the user's finger movements. For example, when I draw a pen stroke from bottom to top, the swipe gesture recognizer fires and immediately turns the page; conversely, when I try to swipe to turn a page, the app sometimes draws a stroke along that path. I would like both to work at the same time. Is there any way to do this?
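For illustration, here is a minimal sketch of the kind of separation I have in mind: making the page-turn swipe require two fingers so single-finger strokes are left to the drawing code (the selector names are illustrative, and this may not be the only way to resolve the conflict):

UISwipeGestureRecognizer *pageUpSwipe =
    [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(turnToNextPage:)];
pageUpSwipe.direction = UISwipeGestureRecognizerDirectionUp;
pageUpSwipe.numberOfTouchesRequired = 2;   // two-finger swipes turn the page...
[self.view addGestureRecognizer:pageUpSwipe];

UISwipeGestureRecognizer *pageDownSwipe =
    [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(turnToPreviousPage:)];
pageDownSwipe.direction = UISwipeGestureRecognizerDirectionDown;
pageDownSwipe.numberOfTouchesRequired = 2;
[self.view addGestureRecognizer:pageDownSwipe];
// ...while single-finger touches are still delivered to the drawing view's
// touchesBegan:/touchesMoved: handlers.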
2. A user can create any number of drawings in the application. Currently I save the current drawing as soon as the user turns the page or navigates away from the screen (in order to keep memory usage to a minimum). When the next/previous page is about to load, the app fetches the corresponding image from the caches directory and displays it. (Previously I kept all of the drawings in an array by loading every one of them from the caches directory.) I maintain a database that stores an ID for each drawing and use that ID to read the image from the caches directory. The problem is that after a few minutes of use (say 5 or 10), the app still receives memory warnings. Is there any way to avoid that?
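For context, the save/load step described above looks roughly like this (a sketch; in my app the drawing ID comes from the database, and the file naming here is illustrative):

// Save the current page's drawing when the user turns the page.
- (void)saveDrawing:(UIImage *)drawing withID:(NSString *)drawingID {
    NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory,
                                                               NSUserDomainMask, YES) objectAtIndex:0];
    NSString *path = [cachesDir stringByAppendingPathComponent:
                      [drawingID stringByAppendingPathExtension:@"png"]];
    [UIImagePNGRepresentation(drawing) writeToFile:path atomically:YES];
}

// Load a page's drawing only when that page is about to appear.
- (UIImage *)drawingWithID:(NSString *)drawingID {
    NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory,
                                                               NSUserDomainMask, YES) objectAtIndex:0];
    NSString *path = [cachesDir stringByAppendingPathComponent:
                      [drawingID stringByAppendingPathExtension:@"png"]];
    return [UIImage imageWithContentsOfFile:path];
}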
I have tried compressing the images to get rid of the memory warnings, but the compression introduces noticeable artifacts. I am using https://github.com/acerbetti/ACEDrawingView for normal paint strokes, and a separate OpenGL-based paint tool for one specific type of stroke.
Without seeing code it is difficult to say where the memory is going. The best option is to profile your application: in Xcode, go to Product -> Profile (Cmd + I) and use the Allocations instrument to see which objects are holding on to memory.
Note: if you are using ARC, the compiler inserts the retain/release calls for you, so Objective-C objects are released automatically once nothing references them (ARC is reference counting, not garbage collection).
If you allocate memory yourself with malloc or calloc, it is your responsibility to free it; ARC does not manage plain C allocations.
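A minimal illustration of that last point (the buffer and its size are arbitrary):

// ARC manages Objective-C objects, but not memory you allocate with malloc/calloc.
size_t length = 1024;
uint8_t *buffer = malloc(length);        // not tracked by ARC
if (buffer != NULL) {
    memset(buffer, 0, length);           // ... use the buffer ...
    free(buffer);                        // you must free it yourself
    buffer = NULL;
}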
Related
I have been searching these threads and other sites but have not come across a way to do this that is both efficient and memory-friendly. And so, here is my story:
My iOS (iPad) app uses sprite sheets (a large image, such as 2k x 2k at 16 bpp, composed of many smaller sprites). I have created a sprite atlas class which manages these sheets, handles sprite animations, and provides other features.
The idea (in the load method) is to load in the sheet from the file system, split it apart into UIImages (one per sprite) using CGImageCreateWithImageInRect, and then dispose of the loaded sheet. Seems simple enough.
Note that the sheet is loaded into a UIImage by initWithContentsOfFile or imageNamed (more on that below).
The desire is to save file memory by using sprite sheets, and then save runtime memory by only retaining the actual sprites themselves as UIImages. In my experiments thus far this is what I find happening:
If I use initWithContentsOfFile I see (from Instruments) that it appears to do a file open, fstat64, and close for EACH sprite in the sheet. This takes a horrendous amount of time to load all the sprites. It actually seems to load the entire sheet, grab one sprite, close the sheet, then load the entire sheet again for the next sprite, until they are all created. It also appears to consume lots of memory (shown by the "received memory warning" after just the second sheet loaded, as well as by the Instruments allocations).
Next I tried imageNamed (which caches the sheet). The file loading occurs only once, so it is MUCH faster. All seems good, and in fact I can go on until many sheets are loaded. But eventually the dreaded "received memory warning" appears... and a few seconds later the app crashes. It appears that the cached image (even though my pointer is set to nil after pulling out the sprites) never goes away. I have read several posts that also describe this behavior (although other posts say otherwise, so it is not conclusive; does anyone know definitively?).
And so it appears that neither method is what is needed. What I want is to load the file, pull out each sprite into its own UIImage, then have the UIImage for the big sheet file released completely.
I have read one site that talked about using the initWithContentsOfFile approach to set up their own caching system (rather than trusting the iOS caching behind imageNamed) so they can release the image when desired. However, I don't think they had sprite sheets in mind.
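For reference, here is a minimal sketch of what I am after: load the sheet once, redraw each sprite into its own small bitmap so the sprites don't keep the sheet's pixel data alive, then let the sheet go (the frame list and method names are illustrative):

- (NSArray *)loadSpritesFromSheetAtPath:(NSString *)path frames:(NSArray *)frameRects {
    NSMutableArray *sprites = [NSMutableArray arrayWithCapacity:frameRects.count];
    @autoreleasepool {
        // imageWithContentsOfFile: bypasses the imageNamed: cache.
        UIImage *sheet = [UIImage imageWithContentsOfFile:path];
        for (NSValue *value in frameRects) {
            CGRect frame = [value CGRectValue];
            // Redraw the sprite into its own context so the resulting UIImage
            // owns its pixels and does not reference the full sheet.
            UIGraphicsBeginImageContextWithOptions(frame.size, NO, sheet.scale);
            [sheet drawAtPoint:CGPointMake(-frame.origin.x, -frame.origin.y)];
            [sprites addObject:UIGraphicsGetImageFromCurrentImageContext()];
            UIGraphicsEndImageContext();
        }
        // Nothing retains 'sheet' past this pool, so the full-size bitmap
        // can be released once the pool drains.
    }
    return sprites;
}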
And so, I turn this over to the experts out there to see if there are some ideas on how to get both fast load times AND use minimal memory.
[And yes, I know that iOS 7 has SpriteKit. But this also needs to work on iOS 6.x.]
One interesting data point is that on an original iPad the imageNamed version actually works fine with no "received memory warning". That device might have been running iOS 5.x. But the app crashes on an iPad 2.
I am not including code here because what I am after is an understanding of the mechanics involved with how memory is used with these functions related to image handling.
And while I am at it, can someone please clarify this point:
True or False: CGImageCreateWithImageInRect actually allocates new memory for a bitmap the size of the specified rectangle and then COPIES the pixels from the original UIImage into that new memory (as opposed to setting up the bitmap format with a pointer into the original UIImage's pixel data). I think this is True, but would like verification.
Thanks!
I'm working on an iPad-only iOS app that essentially downloads large, high quality images (JPEG) from Dropbox and shows the selected image in a UIScrollView and UIImageView, allowing the user to zoom and pan the image.
The app is mainly used for showing the images to potential clients who are interested in buying them as framed prints. The way it works is that the image is first shown, zoomed and panned to show the potential client if they like the image. If they do like it, they can decide if they want to crop a specific area (while keeping to specific aspect ratios/sizes) and the final image (cropped or not) is then sent as an email attachment to production.
The problem I've been facing for a while now is that even though the app will only be running on new iPads (i.e. more memory, etc.), I'm unable to find a way of handling the images so that the app doesn't get a memory warning and then crash.
Most of the images are sized 4256x2832, which brings memory usage to at least 40MB per image. While I'm only displaying one image at a time, image cropping (which is the main memory/crash problem at the moment) creates a new cropped image, which in turn momentarily bumps the app's total RAM usage to about 120MB, causing a crash.
So in short: I'm looking for a way to manage very large images, have the ability to crop them and after cropping still have enough memory to send them as email attachments.
I've been thinking about implementing a singleton image manager, which all the views would use and it would only contain one big image at a time, but I'm not sure if that's the right way to go, or even if it'd help in any way.
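For reference, the crop step that causes the spike is essentially this shape (a sketch; cropRectInPixels comes from the user's selection and the JPEG quality is arbitrary):

- (NSData *)croppedJPEGDataFromImage:(UIImage *)source cropRect:(CGRect)cropRectInPixels {
    NSData *jpegData = nil;
    @autoreleasepool {
        // Crop by referencing the already-decoded CGImage instead of
        // redrawing the whole 4256x2832 bitmap into a new context.
        CGImageRef croppedRef = CGImageCreateWithImageInRect(source.CGImage, cropRectInPixels);
        UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
        CGImageRelease(croppedRef);
        // Encode straight to JPEG for the email attachment; the temporary
        // cropped UIImage is released when the pool drains.
        jpegData = UIImageJPEGRepresentation(cropped, 0.9);
    }
    return jpegData;
}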
One way to deal with this is to tile the image. You can save the large decompressed image to "disk" as a series of tiles, and as the user pans around pull out only the tiles you need to actually display. You only ever need 1 tile in memory at a time because you draw it to the screen, then throw it out and load the next tile. (You'll probably want to cache the visible tiles in memory, but that's an implementation detail. Even having the whole image as tiles may relieve memory pressure as you don't need one large contiguous block.) This is how applications like Photoshop deal with this situation.
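A minimal sketch of the slicing step, assuming the full-size image is already decoded and a 512-pixel tile size (the directory handling and file naming are illustrative):

void SaveImageAsTiles(UIImage *fullImage, NSString *tileDirectory) {
    const size_t tileSize = 512;
    CGImageRef fullRef = fullImage.CGImage;
    size_t width  = CGImageGetWidth(fullRef);
    size_t height = CGImageGetHeight(fullRef);
    for (size_t y = 0; y < height; y += tileSize) {
        for (size_t x = 0; x < width; x += tileSize) {
            @autoreleasepool {
                // Each tile is cut out, written to disk, and released
                // before the next one is created, keeping peak memory low.
                CGRect tileRect = CGRectMake(x, y,
                                             MIN(tileSize, width - x),
                                             MIN(tileSize, height - y));
                CGImageRef tileRef = CGImageCreateWithImageInRect(fullRef, tileRect);
                UIImage *tile = [UIImage imageWithCGImage:tileRef];
                CGImageRelease(tileRef);
                NSString *path = [tileDirectory stringByAppendingPathComponent:
                                  [NSString stringWithFormat:@"tile_%zu_%zu.png", x, y]];
                [UIImagePNGRepresentation(tile) writeToFile:path atomically:YES];
            }
        }
    }
}

On the display side, a CATiledLayer-backed view can then load and draw only the tiles that intersect the visible rect.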
I ended up sort of solving the problem. Since I couldn't resize the original files in Dropbox (the client has their reasons), I went ahead and used BOSImageResizeOperation, which is essentially a fast, thread-safe library for resizing images.
Using this library, I noticed that images that previously took 40-60MB of memory each now only seem to take roughly half that. Additionally, the resizing is so quick that the original image gets released from memory fast enough that iOS never issues a memory warning.
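For anyone who can't pull in that library, an equivalent plain-UIKit downscale looks roughly like this (a sketch, not BOSImageResizeOperation's actual API; targetSize is whatever size the display or crop actually needs):

UIImage *DownscaledImage(UIImage *source, CGSize targetSize) {
    UIImage *result = nil;
    @autoreleasepool {
        // Redraw the large source into a smaller bitmap; the full-size
        // decoded image can then be released.
        UIGraphicsBeginImageContextWithOptions(targetSize, YES, 1.0);
        [source drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
        result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    return result;
}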
With this, I've gotten further with the app and I appreciate all the ideas, suggestions and comments. I'm hoping this will get the app done and I can get as far away from large image handling as possible, heh.
I'm developing an enterprise map-based application that needs to gather info from a large workforce and display it all on each worker's iPad. So the number of markers on the map can grow very large (several thousand) quickly. In addition, each marker is backed by an NSManagedObject subclass that's held in memory while the marker exists.
I'm using the Google Maps iOS SDK, and the problem is that even without any markers, just panning and zooming around causes really large increases in memory usage. The application's dirty memory size is 100MB (per the Allocations tool) upon launch. A little panning and zooming quickly makes it shoot up to 300MB, and when I stop panning and zooming, the memory never goes down. Similarly, if I add a lot of markers and then remove them, again there is no drop in memory (when I remove markers, I make sure not to hold any references to the objects either). The only time memory goes down is when I change map types. If I pan/zoom a lot in the street map view, then switch to satellite view, there's a sudden 50MB+ drop in dirty memory.
So I was wondering if anyone has any tips in handling memory when using Google Maps, or any info on how Google Maps manages/releases memory?
I have a custom view (inherited from UIView) in my app. The custom view overrides
- (void) drawRect:(CGRect) rect
The problem is that drawRect: takes many times longer on iPad 3 than on iPad 2 (about 0.1 second on iPad 3 versus 0.003 second on iPad 2), roughly 30 times slower.
Basically, I am using some pre-created layers and draw them in the drawRect:. The last call
CGContextDrawLayerAtPoint(context, CGPointZero, m_currentLayer);
takes most of the time (about 95% of the total time spent in drawRect:).
What might be slowing things so much and how should I fix the cause?
UPDATE:
There is nothing complicated going on with threads: I call setNeedsDisplay in one thread and drawRect: gets called on another, but that's it. The same goes for locks (none are used).
The view gets redrawn in response to touches (it's a coloring book app). On iPad 2 I get reasonable delay between a touch and an update of the screen. I want to achieve the same on iPad 3.
So, the iPad 3 is definitely slower in a lot of areas. I have a theory about this. Marco Arment noted that the method renderInContext is ridiculously slow on the new iPad. I also found this to be the case when trying to create a magnifying glass for a custom text view. In the end I had to forgo renderInContext in favor of custom Core Graphics drawing.
I've also been having problems hitting the dreaded wait_fences errors in my Core Graphics drawing here: Only on new iPad 3: wait_fences: failed to receive reply: 10004003.
This is what I've figured out so far. The iPad 3 obviously has 4 times the pixels to drive. This can cause problems in two places:
First, the CPU. All Core Graphics drawing is done by the CPU. In the case of rotation events, if the CPU takes too long to draw, it hits the wait_fences error, which I believe is simply a call that tells the device to wait a little longer before actually performing the rotation, hence the delay.
Second, transferring images to the GPU. The GPU obviously handles the retina resolution just fine (see Infinity Blade 2). But when Core Graphics draws, it draws its images directly into the GPU buffers to avoid a memcpy. However, either the GPU buffers haven't changed since the iPad 2 or they just weren't made large enough, because it's remarkably easy to overload them. When that happens, I believe the CPU writes the images to standard memory and then copies them to the GPU when the GPU buffers can handle it. This, I think, is what causes the performance problems: that extra copy is time consuming with so many pixels and slows things down considerably.
To avoid that extra copy, I recommend several things:
Only draw what you need. Avoid drawing anything offscreen at all costs. If you're drawing a large view but only displaying part of it (subviews covering it, for example), try to find a way to draw only what is visible.
If you have to draw a large view, consider breaking the view up into parts, either as subviews or sublayers (probably sublayers in your case), and only redraw what you need. Take the Notability app, for example: when you zoom in, you can literally watch it redraw one square at a time. Or in Safari you can watch it update squares as you scroll. Unfortunately, I haven't had to do this myself, so I'm uncertain of the exact methodology.
Try to keep your drawings simple. I had an awesome-looking custom Core Text view that had to redraw on every character entered. Very slow. I changed the background to simple white (drawn in Core Graphics) and it sped up nicely. Even better would be not to redraw the background at all.
I would like to point out that my theory is conjecture. Apple doesn't really explain what exactly they do. My theory is just based on what they have said and how the iPad responds as well as my own experimentation.
UPDATE
So Apple has now released the 2012 WWDC Developer videos. They have two videos that may help you (requires developer account):
iOS App Performance: Responsiveness
iOS App Performance: Graphics and Animation
One thing they talk about that I think may help you is the method setNeedsDisplayInRect:(CGRect)rect. Using this method instead of the plain setNeedsDisplay, and making sure that your drawRect: method only draws the rect given to it, can greatly help performance. Personally, I use CGContextClipToRect(context, rect); to clip my drawing to the rect provided.
As an example, I have a separate class I use to draw text directly into my views using Core Text. My UIView subclass keeps a reference to this object and uses it to draw its text rather than using a UILabel. I used to refresh the entire view (setNeedsDisplay) when the text changed. Now I have my Core Text object calculate the changed CGRect and use setNeedsDisplayInRect: to invalidate only the portion of the view that contains the text. This really helped my performance when scrolling.
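A minimal sketch of that pattern, assuming a UIView subclass that draws its content with Core Graphics (the method that computes the changed rect is illustrative):

// Invalidate only the region that actually changed.
- (void)textDidChangeInRect:(CGRect)changedRect {
    [self setNeedsDisplayInRect:changedRect];
}

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Clip to the invalidated rect so nothing outside it gets redrawn.
    CGContextClipToRect(context, rect);
    // ... draw only the content that intersects 'rect' ...
}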
I ended up using the approach described in Kurt Revis's answer to a similar question.
I minimized the number of layers used, added a UIImageView, and set its image to a UIImage wrapping my CGImageRef. Please read the mentioned answer for more details about the approach.
In the end my application became even simpler than before and works at almost identical speed on iPad 2 and iPad 3.
We have a project coming up that will require us to push texture image data to the EAGLView of an iPad app. Being green to OpenGL in general, we're wondering: are there implications to having a surface wait for texture information? What will OpenGL do while it's waiting for the image data? Does OpenGL require constant updates to its textures, or will it retain the data until we update the texture again? We're not going to have a render loop or anything in the view; it will be more like an observer pattern.
When you upload a texture, you hand it off to the GPU — so a copy is made, in memory you don't have direct access to. It's then available to be drawn with as many times as you want. So there's no need for constant updates.
OpenGL won't do anything else while waiting for the image data, it's a synchronous API. The call to upload the data will take as long as it takes, the texture object will have no graphic associated with it beforehand and will have whatever you uploaded associated with it afterwards.
In the general case, OpenGL objects, including texture objects, belong to a specific context and contexts belong to a specific thread. However, iOS implements share groups, which allow you to put several contexts into a share group, allowing objects to be shared between them subject to you having to be a tiny bit careful about synchronisation.
iOS provides a specific subclass of CALayer, CAEAGLLayer, that you can use to draw into from OpenGL. It's up to you when you draw and how often, so your approach is, if anything, the more native one. A lot of the samples wrap their drawing in a continuous render loop, but that's a convention rather than a requirement.
Obviously try the simplest approach of 'everything on the main thread' first. If you're not doing all that much, it will likely be fast enough and save you code maintenance. However, uploading can cost more than you expect, since the OpenGL way of working is that you specify the data and the format it's in, leaving OpenGL to rearrange it as necessary for the particular GPU you're on. We're talking delays on the order of 0.3 seconds rather than 30 seconds, but enough that there'll be an obvious pause if the user taps a button or tries to move a slider at the same time.
So if keeping the main thread responsive proves an issue, I'd imagine that you'd want to hop onto a background thread, create a new context within the same share group as the one on the main thread, upload, then hop back to do the actual drawing. In that case it remains up to you how to communicate to the user that data has been received and is being processed, as distinct from no data having been received yet, if the gap is large enough to justify doing so.
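A minimal sketch of that share-group approach, assuming OpenGL ES 2, a mainContext property that already drives the CAEAGLLayer, and RGBA8 pixel data (the queue choice and completion callback are illustrative):

- (void)uploadTexture:(NSData *)pixelData width:(GLsizei)width height:(GLsizei)height {
    EAGLSharegroup *sharegroup = self.mainContext.sharegroup;
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // A second context in the same share group lets this thread create
        // a texture object the main context can later draw with.
        EAGLContext *uploadContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2
                                                           sharegroup:sharegroup];
        [EAGLContext setCurrentContext:uploadContext];

        GLuint texture = 0;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixelData.bytes);
        glFlush(); // make the upload visible to the other context
        [EAGLContext setCurrentContext:nil];

        dispatch_async(dispatch_get_main_queue(), ^{
            // Hand the texture name back to the main context for drawing.
            [self textureDidFinishUploading:texture];
        });
    });
}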