I'm developing a game in AS3 for iPhone, and I've gotten it running reasonably well (consistently 24fps on iPhone 3G), but I've noticed that when the "character" goes partly off the screen, the frame rate drops to 10-12fps. Does anyone know why this is and what I can do to remedy it?
Update - I've been through the code pretty thoroughly, and even made a new project just to test animations. I started an image offscreen and moved it across the screen and back off. Any time the image is offscreen, even partially, the frame rates are terrible. Once the image is fully on the screen, things pick back up to a solid 24fps. I'm using cacheAsBitmap, I've tried masking the stage, and I've tried placing the image in a movieclip and using scrollRect. I would keep objects from going off the screen, except that the nature of the game I'm working on has objects dropping from the top down (yes, I'm using object pooling; no, I'm not scaling anything; strictly x,y translations). And yes, I realize that Obj-C is probably the best answer, but I'd really like to avoid that if I can. AS3 is so much nicer to write in.
Try and take a look at the 'blitmasking' technique: http://www.greensock.com/blitmask
From Doyle himself:
A BlitMask is basically a rectangular Sprite that acts as a high-performance mask for a DisplayObject by caching a bitmap version of it and blitting only the pixels that should be visible at any given time, although its bitmapMode can be turned off to restore interactivity in the DisplayObject whenever you want. When scrolling very large images or text blocks, BlitMask can greatly improve performance, especially on mobile devices that have weaker processors.
I’m working on various iOS apps and I need an interface with the following capabilities:
I have a scrollview (covering most of the screen) which is scrollable in both directions.
This scrollable view contains a lot of rectangles. These rectangles are interactive: the user can modify them, move them around, create them, and delete them. So ideally they would all be CALayers or UIViews.
The problem is that, because there could be hundreds or thousands of them displayed at once, CALayers or UIViews may not be very efficient.
The scrollview could be 10-20 times bigger than the screen itself, and fully covered with these shapes. So when the user scrolls, they shouldn't see any flicker or shapes appearing after the scrolling is done. E.g. if I use CATiledLayer and the user scrolls, you can see things being drawn after scrolling is done.
Smooth zooming. Zooming out is particularly challenging, because the shapes would need to be drawn on parts of the view that are about to become visible. Also, ideally I'd rather not use something like CGAffineTransform to perform scaling; I'd like pixel-accurate scaling.
I've tried various things, but I can't seem to get decent frame rates even on iPhone 6. I even tried drawing every frame, but it's too expensive for Core Graphics to handle. Are there code examples from someone trying to do a similar thing, or an open-source library? I'm trying not to use OpenGL; I feel like it's overkill, but I will try it if I have to. FYI, I have no experience in OpenGL yet.
Procreate for iPad does what I'm trying to do perfectly; it's super responsive and zooming is pixel-accurate. I know they use OpenGL, and I'm not making a drawing app. The reason I mention it is that it shows what I'm trying to do is possible.
I think you need to move from UIKit to a 2D or 3D engine:
Cocos2D
Sparrow
Unity
OOlong
I am making a game, and it involves a sandstorm. I decided that the basic concept would be that I would make an image that looks roughly like a sandstorm, and then decorate it with some particles/whatever else it takes.
I ran into an issue at step one. I threw together a simple image for testing purposes:
I added that to my game, and the FPS dropped by 60%. I was surprised by the effect one image had, but I wasn't too worried about it. I cut the resolution of the image in half, and again, lots of lag.
Is SpriteKit/iOS really that bad at handling moderately sized images with alpha? I read in another question that the simulator is bad at rendering, but that can't be the entire problem.
Is there any hope for getting this to render without slicing my performance? The particles work well, everything else runs at 60fps just fine, but the addition of this image is apparently a severe drain on resources.
EDIT: I tested my game out on my phone, and I got no lag. So apparently, the simulator is just really bad at rendering after all. At the same time, I am curious as to how to speed up performance, as there is clearly some kind of lag going on.
I'm no expert on SpriteKit, but I had similar experiences with plain core animation and layering.
The issue is that an image with alpha, even in its "opaque" parts, introduces a redraw of all the sublayers underneath it every time it moves. First check whether this is actually the problem, and then try one of these and see if it improves:
SKCropNode could prevent the elements underneath from being rendered
Tile the image so only the border has alpha
Snapshot the layers underneath.
Reduce the number of nodes being rendered; hide the ones that are "under the sandstorm" (a rough sketch of this follows below).
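A minimal sketch of that last idea (not from the original answer; the sandstorm property name and the assumption that the covered nodes are its siblings in an SKScene subclass are mine):

- (void)didSimulatePhysics {
    // Runs once per frame, after physics. Hide nodes completely covered by the
    // (mostly opaque) sandstorm sprite, so SpriteKit blends fewer layers per frame.
    for (SKNode *node in self.children) {
        if (node == self.sandstorm) continue;
        node.hidden = CGRectContainsRect(self.sandstorm.frame, node.frame);
    }
}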
And you should be using real devices to test the performance of your game; you cannot rely on the simulator for that.
I'm developing an app that will showcase products. One of the features of this app is that you will be able to "rotate" the product using your finger/pan gesture.
I was thinking of implementing this by taking photos of the product from different angles, so when you "drag" the image, all I would have to do is switch the image accordingly. If you drag a little, I switch only 1 image... if you drag a lot, I will switch them in cadence, making it look like a movie... but I have a concern and a probable solution:
Is this "performatic"? Since its a art/museum product showcase, the photos will be quite large in size/definition, and loading/switching when "dragged a lot" might be a problem because it would cause "flickering"... And the solution would be: instead of loading pic-by-pic i would put them all inside one massive sheet, and work through them as if they were a sprite...
Is that a good idea? Or should I stick with the pic-by-pic rotation?
Edit 1: There's a complication: the user will be able to zoom in/out and to rotate the product on any axis (X, Y and Z)...
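For reference, the pic-by-pic switching described above boils down to something like this minimal sketch (the image naming scheme, the points-per-frame constant, and the startFrame property are assumptions, not from the question):

- (void)handlePan:(UIPanGestureRecognizer *)pan {
    CGFloat dx = [pan translationInView:self.view].x;
    // Map horizontal drag distance to one of 360 frames (assumed 3 pt per frame).
    NSInteger frame = ((NSInteger)(self.startFrame + dx / 3.0) % 360 + 360) % 360;
    self.imageView.image = [UIImage imageNamed:
        [NSString stringWithFormat:@"product_%03ld", (long)frame]];
    if (pan.state == UIGestureRecognizerStateEnded) {
        self.startFrame = frame;   // remember where the drag left off
    }
}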
My personal opinion: I don't think this will work the way you hope, or the performance and/or aesthetics will not be what you want.
1) Taking individual shots that you then try to keyframe based on touch events won't work well, because you will have inevitable inconsistencies in 'framing' the shots, such that the playback won't be smooth.
2) The best way to do this, I suspect, will be to shoot it with video and shoot it with some sort of rig that allows you to keep the camera fixed while rotating the object
3) I'm pretty sure this is how most 'professional' grade product carousel type presentations work
4) Even then you will have more image frames than you need -- not sure whether you plan to embed the image files in the app or download on demand -- but that is also a consideration in terms of how much downsampling you'll need to do to reduce frames/file size.
Suggestion
Look at shooting these as video (somewhat like described above) and downsampling and removing excess frames using a video editor. Then you could use AVFoundation for playback and use your gestures to 'scrub' into the video frames. I worked on something like this for HTML playback at a large company and I can assure you it was done with video.
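A minimal sketch of the 'scrub with gestures' idea, assuming an AVPlayer set up elsewhere (the player and startProgress properties are placeholders, not the actual implementation I worked on):

- (void)handleScrub:(UIPanGestureRecognizer *)pan {
    // Map horizontal drag across the view to a 0..1 position in the video.
    CGFloat delta = [pan translationInView:self.view].x / self.view.bounds.size.width;
    Float64 progress = MAX(0.0, MIN(1.0, self.startProgress + delta));
    CMTime target = CMTimeMultiplyByFloat64(self.player.currentItem.duration, progress);
    // Zero tolerances keep the scrub frame-accurate, at some cost in seek speed.
    [self.player seekToTime:target toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
    if (pan.state == UIGestureRecognizerStateEnded) {
        self.startProgress = progress;
    }
}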
Alternatively, if video won't work for you, your sprite-sheet solution might work (consider using SpriteKit). But keep in mind what I said about trying to keyframe one-off camera shots together -- it just won't work well. Maybe a compromise would be to shoot static images, but do so by fixing the camera and rotating the object at very specific increments. That could work as well, I suppose, but you will need to be very careful about light and other atmospherics. It doesn't take much variation at all to be detectable to the human eye, causing the whole presentation to seem strange. Good luck.
A coder from my company did something like this before using 360 images of an object, and it worked just great, but it didn't have zoom. Maybe you could add zoom by adding a pinch gesture recognizer and placing the image view inside a scroll view to zoom in on the static image.
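The zoom part would take roughly this much (a sketch; the scrollView/imageView outlets and the 4x limit are assumptions). UIScrollView supplies the pinch handling itself once a zooming view and scale limits are set:

- (void)viewDidLoad {
    [super viewDidLoad];
    self.scrollView.minimumZoomScale = 1.0;
    self.scrollView.maximumZoomScale = 4.0;   // assumed zoom limit
    self.scrollView.delegate = self;          // controller adopts UIScrollViewDelegate
}

- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView {
    return self.imageView;   // the image view showing the current rotation frame
}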
This scenario sounds like what you really need is a simple 3D model loader library, or to write it in OpenGL yourself. That pan and zoom behavior is really basic once you make the jump to 3D, so it should be easy to find lots of examples.
All depends on your situation and time constraints :)
I have a custom view (inherited from UIView) in my app. The custom view overrides
- (void) drawRect:(CGRect) rect
The problem is: drawRect: takes many times longer to execute on iPad 3 than on iPad 2 (about 0.1 seconds on iPad 3 versus 0.003 seconds on iPad 2). That's about 30 times slower.
Basically, I am using some pre-created layers and drawing them in drawRect:. The last call
CGContextDrawLayerAtPoint(context, CGPointZero, m_currentLayer);
takes most of the time (about 95% of total time in drawRect:)
What might be slowing things so much and how should I fix the cause?
UPDATE:
There are no threads directly involved. I do call setNeedsDisplay in one thread and drawRect: gets called from another, but that's it. The same goes for locks (there are no locks used).
The view gets redrawn in response to touches (it's a coloring book app). On iPad 2 I get reasonable delay between a touch and an update of the screen. I want to achieve the same on iPad 3.
So, the iPad 3 is definitely slower in a lot of areas. I have a theory about this. Marco Arment noted that the method renderInContext is ridiculously slow on the new iPad. I also found this to be the case when trying to create a magnifying glass for a custom text view. In the end I had to forego renderInContext for custom Core Graphics drawing.
I've also been having problems with the dreaded wait_fences errors in my Core Graphics drawing here: Only on new iPad 3: wait_fences: failed to receive reply: 10004003.
This is what I've figured out so far. The iPad 3 obviously has 4 times the pixels to drive. This can cause problems in two places:
First, the CPU. All Core Graphics drawing is done by the CPU. In the case of rotation events, if the CPU takes too long to draw, it hits the wait_fences error, which I believe is simply a call that tells the device to wait a little longer to actually perform the rotation, thus the delay.
Second, transferring images to the GPU. The GPU obviously handles the retina resolution just fine (see Infinity Blade 2). But when Core Graphics draws, it draws its images directly to the GPU buffers to avoid a memcpy. However, either the GPU buffers haven't changed since the iPad 2 or they just didn't make them large enough, because it's remarkably easy to overload those buffers. When that happens, I believe the CPU writes the images to standard memory and then copies them to the GPU when the GPU buffers can handle it. This, I think, is what causes the performance problems. That extra copy is time-consuming with so many pixels and slows things down considerably.
To avoid that extra copy, I recommend several things:
Only draw what you need. Avoid drawing anything offscreen at all costs. If you're drawing a large view but only display part of that view (subviews covering it, for example), try to find a way to draw only what is visible.
If you have to draw a large view, consider breaking the view up into parts, either as subviews or sublayers (probably sublayers in your case), and only redraw what you need. Take the Notability app, for example: when you zoom in, you can literally watch it redraw one square at a time. Or in Safari you can watch it update squares as you scroll. Unfortunately, I haven't had to do this myself, so I'm uncertain of the methodology (see the rough sketch after this list).
Try to keep your drawing simple. I had an awesome-looking custom Core Text view that had to redraw on every character entered. Very slow. I changed the background to simple white (in Core Graphics) and it sped up nicely. Even better would be to not redraw the background at all.
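Here is a rough sketch of the sublayer idea from point 2 (my own guess at a methodology, since as I said I haven't done it; the 256-point tile size and the class/method names are assumptions):

@interface TileLayer : CALayer
@end

@implementation TileLayer
- (void)drawInContext:(CGContextRef)ctx {
    // Shift the context so this tile renders only its own slice of the content;
    // everything outside the tile's frame is clipped away.
    CGContextTranslateCTM(ctx, -self.frame.origin.x, -self.frame.origin.y);
    // ... draw the shared content here ...
}
@end

// In the big view's @implementation: cover the bounds with tiles once,
// then call setNeedsDisplay only on the tiles that actually changed.
- (void)buildTiles {
    CGSize tile = CGSizeMake(256.0, 256.0);
    for (CGFloat y = 0; y < self.bounds.size.height; y += tile.height) {
        for (CGFloat x = 0; x < self.bounds.size.width; x += tile.width) {
            TileLayer *layer = [TileLayer layer];
            layer.frame = CGRectMake(x, y, tile.width, tile.height);
            [self.layer addSublayer:layer];
            [layer setNeedsDisplay];
        }
    }
}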
I would like to point out that my theory is conjecture. Apple doesn't really explain what exactly they do. My theory is just based on what they have said and how the iPad responds as well as my own experimentation.
UPDATE
So Apple has now released the 2012 WWDC Developer videos. They have two videos that may help you (requires developer account):
iOS App Performance: Responsiveness
iOS App Performance: Graphics and Animation
One thing they talk about that I think may help you is using the method setNeedsDisplayInRect:(CGRect)rect. Using this method instead of the normal setNeedsDisplay, and making sure that your drawRect: method only draws the rect given to it, can greatly help performance. Personally, I use the function CGContextClipToRect(context, rect); to clip my drawing to the rect provided.
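Applied to the drawing from the question, that looks roughly like this sketch (m_currentLayer is from the question; how dirtyRect is computed is left to the caller):

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Limit all drawing to the invalidated rect; pixels outside it are untouched.
    CGContextClipToRect(context, rect);
    CGContextDrawLayerAtPoint(context, CGPointZero, m_currentLayer);
}

// Elsewhere, invalidate only the region that actually changed (e.g. around a touch):
[self setNeedsDisplayInRect:dirtyRect];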
As an example, I have a separate class I use to draw text directly to my views using Core Text. My UIView subclass keeps a reference to this object and uses it to draw its text rather than using a UILabel. I used to refresh the entire view (setNeedsDisplay) when the text changed. Now I have my Core Text object calculate the changed CGRect and use setNeedsDisplayInRect: to only update the portion of the view that contains the text. This really helped my performance when scrolling.
I ended up using the approach described in Kurt Revis's answer to a similar question.
I minimized the number of layers used, added a UIImageView, and set its image to a UIImage wrapping my CGImageRef. Please read the mentioned answer for more details about the approach.
In the end my application became even simpler than before and works at almost identical speed on iPad 2 and iPad 3.
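The gist of it, as a sketch (the imageRef variable stands in for whatever CGImageRef the app already renders; it is not the code from my app):

// Replace the drawRect:-based custom view with a plain UIImageView.
UIImageView *imageView = [[UIImageView alloc] initWithFrame:self.view.bounds];
imageView.image = [UIImage imageWithCGImage:imageRef];   // wrap the existing CGImageRef
[self.view addSubview:imageView];
// When the bitmap changes, render it off-screen and just swap imageView.image.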
A few months ago I found a really awesome sample project on Apple's site. The sample is called "LargeImageDownsizing"; the wonderful thing is that it explains a lot about how images are read from resources and then rendered on screen. Digging into that code I found something that is bothering me a little. The downsized image is passed to a view that has a CATiledLayer, but without giving each tile its own piece of the image to improve memory performance; it just sets the tile size and then loads the image (I'm simplifying to get to the concept). So my question basically is: why? Why use a CATiledLayer if it is not fed the right way? They could have used a normal UIImageView... So I made a few tests to understand whether I was right, modifying the code by simply adding a scroll view with an image view as a subview and responding to the scroll view delegate for zooming. I came to these conclusions testing on device and simulator:
- The memory impact and footprint are exactly the same, even during zooming and scrolling operations, and that doesn't surprise me at all: the image is decompressed in memory.
- The Time Profiler says that the tiled view takes more time to draw during scrolling/zooming operations than the UIImageView, and that doesn't surprise me either: the UIImageView is already drawn.
- If I send a memory warning, nothing changes between the two solutions (simulator only).
- Testing Core Animation performance, I get the same results: around 60 FPS.
So what's the difference between those two views/layers, and why should I pick one instead of the other in this specific case? UIImageView seems to win the battle.
I hope that someone could help me to understand that.
They might perform the same for small images, because then the only difference in terms of performance is that CATiledLayer draws on a background thread. Depending on the tile size, CATiledLayer could even be slower because it has to draw multiple tiles for one image.
BUT ...
the point of CATiledLayer is that you don't need to draw all tiles, especially when zooming into a very, very large image. It is smart about knowing which parts are actually needed, and it is also smart about evicting tiles that are no longer needed.
For this mechanism to work you need to provide the individual parts of the image separately. We're talking about an image whose total size probably cannot be held in memory uncompressed.
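Feeding the tiles yourself looks roughly like this sketch (not the sample's code; the tile size, detail levels, and the tileImageForRect: helper that loads just one slice of the huge image are assumptions):

@implementation TiledImageView   // a UIView subclass

+ (Class)layerClass {
    return [CATiledLayer class];
}

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *layer = (CATiledLayer *)self.layer;
        layer.tileSize = CGSizeMake(256.0, 256.0);
        layer.levelsOfDetail = 4;        // extra levels for zooming out
        layer.levelsOfDetailBias = 2;    // extra levels for zooming in
    }
    return self;
}

- (void)drawRect:(CGRect)rect {
    // Called once per visible tile (on background threads), so only the slices
    // of the huge image that are actually on screen ever get loaded and decoded.
    UIImage *tile = [self tileImageForRect:rect];   // assumed helper
    [tile drawInRect:rect];
}

@end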