I'm building a scrolling menu that generates new rows of buttons on the fly, and each button has to be assembled from a large number of sprites. Because this is processor intensive, the menu sticks for about a quarter second each time it needs to load a new row of buttons. I realized I needed multi-threading so the button loading could be handled on a different thread than the scroll animation, but when I do that it crashes as soon as it tries to load new buttons. Here is the code I'm using:
-(void)addRowBelow{
    _rowIndex--;
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSMutableArray *row = [self addRow:_rowIndex];
        [_buttonGrid addObject:row];
        [self removeRow:[_buttonGrid objectAtIndex:0]];
    });
    _nextRowBelowPos += _rowHeight;
    _nextRowAbovePos += _rowHeight;
}
Each time I test it I get a different error; sometimes it's a memory error, sometimes an assertion failure. I suspect it has to do with calling cocos2d functions asynchronously?
You are probably getting crashes because you are accessing the cocos2d-managed objects (sprites, layers, nodes, etc.) from multiple threads. Since the engine expects to use the internals of these objects for display, GPU operations, etc., and is NOT thread safe, you are probably not going to have good outcomes with multi-threading: you may be changing things right in the middle of the engine using them.
Creating/destroying sprites on the fly is probably the reason for your slowdown. Cocos2d can display lots of objects (I think it is on the order of 2k) on the screen at 60 fps...as long as you don't throttle it down by doing a lot of creation/destruction or AI.
I suggest you preload all your sprites before your scene goes on the stage. You can do this in an intro scene or in the init of the scene itself, and let the sprites be owned by the scene. Then you can iterate over them during the update call and change their positions, make them visible/invisible, etc.
For reference, I usually create different "sprite layers" that load up all their sprites when they're added to the scene. If I am going to have dynamic objects, I try to allocate some up front and recycle them when possible. This also allows me to control the order of "what is in front of what" on the screen. Each layer also draws elements of specific "entity types", giving a nice "MVC" character to a lot of the display.
This is analogous to the way iPhone Apps recycle table cells.
Only create them the first time you need them and have a stash on hand before you need them at all.
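If it helps, here's a minimal sketch of that pooling idea, assuming cocos2d's CCSprite/CCNode API; the class name, "button.png" and the capacity are placeholders for whatever your buttons actually need:
#import "cocos2d.h"

// A minimal sprite pool owned by the scene: sprites are created once up front and
// recycled as rows scroll in and out, instead of being created/destroyed mid-scroll.
@interface ButtonSpritePool : NSObject
- (id)initWithCapacity:(NSUInteger)capacity parent:(CCNode *)parent;
- (CCSprite *)dequeueSprite;
- (void)recycleSprite:(CCSprite *)sprite;
@end

@implementation ButtonSpritePool {
    NSMutableArray *_pool;
}

- (id)initWithCapacity:(NSUInteger)capacity parent:(CCNode *)parent {
    if ((self = [super init])) {
        _pool = [NSMutableArray arrayWithCapacity:capacity];
        for (NSUInteger i = 0; i < capacity; i++) {
            CCSprite *sprite = [CCSprite spriteWithFile:@"button.png"];
            sprite.visible = NO;          // parked until a row needs it
            [parent addChild:sprite];     // created once, owned by the scene
            [_pool addObject:sprite];
        }
    }
    return self;
}

// Hand out a parked sprite instead of allocating a new one mid-scroll.
- (CCSprite *)dequeueSprite {
    CCSprite *sprite = [_pool lastObject];
    if (sprite) {
        [_pool removeLastObject];
        sprite.visible = YES;
    }
    return sprite;    // nil means the pool is exhausted
}

// Return a sprite when its row scrolls away, rather than destroying it.
- (void)recycleSprite:(CCSprite *)sprite {
    sprite.visible = NO;
    [_pool addObject:sprite];
}

@end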
The pattern you probably want to use is:
Dispatch work to a background thread. (Note that the work must be safe to execute on a background thread.)
Dispatch back to the main thread to update your UI.
Here's an example of what that looks like in code:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Do work that is safe to execute in the background.
    // For example, reading images from disk.
    dispatch_async(dispatch_get_main_queue(), ^{
        // Do work here that must execute on the main thread.
        // For example, calling Cocos2D objects' methods.
        NSMutableArray *row = [self addRow:_rowIndex];
        [_buttonGrid addObject:row];
        [self removeRow:[_buttonGrid objectAtIndex:0]];
    });
});
In my app I am rendering lots (200 to 400) UIImages on a background thread and then installing them inside on-screen UIImageView instances by dispatching a UI update block to the main thread.
Some code to show roughly what I'm doing…
dispatch_async(redrawQueue, ^{
    // An array to stuff images into for the views that have one.
    // (arrayWithCount:value: is presumably a custom NSMutableArray helper that
    // pre-fills the array with the given placeholder value.)
    NSMutableArray *const images = [NSMutableArray arrayWithCount:[activeViews count] value:[NSNull null]];
    for(NSUInteger i = 0; i < activeCount; ++i)
    {
        // Rendered content comes from a block in myState.
        UIImage *const image = contentBlock(i);
        if(image)
        {
            images[i] = image;
        }
        else
        {
            // Nothing rendered for this view; give up on this redraw pass.
            return;
        }
    }

    // Update the UI now…
    dispatch_async(dispatch_get_main_queue(), ^{
        [images enumerateObjectsUsingBlock:^(UIImage *image, NSUInteger i, BOOL *stop) {
            UIImageView *const view = [activeViews objectAtIndex:i];
            // (isNotNull: is presumably a custom NSNull helper.)
            [view setImage:[NSNull isNotNull:image] ? image : nil];
            layoutBlock(view, i);
        }];
    });
});
This is working well, but I'm still getting dropped frames during rapid scrolling. It seems like this is happening because the work of setting the images in the views is overwhelming the main thread. My evidence for this is that if I take out just the code to actually set the rendered images in the views, scrolling is much smoother.
I'm wondering if an approach to solving this might be to also create the views on a background thread, assign images to them, and place them into a container view. Then, on the main thread, I would simply need to swap the container into the on-screen scene. The result is a bit like a double-buffered graphics context, I guess: update one while the other is displayed.
Can anyone say whether this is likely to be thread safe?
I've done a small test of allocating off screen UIViews on a background thread and nesting them inside each other. It hasn't crashed yet :-) "It hasn't crashed yet" isn't a great "thread safety" guarantee though! It also doesn't say anything about what might happen in a future version of iOS.
An obvious answer to this is "Hey, you fool, why are you using hundreds of little views? Composite them into one big image and have a single view you swap it into." Unfortunately I need lots of little views because I need to move the individual little pieces about independently.
Another answer might be "Use Sprite Kit, dude", and you're probably right, but the little views have dynamic size and content and I'm not sure how well Sprite Kit copes when there are lots of sprite updates occurring.
A third approach could be to throttle the UI updates on the main thread to prevent frames getting dropped. Is there a mechanism that does this? Some kind of dispatch queue run by the main thread that only calls stuff while it's got plenty of time left?
You asked:
"A third approach could be to throttle the UI updates on the main thread to prevent frames getting dropped. Is there a mechanism that does this? Some kind of dispatch queue run by the main thread that only calls stuff while it's got plenty of time left?"
This is not something that's built in, and while it's not hard to envision how you might do it, it's also non-trivial. You would probably need to manage your own array of images to deliver (including some means of protecting it from concurrent access), then add a CFRunLoopObserver (probably for the kCFRunLoopBeforeWaiting activity, since that's when the run loop is about to go to sleep) that, every time it fires, marks the start time and then processes items from your array of images until some amount of time has passed (10ms is probably a decent budget).
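Here's a rough sketch of that observer idea; the pendingImageUpdates array, the block type and the 10ms budget are all illustrative, and you'd still need to guard the array if anything other than the main queue touches it:
#import <Foundation/Foundation.h>

// Drain a pending-work array for at most ~10ms each time the main run loop is
// about to go to sleep. pendingImageUpdates holds copied blocks, each of which
// assigns one rendered image to its view; it is created and filled elsewhere,
// on the main queue.
static NSMutableArray *pendingImageUpdates;

static void InstallImageUpdateObserver(void) {
    CFRunLoopObserverRef observer = CFRunLoopObserverCreateWithHandler(
        kCFAllocatorDefault,
        kCFRunLoopBeforeWaiting,   // fires when the run loop is about to sleep
        true,                      // repeats
        0,                         // order
        ^(CFRunLoopObserverRef obs, CFRunLoopActivity activity) {
            CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
            while ([pendingImageUpdates count] > 0 &&
                   CFAbsoluteTimeGetCurrent() - start < 0.010) {   // ~10ms budget
                dispatch_block_t work = [pendingImageUpdates objectAtIndex:0];
                [pendingImageUpdates removeObjectAtIndex:0];
                work();
            }
            if ([pendingImageUpdates count] > 0) {
                // Keep the run loop turning so the leftovers get another slice soon.
                CFRunLoopWakeUp(CFRunLoopGetMain());
            }
        });
    CFRunLoopAddObserver(CFRunLoopGetMain(), observer, kCFRunLoopCommonModes);
    CFRelease(observer);
}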
Another thing you might consider would be rendering many of these little images into one CGImage (or some small number of images), and then setting each view's layer's contents to the big image while setting its contentsRect so the layer shows just the portion corresponding to that view. This might reduce the number of GPU texture uploads (and hence overall overhead), since all the CALayers backing the views will share the same image as their contents. This would probably be my first stop.
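For example, a sketch of that shared-contents idea, assuming you already have the composite image and each view's tile rectangle within it (in points):
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Share one big CGImage across many views' layers, showing each view only its own
// tile via contentsRect (which is in unit coordinates, 0..1). The composite image
// and the per-view tile rectangles are assumed to come from your own rendering code.
static void ApplySharedContents(UIImage *bigImage, NSArray *views, NSArray *tileRects /* NSValue-wrapped CGRects */) {
    CGSize bigSize = bigImage.size;
    [views enumerateObjectsUsingBlock:^(UIView *view, NSUInteger i, BOOL *stop) {
        CGRect tile = [tileRects[i] CGRectValue];
        view.layer.contents = (__bridge id)bigImage.CGImage;   // one backing image shared by every layer
        view.layer.contentsRect = CGRectMake(tile.origin.x / bigSize.width,
                                             tile.origin.y / bigSize.height,
                                             tile.size.width / bigSize.width,
                                             tile.size.height / bigSize.height);
    }];
}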
I have a seemingly simple problem that I cannot for the life of me figure out. In my iOS app, I have a UICollectionView; tapping a cell triggers a network operation that can take a few seconds to complete. While the information is being downloaded, I want to display a UIView that fills the cell with a UIActivityIndicatorView that sits in the square until the loading is done and the segue is triggered. The problem is that it never appears. Right now my code looks like:
myLoadView.hidden = NO;
//Network Operation
myLoadView.hidden = YES;
The app simply stops for a couple of seconds, and then moves on to the next view. I'd imagine Grand Central Dispatch has something to do with the solution; however, please keep in mind that this code takes place in prepareForSegue, and the network info needs to be passed to the next view. For this reason, not finishing the download before switching scenes has an obvious problem. Any help would be VASTLY appreciated. Thanks!
iOS commits interface changes only after the current routine has finished running. Hence you should perform your network operation on a background thread and then get back onto the main thread to perform the "show my view now" step. Have a look at the code below for reference.
myLoadView.hidden = NO;
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    //Network Operation
    dispatch_async(dispatch_get_main_queue(), ^{
        myLoadView.hidden = YES;
    });
});
Your network operation seems to be carried out on the main thread, aka UI thread. This blocks all further UI calls, including the call to unhide a view, until completion.
To resolve this, make your call asynchronous.
As mentioned by other answers, the problem is that the UIView change doesn't happen until the current method finishes running, which is where you are blocking. Before GCD was available I would split such methods in two and, at the end of the first method, use performSelector:withObject:afterDelay: (to run the second part also on the UI loop) or performSelectorInBackground:withObject:. This commits all the waiting animations first, then does the actual work in the second method.
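A sketch of that two-method split (the method names and the myLoadView property are placeholders):
// Queue up the UI change, then let the run loop finish this pass (and commit the
// change) before the long-running work starts.
- (void)showLoadingThenStartDownload {
    self.myLoadView.hidden = NO;
    [self performSelector:@selector(startDownload) withObject:nil afterDelay:0.0];
}

- (void)startDownload {
    // ...the long network operation goes here...
    self.myLoadView.hidden = YES;   // hide again when done
}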
Well, the better option for this type of indication is to use a custom HUD library like SVProgressHUD or MBProgressHUD.
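For instance, assuming MBProgressHUD's showHUDAddedTo:animated: and hideHUDForView:animated: convenience methods, combined with the same background-queue pattern as above:
#import "MBProgressHUD.h"

// Show a HUD over the view while the network operation runs off the main thread.
[MBProgressHUD showHUDAddedTo:self.view animated:YES];
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Network Operation
    dispatch_async(dispatch_get_main_queue(), ^{
        [MBProgressHUD hideHUDForView:self.view animated:YES];
        // ...trigger the segue here, now that the data is available...
    });
});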
I need to mirror a UIWebView's CALayers to a smaller CALayer. The smaller CALayer is essentially a pip of the larger UIWebView. I'm having difficulty in doing this. The only thing that comes close is CAReplicatorLayer, but given the original and the copy have to have CAReplicatorLayer as a parent, I cannot split the original and copy on different screens.
The user needs to be able to interact with the smaller CALayer and both need to be in sync.
I've tried doing this with renderInContext and CADisplayLink. Unfortunately there is some lag/stutter because it's trying to re-draw every frame, 60 times a second. I need a way to do the mirroring without re-drawing on each frame, unless something has actually changed. So I need a way of knowing when the CALayer (or child CALayers) become dirty.
I cannot simply use two UIWebViews, because the two pages may end up different (timing is off, different backgrounds, etc...). I have no control over the web page being displayed. I also cannot display the entire iPad screen, as there are other elements on the screen that should not show on the external screen.
Both the larger CALayer and smaller "pip" CALayer need to match smoothly frame-for-frame in iOS 6. I do not need to support earlier versions.
The solution needs to be app-store passable.
As discussed in the comments, since the main need is to know WHEN to update the layer (and not how), I've moved my original answer below the "OLD ANSWER" line and added what was discussed in the comments:
First (100% Apple Review Safe ;-)
You can take periodic "screenshots" of your original UIView and compare the resulting NSData (old and new): if the data is different, the layer content changed. There is no need to compare FULL RESOLUTION screenshots; you can do it with smaller ones for better performance.
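A sketch of that reduced-resolution comparison; the 1/8 scale factor and the lastThumbnailData property are arbitrary choices of mine:
// Render the view into a small context and compare the PNG bytes of successive
// snapshots; if the bytes differ, the content changed.
- (NSData *)thumbnailDataForView:(UIView *)view
{
    CGFloat scale = 0.125;   // 1/8 of the view size is usually enough to detect a change
    CGSize smallSize = CGSizeMake(view.bounds.size.width * scale,
                                  view.bounds.size.height * scale);
    UIGraphicsBeginImageContextWithOptions(smallSize, YES, 1.0);
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), scale, scale);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *small = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return UIImagePNGRepresentation(small);
}

- (BOOL)viewContentChanged:(UIView *)view
{
    NSData *newData = [self thumbnailDataForView:view];
    BOOL changed = ![newData isEqualToData:self.lastThumbnailData];   // lastThumbnailData: an NSData property you'd add
    self.lastThumbnailData = newData;
    return changed;
}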
Second: performance friendly and "theoretically" review safe...but not sure :-/
At this link http://www.lombax.it/documents/DirtyLayer.zip you can find a sample project that alerts you every time the UIWebView layer becomes dirty ;-)
I'll try to explain how I arrived at this code:
The main goal is to understand when TileLayer (a private subclass of CALayer used by UIWebView) becomes dirty.
The problem is that you can't access it directly. But you can use method swizzling to change the behavior of the layerSetNeedsDisplay: method in every CALayer and its subclasses.
You must be sure to avoid a radical change in the original behavior, and do only what's necessary to add a "notification" when the method is called.
Once you have successfully detected each layerSetNeedsDisplay: call, the only remaining thing is to work out which CALayer is involved --> if it's the internal UIWebView TileLayer, we trigger an "isDirty" notification.
But we can't iterate through the UIWebView's contents and find the TileLayer; for example, simply using isKindOfClass:[TileLayer class] will surely get you a rejection (Apple uses a static analyzer to check for the use of private API). So what can you do?
Something tricky like...for example...comparing the size of the involved layer (the one calling layerSetNeedsDisplay:) with the UIWebView size? ;-)
Moreover, sometimes the UIWebView replaces the child TileLayer with a new one, so you have to perform this check more than once.
Last thing: layerSetNeedsDisplay: is not always called when you simply scroll the UIWebView (if the layer is already built), so you have to use UIWebViewDelegate to intercept the scrolling / zooming.
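To make the swizzling part concrete, here is a rough sketch that targets -[CALayer setNeedsDisplay] as an illustrative stand-in (the linked project intercepts layerSetNeedsDisplay: instead, and you'd still add the size-matching check described above before deciding the web view is dirty):
#import <QuartzCore/QuartzCore.h>
#import <objc/runtime.h>

@interface CALayer (DirtyNotification)
+ (void)installDirtyNotification;   // call once at startup
@end

@implementation CALayer (DirtyNotification)

+ (void)installDirtyNotification {
    // Exchange the implementations so every -setNeedsDisplay call goes through ours first.
    Method original = class_getInstanceMethod(self, @selector(setNeedsDisplay));
    Method replacement = class_getInstanceMethod(self, @selector(xxx_setNeedsDisplay));
    method_exchangeImplementations(original, replacement);
}

- (void)xxx_setNeedsDisplay {
    // Because of the exchange, this actually calls the original implementation.
    [self xxx_setNeedsDisplay];
    // Tell whoever is interested that this layer just became dirty; the listener can
    // then compare the layer's size with the UIWebView's size before reacting.
    [[NSNotificationCenter defaultCenter] postNotificationName:@"LayerDidBecomeDirty"
                                                        object:self];
}

@end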
You will find that method swizzling has been the reason for rejection in some apps, but those rejections have always been justified with "you changed the behavior of an object". In this case you don't change the behavior of anything; you simply intercept when a method is called.
I think that you can give it a try or contact Apple Support to check if it's legal, if you are not sure.
OLD ANSWER
I'm not sure this is performance friendly enough; I tried it only with both views on the same device and it works pretty well... you should try it using AirPlay.
The solution is quite simple: you take a "screenshot" of the UIWebView / MKMapView using UIGraphicsGetImageFromCurrentImageContext. You do this 30-60 times a second and copy the result into a UIImageView (visible on the second display; you can move it wherever you want).
To detect whether the view changed, and to avoid unnecessary traffic on the wireless link, you can compare the two UIImages (the old frame and the new frame) byte by byte, and set the new one only if it's different from the previous one. (Yeah, it works! ;-)
The only thing I didn't manage this evening is making this comparison fast: if you look at the attached sample code, you'll see that the comparison is really CPU intensive (because it uses UIImagePNGRepresentation() to convert each UIImage to NSData) and makes the whole app go slow. If you don't use the comparison (copying every frame), the app is fast and smooth (at least on my iPhone 5).
But I think there are plenty of possibilities for solving that... for example, making the comparison only every 4-5 frames, or optimizing the NSData creation in the background.
I attach a sample project: http://www.lombax.it/documents/ImageMirror.zip
In the project the frame comparison is disabled (the if is commented out).
I attach the code here for future reference:
// here you start a timer, 50 fps
// the timer is started on a background thread to avoid blocking it when you scroll the webview
- (IBAction)enableMirror:(id)sender {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0ul); // 0ul --> unsigned long
    dispatch_async(queue, ^{
        // 0.02f --> 50 fps
        NSTimer __unused *timer = [NSTimer scheduledTimerWithTimeInterval:0.02f target:self selector:@selector(copyImageIfNeeded) userInfo:nil repeats:YES];
        // need to start a run loop otherwise the thread stops
        CFRunLoopRun();
    });
}
// this method creates a UIImage with the content of the given view
- (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
// the method called by the timer
- (void)copyImageIfNeeded
{
    // this method is called from a background thread, so the code before the dispatch is executed in background
    UIImage *newImage = [self imageWithView:self.webView];

    // the copy is made only if the two images are really different (compared byte to byte)
    // this comparison method is cpu intensive
    // UNCOMMENT THE IF AND THE {} to enable the frame comparison
    //if (!([self image:self.mirrorView.image isEqualTo:newImage]))
    //{
        // this must be called on the main queue because it updates the user interface
        dispatch_queue_t queue = dispatch_get_main_queue();
        dispatch_async(queue, ^{
            self.mirrorView.image = newImage;
        });
    //}
}
// method to compare the two images - not performance friendly
// it can be optimized, because you can "store" the old image and avoid
// converting it more and more...until it's changed
// you can even try to generate the nsdata in background when the frame
// is created?
- (BOOL)image:(UIImage *)image1 isEqualTo:(UIImage *)image2
{
    NSData *data1 = UIImagePNGRepresentation(image1);
    NSData *data2 = UIImagePNGRepresentation(image2);
    return [data1 isEqual:data2];
}
I think your idea of using CADisplayLink is good. The main problem is that you're trying to refresh every frame. You can use the frameInterval property to decrease the frame rate automatically. Alternatively, you can use the timestamp property to know when the last update happened.
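For example, a sketch along these lines (mirrorNeedsUpdate is a hypothetical flag you'd set from whatever dirty-detection you end up using):
// Refresh the mirror at ~20 fps instead of 60 via frameInterval, and skip the
// redraw entirely when nothing has been marked dirty since the last pass.
- (void)startMirroring {
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(refreshMirror:)];
    link.frameInterval = 3;   // 60 / 3 = ~20 callbacks per second
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)refreshMirror:(CADisplayLink *)link {
    if (!self.mirrorNeedsUpdate) {
        return;   // nothing changed since the last pass
    }
    self.mirrorNeedsUpdate = NO;
    // ...render the web view's layer into the pip layer here...
}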
Another option that might just work: to know if the layers are dirty, why don't you have an object be the delegate of all the layers, which would get its drawLayer:inContext: triggered whenever each layer needs drawing? Then just update the other layers accordingly.
Modern user interfaces, especially MacOS and iOS, have lots of “casual” animation -- views that appear through brief animated sequences largely orchestrated by the system.
[[myNewView animator] setFrame:rect];
Occasionally, we might have a slightly more elaborate animation, something with an animation group and a completion block.
Now, I can imagine bug reports like this:
Hey -- that nice animation when myNewView appears isn't happening in the new release!
So, we'd want unit tests to do some simple things:
confirm that the animation happens
check the duration of the animation
check the frame rate of the animation
But of course all these tests have to be simple to write and mustn't make the code worse; we don’t want to spoil the simplicity of the implicit animations with a ton of test-driven complexity!
So, what is a TDD-friendly approach to implementing tests for casual animations?
Justifications for unit testing
Let's take a concrete example to illustrate why we'd want a unit test. Let's say we have a view that contains a bunch of WidgetViews. When the user makes a new Widget by double-clicking, it’s supposed to initially appear tiny and transparent, expanding to full size during the animation.
Now, it's true that we don't want to need to unit test system behavior. But here are some things that might go wrong because we fouled things up:
The animation is called on the wrong thread, and doesn't get drawn. But in the course of the animation, we call setNeedsDisplay, so eventually the widget gets drawn.
We're recycling disused widgets from a pool of discarded WidgetControllers. NEW WidgetViews are initially transparent, but some views in the recycle pool are still opaque. So the fade doesn't happen.
Some additional animation starts on the new widget before the animation finishes. As a result, the widget begins to appear, and then starts jerking and flashing briefly before it settles down.
You made a change to the widget's drawRect: method, and the new drawRect is slow. The old animation was fine, but now it's a mess.
All of these are going to show up in your support log as, "The create-widget animation isn't working anymore." And my experience has been that, once you get used to an animation, it’s really hard for the developer to notice right away that an unrelated change has broken the animation. That's a recipe for unit testing, right?
The animation is called on the wrong thread, and doesn't get drawn. But in the course of the animation, we call setNeedsDisplay, so eventually the widget gets drawn.
Don't unit test for this directly. Instead use assertions and/or raise exceptions when animation is on the incorrect thread. Unit test that the assertion will raise an exception appropriately. Apple does this aggressively with their frameworks. It keeps you from shooting yourself in the foot. And you will know immediately when you are using an object outside of valid parameters.
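A minimal sketch of that idea, using SenTestingKit macros to match the other examples here (AssertOnMainThread and WidgetView are made-up names):
// Guard in the animation code: refuse to start animations off the main thread.
// (NSCAssert is the C-function variant of NSAssert.)
static void AssertOnMainThread(void) {
    NSCAssert([NSThread isMainThread], @"Animations must be started on the main thread");
}

- (void)animateWidgetAppearance:(WidgetView *)widget {
    AssertOnMainThread();
    // ...set up the appear animation here...
}

// In the test target: check that the guard actually fires off the main thread.
- (void)testMainThreadAssertionFiresOnBackgroundQueue
{
    __block BOOL threw = NO;
    dispatch_semaphore_t done = dispatch_semaphore_create(0);
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        @try {
            AssertOnMainThread();
        }
        @catch (NSException *exception) {
            threw = YES;
        }
        dispatch_semaphore_signal(done);
    });
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    STAssertTrue(threw, @"Expected the main-thread assertion to raise on a background queue");
}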
We're recycling disused widgets from a pool of discarded WidgetControllers. NEW WidgetViews are initially transparent, but some views in the recycle pool are still opaque. So the fade doesn't happen.
This is why you see methods like dequeueReusableCellWithIdentifier: in UITableView. You need a public method to get the reused WidgetView, which is the perfect opportunity to test that properties like alpha are reset appropriately.
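For instance, with a hypothetical WidgetView pool (recyclePool, recycleWidgetView: and the reset values are all made up), the reset point becomes easy to test:
// Production code: the one place new and recycled widgets get their starting state.
- (WidgetView *)dequeueReusableWidgetView {
    WidgetView *widget = [self.recyclePool lastObject];
    if (widget) {
        [self.recyclePool removeLastObject];
    } else {
        widget = [[WidgetView alloc] init];
    }
    widget.alpha = 0.0;                                         // always starts transparent
    widget.transform = CGAffineTransformMakeScale(0.1, 0.1);    // and tiny, ready for the appear animation
    return widget;
}

// Test: a widget that was on screen must come back reset when it is dequeued again.
- (void)testDequeuedWidgetIsResetForAppearAnimation
{
    WidgetView *used = [controller dequeueReusableWidgetView];
    used.alpha = 1.0;                      // simulate a widget that finished its appear animation
    [controller recycleWidgetView:used];   // return it to the pool

    WidgetView *reused = [controller dequeueReusableWidgetView];
    STAssertEquals(reused.alpha, (CGFloat)0.0, @"Recycled widgets should start transparent");
}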
Some additional animation starts on the new widget before the animation finishes. As a result, the widget begins to appear, and then starts jerking and flashing briefly before it settles down.
Same as number 1. Use assertions to impose your rules on your code. Unit test that the assertions can be triggered.
You made a change to the widget's drawRect: method, and the new drawRect is slow. The old animation was fine, but now it's a mess.
A unit test can be just timing a method. I often do this with calculations to ensure they stay within a reasonable time limit.
- (void)testAnimationTime
{
    NSDate *start = [NSDate date];
    NSView *view = [[NSView alloc] init];
    for (int i = 0; i < 10; i++)
    {
        [view display];
    }
    NSTimeInterval timeSpent = [start timeIntervalSinceNow] * -1.0;
    if (timeSpent > 1.5)
    {
        STFail(@"View took %f seconds to calculate 10 times", timeSpent);
    }
}
I can read your question two ways, so I want to separate those.
If you are asking, "How can I unit test that the system actually performs the animation that I request?", I would say it's not worth it. My experience tells me it is a lot of pain for not a lot of gain, and in this kind of case the test would be brittle. I've found that in most cases where we call operating system APIs, it provides the most value to assume that they work and will continue to work until proven otherwise.
If you are asking, "How can I unit test that my code requests the correct animation?", then that's more interesting. You'll want a framework for test doubles like OCMock. Or you can use Kiwi, which is my favorite testing framework and has stubbing and mocking built in.
With Kiwi, you can do something like the following, for example:
id fakeView = [NSView nullMock];
id fakeAnimator = [NSView nullMock];
[fakeView stub:@selector(animator) andReturn:fakeAnimator];
CGRect newFrame = {.origin = {2,2}, .size = {11,44}};
[[[fakeAnimator should] receive] setFrame:theValue(newFrame)];
[myController enterWasClicked:nil];
You don't want to actually wait for the animation; that would take the time the animation takes to run. If you have a few thousand tests, this can add up.
More effective is to override the UIView class method in a category so that it takes effect immediately. Then include that file in your test target (but not your app target) so that the category is compiled into your tests only. We use:
#import "UIView+SpecFlywheel.h"
#implementation UIView (SpecFlywheel)
#pragma mark - Animation
+ (void)animateWithDuration:(NSTimeInterval)duration animations:(void (^)(void))animations completion:(void (^)(BOOL finished))completion {
if (animations)
animations();
if (completion)
completion(YES);
}
#end
The above simply executes the animation block immediately, and the completion block immediately if it's provided as well.
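With that category compiled into the test target, a test can drive code that animates and assert on the end state right away; for example (enterWasClicked:, widgetView and the frame value are placeholders):
// Because the flywheel category runs the animation and completion blocks synchronously,
// the view is already at its final frame by the time the assertion runs.
- (void)testEnterMovesWidgetToItsFinalFrameImmediately
{
    CGRect expectedFrame = CGRectMake(20, 20, 100, 100);   // placeholder final frame
    [myController enterWasClicked:nil];
    STAssertTrue(CGRectEqualToRect(myController.widgetView.frame, expectedFrame),
                 @"With the flywheel category the end state should be applied immediately");
}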
I am trying to update a UIProgressView progress bar in a UIView during a long resource loading process. Let's say I'm loading a bunch of bitmaps from a NIB file (like maybe a hundred). After I load 10, I set the progress property on the UIProgressView, which is part of a UIView that is already being displayed. So, I issue:
myView.myProgressView.progress=0.2;
Then, I load another 10 bitmaps, and issue:
myView.myProgressView.progress=0.4;
etc., etc. When the app runs, the progress bar doesn't advance; it simply stays at its initial position. At the risk of sounding like a complete moron: do I have to load my resources on a separate thread so the OS can update the UI, or is there an easier way? Thanks for any assistance.
Yes. Load them on a separate thread. Or just use something like performSelector:
[self performSelector:@selector(setProgressBar) withObject:nil afterDelay:0.0];
(and create a setProgressBar method which reads the current value from an instance variable and updates the UI)
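If you do go the separate-thread route, a GCD sketch might look like the following (loadBitmapAtIndex: stands in for however you actually load each resource; anything that touches UIKit views still belongs on the main thread):
// Do the loading off the main thread and hop back to the main queue for each
// progress update, so the bar actually redraws while loading continues.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSUInteger total = 100;   // however many bitmaps you're loading
    for (NSUInteger i = 0; i < total; i++) {
        [self loadBitmapAtIndex:i];   // placeholder for the real loading work
        if ((i + 1) % 10 == 0) {
            float progress = (float)(i + 1) / total;
            dispatch_async(dispatch_get_main_queue(), ^{
                myView.myProgressView.progress = progress;   // UI updates stay on the main thread
            });
        }
    }
});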
You could run a step of the runloop after each update of the UI:
SInt32 result;
do {
    result = CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, TRUE);
} while (result == kCFRunLoopRunHandledSource);
It could have other bad consequences (for example, it lets the user interact with the UI, and view controller callbacks such as viewDidAppear: may run before they should), so be very, very careful.