I need to mirror a UIWebView's CALayers to a smaller CALayer. The smaller CALayer is essentially a picture-in-picture ("pip") of the larger UIWebView. I'm having difficulty doing this. The only thing that comes close is CAReplicatorLayer, but since both the original and the copy must have the CAReplicatorLayer as their parent, I cannot split the original and the copy across different screens.
An illustration of what I'm trying to do:
The user needs to be able to interact with the smaller CALayer and both need to be in sync.
I've tried doing this with renderInContext and CADisplayLink. Unfortunately there is some lag/stutter because it's trying to re-draw every frame, 60 times a second. I need a way to do the mirroring without re-drawing on each frame, unless something has actually changed. So I need a way of knowing when the CALayer (or child CALayers) become dirty.
I cannot simply use two UIWebViews because the two pages may end up different (timing off, different backgrounds, etc.). I have no control over the web page being displayed. I also cannot mirror the entire iPad screen, because there are other elements on screen that should not appear on the external display.
Both the larger CALayer and smaller "pip" CALayer need to match smoothly frame-for-frame in iOS 6. I do not need to support earlier versions.
The solution needs to be app-store passable.
As discussed in the comments, since the main need is to know WHEN to update the layer (not how), I've moved my original answer below the "OLD ANSWER" line and added what we discussed in the comments:
First (100% Apple Review Safe ;-)
You can take periodic "screenshots" of your original UIView and compare the resulting NSData (old vs. new): if the data differs, the layer content has changed. There is no need to compare full-resolution screenshots; you can do it with smaller ones for better performance.
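For example, a rough sketch of that idea (the 1/8 scale factor, the lastSnapshotData property and the updateMirror method are my own placeholders, not from the linked projects):

- (NSData *)thumbnailDataForView:(UIView *)view
{
    // Render the view into a small bitmap; low resolution is enough to detect changes.
    CGSize smallSize = CGSizeMake(view.bounds.size.width / 8.0, view.bounds.size.height / 8.0);
    UIGraphicsBeginImageContextWithOptions(smallSize, YES, 1.0);
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0 / 8.0, 1.0 / 8.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *thumb = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return UIImagePNGRepresentation(thumb);
}

// Called periodically, e.g. from a timer.
- (void)checkForChanges
{
    NSData *newData = [self thumbnailDataForView:self.webView];
    if (![newData isEqualToData:self.lastSnapshotData]) {   // lastSnapshotData: an assumed NSData property
        self.lastSnapshotData = newData;
        [self updateMirror];                                 // hypothetical "copy the new frame" method
    }
}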
Second: performance friendly and "theoretically" review safe... but not sure :-/
At this link http://www.lombax.it/documents/DirtyLayer.zip you can find a sample project that alerts you every time the UIWebView layer becomes dirty ;-)
Let me explain how I arrived at this code:
The main goal is to understand when TileLayer (a private subclass of CALayer used by UIWebView) becomes dirty.
The problem is that you can't access it directly. But you can use method swizzling to change the behavior of the layerSetNeedsDisplay: method in every CALayer and its subclasses.
You must be careful not to radically change the original behavior; do only what's necessary to add a "notification" when the method is called.
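As a concrete illustration of the swizzling step, here is a minimal sketch applied to CALayer's public -setNeedsDisplay (the category name, the xxx_ prefix and the notification name are my own placeholders; the linked project hooks layerSetNeedsDisplay: instead):

#import <objc/runtime.h>

@implementation CALayer (DirtyNotification)

+ (void)load
{
    // Exchange the implementations of -setNeedsDisplay and our replacement.
    Method original = class_getInstanceMethod(self, @selector(setNeedsDisplay));
    Method replacement = class_getInstanceMethod(self, @selector(xxx_setNeedsDisplay));
    method_exchangeImplementations(original, replacement);
}

- (void)xxx_setNeedsDisplay
{
    // Because the implementations were exchanged, this calls the original -setNeedsDisplay.
    [self xxx_setNeedsDisplay];
    // Then simply announce that this layer was marked dirty.
    [[NSNotificationCenter defaultCenter] postNotificationName:@"LayerDidBecomeDirtyNotification" object:self];
}

@end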
Once you have detected each layerSetNeedsDisplay: call, the only remaining thing is to work out which CALayer is involved: if it's the UIWebView's internal TileLayer, we trigger an "isDirty" notification.
But we can't iterate through the UIWebView's contents to find the TileLayer; simply using "isKindOfClass:[TileLayer class]", for example, will surely get you rejected (Apple uses a static analyzer to check for use of private API). So what can you do?
Something tricky... for example... compare the size of the layer that is calling layerSetNeedsDisplay: with the UIWebView's size? ;-)
Moreover, sometimes the UIWebView replaces the child TileLayer with a new one, so you have to perform this check more than once.
Last thing: layerSetNeedsDisplay: is not always called when you simply scroll the UIWebView (if the layer is already built), so you also have to intercept the scrolling / zooming, e.g. through the web view's scroll view delegate.
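Putting those pieces together, the observer might look roughly like this (mirrorNeedsUpdate is a hypothetical method, the exact-size comparison is only a heuristic, and scrollViewDidScroll: assumes you act as the web view's scroll view delegate):

- (void)layerDidBecomeDirty:(NSNotification *)note
{
    CALayer *layer = note.object;
    // Heuristic: treat the layer as the web view's tile layer when its size
    // matches the web view's bounds. Repeat the check over time, because the
    // web view can swap in a new tile layer.
    if (CGSizeEqualToSize(layer.bounds.size, self.webView.bounds.size)) {
        [self mirrorNeedsUpdate];
    }
}

- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    // Scrolling doesn't always mark the layer dirty, so refresh here too.
    [self mirrorNeedsUpdate];
}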
You will find that method swizzling is the reason for rejection in some apps, but it has always been justified with "you changed the behavior of an object". In this case you don't change the behavior of anything; you simply intercept when a method is called.
I think you can give it a try, or contact Apple Support to check whether it's allowed if you are not sure.
OLD ANSWER
I'm not sure this is performance friendly enough; I tried it only with both views on the same device, and it works pretty well... you should try it over AirPlay.
The solution is quite simple: you take a "screenshot" of the UIWebView / MKMapView using renderInContext: and UIGraphicsGetImageFromCurrentImageContext(). You do this 30-60 times a second and copy the result into a UIImageView (visible on the second display; you can move it wherever you want).
To detect whether the view changed and avoid unnecessary traffic on the wireless link, you can compare the two UIImages (the old frame and the new frame) byte by byte, and set the new one only if it differs from the previous. (Yes, it works! ;-)
The only thing I didn't manage this evening is making that comparison fast: if you look at the attached sample code, you'll see that the comparison is really CPU intensive (because it uses UIImagePNGRepresentation() to convert each UIImage to NSData) and makes the whole app very slow. If you skip the comparison (copying every frame), the app is fast and smooth (at least on my iPhone 5).
But I think there are plenty of ways to solve it... for example doing the comparison only every 4-5 frames, or building the NSData in the background.
I attach a sample project: http://www.lombax.it/documents/ImageMirror.zip
In the project the frame comparison is disabled (the if is commented out).
I attach the code here for future reference:
// here you start a timer, 50 fps
// the timer is started on a background thread to avoid blocking it when you scroll the webview
- (IBAction)enableMirror:(id)sender {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0ul); // 0ul --> unsigned long
    dispatch_async(queue, ^{
        // 0.02f --> 50 fps (use 0.04f for 25 fps)
        NSTimer __unused *timer = [NSTimer scheduledTimerWithTimeInterval:0.02f target:self selector:@selector(copyImageIfNeeded) userInfo:nil repeats:YES];
        // need to start a run loop, otherwise the background thread exits
        CFRunLoopRun();
    });
}
// this method creates a UIImage with the content of the given view
- (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
// the method called by the timer
- (void)copyImageIfNeeded
{
    // this method is called from a background thread, so the code before the dispatch runs in the background
    UIImage *newImage = [self imageWithView:self.webView];

    // the copy is made only if the two images are really different (compared byte by byte)
    // this comparison method is cpu intensive
    // UNCOMMENT THE IF AND THE {} TO ENABLE THE FRAME COMPARISON
    //if (!([self image:self.mirrorView.image isEqualTo:newImage]))
    //{
        // this must be dispatched to the main queue because it updates the user interface
        dispatch_queue_t queue = dispatch_get_main_queue();
        dispatch_async(queue, ^{
            self.mirrorView.image = newImage;
        });
    //}
}
// method to compare two images - not performance friendly
// it can be optimized: you could cache the NSData of the old image and avoid
// converting it again and again until it actually changes,
// or even generate the NSData in the background when the frame is created
- (BOOL)image:(UIImage *)image1 isEqualTo:(UIImage *)image2
{
    NSData *data1 = UIImagePNGRepresentation(image1);
    NSData *data2 = UIImagePNGRepresentation(image2);
    return [data1 isEqual:data2];
}
I think your idea of using CADisplayLink is good. The main problem is that you're trying to refresh every frame. You can use the frameInterval property to decrease the frame rate automatically. Alternatively, you can use the timestamp property to know when the last update happened.
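For example, a quick sketch (startMirroring and refreshMirror: are placeholder names of my own):

- (void)startMirroring
{
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(refreshMirror:)];
    link.frameInterval = 3;   // fire every 3rd frame, i.e. ~20 fps on a 60 Hz display
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)refreshMirror:(CADisplayLink *)link
{
    // link.timestamp is the display time of the previous frame; you can use it to
    // skip the expensive renderInContext: work when nothing has changed since then.
}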
Another option that might just work: to know if the layers are dirty, why don't you have an object be the delegate of all the layers, which would get its drawLayer:inContext: triggered whenever each layer needs drawing? Then just update the other layers accordingly.
Related
In my app I am rendering lots (200 to 400) of UIImages on a background thread and then installing them inside on-screen UIImageView instances by dispatching a UI update block to the main thread.
Some code to show roughly what I'm doing…
dispatch_async( redrawQueue, ^{
    // An array to stuff images into for the views that have one.
    // (arrayWithCount:value: is a custom category helper that pre-fills the array with NSNulls.)
    NSMutableArray *const images = [NSMutableArray arrayWithCount: [activeViews count] value: [NSNull null]];
    for(NSUInteger i=0; i<activeCount; ++i)
    {
        // Rendered content comes from a block in myState.
        UIImage *const image = contentBlock(i);
        if(image)
        {
            images[i] = image;
        }
        else
        {
            // Rendering produced nothing for this index; abandon this pass.
            return;
        }
    }

    // Update the UI now…
    dispatch_async( dispatch_get_main_queue(), ^{
        [images enumerateObjectsUsingBlock:^(UIImage *image, NSUInteger i, BOOL *stop) {
            UIImageView *const view = [activeViews objectAtIndex:i];
            // (isNotNull: is a custom NSNull category helper.)
            [view setImage: [NSNull isNotNull: image] ? image : nil];
            layoutBlock(view, i);
        }];
    });
});
This is working well, but I'm still getting dropped frames during rapid scrolling. It seems like this is happening because the work of setting the images in the views is overwhelming the main thread. My evidence for this is that if I take out just the code to actually set the rendered images in the views, scrolling is much smoother.
I'm wondering if an approach to solving this might be to also create the views in a background thread, assign images to them, and place them in to a container view. Then in the main thread, I would simply need to swap the container in to the on-screen scene. The result is a bit like a double buffered graphics context, I guess – update one while the other is displayed.
Can anyone suggest if this is unlikely to be thread safe?
I've done a small test of allocating off screen UIViews on a background thread and nesting them inside each other. It hasn't crashed yet :-) "It hasn't crashed yet" isn't a great "thread safety" guarantee though! It also doesn't say anything about what might happen in a future version of iOS.
An obvious answer to this is "Hey, you fool, why are you using hundreds of little views? Composite them into one big image and have a single view you swap it into." Unfortunately I need lots of little views because I need to move the individual little pieces about independently.
Another answer might be "Use sprite kit, dude", and you're probably right, but the little views have dynamic size and content and I'm not sure how optimal sprite kit is when there are lots of sprite updates occurring.
A third approach could be to throttle the UI updates on the main thread to prevent frames getting dropped. Is there a mechanism that does this? Some kind of dispatch queue run by the main thread that only calls stuff while it's got plenty of time left?
You asked:
A third approach could be to throttle the UI updates on the main thread to prevent frames getting dropped. Is there a mechanism that does this? Some kind of dispatch queue run by the main thread that only calls stuff while it's got plenty of time left?
This is not something that's built in, but it's not hard to envision how you might do it; it's just non-trivial. You will probably need to manage your own array of images to deliver (including some means of protecting it from concurrent access), then add a CFRunLoopObserver (probably for the kCFRunLoopBeforeWaiting activity, since that's when the run loop is about to go to sleep) that, every time it fires, marks the start time and then processes items from your array of images until some amount of time has passed (10 ms is probably a decent budget).
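A sketch of that idea (self.pendingUpdates, an NSMutableArray of copied blocks, and the 10 ms budget are assumptions for illustration):

- (void)installUpdateDrip
{
    CFRunLoopObserverRef observer = CFRunLoopObserverCreateWithHandler(
        kCFAllocatorDefault,
        kCFRunLoopBeforeWaiting,   // fire just before the main run loop goes to sleep
        true,                      // repeats
        0,
        ^(CFRunLoopObserverRef obs, CFRunLoopActivity activity) {
            CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
            // Apply queued UI-update blocks until the 10 ms budget is used up.
            while (self.pendingUpdates.count > 0 &&
                   CFAbsoluteTimeGetCurrent() - start < 0.010) {
                void (^update)(void) = self.pendingUpdates[0];
                [self.pendingUpdates removeObjectAtIndex:0];
                update();
            }
        });
    CFRunLoopAddObserver(CFRunLoopGetMain(), observer, kCFRunLoopCommonModes);
    CFRelease(observer); // the run loop retains the observer
}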
Another thing you might consider would be rendering many of these little images into one CGImage (or some small number of images), then setting each view's layer contents to the big image and setting the layer's contentsRect so that each view shows just the portion corresponding to it. This might reduce the number of GPU texture uploads (and hence overall overhead), since all the CALayers backing the views will share the same image as their contents. This would probably be my first stop.
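A sketch of that approach (bigImage and unitRect are assumptions: the pre-rendered atlas, and each view's slice expressed in the 0..1 unit coordinate space that contentsRect uses):

// Share one big rendered image across many views, each showing only its own slice.
view.layer.contents = (__bridge id)bigImage.CGImage;
view.layer.contentsRect = unitRect;   // e.g. CGRectMake(0.25, 0.0, 0.25, 0.5) for one slice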
I'm building a scrolling menu that generates new rows of buttons on the fly and must build each button from a large number of sprites. Because this is processor intensive, the menu sticks for about a quarter of a second each time it needs to load a new row of buttons. I realized I needed to add multithreading so the button loading could be handled on a different thread from the scroll animation, but when I do, it crashes when it tries to load new buttons. Here is the code I'm using:
- (void)addRowBelow {
    _rowIndex--;
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSMutableArray *row = [self addRow:_rowIndex];
        [_buttonGrid addObject:row];
        [self removeRow:[_buttonGrid objectAtIndex:0]];
    });
    _nextRowBelowPos += _rowHeight;
    _nextRowAbovePos += _rowHeight;
}
Each time I test it I get a different error; sometimes it's a memory error, sometimes an assertion failure. I suspect it has to do with calling cocos2d functions asynchronously?
You are probably getting crashes because you are multithreading access to the cocos-managed objects (sprites, layers, nodes, etc.). Since the engine expects to use the internals of these objects for display, GPU operations, etc., and is NOT thread safe, you are probably not going to have good outcomes with multithreading. You may be changing things right in the middle of the engine using them.
Creating/destroying sprites on the fly is probably the reason for your slowdown. Cocos2d can display lots of objects (I think on the order of 2k) on the screen at 60 fps... as long as you don't throttle it down by doing a lot of creation/destruction or AI.
I suggest you preload all your sprites before your scene goes on stage. You can do this in an intro scene or in the init of the scene itself and let the sprites be owned by the scene. Then you can iterate over them during the update() call and change their positions, make them visible/invisible, etc.
For reference, I usually create different "sprite layers" that load up all their sprites on addition to the scene. If I am going to have dynamic objects, I try to allocate some up front and recycle them when possible. This also allows me to control the order of "what is in front of what" on the screen (see example here). Each layer also draws elements of specific "entity types", giving a nice "MVC" character to a lot of the display.
This is analogous to the way iPhone Apps recycle table cells.
Only create them the first time you need them and have a stash on hand before you need them at all.
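For what it's worth, a rough sketch of that pooling idea (cocos2d 2.x API; the pool size, the "button.png" file name, the _spritePool ivar and the dequeueSprite method are placeholders of mine):

- (id)init
{
    if ((self = [super init])) {
        // _spritePool: an NSMutableArray ivar declared in the layer's interface.
        _spritePool = [[NSMutableArray alloc] init];
        for (int i = 0; i < 50; i++) {
            CCSprite *sprite = [CCSprite spriteWithFile:@"button.png"];
            sprite.visible = NO;            // hidden until a row needs it
            [self addChild:sprite];
            [_spritePool addObject:sprite];
        }
    }
    return self;
}

// Hand out a hidden sprite instead of allocating a new one while scrolling.
- (CCSprite *)dequeueSprite
{
    for (CCSprite *sprite in _spritePool) {
        if (!sprite.visible) {
            sprite.visible = YES;
            return sprite;
        }
    }
    return nil; // pool exhausted; grow it or reuse the oldest sprite
}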
Was this helpful?
The pattern you probably want to use is
Dispatch work to a background thread. (Note that the work must be safe to execute on a background thread.)
Dispatch back to the main thread to update your UI.
Here's an example of what that looks like in code:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Do work that is safe to execute in the background.
    // For example, reading images from disk.
    dispatch_async(dispatch_get_main_queue(), ^{
        // Do work here that must execute on the main thread.
        // For example, calling Cocos2D objects' methods.
        NSMutableArray *row = [self addRow:_rowIndex];
        [_buttonGrid addObject:row];
        [self removeRow:[_buttonGrid objectAtIndex:0]];
    });
});
I've been trying to figure out UIImage for some time now. I'm trying to work out an approach to having one view, the 'Main Game View', show either 2, 3, or 4 different images depending on the 'level' variable. I'm just trying to be sure of the logic. So, for example, level 1 would display 4 pictures and level 2 might display 3 different pictures. I don't want to hinder the app's performance, but because the game is played offline and archiving won't make much of a difference, all the images (several hundred optimised images) are stored locally in the main app bundle.
I'm just wondering whether my logic for implementing this so far is sound. For level 1 I would create the 4 UIImageViews needed, initialise them with images, then display them on screen at set positions. I would then preload the next level's images using GCD. When a continue button is pressed I would set the UIImages and the UIImageViews to nil and display level 2's (or the next level's) images on screen.
I'm not confident in my approach and was wondering if there is something that would make it simpler, something I've missed, or whether in practice it will actually work according to the theory.
Thank you in advance for your time and any help.
Sorry if this is unclear.
I'm assuming you use a single view controller class for all your levels and load the levels from some .plist file or other format.
Don't bother using GCD. Loading 4 images once per level during loading costs practically nothing.
If all you need to do is have 2-4 images on the screen for each level (in addition to any other level elements), simply create 4 UIImageView instances and add them to your view. Then when you load a level, create new UIImage objects from the required image files and set your UIImageViews' image property to them. The UIImage objects of the old level will be released and deallocated at that moment, since you (and the UIImageViews) don't have a strong reference to them anymore.
Pseudo implementation:
- (void)loadLevel:(int)levelNumber {
    // Assuming you have your image views in an array imageViews.
    // Determine the image files required for this level.
    // Put them into an array imageNames.
    for (int i = 0; i < 4; i++) {
        if (i < imageNames.count) {
            UIImage *image = [UIImage imageNamed:imageNames[i]];
            [self.imageViews[i] setImage:image];
        } else {
            [self.imageViews[i] setImage:nil];
        }
    }
}
Modern user interfaces, especially MacOS and iOS, have lots of “casual” animation -- views that appear through brief animated sequences largely orchestrated by the system.
[[myNewView animator] setFrame: rect]
Occasionally, we might have a slightly more elaborate animation, something with an animation group and a completion block.
Now, I can imagine bug reports like this:
Hey -- that nice animation when myNewView appears isn't happening in the new release!
So, we'd want unit tests to do some simple things:
confirm that the animation happens
check the duration of the animation
check the frame rate of the animation
But of course all these tests have to be simple to write and mustn't make the code worse; we don’t want to spoil the simplicity of the implicit animations with a ton of test-driven complexity!
So, what is a TDD-friendly approach to implementing tests for casual animations?
Justifications for unit testing
Let's take a concrete example to illustrate why we'd want a unit test. Let's say we have a view that contains a bunch of WidgetViews. When the user makes a new Widget by double-clicking, it’s supposed to initially appear tiny and transparent, expanding to full size during the animation.
Now, it's true that we don't want to need to unit test system behavior. But here are some things that might go wrong because we fouled things up:
The animation is called on the wrong thread, and doesn't get drawn. But in the course of the animation, we call setNeedsDisplay, so eventually the widget gets drawn.
We're recycling disused widgets from a pool of discarded WidgetControllers. NEW WidgetViews are initially transparent, but some views in the recycle pool are still opaque. So the fade doesn't happen.
Some additional animation starts on the new widget before the animation finishes. As a result, the widget begins to appear, and then starts jerking and flashing briefly before it settles down.
You made a change to the widget's drawRect: method, and the new drawRect is slow. The old animation was fine, but now it's a mess.
All of these are going to show up in your support log as, "The create-widget animation isn't working anymore." And my experience has been that, once you get used to an animation, it’s really hard for the developer to notice right away that an unrelated change has broken the animation. That's a recipe for unit testing, right?
The animation is called on the wrong thread, and doesn't get drawn. But in the course of the animation, we call setNeedsDisplay, so eventually the widget gets drawn.
Don't unit test for this directly. Instead, use assertions and/or raise exceptions when the animation is started on the wrong thread, and unit test that the assertion raises an exception appropriately. Apple does this aggressively with their frameworks. It keeps you from shooting yourself in the foot, and you will know immediately when you are using an object outside of valid parameters.
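For example (a sketch; the method and WidgetView class names are hypothetical):

- (void)animateWidgetAppearance:(WidgetView *)widget
{
    // NSAssert raises NSInternalInconsistencyException in debug builds,
    // which a unit test can expect when calling this off the main thread.
    NSAssert([NSThread isMainThread], @"Widget animations must be started on the main thread");
    // ... kick off the appearance animation ...
}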
We're recycling disused widgets from a pool of discarded WidgetControllers. NEW WidgetViews are initially transparent, but some views in the recycle pool are still opaque. So the fade doesn't happen.
This is why you see methods like dequeueReusableCellWithIdentifier: in UITableView. You need a public method to get the reused WidgetView, which is the perfect opportunity to test that properties like alpha are reset appropriately.
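A sketch of such a test, in the same SenTestingKit style as the timing example below; WidgetPool, dequeueReusableWidgetView and recycleWidgetView: are hypothetical names standing in for your recycling API, and WidgetView is assumed to be an NSView subclass (hence alphaValue):

- (void)testDequeuedWidgetViewStartsTransparent
{
    WidgetPool *pool = [[WidgetPool alloc] init];
    WidgetView *used = [pool dequeueReusableWidgetView];
    used.alphaValue = 1.0;                 // simulate a widget that finished its fade-in
    [pool recycleWidgetView:used];

    WidgetView *reused = [pool dequeueReusableWidgetView];
    STAssertEquals(reused.alphaValue, (CGFloat)0.0, @"recycled widgets should come back transparent");
}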
Some additional animation starts on the new widget before the animation finishes. As a result, the widget begins to appear, and then starts jerking and flashing briefly before it settles down.
Same as number 1. Use assertions to impose your rules on your code. Unit test that the assertions can be triggered.
You made a change to the widget's drawRect: method, and the new drawRect is slow. The old animation was fine, but now it's a mess.
A unit test can be just timing a method. I often do this with calculations to ensure they stay within a reasonable time limit.
- (void)testAnimationTime
{
    NSDate *start = [NSDate date];
    NSView *view = [[NSView alloc] init];
    for (int i = 0; i < 10; i++)
    {
        [view display];
    }
    NSTimeInterval timeSpent = [start timeIntervalSinceNow] * -1.0;
    if (timeSpent > 1.5)
    {
        STFail(@"View took %f seconds to calculate 10 times", timeSpent);
    }
}
I can read your question two ways, so I want to separate those.
If you are asking, "How can I unit test that the system actually performs the animation that I request?", I would say it's not worth it. My experience tells me it is a lot of pain with not a lot of gain and in this kind of case, the test would be brittle. I've found that in most cases where we call operating system APIs, it provides the most value to assume that they work and will continue to work until proven otherwise.
If you are asking, "How can I unit test that my code requests the correct animation?", then that's more interesting. You'll want a framework for test doubles like OCMock. Or you can use Kiwi, which is my favorite testing framework and has stubbing and mocking built in.
With Kiwi, you can do something like the following, for example:
id fakeView = [NSView nullMock];
id fakeAnimator = [NSView nullMock];
[fakeView stub:@selector(animator) andReturn:fakeAnimator];

CGRect newFrame = {.origin = {2,2}, .size = {11,44}};
[[[fakeAnimator should] receive] setFrame:theValue(newFrame)];

[myController enterWasClicked:nil];
You don't want to actually wait for the animation; that would take the time the animation takes to run. If you have a few thousand tests, this can add up.
More effective is to override the UIView class method in a category so that it takes effect immediately. Then include that file in your test target (but not your app target) so that the category is compiled into your tests only. We use:
#import "UIView+SpecFlywheel.h"
#implementation UIView (SpecFlywheel)
#pragma mark - Animation
+ (void)animateWithDuration:(NSTimeInterval)duration animations:(void (^)(void))animations completion:(void (^)(BOOL finished))completion {
if (animations)
animations();
if (completion)
completion(YES);
}
#end
The above simply executes the animation block immediately, and the completion block immediately if it's provided as well.
I'm displaying lots of images loaded directly from my app (not downloaded). My table view is slow the first time I scroll it. It becomes smooth after all my cells have been displayed, and I don't really know why.
I have an array of UIImages that I load in viewDidLoad. Then in my table view delegate I just get the image at the given index path and set it on the UIImageView of my cell.
Do you know how I can improve performance?
Just to share: I fixed it and it works very well. These are the steps I followed.
1) Call performSelectorInBackground: with the cell as the parameter; the cell holds the scroll view or UIView that will contain the images.
2) In that background method, load the image stored in the application bundle or in a local file using imageWithContentsOfFile:.
3) Set the image on the image view using this code.
//// Start of optimisation - for images loaded dynamically into a cell with a delay; make sure you call this from performSelectorInBackground: ////

// Setting nil first, for safety
imageViewItem.image = nil;

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0ul);
dispatch_async(queue, ^{
    UIImage *image = [UIImage imageWithContentsOfFile:imagePath]; // load from a file or the bundle as you want (imagePath is your local path)
    dispatch_sync(dispatch_get_main_queue(), ^{
        // Set the image on the image view (either cell.imageView or a view you added earlier with [cell addSubview:imageView])
        [imageViewItem setImage:image];
        [imageViewItem setNeedsLayout];
    });
});

//// End of optimisation ////
This will load all the images dynamically and also scroll the table view much more smoothly than the previous slow and jerky behaviour.
All the best
You can read the answer I have just submitted here:
Loading image from CoreData at cellForRowAtIndexPath slows down scrolling
The basic idea is to use Grand Central Dispatch to move your table-view image-loading code to a separate thread, filling in your cells back on the main thread as the images become available. Your scrolling will be super smooth even if there's a delay loading the images into memory from the filesystem.
What I understand from your question is that your images are all set to go and that they are loaded into RAM (stored in an array which is populated in viewDidLoad). In this case, I would want to see the exact code in cellForRowAtIndexPath in order to help out. My instinct tells me that something is being done there that shouldn't be done on the main thread (as He Was suggests). The thing is - if it's only a fetch from an NSArray (worst case O(log(n))), you shouldn't be seeing a performance hit.
I know you're not downloading the images but I would still recommend to do ANY non-UI operation on a background thread. I wrote something that might help you out.